Results 1 - 20 of 143
1.
Med Image Anal ; 95: 103199, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38759258

ABSTRACT

Accurate diagnosis of pathological subtypes of lung cancer is of significant importance for follow-up treatment and prognosis management. In this paper, we propose a self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on computed tomography (CT) images. Inspired by studies showing that cross-scale associations exist in the image patterns between a case's CT images and its pathological images, we developed a pathological feature synthetic module (PFSM), which quantitatively maps cross-modality associations through deep neural networks, to derive the "gold standard" information contained in the corresponding pathological images from CT images alone. Additionally, we designed a radiological feature extraction module (RFEM) to directly acquire CT image information and integrated it with the pathological priors under an effective feature fusion framework, enabling the entire classification model to generate more indicative and specific pathologically related features and eventually output more accurate predictions. The superiority of the proposed model lies in its ability to self-generate hybrid features that contain multi-modality image information based on a single-modality input. To evaluate the effectiveness, adaptability, and generalization ability of our model, we performed extensive experiments on a large-scale multi-center dataset (829 cases from three hospitals), comparing our model against a series of state-of-the-art (SOTA) classification models. The experimental results demonstrated the superiority of our model for lung cancer subtype classification, with significant improvements in accuracy (ACC), area under the curve (AUC), positive predictive value (PPV), and F1-score.

2.
Med Image Anal ; 91: 102996, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37857067

ABSTRACT

This article discusses the opportunities, applications, and future directions of large-scale pretrained models, i.e., foundation models, which promise to significantly improve the analysis of medical images. Medical foundation models have immense potential for solving a wide range of downstream tasks, as they can help accelerate the development of accurate and robust models, reduce dependence on large amounts of labeled data, and preserve the privacy and confidentiality of patient data. Specifically, we illustrate the "spectrum" of medical foundation models, ranging from general imaging models to modality-specific and organ/task-specific models, and highlight their challenges, opportunities, and applications. We also discuss how foundation models can be leveraged in downstream medical tasks to enhance the accuracy and efficiency of medical image analysis, leading to more precise diagnosis and treatment decisions.


Subject(s)
Diagnostic Imaging; Humans; Diagnostic Imaging/methods; Forecasting
3.
Med Image Anal ; 91: 102999, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37862866

ABSTRACT

Coronary CT angiography (CCTA) is an effective and non-invasive method for coronary artery disease diagnosis. Extracting an accurate coronary artery tree from CCTA images is essential for centerline extraction, plaque detection, and stenosis quantification. In practice, data quality varies. Sometimes the arteries and veins have similar intensities and lie close together, which may confuse segmentation algorithms, even deep learning based ones, and prevent accurate artery extraction. However, it is not always feasible to re-scan the patient for better image quality. In this paper, we propose an artery and vein disentanglement network (AVDNet) for robust and accurate segmentation that incorporates the coronary veins into the segmentation task. This is the first work to segment the coronary arteries and veins at the same time. The AVDNet consists of an image based vessel recognition network (IVRN) and a topology based vessel refinement network (TVRN). IVRN learns to segment the arteries and veins, while TVRN learns to correct segmentation errors based on topology consistency. We also design a novel inverse distance weighted dice (IDD) loss function to recover more thin vessel branches and preserve the vascular boundaries. Extensive experiments are conducted on a multi-center dataset of 700 patients. Quantitative and qualitative results demonstrate the effectiveness of the proposed method in comparison with state-of-the-art methods and different variants. Prediction results of the AVDNet on the Automated Segmentation of Coronary Artery Challenge dataset are available at https://github.com/WennyJJ/Coronary-Artery-Vein-Segmentation for follow-up research.
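The abstract names an inverse distance weighted dice (IDD) loss but does not give its formulation. A minimal numpy sketch, assuming each pixel is weighted by the inverse of its distance to the nearest foreground boundary so that thin branches and boundaries dominate the loss; the brute-force distance transform and the 1/(d+1) weighting are illustrative assumptions, not the paper's definition:

```python
import numpy as np

def distance_to_foreground_boundary(mask):
    """Brute-force Euclidean distance from each pixel to the nearest
    foreground boundary pixel. Fine for a small illustrative example;
    a real implementation would use scipy.ndimage.distance_transform_edt."""
    h, w = mask.shape
    boundary = []
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                nbrs = mask[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                # foreground pixel touching background (or the image edge)
                if nbrs.size < 9 or not nbrs.all():
                    boundary.append((i, j))
    dist = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            dist[i, j] = min(np.hypot(i - bi, j - bj) for bi, bj in boundary)
    return dist

def idd_loss(pred, target, eps=1e-6):
    """Inverse-distance-weighted dice: pixels near the vessel boundary
    (distance ~ 0) get the largest weight, emphasising thin branches
    and boundary accuracy."""
    w = 1.0 / (distance_to_foreground_boundary(target) + 1.0)
    inter = (w * pred * target).sum()
    denom = (w * pred).sum() + (w * target).sum()
    return 1.0 - 2.0 * inter / (denom + eps)
```

A perfect prediction drives the loss to zero, while missing boundary or thin-branch pixels is penalised more than missing interior pixels.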


Subject(s)
Algorithms; Coronary Vessels; Humans; Coronary Vessels/diagnostic imaging; Tomography, X-Ray Computed/methods; Coronary Angiography/methods; Computed Tomography Angiography/methods; Image Processing, Computer-Assisted/methods
4.
Article in English | MEDLINE | ID: mdl-38083011

ABSTRACT

Accurate liver tumor segmentation is a prerequisite for data-driven tumor analysis. Multiphase computed tomography (CT) with extensive liver tumor characteristics is typically used as the most crucial diagnostic basis. However, the large variations in contrast, texture, and tumor structure between CT phases limit the generalization capabilities of the associated segmentation algorithms. Inadequate feature integration across phases might also lead to a performance decrease. To address these issues, we present a domain-adversarial transformer (DA-Tran) network for segmenting liver tumors from multiphase CT images. A DA module is designed to generate domain-adapted feature maps from the non-contrast-enhanced (NC) phase, arterial (ART) phase, portal venous (PV) phase, and delay phase (DP) images. These domain-adapted feature maps are then combined with 3D transformer blocks to capture patch-structured similarity and global context attention. The experimental findings show that DA-Tran produces cutting-edge tumor segmentation outcomes, making it an ideal candidate for this co-segmentation challenge.


Subject(s)
Liver Neoplasms; Humans; Liver Neoplasms/diagnostic imaging; Algorithms; Arteries; Electric Power Supplies; Generalization, Psychological
5.
Article in English | MEDLINE | ID: mdl-38082617

ABSTRACT

Tooth segmentation from intraoral scans is a crucial part of digital dentistry, and many deep learning based tooth segmentation algorithms have been developed for this task. High accuracy has been achieved in most cases; however, most available tooth segmentation techniques make an implicit restrictive assumption of a full jaw model and report accuracy based on full jaw models. Medically, however, a full jaw tooth scan is not always required or available. Given this practical issue, it is important to understand the robustness of currently available, widely used deep learning based tooth segmentation techniques. For this purpose, we applied available segmentation techniques to partial intraoral scans and discovered that the available deep learning techniques under-perform drastically. The analysis and comparison presented in this work help in understanding the severity of the problem and in developing robust tooth segmentation techniques without the strong assumption of a full jaw model. Clinical relevance: Deep learning based tooth mesh segmentation algorithms have achieved high accuracy, and in the clinical setting their robustness is of utmost importance. We discovered that the high performing tooth segmentation methods under-perform when segmenting partial intraoral scans. In this work, we conduct extensive experiments to show the extent of this problem. We also discuss why adding partial scans to the training data of tooth segmentation models is non-trivial. An in-depth understanding of this problem can help in developing robust tooth segmentation techniques.


Subject(s)
Deep Learning; Tooth; Algorithms; Tooth/diagnostic imaging; Radionuclide Imaging; Models, Dental
6.
Article in English | MEDLINE | ID: mdl-38083482

ABSTRACT

Lung cancer is a malignant tumor with rapid progression and a high fatality rate. According to the histological morphology and cell behaviors of cancerous tissues, lung cancer can be classified into a variety of subtypes. Since different subtypes correspond to distinct therapies, early and accurate diagnosis is critical for subsequent treatment and prognosis management. In clinical practice, pathological examination is regarded as the gold standard for diagnosing cancer subtypes, but its invasiveness limits its extensive use, making the non-invasive and fast computed tomography (CT) scan a more commonly used modality in early cancer diagnosis. However, diagnoses based on CT are less accurate due to the relatively low image resolution and the atypical manifestations of cancer subtypes. In this work, we propose a novel automatic classification model to assist in accurately diagnosing lung cancer subtypes on CT images. Inspired by the finding of cross-modality associations between CT images and their corresponding pathological images, our proposed model incorporates general histopathological information into CT-based lung cancer subtype diagnosis without invasive tissue sample collection or biopsy, thereby improving diagnostic accuracy. Experimental results on both internal and external evaluation datasets demonstrate that our proposed model outputs more accurate lung cancer subtype predictions than existing CT-based state-of-the-art (SOTA) classification models, achieving significant improvements in both accuracy (ACC) and area under the receiver operating characteristic curve (AUC). Clinical relevance: This work provides a method for automatically classifying lung cancer subtypes on CT images.


Subject(s)
Lung Neoplasms; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Lung/pathology; Tomography, X-Ray Computed/methods; Thorax; ROC Curve
8.
Nat Commun ; 14(1): 5510, 2023 09 07.
Article in English | MEDLINE | ID: mdl-37679325

ABSTRACT

Overcoming barriers to the use of multi-center data for medical analytics is challenging due to privacy protection and data heterogeneity in the healthcare system. In this study, we propose the Distributed Synthetic Learning (DSL) architecture to learn across multiple medical centers while ensuring the protection of sensitive personal information. DSL enables the building of a homogeneous dataset with entirely synthetic medical images via a form of GAN-based synthetic learning. The proposed DSL architecture has the following key functionalities: multi-modality learning, missing-modality completion learning, and continual learning. We systematically evaluate the performance of DSL on different medical applications using cardiac computed tomography angiography (CTA), brain tumor MRI, and histopathology nuclei datasets. Extensive experiments demonstrate the superior performance of DSL as a high-quality synthetic medical image provider, as measured by Dist-FID, a synthetic-image quality metric. We show that DSL can be adapted to heterogeneous data and remarkably outperforms the real misaligned-modalities segmentation model by 55% and the temporal-datasets segmentation model by 8%.


Subject(s)
Brain Neoplasms; Learning; Humans; Angiography; Cell Nucleus; Computed Tomography Angiography
9.
Front Cell Dev Biol ; 11: 1242481, 2023.
Article in English | MEDLINE | ID: mdl-37635874

ABSTRACT

Intra-thymic T cell development is coordinated by the regulatory actions of the genome organizer SATB1. In this report, we show that SATB1 is involved in the regulation of transcription and splicing, both of which were deregulated in Satb1 knockout murine thymocytes. More importantly, we characterized a novel SATB1 protein isoform and described its distinct biophysical behavior, implicating potential functional differences compared to the commonly studied isoform. SATB1 utilized its prion-like domains to transition through liquid-like states to aggregated structures. This behavior depended on protein concentration as well as phosphorylation and interaction with nuclear RNA. Notably, the long SATB1 isoform was more prone to aggregate following phase separation. Thus, the tight regulation of SATB1 isoform expression levels, along with protein post-translational modifications, is imperative for SATB1's mode of action in T cell development. Our data indicate that deregulation of these processes may also be linked to disorders such as cancer.

10.
Med Image Anal ; 89: 102904, 2023 10.
Article in English | MEDLINE | ID: mdl-37506556

ABSTRACT

Generalization to previously unseen images with potential domain shifts is essential for clinically applicable medical image segmentation. Disentangling domain-specific and domain-invariant features is key for Domain Generalization (DG). However, existing DG methods struggle to achieve effective disentanglement. To address this problem, we propose an efficient framework called Contrastive Domain Disentanglement and Style Augmentation (CDDSA) for generalizable medical image segmentation. First, a disentanglement network decomposes the image into a domain-invariant anatomical representation and a domain-specific style code; the former is sent for further segmentation unaffected by domain shift, and the disentanglement is regularized by a decoder that combines the anatomical representation and style code to reconstruct the original image. Second, to achieve better disentanglement, a contrastive loss is proposed to encourage style codes from the same domain to be compact and those from different domains to be divergent. Finally, to further improve generalizability, we propose a style augmentation strategy to synthesize images with various unseen styles in real time while maintaining anatomical information. Comprehensive experiments on a public multi-site fundus image dataset and an in-house multi-site Nasopharyngeal Carcinoma Magnetic Resonance Image (NPC-MRI) dataset show that the proposed CDDSA achieves remarkable generalizability across different domains and outperforms several state-of-the-art methods in generalizable segmentation. Code is available at https://github.com/HiLab-git/DAG4MIA.
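The compact/divergent style-code objective can be sketched as a pairwise contrastive loss over a batch of style codes with domain labels; the hinge-with-margin form below is an illustrative assumption, not CDDSA's published formulation:

```python
import numpy as np

def style_contrastive_loss(codes, domains, margin=1.0):
    """Pairwise contrastive loss over style codes: pull together codes
    from the same domain (compactness), push apart codes from different
    domains up to a margin (divergence). A sketch of the idea only."""
    codes = np.asarray(codes, dtype=float)
    n = len(codes)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(codes[i] - codes[j])
            if domains[i] == domains[j]:
                total += d ** 2                      # compactness term
            else:
                total += max(0.0, margin - d) ** 2   # divergence term
            pairs += 1
    return total / pairs
```

The loss vanishes when same-domain codes coincide and cross-domain codes are at least `margin` apart.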


Subject(s)
Image Processing, Computer-Assisted; Humans; Fundus Oculi
11.
IEEE J Biomed Health Inform ; 27(7): 3302-3313, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37067963

ABSTRACT

In recent years, several deep learning models have been proposed to accurately quantify and diagnose cardiac pathologies. These automated tools heavily rely on the accurate segmentation of cardiac structures in MRI images. However, segmentation of the right ventricle is challenging due to its highly complex shape and ill-defined borders. Hence, there is a need for new methods to handle such a structure's geometrical and textural complexities, notably in the presence of pathologies such as Dilated Right Ventricle, Tricuspid Regurgitation, Arrhythmogenesis, Tetralogy of Fallot, and Inter-atrial Communication. The last MICCAI challenge on right ventricle segmentation was held in 2012 and included only 48 cases from a single clinical center. As part of the 12th Workshop on Statistical Atlases and Computational Models of the Heart (STACOM 2021), the M&Ms-2 challenge was organized to promote the interest of the research community in right ventricle segmentation in multi-disease, multi-view, and multi-center cardiac MRI. Three hundred sixty CMR cases, including short-axis and long-axis 4-chamber views, were collected from three Spanish hospitals using nine different scanners from three different vendors, and included a diverse set of right and left ventricle pathologies. The solutions provided by the participants show that nnU-Net achieved the best results overall. However, multi-view approaches were able to capture additional information, highlighting the need to integrate multiple cardiac diseases, views, scanners, and acquisition protocols to produce reliable automatic cardiac segmentation algorithms.


Subject(s)
Deep Learning; Heart Ventricles; Humans; Heart Ventricles/diagnostic imaging; Magnetic Resonance Imaging/methods; Algorithms; Heart Atria
12.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10409-10426, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37022840

ABSTRACT

Modern medical imaging techniques, such as ultrasound (US) and cardiac magnetic resonance (MR) imaging, have enabled the evaluation of myocardial deformation directly from an image sequence. While many traditional cardiac motion tracking methods have been developed for the automated estimation of the myocardial wall deformation, they are not widely used in clinical diagnosis, due to their lack of accuracy and efficiency. In this paper, we propose a novel deep learning-based fully unsupervised method, SequenceMorph, for in vivo motion tracking in cardiac image sequences. In our method, we introduce the concept of motion decomposition and recomposition. We first estimate the inter-frame (INF) motion field between any two consecutive frames, by a bi-directional generative diffeomorphic registration neural network. Using this result, we then estimate the Lagrangian motion field between the reference frame and any other frame, through a differentiable composition layer. Our framework can be extended to incorporate another registration network, to further reduce the accumulated errors introduced in the INF motion tracking step, and to refine the Lagrangian motion estimation. By utilizing temporal information to perform reasonable estimations of spatio-temporal motion fields, this novel method provides a useful solution for image sequence motion tracking. Our method has been applied to US (echocardiographic) and cardiac MR (untagged and tagged cine) image sequences; the results show that SequenceMorph is significantly superior to conventional motion tracking methods, in terms of the cardiac motion tracking accuracy and inference efficiency.
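The motion decomposition/recomposition idea, composing inter-frame (INF) displacement fields into a Lagrangian field through interpolation, can be sketched in 1-D. The composition rule below is the standard displacement-field composition and only illustrates what a differentiable composition layer computes; it is not the paper's network:

```python
import numpy as np

def interp1(u, x):
    """Linearly interpolate a 1-D field u (sampled on an integer grid) at x."""
    x = np.clip(x, 0, len(u) - 1)
    i0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, len(u) - 1)
    frac = x - i0
    return (1 - frac) * u[i0] + frac * u[i1]

def compose_lagrangian(inf_fields):
    """Compose inter-frame (INF) displacement fields into the Lagrangian
    field from frame 0 to the last frame:
        U_t(x) = U_{t-1}(x) + u_t(x + U_{t-1}(x)),
    i.e. each material point is carried forward and the next INF field is
    sampled at its current position (1-D for clarity)."""
    n = len(inf_fields[0])
    grid = np.arange(n, dtype=float)
    U = np.zeros(n)
    for u in inf_fields:
        U = U + interp1(u, grid + U)
    return U
```

For two constant INF fields of 0.5 pixels each, the composed Lagrangian displacement is 1.0 pixel everywhere, as expected.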


Subject(s)
Algorithms; Unsupervised Machine Learning; Heart/diagnostic imaging; Motion; Magnetic Resonance Imaging
13.
Sci Data ; 10(1): 231, 2023 04 21.
Article in English | MEDLINE | ID: mdl-37085533

ABSTRACT

The success of training computer-vision models relies heavily on large-scale, real-world images with annotations. Yet such an annotation-ready dataset is difficult to curate in pathology due to privacy protection and the excessive annotation burden. To aid computational pathology, synthetic data generation, curation, and annotation present a cost-effective means to quickly enable the data diversity required to boost model performance at different stages. In this study, we introduce a large-scale synthetic pathological image dataset paired with annotations for nuclei semantic segmentation, termed Synthetic Nuclei and annOtation Wizard (SNOW). SNOW is built via a standardized workflow that applies an off-the-shelf image generator and nuclei annotator. The dataset contains 20k image tiles and 1,448,522 annotated nuclei in total, released under the CC-BY license. We show that SNOW can be used in both supervised and semi-supervised training scenarios. Extensive results suggest that synthetic-data-trained models are competitive under a variety of model training settings, expanding the scope of using synthetic images to enhance downstream data-driven clinical tasks.


Subject(s)
Breast Neoplasms; Deep Learning; Privacy; Workflow; Image Processing, Computer-Assisted; Semantics; Humans; Female
15.
Med Image Anal ; 82: 102642, 2022 11.
Article in English | MEDLINE | ID: mdl-36223682

ABSTRACT

Whole abdominal organ segmentation is important for diagnosing abdominal lesions, radiotherapy, and follow-up. However, delineating all abdominal organs from 3D volumes by oncologists is time-consuming and very expensive. Deep learning-based medical image segmentation has shown the potential to reduce manual delineation effort, but it still requires a large-scale, finely annotated dataset for training, and there is a lack of large-scale datasets covering the whole abdominal region with accurate and detailed annotations for whole abdominal organ segmentation. In this work, we establish a new large-scale Whole abdominal ORgan Dataset (WORD) for algorithm research and clinical application development. The dataset contains 150 abdominal CT volumes (30,495 slices), each with 16 organs annotated with fine pixel-level labels and scribble-based sparse annotations, which may make it the largest dataset with whole abdominal organ annotations. Several state-of-the-art segmentation methods are evaluated on this dataset, and we also invited three experienced oncologists to revise the model predictions to measure the gap between deep learning methods and oncologists. Afterwards, we investigate inference-efficient learning on WORD, as high-resolution images require large GPU memory and long inference times at test time. We further evaluate scribble-based annotation-efficient learning on this dataset, as pixel-wise manual annotation is time-consuming and expensive. This work provides a new benchmark for the abdominal multi-organ segmentation task, and these experiments can serve as baselines for future research and clinical application development.


Subject(s)
Benchmarking; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Algorithms; Abdomen; Image Processing, Computer-Assisted/methods
16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 4758-4763, 2022 07.
Article in English | MEDLINE | ID: mdl-36086601

ABSTRACT

Multi-modality images are widely used and provide comprehensive information for medical image analysis. However, acquiring all modalities at all institutes is costly and often impossible in clinical settings. To leverage more comprehensive multi-modality information, we propose a privacy-secured decentralized multi-modality adaptive learning architecture named ModalityBank. Our method learns a set of effective domain-specific modulation parameters plugged into a common domain-agnostic network. We demonstrate that, by switching between different sets of configurations, the generator can output high-quality images for a specific modality. Our method can also complete the missing modalities across all data centers and can thus be used for modality completion. A downstream task trained on the synthesized multi-modality samples achieves higher performance than learning from one real data center and close-to-real performance compared with training on all real images.
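The core idea of domain-specific modulation parameters plugged into a shared domain-agnostic network can be sketched as a per-modality scale/shift applied after a shared linear map; class and parameter names here are hypothetical illustrations, not ModalityBank's actual API:

```python
import numpy as np

class ModulatedLayer:
    """A shared (domain-agnostic) linear map with per-modality scale/shift
    modulation parameters: switching the modulation set switches which
    modality the layer produces. Illustrative sketch only."""
    def __init__(self, dim, modalities, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((dim, dim)) / np.sqrt(dim)   # shared weights
        self.mod = {m: {"gamma": np.ones(dim), "beta": np.zeros(dim)}
                    for m in modalities}                           # per-modality params

    def forward(self, x, modality):
        h = x @ self.W                 # domain-agnostic computation
        p = self.mod[modality]
        return p["gamma"] * h + p["beta"]  # modality-specific modulation
```

Only the small `gamma`/`beta` banks differ per modality, which is what makes adding or completing a modality cheap relative to training a full per-modality network.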


Subject(s)
Magnetic Resonance Imaging; Multimodal Imaging; Magnetic Resonance Imaging/methods; Multimodal Imaging/methods
17.
Front Cardiovasc Med ; 9: 919810, 2022.
Article in English | MEDLINE | ID: mdl-35859582

ABSTRACT

Recent advances in magnetic resonance imaging are enabling the efficient creation of high-dimensional, multiparametric images, containing a wealth of potential information about the structure and function of many organs, including the cardiovascular system. However, the sizes of these rich data sets are so large that they are outstripping our ability to adequately visualize and analyze them, thus limiting their clinical impact. While there are some intrinsic limitations of human perception and of conventional display devices which hamper our ability to effectively use these data, newer computational methods for handling the data may aid our ability to extract and visualize the salient components of these high-dimensional data sets.

18.
Med Image Anal ; 80: 102485, 2022 08.
Article in English | MEDLINE | ID: mdl-35679692

ABSTRACT

Examination of pathological images is the gold standard for diagnosing and screening many kinds of cancer. Multiple datasets, benchmarks, and challenges have been released in recent years, resulting in significant improvements in computer-aided diagnosis (CAD) of related diseases. However, few existing works focus on the digestive system. We released two well-annotated benchmark datasets and organized challenges for digestive-system pathological cell detection and tissue segmentation, in conjunction with the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). This paper first introduces the two released datasets, i.e., signet ring cell detection and colonoscopy tissue segmentation, with descriptions of data collection, annotation, and potential uses. We also report the set-up, evaluation metrics, and top-performing methods and results of the two challenge tasks for cell detection and tissue segmentation. In particular, the challenges received 234 effective submissions from 32 participating teams, where top-performing teams developed advanced approaches and tools for the CAD of digestive pathology. To the best of our knowledge, these are the first publicly available datasets with corresponding challenges for digestive-system pathological detection and segmentation. The related datasets and results provide new opportunities for research on and application of digestive pathology.


Subject(s)
Benchmarking; Diagnosis, Computer-Assisted; Colonoscopy; Humans; Image Processing, Computer-Assisted/methods
19.
Med Image Anal ; 80: 102517, 2022 08.
Article in English | MEDLINE | ID: mdl-35732106

ABSTRACT

Although Convolutional Neural Networks (CNNs) have achieved promising performance in many medical image segmentation tasks, they rely on a large set of labeled images for training, which is expensive and time-consuming to acquire. Semi-supervised learning has shown the potential to alleviate this challenge by learning from a large set of unlabeled images and limited labeled samples. In this work, we present a simple yet efficient consistency regularization approach for semi-supervised medical image segmentation, called Uncertainty Rectified Pyramid Consistency (URPC). Inspired by pyramid feature networks, we use a pyramid-prediction network that obtains a set of segmentation predictions at different scales. For semi-supervised learning, URPC learns from unlabeled data by minimizing the discrepancy between each of the pyramid predictions and their average. We further present multi-scale uncertainty rectification to boost the pyramid consistency regularization, where the rectification tempers the consistency loss at outlier pixels whose predictions differ substantially from the average, potentially due to upsampling errors or a lack of labeled data. Experiments on two public datasets and an in-house clinical dataset showed that: 1) URPC achieves large performance gains by utilizing unlabeled data, and 2) compared with five existing semi-supervised methods, URPC achieves better or comparable results with a simpler pipeline. Furthermore, we have built a semi-supervised medical image segmentation codebase to boost research on this topic: https://github.com/HiLab-git/SSL4MIS.
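The pyramid-consistency objective, each scale's prediction pulled toward the average with uncertainty-based down-weighting, can be sketched in a few lines of numpy; the exp(-variance) rectification weight below is an assumed stand-in for the paper's rectification term:

```python
import numpy as np

def urpc_consistency(preds, kappa=1.0):
    """Uncertainty-rectified pyramid consistency (sketch): given predictions
    from several pyramid levels (already upsampled to a common shape),
    penalise each level's squared deviation from the average, down-weighting
    pixels where levels disagree strongly (high variance = high uncertainty)."""
    preds = np.stack(preds)            # (levels, ...)
    mean = preds.mean(axis=0)
    var = preds.var(axis=0)            # per-pixel disagreement across levels
    w = np.exp(-kappa * var)           # rectification: trust consistent pixels
    sq = ((preds - mean) ** 2).mean(axis=0)
    return (w * sq).mean()
```

When all pyramid levels agree the loss is zero; disagreeing pixels contribute, but their contribution is tempered by the rectification weight rather than dominating the gradient.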


Subject(s)
Neural Networks, Computer; Supervised Machine Learning; Humans; Image Processing, Computer-Assisted/methods; Uncertainty
20.
Sci Rep ; 12(1): 183, 2022 01 07.
Article in English | MEDLINE | ID: mdl-34997025

ABSTRACT

Signet ring cell carcinoma (SRCC) is a malignant tumor of the digestive system. This tumor has long been considered poorly differentiated and highly invasive because it has a higher rate of metastasis than well-differentiated adenocarcinoma. However, some studies in recent years have shown that the prognosis of some SRCC is more favorable than that of other poorly differentiated adenocarcinomas, suggesting that SRCC exhibits different degrees of biological behavior. Therefore, we need to find a histological stratification that can predict the biological behavior of SRCC. Some studies indicate that the morphological status of cells can be linked to their invasiveness potential; however, traditional histopathological examination cannot objectively define and evaluate it. Recent improvements in biomedical image analysis using deep learning (DL) based neural networks could be exploited to identify and analyze SRCC. In this study, we used DL to identify each cancer cell of SRCC in whole slide images (WSIs) and quantify their morphological characteristics and atypia. Our results show that the biological behavior of SRCC can be predicted by quantifying the morphology of cancer cells with DL. This technique could be used to predict biological behavior and may change the stratified treatment of SRCC.


Subject(s)
Carcinoma, Signet Ring Cell/pathology; Cell Shape; Colorectal Neoplasms/pathology; Deep Learning; Diagnosis, Computer-Assisted; Image Interpretation, Computer-Assisted; Microscopy; Stomach Neoplasms/pathology; Biopsy; Humans; Predictive Value of Tests; Prognosis; Reproducibility of Results