Results 1 - 20 of 44
1.
Gastrointest Endosc ; 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38942330

ABSTRACT

BACKGROUND AND AIMS: Computer-aided diagnosis (CADx) for optical diagnosis of colorectal polyps has been thoroughly investigated. However, studies on human-artificial intelligence (AI) interaction are lacking. The aim was to investigate endoscopists' trust in CADx by evaluating whether communicating a calibrated algorithm confidence improved trust. METHODS: Endoscopists optically diagnosed 60 colorectal polyps. Initially, endoscopists diagnosed the polyps without CADx assistance (initial diagnosis). Immediately afterwards, the same polyp was shown again with the CADx prediction: either a prediction alone (benign or pre-malignant) or a prediction accompanied by a calibrated confidence score (0-100). A confidence score of 0 indicated a benign prediction; 100, a (pre-)malignant prediction. For half of the polyps CADx was mandatory; for the other half it was optional. After reviewing the CADx prediction, endoscopists made a final diagnosis. Histopathology was used as the gold standard. Endoscopists' trust in CADx was measured as CADx prediction utilization: the willingness to follow CADx predictions when the endoscopists initially disagreed with them. RESULTS: Twenty-three endoscopists participated. Presenting CADx predictions increased the endoscopists' diagnostic accuracy (69.3% initial vs 76.6% final diagnosis, p<0.001). The CADx prediction was utilized in 36.5% (n=183/501) of disagreements. Adding a confidence score lowered CADx prediction utilization, except when the confidence score surpassed 60. Mandatory CADx decreased prediction utilization compared with optional CADx. Appropriate trust, defined as utilizing correct or disregarding incorrect CADx predictions, occurred in 48.7% (n=244/501) of disagreements. CONCLUSIONS: Appropriate trust was common, and CADx prediction utilization was highest for optional CADx without confidence scores. These results underscore the importance of a better understanding of human-AI interaction.
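As an illustration of how these trust metrics can be derived from per-polyp records, below is a minimal Python sketch. The record layout is a hypothetical stand-in for the study's actual data schema.

```python
# Sketch: deriving "CADx prediction utilization" and "appropriate trust" from
# per-polyp records. Tuple layout is hypothetical:
# (initial_dx, cadx_dx, final_dx, histopathology), 0 = benign, 1 = (pre-)malignant.
records = [
    (0, 1, 1, 1),  # disagreed with CADx, then followed it (CADx was correct)
    (1, 0, 1, 1),  # disagreed and stuck with own (correct) diagnosis
    (0, 1, 0, 0),  # disagreed and rightly ignored an incorrect CADx prediction
    (1, 1, 1, 1),  # initial agreement: excluded from utilization
]

disagreements = [r for r in records if r[0] != r[1]]

# Utilization: fraction of disagreements where the final diagnosis followed CADx.
utilization = sum(r[2] == r[1] for r in disagreements) / len(disagreements)

# Appropriate trust: following CADx exactly when it matches histopathology.
appropriate = sum((r[2] == r[1]) == (r[1] == r[3]) for r in disagreements) / len(disagreements)

print(f"CADx prediction utilization: {utilization:.1%}")  # 1/3 here
print(f"appropriate trust: {appropriate:.1%}")            # 3/3 here
```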

2.
J Endourol ; 38(7): 690-696, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38613819

ABSTRACT

Objective: To construct a convolutional neural network (CNN) model that can recognize and delineate anatomic structures on intraoperative video frames of robot-assisted radical prostatectomy (RARP) and to use these annotations to predict the surgical urethral length (SUL). Background: Urethral dissection during RARP impacts patient urinary incontinence (UI) outcomes and requires extensive training. Large differences exist between the incontinence outcomes of different urologists and hospitals. Moreover, surgeon experience and education are critical to optimal outcomes. Therefore, new approaches are warranted. SUL is associated with UI. Artificial intelligence (AI) surgical image segmentation using a CNN could automate SUL estimation and contribute toward future AI-assisted RARP and surgeon guidance. Methods: Eighty-eight intraoperative RARP videos recorded between June 2009 and September 2014 were collected from a single center. Two hundred sixty-four frames were annotated for prostate, urethra, ligated plexus, and catheter. Thirty annotated images from different RARP videos were used as a test data set. The Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (Hd95) were used to determine model performance. SUL was calculated using the catheter as a reference. Results: The DSCs of the best-performing model were 0.735 and 0.755 for the catheter and urethra classes, respectively, with Hd95 values of 29.27 and 72.62, respectively. The model performed moderately on the ligated plexus and prostate. The predicted SUL showed a mean difference of 0.64 to 1.86 mm vs human annotators, but with substantial deviation (standard deviation = 3.28-3.56). Conclusion: This study shows that an AI image segmentation model can predict vital structures during RARP urethral dissection with moderate to fair accuracy. SUL estimation derived from it showed large deviations and outliers compared with human annotators, but with a small mean difference (<2 mm). This is a promising development for further research on AI-assisted RARP.
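For readers unfamiliar with the two segmentation metrics reported here, the following is a minimal NumPy/SciPy sketch of the DSC and a mask-based Hd95; the toy masks stand in for the CNN output and human annotations.

```python
# Sketch: Dice similarity coefficient (DSC) and a mask-based 95th-percentile
# Hausdorff distance (Hd95) between two binary masks.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2 |A ∩ B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def hausdorff95(pred: np.ndarray, gt: np.ndarray) -> float:
    """95th percentile of distances from each foreground pixel of one mask
    to the nearest foreground pixel of the other (symmetric)."""
    d_to_pred = distance_transform_edt(~pred)  # distance to nearest pred pixel
    d_to_gt = distance_transform_edt(~gt)      # distance to nearest gt pixel
    dists = np.concatenate([d_to_pred[gt], d_to_gt[pred]])
    return float(np.percentile(dists, 95))

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
gt = np.zeros((64, 64), bool); gt[22:42, 22:42] = True
print(f"DSC = {dice(pred, gt):.3f}, Hd95 = {hausdorff95(pred, gt):.1f} px")
```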


Subject(s)
Artificial Intelligence; Prostatectomy; Robotic Surgical Procedures; Urethra; Humans; Prostatectomy/methods; Male; Urethra/surgery; Urethra/diagnostic imaging; Robotic Surgical Procedures/methods; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Prostate/surgery; Prostate/diagnostic imaging
3.
Gastrointest Endosc ; 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38604297

ABSTRACT

BACKGROUND AND AIMS: This pilot study evaluated the performance of a recently developed computer-aided detection (CADe) system for Barrett's neoplasia during live endoscopic procedures. METHODS: Fifteen patients with a visible lesion and 15 without were included. A CADe-assisted workflow was used that included a slow pullback video recording of the entire Barrett's segment with live CADe assistance, followed by CADe-assisted level-based video recordings every 2 cm of the Barrett's segment. Outcomes were the per-patient and per-level diagnostic accuracy of the CADe-assisted workflow, with per-patient in vivo CADe sensitivity as the primary outcome. RESULTS: In the per-patient analyses, the CADe system detected all visible lesions (sensitivity 100%). Per-patient CADe specificity was 53%. Per-level sensitivity and specificity of the CADe-assisted workflow were 100% and 73%, respectively. CONCLUSIONS: In this pilot study, detection by the CADe system of all potentially neoplastic lesions in Barrett's esophagus was comparable to that of an expert endoscopist. Continued refinement of the system may improve specificity. External validation in larger multicenter studies is planned. (Clinical trial registration number: NCT05628441.)

4.
Med Image Anal ; 94: 103157, 2024 May.
Article in English | MEDLINE | ID: mdl-38574544

ABSTRACT

Computer-aided detection and diagnosis systems (CADe/CADx) in endoscopy are commonly trained using high-quality imagery, which is not representative of the heterogeneous input typically encountered in clinical practice. In endoscopy, image quality heavily depends on both the skills and experience of the endoscopist and the specifications of the system used for screening. Factors such as poor illumination, motion blur, and specific post-processing settings can significantly alter the quality and general appearance of these images. This so-called domain gap between the data used for developing a system and the data it encounters after deployment, and its impact on the performance of the deep neural networks (DNNs) supporting endoscopic CAD systems, remain largely unexplored. As many such systems, e.g. for polyp detection, are already being rolled out in clinical practice, this poses severe patient risks, particularly in community hospitals, where both the imaging equipment and experience are subject to considerable variation. Therefore, this study aims to evaluate the impact of this domain gap on the clinical performance of CADe/CADx for various endoscopic applications. For this, we leverage two publicly available data sets (KVASIR-SEG and GIANA) and two in-house data sets. We investigate the performance of commonly used DNN architectures under synthetic, clinically calibrated image degradations and on a prospectively collected dataset of 342 endoscopic images of lower subjective quality. Additionally, we assess the influence of DNN architecture and complexity, data augmentation, and pretraining techniques on robustness. The results reveal a considerable decline in performance of 11.6% (±1.5), compared to the reference, within the clinically calibrated boundaries of image degradations. Nevertheless, employing more advanced DNN architectures and self-supervised in-domain pre-training effectively mitigates this drop to 7.7% (±2.03). Additionally, these enhancements yield the highest performance on the manually collected test set containing images of lower subjective quality. By comprehensively assessing the robustness of popular DNN architectures and training strategies across multiple datasets, this study provides valuable insights into their performance and limitations for endoscopic applications. The findings highlight the importance of including robustness evaluation when developing DNNs for endoscopy applications and propose strategies to mitigate performance loss.
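The kind of synthetic degradation pipeline described here can be sketched in a few lines; the blur, gain, and noise parameters below are illustrative placeholders, not the clinically calibrated values from the study.

```python
# Sketch: clinically motivated synthetic degradations (blur, under-illumination,
# sensor noise) for probing the domain gap. Parameter values are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def degrade(img: np.ndarray, blur_sigma=1.5, noise_std=0.02, gain=0.8) -> np.ndarray:
    """Degrade a float RGB image in [0, 1]."""
    out = gaussian_filter(img, sigma=(blur_sigma, blur_sigma, 0))  # spatial blur only
    out = out * gain                                   # dimmer illumination
    out = out + rng.normal(0.0, noise_std, out.shape)  # additive sensor noise
    return np.clip(out, 0.0, 1.0)

clean = rng.random((256, 256, 3))  # stand-in for an endoscopic frame
degraded = degrade(clean)
# Robustness evaluation: run the trained DNN on `degraded` vs `clean`
# and report the performance drop, as done in the study.
```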


Subject(s)
Diagnosis, Computer-Assisted; Neural Networks, Computer; Humans; Diagnosis, Computer-Assisted/methods; Endoscopy, Gastrointestinal; Image Processing, Computer-Assisted/methods
5.
6.
IEEE Trans Image Process ; 33: 2462-2476, 2024.
Article in English | MEDLINE | ID: mdl-38517715

ABSTRACT

Accurate 6-DoF pose estimation of surgical instruments during minimally invasive surgery can substantially improve treatment strategies and the eventual surgical outcome. Existing deep learning methods have achieved accurate results, but they require custom approaches for each object, laborious setup, and training environments that often stretch to extensive simulations, whilst lacking real-time computation. We propose a general-purpose data-acquisition approach for 6-DoF pose estimation tasks in X-ray systems, a novel general-purpose YOLOv5-6D pose architecture for accurate and fast object pose estimation, and a complete method for surgical screw pose estimation from a monocular cone-beam X-ray image that accounts for the acquisition geometry. The proposed YOLOv5-6D pose model achieves competitive results on public benchmarks whilst being considerably faster, at 42 FPS on GPU. In addition, the method generalizes across varying X-ray acquisition geometries and semantic image complexity, enabling accurate pose estimation over different domains. Finally, the proposed approach is tested for bone-screw pose estimation for computer-aided guidance during spine surgeries. The model achieves 92.41% accuracy by the 0.1·d ADD-S metric, demonstrating a promising approach for enhancing surgical precision and patient outcomes. The code for YOLOv5-6D is publicly available at https://github.com/cviviers/YOLOv5-6D-Pose.
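The 0.1·d ADD-S criterion quoted above can be written compactly; the sketch below uses a toy point cloud and approximates the object diameter by the bounding-box diagonal, so it illustrates the metric rather than reproducing the paper's evaluation code.

```python
# Sketch: ADD-S pose-accuracy metric. A pose counts as correct when the mean
# closest-point distance between the model under the predicted and the
# ground-truth pose is below 0.1 x object diameter.
import numpy as np
from scipy.spatial import cKDTree

def add_s(model_pts, R_gt, t_gt, R_pred, t_pred) -> float:
    gt = model_pts @ R_gt.T + t_gt
    pred = model_pts @ R_pred.T + t_pred
    # For symmetric objects (e.g., screws), match each GT point to its
    # nearest predicted point instead of the same index.
    dists, _ = cKDTree(pred).query(gt)
    return float(dists.mean())

pts = np.random.default_rng(1).normal(size=(500, 3))       # toy screw model
diameter = np.linalg.norm(pts.max(0) - pts.min(0))         # bbox-diagonal proxy
R, t = np.eye(3), np.zeros(3)
R_noisy, t_noisy = np.eye(3), np.array([0.05, 0.0, 0.0])   # slightly shifted pose
print("pose accepted under 0.1*d ADD-S:", add_s(pts, R, t, R_noisy, t_noisy) < 0.1 * diameter)
```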

7.
Diagnostics (Basel) ; 13(20)2023 Oct 13.
Article in English | MEDLINE | ID: mdl-37892019

ABSTRACT

The preoperative prediction of the resectability of pancreatic ductal adenocarcinoma (PDAC) is challenging. This retrospective single-center study examined tumor and vessel radiomics to predict the resectability of PDAC in chemo-naïve patients. The tumor and adjacent arteries and veins were segmented in the portal-venous phase of contrast-enhanced CT scans, and radiomic features were extracted. Features were selected via stability and collinearity testing, followed by application of the least absolute shrinkage and selection operator (LASSO). Three models, using tumor features, vessel features, and a combination of both, were trained on the training set (N = 86) to predict resectability. The results were validated with the test set (N = 15) and compared to the performance of the multidisciplinary team (MDT). The vessel-features-only model performed best, with an AUC of 0.92 and sensitivity and specificity of 97% and 73%, respectively. Test set validation showed a sensitivity and specificity of 100% and 88%, respectively. The combined model was as good as the vessel model (AUC = 0.91), whereas the tumor model showed poor performance (AUC = 0.76). The MDT's prediction reached a sensitivity and specificity of 97% and 84% for the training set and 88% and 100% for the test set, respectively. Our clinician-independent vessel-based radiomics model can aid in predicting resectability and shows performance comparable to that of the MDT. With these encouraging results, improved, automated, and generalizable models can be developed that reduce workload and can be applied in non-expert hospitals.
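A hedged sketch of the modelling step, LASSO-style feature selection inside a logistic model via scikit-learn, is shown below; the feature matrix, labels, and regularization strength are placeholders rather than the study's data or tuning.

```python
# Sketch: L1-penalized (LASSO-style) logistic model for resectability from
# radiomic features. X and y are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(101, 50))    # 101 patients x 50 radiomic features
y = rng.integers(0, 2, size=101)  # placeholder resectability labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=15, random_state=0, stratify=y)

# The L1 penalty drives uninformative feature weights to exactly zero.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
model.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```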

8.
J Clin Med ; 12(13)2023 Jun 22.
Article in English | MEDLINE | ID: mdl-37445243

ABSTRACT

Radiological imaging plays a crucial role in the detection and treatment of pancreatic ductal adenocarcinoma (PDAC). However, there are several challenges associated with the use of these techniques in daily clinical practice. Determination of the presence or absence of cancer using radiological imaging is difficult and requires specific expertise, especially after neoadjuvant therapy. Early detection and characterization of tumors would potentially increase the number of patients who are eligible for curative treatment. Over the last decades, artificial intelligence (AI)-based computer-aided detection (CAD) has rapidly evolved as a means for improving the radiological detection of cancer and the assessment of the extent of disease. Although the results of AI applications seem promising, widespread adoption in clinical practice has not taken place. This narrative review provides an overview of current radiological CAD systems in pancreatic cancer, highlights challenges that are pertinent to clinical practice, and discusses potential solutions for these challenges.

10.
Endosc Int Open ; 11(5): E513-E518, 2023 May.
Article in English | MEDLINE | ID: mdl-37206697

ABSTRACT

Computer-aided diagnosis systems (CADx) can improve colorectal polyp (CRP) optical diagnosis. For integration into clinical practice, a better understanding of artificial intelligence (AI) by endoscopists is needed. We aimed to develop an explainable AI CADx capable of automatically generating textual descriptions of CRPs. For training and testing of this CADx, textual descriptions of CRP size and of features according to the Blue Light Imaging (BLI) Adenoma Serrated International Classification (BASIC) were used, describing CRP surface, pit pattern, and vessels. The CADx was tested using BLI images of 55 CRPs. Reference descriptions agreed upon by at least five out of six expert endoscopists served as the gold standard. CADx performance was analyzed by calculating agreement between the generated and reference descriptions. CADx development for automatic textual description of CRP features succeeded. Gwet's AC1 values comparing the reference and generated descriptions per CRP feature were: size 0.496, surface-mucus 0.930, surface-regularity 0.926, surface-depression 0.940, pits-features 0.921, pits-type 0.957, pits-distribution 0.167, and vessels 0.778. CADx performance differed per CRP feature and was particularly high for surface descriptors, while descriptions of size and pits-distribution need improvement. Explainable AI can help endoscopists comprehend the reasoning behind CADx diagnoses, and may therefore facilitate integration into clinical practice and increase trust in AI.
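For the two-rater, binary-category case, Gwet's AC1 reduces to a short formula; the sketch below uses made-up CADx and reference ratings purely to show the computation.

```python
# Sketch: Gwet's AC1 chance-corrected agreement, two raters, binary categories.
def gwet_ac1(rater_a, rater_b) -> float:
    n = len(rater_a)
    p_a = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    # Chance agreement from the mean marginal probability of category 1.
    pi = (sum(rater_a) + sum(rater_b)) / (2 * n)
    p_e = 2 * pi * (1 - pi)
    return (p_a - p_e) / (1 - p_e)

cadx      = [1, 1, 0, 1, 0, 1, 1, 0]  # e.g., "surface depression present"
reference = [1, 1, 0, 1, 1, 1, 1, 0]
print(f"Gwet's AC1 = {gwet_ac1(cadx, reference):.3f}")
```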

11.
J Clin Med ; 12(10)2023 May 18.
Article in English | MEDLINE | ID: mdl-37240643

ABSTRACT

To reduce the number of lung nodules missed or misdiagnosed by radiologists on CT scans, many artificial intelligence (AI) algorithms have been developed. Some algorithms are currently being implemented in clinical practice, but the question is whether radiologists and patients really benefit from the use of these novel tools. This study aimed to review how AI assistance for lung nodule assessment on CT scans affects the performance of radiologists. We searched for studies that evaluated radiologists' performance in the detection or malignancy prediction of lung nodules with and without AI assistance. For detection, radiologists with AI assistance achieved higher sensitivity and AUC, while specificity was slightly lower. For malignancy prediction, radiologists with AI assistance generally achieved higher sensitivity, specificity, and AUC. The workflows by which radiologists used AI assistance were often described in only limited detail. As recent studies have shown improved radiologist performance with AI assistance, AI assistance for lung nodule assessment holds great promise. To achieve added value of AI tools for lung nodule assessment in clinical practice, more research is required on the clinical validation of AI tools, their impact on follow-up recommendations, and ways of using them.

12.
Cancers (Basel) ; 15(7)2023 Mar 24.
Article in English | MEDLINE | ID: mdl-37046611

ABSTRACT

Optical biopsy in Barrett's oesophagus (BE) using endocytoscopy (EC) could optimize endoscopic screening. However, the identification of dysplasia is challenging due to the complex interpretation of the highly detailed images. Therefore, we assessed whether using artificial intelligence (AI) as a second assessor could help gastroenterologists interpret endocytoscopic BE images. First, we prospectively videotaped 52 BE patients with EC. We then trained and tested the AI on distinct datasets drawn from 83,277 frames, developed an endocytoscopic BE classification system, and designed online training and testing modules. We invited two successive cohorts for these online modules: 10 endoscopists to validate the classification system and 12 gastroenterologists to evaluate AI as a second assessor, providing six of them with the option to request AI assistance. Training the endoscopists in the classification system established an improved sensitivity of 90.0% (+32.67%, p < 0.001) and an accuracy of 77.67% (+13.0%, p = 0.020) compared with baseline. However, these values deteriorated at follow-up (-16.67%, p < 0.001 and -8.0%, p = 0.009). By contrast, AI-assisted gastroenterologists maintained high sensitivity and accuracy at follow-up, subsequently outperforming the unassisted gastroenterologists (+20.0%, p = 0.025 and +12.22%, p = 0.05). Thus, the best diagnostic scores for the identification of dysplasia emerged through human-machine collaboration between trained gastroenterologists and AI as the second assessor. Therefore, AI could support the clinical implementation of optical biopsies through EC.

13.
United European Gastroenterol J ; 11(4): 324-336, 2023 05.
Article in English | MEDLINE | ID: mdl-37095718

ABSTRACT

INTRODUCTION: Endoscopic detection of early neoplasia in Barrett's esophagus is difficult. Computer-aided detection (CADe) systems may assist in neoplasia detection. The aim of this study was to report the first steps in the development of a CADe system for Barrett's neoplasia and to evaluate its performance when compared with endoscopists. METHODS: This CADe system was developed by a consortium consisting of the Amsterdam University Medical Center, Eindhoven University of Technology, and 15 international hospitals. After pretraining, the system was trained and validated using 1,713 neoplastic (564 patients) and 2,707 non-dysplastic Barrett's esophagus (NDBE; 665 patients) images. Neoplastic lesions were delineated by 14 experts. The performance of the CADe system was tested on three independent test sets. Test set 1 (50 neoplastic and 150 NDBE images) contained subtle neoplastic lesions representing challenging cases and was benchmarked by 52 general endoscopists. Test set 2 (50 neoplastic and 50 NDBE images) contained a heterogeneous case-mix of neoplastic lesions, representing the distribution in clinical practice. Test set 3 (50 neoplastic and 150 NDBE images) contained prospectively collected imagery. The main outcome was correct classification of the images in terms of sensitivity. RESULTS: The sensitivity of the CADe system on test set 1 was 84%. For general endoscopists, sensitivity was 63%, corresponding to a neoplasia miss-rate of one-third of neoplastic lesions and a potential relative increase in neoplasia detection of 33% for CADe-assisted detection. The sensitivity of the CADe system on test sets 2 and 3 was 100% and 88%, respectively. The specificity of the CADe system varied for the three test sets between 64% and 66%. CONCLUSION: This study describes the first steps towards the establishment of an unprecedented data infrastructure for using machine learning to improve the endoscopic detection of Barrett's neoplasia. The CADe system detected neoplasia reliably and outperformed a large group of endoscopists in terms of sensitivity.
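The miss-rate and relative-gain figures in the results follow directly from the reported sensitivities, as the short sketch below shows.

```python
# Sketch: deriving miss-rate and potential relative detection gain from the
# sensitivities reported for test set 1.
endoscopist_sensitivity = 0.63
cade_sensitivity = 0.84

miss_rate = 1 - endoscopist_sensitivity  # ~ one-third of lesions missed
relative_gain = (cade_sensitivity - endoscopist_sensitivity) / endoscopist_sensitivity

print(f"miss rate: {miss_rate:.0%}")                                      # 37%
print(f"potential relative increase in detection: {relative_gain:.0%}")   # 33%
```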


Subject(s)
Barrett Esophagus; Deep Learning; Esophageal Neoplasms; Humans; Barrett Esophagus/diagnosis; Barrett Esophagus/pathology; Esophageal Neoplasms/diagnosis; Esophageal Neoplasms/pathology; Esophagoscopy/methods; Retrospective Studies; Sensitivity and Specificity
14.
Insights Imaging ; 14(1): 34, 2023 Feb 15.
Article in English | MEDLINE | ID: mdl-36790570

ABSTRACT

OBJECTIVES: Different noninvasive imaging methods to predict the chance of malignancy of ovarian tumors are available. However, their predictive value is limited by the subjectivity of the reviewer. Therefore, more objective prediction models are needed. Computer-aided diagnostics (CAD) could provide such a model, since it avoids the reviewer bias inherent in currently used approaches. In this study, we evaluated the available data on CAD for predicting the chance of malignancy of ovarian tumors. METHODS: We searched for all published studies investigating the diagnostic accuracy of CAD based on ultrasound, CT, and MRI in pre-surgical patients with an ovarian tumor, compared against reference standards. RESULTS: In the thirty-one included studies, features extracted from three different imaging techniques were used in different mathematical models. All studies assessed machine-learning-based CAD on ultrasound, CT, or MRI images. Per imaging method (ultrasound, CT, and MRI, respectively), sensitivities ranged from 40.3-100%, 84.6-100%, and 66.7-100%, and specificities from 76.3-100%, 69-100%, and 77.8-100%. Results could not be pooled due to broad heterogeneity. Although the majority of studies report high performance, they are at considerable risk of overfitting due to the absence of an independent test set. CONCLUSION: Based on this literature review, CAD for ultrasound, CT, and MRI seems a promising aid for physicians assessing ovarian tumors, given its objective and potentially cost-effective character. However, performance should be evaluated per imaging technique. Prospective and larger datasets with external validation are desired to make the results generalizable.

16.
Sci Rep ; 12(1): 16779, 2022 10 06.
Article in English | MEDLINE | ID: mdl-36202957

ABSTRACT

Artificial intelligence (AI) is entering daily life and has the potential to play a significant role in healthcare. The aim was to investigate the perspectives (knowledge, experience, and opinion) on AI in healthcare among patients with gastrointestinal (GI) disorders, gastroenterologists, and GI-fellows. In this prospective questionnaire study, 377 GI-patients, 35 gastroenterologists, and 45 GI-fellows participated. Of the GI-patients, 62.5% reported being familiar with AI, and 25.0% of GI-physicians had work-related experience with AI. GI-patients preferred their physicians to use AI (mean 3.9 on a 5-point Likert scale) and GI-physicians were willing to use AI (mean 4.4). More GI-physicians believed in an increase in quality of care (81.3%) than GI-patients (64.9%, χ2(2) = 8.2, p = 0.017). GI-fellows expected AI implementation within 6.0 years, gastroenterologists within 4.2 years (t(76) = -2.6, p = 0.011), and GI-patients within 6.1 years (t(193) = -2.0, p = 0.047). GI-patients and GI-physicians agreed on the most important advantages of AI in healthcare: improved quality of care, time savings, and faster diagnostics with shorter waiting times. The most important disadvantage for GI-patients was the potential loss of personal contact; for GI-physicians it was insufficiently developed IT infrastructure. GI-patients and GI-physicians hold positive perspectives towards AI in healthcare, although patients were significantly more reserved than GI-fellows, and GI-fellows more reserved than gastroenterologists.
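The tests reported here (chi-squared for group proportions, t-tests for implementation horizons) can be reproduced in outline with SciPy; all counts and samples below are simulated stand-ins, not the study data.

```python
# Sketch: the statistical tests used in this questionnaire study, on toy data.
import numpy as np
from scipy import stats

# Belief in an increase in quality of care: physicians vs. patients
# (rows: group; columns: agree / neutral / disagree; counts illustrative).
table = np.array([[65, 10, 5],
                  [245, 80, 52]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.3f}")

# Expected years until AI implementation: gastroenterologists vs. GI-fellows.
rng = np.random.default_rng(0)
gastro = rng.normal(4.2, 1.5, 35)   # simulated responses, n = 35
fellows = rng.normal(6.0, 1.5, 45)  # simulated responses, n = 45
t, p = stats.ttest_ind(gastro, fellows)
print(f"t = {t:.1f}, p = {p:.3f}")
```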


Subject(s)
Gastroenterologists; Gastrointestinal Diseases; Physicians; Artificial Intelligence; Delivery of Health Care; Humans; Prospective Studies
19.
IEEE Trans Med Imaging ; 41(8): 2048-2066, 2022 08.
Article in English | MEDLINE | ID: mdl-35201984

ABSTRACT

Encoding-decoding (ED) CNNs have demonstrated state-of-the-art performance for noise reduction over the past years. This has triggered the pursuit of a better understanding of the inner workings of such architectures, which has led to the theory of deep convolutional framelets (TDCF), revealing important links between signal processing and CNNs. Specifically, the TDCF demonstrates that ReLU CNNs induce low-rankness, since these models often do not satisfy the redundancy necessary to achieve perfect reconstruction (PR). In contrast, this paper explores CNNs that do meet the PR conditions. We demonstrate that in this type of CNN, soft shrinkage and PR can be assumed. Furthermore, based on our explorations we propose the learned wavelet-frame shrinkage network, or LWFSN, and its residual counterpart, the rLWFSN. The ED path of the (r)LWFSN complies with the PR conditions, while the shrinkage stage is based on the linear expansion of thresholds proposed by Blu and Luisier. In addition, the LWFSN has only a fraction (<1%) of the training parameters of conventional CNNs, very short inference times, and a low memory footprint, while still achieving performance close to state-of-the-art alternatives, such as the tight frame (TF) U-Net and FBPConvNet, in low-dose CT denoising.
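The core shrinkage operation of such networks is ordinary soft thresholding of wavelet coefficients; the sketch below applies it with fixed, hand-picked thresholds to a 1-D Haar decomposition, whereas the LWFSN learns its thresholds.

```python
# Sketch: wavelet-domain soft shrinkage for denoising, with fixed thresholds
# (the LWFSN learns these instead). Uses PyWavelets for the Haar transform.
import numpy as np
import pywt

def soft_shrink(x: np.ndarray, threshold: float) -> np.ndarray:
    """Soft thresholding: shrink coefficients towards zero by `threshold`."""
    return np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 8 * np.pi, 1024))
noisy = clean + rng.normal(0, 0.3, clean.shape)

coeffs = pywt.wavedec(noisy, "haar", level=4)
# Keep the approximation band, shrink the detail bands.
denoised_coeffs = [coeffs[0]] + [soft_shrink(c, 0.3) for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "haar")

print("residual noise std before/after:",
      np.std(noisy - clean).round(3), np.std(denoised - clean).round(3))
```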


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Signal-to-Noise Ratio; Tomography, X-Ray Computed
20.
Endoscopy ; 54(4): 403-411, 2022 04.
Article in English | MEDLINE | ID: mdl-33951743

ABSTRACT

BACKGROUND: Estimates of miss rates for upper gastrointestinal neoplasia (UGIN) rely on registry data or old studies. Quality assurance programs for upper GI endoscopy are not fully established owing to the lack of infrastructure to measure endoscopists' competence. We aimed to assess endoscopists' accuracy for the recognition of UGIN by exploiting the framework of artificial intelligence (AI) validation studies. METHODS: Literature searches of databases (PubMed/MEDLINE, EMBASE, Scopus) up to August 2020 were performed to identify articles evaluating the accuracy of individual endoscopists for the recognition of UGIN within studies validating AI against a histologically verified, expert-annotated ground truth. The main outcomes were endoscopists' pooled sensitivity, specificity, positive and negative predictive value (PPV/NPV), and area under the curve (AUC) for all UGIN, for esophageal squamous cell neoplasia (ESCN), Barrett esophagus-related neoplasia (BERN), and gastric adenocarcinoma (GAC). RESULTS: Seven studies (2 ESCN, 3 BERN, 1 GAC, 1 UGIN overall) with 122 endoscopists were included. The pooled endoscopists' sensitivity and specificity for UGIN were 82% (95% confidence interval [CI] 80%-84%) and 79% (95%CI 76%-81%), respectively. Endoscopists' accuracy was higher for GAC detection (AUC 0.95 [95%CI 0.93-0.98]) than for ESCN (AUC 0.90 [95%CI 0.88-0.92]) and BERN detection (AUC 0.86 [95%CI 0.84-0.88]). Sensitivity was higher for Eastern vs. Western endoscopists (87% [95%CI 84%-89%] vs. 75% [95%CI 72%-78%]), and for expert vs. non-expert endoscopists (85% [95%CI 83%-87%] vs. 71% [95%CI 67%-75%]). CONCLUSION: We show suboptimal accuracy of endoscopists for the recognition of UGIN even within a framework that included a higher prevalence and disease awareness. Future AI validation studies represent a framework to assess endoscopist competence.
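A simple way to obtain pooled estimates of this kind is fixed-effect inverse-variance pooling of logit-transformed sensitivities, sketched below with made-up study counts; the meta-analysis itself may well have used a random-effects model.

```python
# Sketch: fixed-effect inverse-variance pooling of sensitivities on the logit
# scale. Study counts (true positives / total neoplastic cases) are invented.
import numpy as np

studies = [(45, 52), (80, 98), (120, 150), (60, 75)]  # (TP, n) per study

logits, weights = [], []
for tp, n in studies:
    p = tp / n
    logits.append(np.log(p / (1 - p)))
    weights.append(n * p * (1 - p))  # inverse variance of logit(p)
logits, weights = np.array(logits), np.array(weights)

pooled_logit = (weights * logits).sum() / weights.sum()
se = 1 / np.sqrt(weights.sum())
lo, hi = pooled_logit - 1.96 * se, pooled_logit + 1.96 * se

def expit(x):
    return 1 / (1 + np.exp(-x))

print(f"pooled sensitivity: {expit(pooled_logit):.1%} "
      f"(95% CI {expit(lo):.1%}-{expit(hi):.1%})")
```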


Subject(s)
Barrett Esophagus; Gastrointestinal Neoplasms; Artificial Intelligence; Barrett Esophagus/pathology; Gastrointestinal Neoplasms/diagnosis; Humans; Sensitivity and Specificity