Results 1 - 8 of 8
1.
Comput Med Imaging Graph ; 115: 102390, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38714018

ABSTRACT

Colonoscopy is the procedure of choice to diagnose, screen for, and treat colorectal cancer, from early detection of small precancerous lesions (polyps) to confirmation of malignant masses. However, the high variability of the organ's appearance and the complex shape of both the colon wall and the structures of interest make this exploration difficult. In clinical practice, learned visuospatial and perceptual abilities mitigate these technical limitations through proper estimation of intestinal depth. This work introduces a novel methodology to estimate colon depth maps from single frames of monocular colonoscopy videos. The generated depth map is inferred from the shading variation of the colon wall with respect to the light source, as learned from a realistic synthetic database. Briefly, a classic convolutional neural network architecture is trained from scratch to estimate the depth map; sharp depth estimates at haustral folds and polyps are improved by a custom loss function that minimizes the estimation error at edges and curvatures. The network was trained on a custom synthetic colonoscopy database constructed and released herein, composed of 248,400 frames (47 videos) with pixel-level depth annotations. This collection comprises five subsets of videos with progressively higher levels of visual complexity. Evaluation of the depth estimation on the synthetic database reached a threshold accuracy of 95.65% and a mean RMSE of 0.451 cm, while a qualitative assessment on a real database showed consistent depth estimates, as visually evaluated by the expert gastroenterologist co-authoring this paper. Finally, the method achieved competitive performance with respect to another state-of-the-art method on a public synthetic database, and comparable results against five other state-of-the-art methods on a set of images. Additionally, three-dimensional reconstructions demonstrated useful approximations of the gastrointestinal tract geometry. Code for reproducing the reported results and the dataset are available at https://github.com/Cimalab-unal/ColonDepthEstimation.
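
The edge-and-curvature-aware objective described above might be sketched as follows. This is a minimal PyTorch illustration, not the authors' released code (which is linked in the repository above); the finite-difference gradient term and the lambda_grad weight are assumptions.

```python
import torch
import torch.nn.functional as F

def edge_aware_depth_loss(pred, target, lambda_grad=1.0):
    """Pixel-wise L1 depth loss plus a gradient-matching term.

    pred, target: (B, 1, H, W) depth maps in the same units (e.g., cm).
    lambda_grad weights the edge/curvature term (assumed value).
    """
    # Standard per-pixel error on the depth values.
    l1 = F.l1_loss(pred, target)

    # Horizontal and vertical finite differences approximate depth edges,
    # so mismatches at haustral folds and polyp boundaries are penalized.
    dx_pred = pred[:, :, :, 1:] - pred[:, :, :, :-1]
    dx_true = target[:, :, :, 1:] - target[:, :, :, :-1]
    dy_pred = pred[:, :, 1:, :] - pred[:, :, :-1, :]
    dy_true = target[:, :, 1:, :] - target[:, :, :-1, :]

    grad = F.l1_loss(dx_pred, dx_true) + F.l1_loss(dy_pred, dy_true)
    return l1 + lambda_grad * grad
```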


Subject(s)
Colon , Colonoscopy , Databases, Factual , Humans , Colonoscopy/methods , Colon/diagnostic imaging , Neural Networks, Computer , Colonic Polyps/diagnostic imaging , Image Processing, Computer-Assisted/methods
2.
Glob Heart ; 17(1): 84, 2022.
Article in English | MEDLINE | ID: mdl-36578915

ABSTRACT

Background: Acute coronary syndromes (ACS) include ST-segment elevation myocardial infarction (STEMI), non-ST-segment elevation myocardial infarction (NSTEMI), and unstable angina (UA). Acute myocardial infarction (AMI) is the leading cause of mortality in Guatemala, yet there is no established national policy or current standard of care. Objective: To describe the factors that influence ACS outcomes, evaluating the quality of care of the national healthcare system based on the Donabedian health model. Methods: The ACS-Gt study is an observational, multicentre, prospective national registry. A total of 109 adult ACS patients admitted to six hospitals of Guatemala's national healthcare system were included; these hospitals represent six of the country's eight geographic regions. Enrolment took place from February 2020 to January 2021. Data were assessed using the chi-square test, Student's t-test, or Mann-Whitney U test, as appropriate. A p-value < 0.05 was considered statistically significant. Results: One hundred and nine patients met the inclusion criteria (80.7% STEMI, 19.3% NSTEMI/UA). The population was predominantly male (68%), hypertensive (49.5%), and diabetic (45.9%). Fifty-nine percent of STEMI patients received fibrinolysis (alteplase in 65.4%) and none underwent primary percutaneous coronary intervention (pPCI). The reperfusion success rate was 65%, and none of these patients underwent PCI afterwards within the recommended window (2-24 hours). Prognostic delays in STEMI were significantly prolonged compared with European guideline targets. Optimal in-hospital medical therapy was achieved in 8.3% of patients, and in-hospital mortality was 20.4%. Conclusions: There is poor access to ACS pharmacological treatment, a low reperfusion rate, and no primary, urgent, or rescue PCI available. No patient met the recommended time window between successful fibrinolysis and PCI. Resources are limited and inefficiently used.
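
As a rough illustration of the statistical workflow reported here (chi-square for categorical variables, Student's t-test or Mann-Whitney U for continuous ones, significance at p < 0.05), a SciPy sketch might look like the following. All variable names and values are hypothetical placeholders, not the registry's data.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 contingency table (diabetes yes/no by STEMI vs NSTEMI/UA).
diabetic_counts = np.array([[40, 48],
                            [10, 11]])
chi2, p_cat, dof, _ = stats.chi2_contingency(diabetic_counts)

# Hypothetical continuous variable (age) for the two groups.
age_stemi = np.random.default_rng(0).normal(62, 10, 88)
age_nstemi = np.random.default_rng(1).normal(64, 11, 21)

# Student's t-test when normality holds, Mann-Whitney U otherwise.
_, p_norm = stats.shapiro(age_stemi)
if p_norm > 0.05:
    _, p_cont = stats.ttest_ind(age_stemi, age_nstemi)
else:
    _, p_cont = stats.mannwhitneyu(age_stemi, age_nstemi)

print(f"categorical p={p_cat:.3f}, continuous p={p_cont:.3f}")  # significant if < 0.05
```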


Subject(s)
Acute Coronary Syndrome , Myocardial Infarction , Non-ST Elevated Myocardial Infarction , Percutaneous Coronary Intervention , ST Elevation Myocardial Infarction , Adult , Female , Humans , Male , Acute Coronary Syndrome/epidemiology , Acute Coronary Syndrome/therapy , Angina, Unstable/therapy , Angina, Unstable/drug therapy , Delivery of Health Care , Guatemala/epidemiology , Prospective Studies , Registries , ST Elevation Myocardial Infarction/epidemiology , ST Elevation Myocardial Infarction/therapy , Treatment Outcome
3.
Ultrasound Med Biol ; 48(8): 1602-1614, 2022 08.
Article in English | MEDLINE | ID: mdl-35613973

ABSTRACT

Pancreatic cancer (PC) has a reported mortality of 98% and a 5-year survival rate of 6.7%. Experienced gastroenterologists detect 80% of early-stage PC cases by endoscopic ultrasonography (EUS). Here we propose an automatic second-reader strategy to detect PC over an entire EUS procedure, rather than focusing on pre-selected frames as state-of-the-art methods do. The method unmasks tumoral echo patterns in frames with a high probability of tumor. First, Speeded-Up Robust Features (SURF) define a set of interest points with correlated heterogeneities across different filtering scales. Afterward, the intensity gradients around each interest point are summarized by 64 features at particular locations and scales. A frame feature vector is built by concatenating statistics of each feature over the 15 groups of scales. Binary classification is then performed with Support Vector Machine and AdaBoost models. Evaluation used a dataset of 55 participants, 18 in the PC class (16,585 frames) and 37 in the non-PC class (49,664 frames), randomly split 10 times. The proposed method reached an accuracy of 92.1%, a sensitivity of 96.3%, and a specificity of 87.8%. The observed results also remained stable in noisy experiments, while deep learning approaches failed to maintain similar performance.
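
A simplified sketch of the frame-level pipeline (SURF interest points, descriptor statistics, and a classical classifier) is given below. It is not the authors' implementation: SURF requires an opencv-contrib build with the non-free modules enabled, and the descriptor pooling here collapses the paper's 15 scale groups into a single mean/standard-deviation summary.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def frame_feature_vector(gray_frame):
    """Summarize an EUS frame by statistics of its SURF descriptors.

    Simplification: the paper groups descriptors by 15 scale ranges;
    here all 64-D descriptors are pooled together (mean and std -> 128-D).
    Requires opencv-contrib built with non-free modules (SURF is patented).
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    _, desc = surf.detectAndCompute(gray_frame, None)
    if desc is None:                      # no interest points found
        return np.zeros(128, dtype=np.float32)
    return np.concatenate([desc.mean(axis=0), desc.std(axis=0)])

# X: stacked frame vectors, y: 1 = pancreatic-cancer frame, 0 = non-PC frame.
# clf = SVC(kernel="rbf", C=1.0).fit(X, y)
```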


Subject(s)
Endosonography , Pancreatic Neoplasms , Endosonography/methods , Humans , Pancreas/diagnostic imaging , Pancreas/pathology , Pancreatic Neoplasms/diagnostic imaging , Pancreatic Neoplasms/pathology , Support Vector Machine
4.
Biomedica ; 42(1): 170-183, 2022 03 01.
Article in English, Spanish | MEDLINE | ID: mdl-35471179

ABSTRACT

INTRODUCTION: Coronavirus disease 2019 (COVID-19) has become a significant public health problem worldwide. In this context, automatic CT-scan analysis has emerged as a complementary diagnostic tool for COVID-19, allowing for the characterization of radiological findings, patient categorization, and disease follow-up. However, this analysis depends on the radiologist's expertise, which may result in subjective evaluations. OBJECTIVE: To explore deep learning representations, trained on thoracic CT slices, to automatically distinguish COVID-19 disease from control samples. MATERIALS AND METHODS: Two datasets were used: the SARS-CoV-2 CT Scan dataset (Set-1) and the FOSCAL clinic's dataset (Set-2). The deep representations took advantage of supervised learning models previously trained on natural images, which were adjusted following a transfer learning scheme. Classification was carried out (a) via an end-to-end deep learning approach and (b) via random forest and support vector machine classifiers fed with the deep embedding vectors. RESULTS: The end-to-end classification achieved an average accuracy of 92.33% (89.70% precision) for Set-1 and 96.99% (96.62% precision) for Set-2. The deep feature embedding with a support vector machine achieved an average accuracy of 91.40% (95.77% precision) and 96.00% (94.74% precision) for Set-1 and Set-2, respectively. CONCLUSION: Deep representations achieved outstanding performance in the identification of COVID-19 cases on CT scans, demonstrating good characterization of COVID-19 radiological patterns. These representations could potentially support COVID-19 diagnosis in clinical settings.
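
A minimal sketch of path (b), deep feature embedding followed by a support vector machine, could look like the following. The backbone choice (ResNet-50), the torchvision weights API (assumes a recent torchvision), and the preprocessing are assumptions; the abstract does not specify the architecture.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Pretrained backbone adjusted by transfer learning; ResNet-50 is an assumed choice.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()   # expose the 2048-D embedding instead of class scores
backbone.eval()

preprocess = T.Compose([
    T.Lambda(lambda img: img.convert("RGB")),   # CT slices are grayscale
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(pil_slices):
    """Map a list of CT-slice PIL images to embedding vectors."""
    batch = torch.stack([preprocess(img) for img in pil_slices])
    return backbone(batch).numpy()

# Deep-feature path (b): feed embeddings to a classical classifier.
# svm = SVC(kernel="rbf").fit(embed(train_slices), train_labels)
```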


Subject(s)
COVID-19 , Deep Learning , COVID-19 Testing , Humans , Neural Networks, Computer , SARS-CoV-2 , Tomography, X-Ray Computed
5.
Biomédica (Bogotá) ; 42(1): 170-183, Jan.-Mar. 2022. tab, graf
Article in English | LILACS | ID: biblio-1374516

ABSTRACT

Introduction: Coronavirus disease 2019 (COVID-19) has become a significant public health problem worldwide. In this context, automatic CT-scan analysis has emerged as a complementary diagnostic tool for COVID-19, allowing for the characterization of radiological findings, patient categorization, and disease follow-up. However, this analysis depends on the radiologist's expertise, which may result in subjective evaluations. Objective: To explore deep learning representations, trained on thoracic CT slices, to automatically distinguish COVID-19 disease from control samples. Materials and methods: Two datasets were used: the SARS-CoV-2 CT Scan dataset (Set-1) and the FOSCAL clinic's dataset (Set-2). The deep representations took advantage of supervised learning models previously trained on natural images, which were adjusted following a transfer learning scheme. Classification was carried out (a) via an end-to-end deep learning approach and (b) via random forest and support vector machine classifiers fed with the deep embedding vectors. Results: The end-to-end classification achieved an average accuracy of 92.33% (89.70% precision) for Set-1 and 96.99% (96.62% precision) for Set-2. The deep feature embedding with a support vector machine achieved an average accuracy of 91.40% (95.77% precision) and 96.00% (94.74% precision) for Set-1 and Set-2, respectively. Conclusion: Deep representations achieved outstanding performance in the identification of COVID-19 cases on CT scans, demonstrating good characterization of COVID-19 radiological patterns. These representations could potentially support COVID-19 diagnosis in clinical settings.


Subject(s)
Coronavirus Infections/diagnosis , Deep Learning , Tomography, X-Ray Computed
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 5945-5948, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31947202

ABSTRACT

Early screening for colorectal cancer consists of finding and removing small precancerous masses or neoplastic lesions developed from the mucosa, usually smaller than 10 mm. Localizing small neoplastic lesions is very challenging since colon exploration is highly dependent on the expert's training and on colon preparation. Several strategies have attempted to locate neoplasias, but usually these are large lesions that a trained gastroenterologist could hardly miss. This work presents a saliency-based strategy to localize polypoid and non-polypoid neoplastic lesions smaller than 10 mm in colonoscopy videos by combining spatio-temporal descriptors. To do so, a per-frame multi-scale representation is computed, and edge, texture, and motion features are extracted. Each of these features is used to construct a primary saliency map; these maps are then combined to obtain a coarse saliency map. Finally, the neoplasia is localized as the bounding box of the circular region, approximated by the Hough transform, with the largest saliency. The proposed approach was evaluated on 8 short colonoscopy videos, obtaining an average Annotated Area Covered of 0.75 and a precision of 0.82.
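
A rough OpenCV sketch of the saliency-fusion and Hough-based localization idea is shown below. The specific edge, texture, and motion cues (Sobel magnitude, local variance, frame differencing) and the averaging fusion are illustrative stand-ins, not the paper's exact descriptors.

```python
import cv2
import numpy as np

def coarse_saliency(prev_gray, gray):
    """Average edge, texture, and motion cues into a coarse saliency map."""
    edges = cv2.magnitude(cv2.Sobel(gray, cv2.CV_32F, 1, 0),
                          cv2.Sobel(gray, cv2.CV_32F, 0, 1))
    g = gray.astype(np.float32)
    # Texture proxy: local variance from box-filtered moments.
    texture = cv2.boxFilter(g * g, -1, (9, 9)) - cv2.boxFilter(g, -1, (9, 9)) ** 2
    motion = cv2.absdiff(gray, prev_gray).astype(np.float32)
    maps = [cv2.normalize(m, None, 0, 1, cv2.NORM_MINMAX) for m in (edges, texture, motion)]
    return np.mean(maps, axis=0)

def localize_lesion(prev_gray, gray):
    """Return the bounding box (x, y, w, h) of the most salient circular region."""
    sal = coarse_saliency(prev_gray, gray)
    sal8 = (sal * 255).astype(np.uint8)
    circles = cv2.HoughCircles(sal8, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
                               param1=100, param2=30, minRadius=5, maxRadius=60)
    if circles is None:
        return None
    # Pick the circle whose centre has the highest saliency.
    x, y, r = max(circles[0], key=lambda c: sal[int(c[1]), int(c[0])])
    return int(x - r), int(y - r), int(2 * r), int(2 * r)
```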


Subject(s)
Colonoscopy , Colorectal Neoplasms/diagnostic imaging , Humans , Mass Screening
8.
Comput Med Imaging Graph ; 43: 130-6, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25670148

ABSTRACT

Colorectal cancer usually appears in polyps developed from the mucosa. Carcinoma is frequently found in polyps larger than 10 mm, and therefore only these polyps are sent for pathology examination. Consequently, accurate estimation of polyp size determines the surveillance interval after polypectomy. Follow-up consists of periodic colonoscopy whose frequency depends on the estimated polyp size. Typically, this measurement is obtained by examining the lesion with a calibrated endoscopy tool. However, measurement is very challenging because it must be performed during a procedure subject to a complex mix of noise sources, namely anatomical variability, drastic illumination changes, and abrupt camera movements. This work introduces a semi-automatic method that estimates polyp size during a routine endoscopic examination by propagating an initial manual delineation in a single frame to the whole video sequence, using a spatio-temporal characterization of the lesion. The proposed approach achieved a Dice score of 0.7 on real endoscopy video sequences when compared with an expert. In addition, the method obtained a root mean square error (RMSE) of 0.87 mm on videos captured in a cylindrical phantom with spheres of known size simulating polyps. Finally, in real endoscopy sequences, the diameter estimates were compared with measurements obtained by four experts with similar experience, yielding an RMSE of 4.7 mm for a set of polyps measuring 5 to 20 mm. An ANOVA test performed on the five groups of measurements (four experts and the method) showed no significant differences (p < 0.01).
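
The evaluation metrics reported above (Dice score, diameter RMSE, and a one-way ANOVA across the five measurement groups) can be computed generically with NumPy and SciPy, as in the sketch below; the diameters listed are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy import stats

def dice_score(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def rmse(estimates_mm, reference_mm):
    """Root mean square error between estimated and reference diameters (mm)."""
    diff = np.asarray(estimates_mm) - np.asarray(reference_mm)
    return float(np.sqrt(np.mean(diff ** 2)))

# One-way ANOVA across the five measurement groups (four experts + the method);
# the arrays below are hypothetical diameters in mm.
expert_1 = [6.0, 11.5, 18.2]
expert_2 = [5.5, 12.0, 19.0]
expert_3 = [6.2, 10.8, 17.5]
expert_4 = [5.8, 11.0, 18.8]
method   = [6.5, 12.3, 17.9]
f_stat, p_value = stats.f_oneway(expert_1, expert_2, expert_3, expert_4, method)
```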


Subject(s)
Colonic Polyps/pathology , Colonoscopy/methods , Colorectal Neoplasms/pathology , Image Enhancement/methods , Pattern Recognition, Automated/methods , Calibration , Humans , Phantoms, Imaging , Video Recording