Results 1 - 20 of 22
3.
IEEE J Biomed Health Inform ; 28(3): 1161-1172, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37878422

ABSTRACT

We introduce LYSTO, the Lymphocyte Assessment Hackathon, which was held in conjunction with the MICCAI 2019 Conference in Shenzhen (China). The competition required participants to automatically assess the number of lymphocytes, in particular T-cells, in images of colon, breast, and prostate cancer stained with CD3 and CD8 immunohistochemistry. Unlike other challenges in medical image analysis, LYSTO gave participants only a few hours to address the problem. In this paper, we describe the goal and the multi-phase organization of the hackathon, the proposed methods, and the on-site results. Additionally, we present post-competition results showing how the presented methods perform on an independent set of lung cancer slides, which was not part of the initial competition, as well as a comparison of lymphocyte assessment between the presented methods and a panel of pathologists. We show that some participants were able to achieve pathologist-level performance at lymphocyte assessment. After the hackathon, LYSTO remains available as a lightweight plug-and-play benchmark dataset on the grand-challenge website, together with an automatic evaluation platform.


Subject(s)
Benchmarking; Prostatic Neoplasms; Male; Humans; Lymphocytes; Breast; China
4.
J Clin Med ; 12(21)2023 Oct 29.
Article in English | MEDLINE | ID: mdl-37959298

ABSTRACT

Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred method. US is considered cost-effective and easily accessible, but it is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to provide an overview of recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Full-text articles were assigned to the OB/GYN sections and their research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses, and assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research.

5.
IEEE Trans Med Imaging ; 42(12): 3895-3906, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37698963

ABSTRACT

Chemical staining of blood smears is a crucial component of blood analysis. It is an expensive, lengthy and sensitive process, often prone to slight variations in colour and observed structures due to a lack of unified protocols across laboratories. Even though current developments in deep generative modeling offer an opportunity to replace the chemical process with a digital one, there are specific safety-ensuring requirements due to the severe consequences of mistakes in a medical setting. A digital staining system would therefore profit from an additional confidence estimate quantifying the quality of the digitally stained white blood cell. To this aim, during staining generation, we disentangle the latent space of the Generative Adversarial Network, obtaining separate representations of the white blood cell and the staining. We estimate the generated image's confidence in white blood cell structure and staining quality by corrupting these representations with noise and quantifying the information retained between multiple outputs. We show that confidence estimated in this way correlates with image quality measured in terms of LPIPS values calculated for the generated and ground-truth stained images. We validate our method by performing digital staining of images captured with a Differential Interference Contrast microscope on a dataset composed of white blood cells from 24 patients. The high absolute value of the correlation between our confidence score and LPIPS demonstrates the effectiveness of our method, opening the possibility of predicting the quality of generated output and ensuring trustworthiness in a medical safety-critical setup.
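
The confidence-estimation idea above can be sketched in a few lines: corrupt a latent code with noise, decode several times, and measure how consistent the outputs remain. Everything here (the random linear "decoder", the correlation-based agreement score) is a hypothetical toy stand-in, not the paper's disentangled GAN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained decoder: a fixed random linear map
# followed by tanh (the paper uses a disentangled GAN generator instead).
W = rng.normal(size=(16, 64))

def decode(z):
    return np.tanh(z @ W)

def confidence(z, sigma=0.1, n_samples=8):
    """Corrupt the latent code with noise, decode repeatedly, and score
    how much structure the outputs retain (mean pairwise correlation)."""
    outs = [decode(z + rng.normal(scale=sigma, size=z.shape))
            for _ in range(n_samples)]
    corrs = [np.corrcoef(outs[i], outs[j])[0, 1]
             for i in range(n_samples) for j in range(i + 1, n_samples)]
    return float(np.mean(corrs))

z = rng.normal(size=16)
# Output consistency should drop as the latent corruption grows.
print(confidence(z, sigma=0.05) > confidence(z, sigma=1.0))
```

A latent region where small perturbations already destroy the output would receive a low score, flagging the generated stain as untrustworthy.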


Subject(s)
Image Processing, Computer-Assisted; Microscopy; Humans; Image Processing, Computer-Assisted/methods; Staining and Labeling; Leukocytes
6.
Radiol Artif Intell ; 3(3): e190169, 2021 May.
Article in English | MEDLINE | ID: mdl-34136814

ABSTRACT

PURPOSE: To develop an unsupervised deep learning model on MR images of normal brain anatomy to automatically detect deviations indicative of pathologic states on abnormal MR images. MATERIALS AND METHODS: In this retrospective study, spatial autoencoders with skip-connections (which can learn to compress and reconstruct data) were leveraged to learn the normal variability of the brain from MR scans of healthy individuals. A total of 100 normal, in-house MR scans were used for training. Subsequently, as the model was unable to reconstruct anomalies well, this characteristic was exploited for detecting and delineating various diseases by computing the difference between the input data and their reconstruction. The unsupervised model was compared with a supervised U-Net- and threshold-based classifier trained on data from 50 patients with multiple sclerosis (in-house dataset) and 50 patients from The Cancer Imaging Archive. Both the unsupervised and supervised U-Net models were tested on five different datasets containing MR images of microangiopathy, glioblastoma, and multiple sclerosis. Precision-recall statistics and derivations thereof (mean area under the precision-recall curve, Dice score) were used to quantify lesion detection and segmentation performance. RESULTS: The unsupervised approach outperformed the naive thresholding approach in lesion detection (mean F1 scores ranging from 17% to 62% vs 6.4% to 15% across the five different datasets) and performed similarly to the supervised U-Net (20%-64%) across a variety of pathologic conditions. This outperformance was mostly driven by improved precision compared with the thresholding approach (mean precisions, 15%-59% vs 3.4%-10%). The model was also developed to create an anomaly heatmap display. CONCLUSION: The unsupervised deep learning model was able to automatically detect anomalies on brain MR images with high performance. Supplemental material is available for this article. 
Keywords: Brain/Brain Stem Computer Aided Diagnosis (CAD), Convolutional Neural Network (CNN), Experimental Investigations, Head/Neck, MR-Imaging, Quantification, Segmentation, Stacked Auto-Encoders, Technology Assessment, Tissue Characterization © RSNA, 2021.
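
The reconstruction-based detection idea above, learning normal variability and flagging what the model cannot reconstruct, can be sketched with a linear (PCA) autoencoder on synthetic 1-D "scans". The spatial skip-connection architecture and real MR data are well beyond this toy; only the residual-map principle is illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Healthy" training scans: smooth 1-D profiles standing in for MR slices.
train = np.array([np.sin(np.linspace(0, np.pi, 64) + p)
                  for p in rng.normal(scale=0.1, size=100)])

# A linear autoencoder fitted by PCA: compress into a low-dimensional
# "normal" subspace, then reconstruct from it.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
components = Vt[:3]

def reconstruct(x):
    return mean + (x - mean) @ components.T @ components

# A test scan with a simulated lesion (local intensity spike at 30..33).
x = np.sin(np.linspace(0, np.pi, 64))
x[30:34] += 1.5

# Anomaly map: difference between input and its reconstruction.
residual = np.abs(x - reconstruct(x))
print(int(residual.argmax()))   # the peak residual falls inside the lesion
```

Because the model only learned the smooth "healthy" subspace, it cannot reproduce the spike, and the residual map localizes it.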

7.
Med Image Anal ; 69: 101952, 2021 04.
Article in English | MEDLINE | ID: mdl-33454602

ABSTRACT

Deep unsupervised representation learning has recently led to new approaches in the field of Unsupervised Anomaly Detection (UAD) in brain MRI. The main principle behind these works is to learn a model of normal anatomy by learning to compress and recover healthy data. This makes it possible to spot abnormal structures from erroneous recoveries of compressed, potentially anomalous samples. The concept is of great interest to the medical image analysis community as it i) removes the need for vast amounts of manually segmented training data, a necessity for, and pitfall of, current supervised Deep Learning, and ii) theoretically allows the detection of arbitrary, even rare, pathologies which supervised approaches might fail to find. To date, the experimental design of most works hinders a valid comparison, because i) they are evaluated against different datasets and different pathologies, ii) they use different image resolutions and iii) they use different model architectures with varying complexity. The intent of this work is to establish comparability among recent methods by utilizing a single architecture, a single resolution and the same dataset(s). Besides providing a ranking of the methods, we also try to answer questions such as i) how many healthy training subjects are needed to model normality and ii) whether the reviewed approaches are also sensitive to domain shift. Further, we identify open challenges and provide suggestions for future community efforts and research directions.


Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Brain/diagnostic imaging; Humans
8.
IEEE J Biomed Health Inform ; 25(2): 403-411, 2021 02.
Article in English | MEDLINE | ID: mdl-32086223

ABSTRACT

Stain virtualization is an application of growing interest in digital pathology that allows the simulation of stained tissue images, thus saving lab and tissue resources. Thanks to the success of Generative Adversarial Networks (GANs) and the progress of unsupervised learning, unsupervised style-transfer GANs have been successfully used to generate realistic, clinically meaningful and interpretable images. The large size of high-resolution Whole Slide Images (WSIs) presents an additional computational challenge, making tilewise processing necessary during training and inference of deep learning networks. Instance normalization has a substantial positive effect in style-transfer GAN applications, but with tilewise inference it tends to cause a tiling artifact in reconstructed WSIs. In this paper we propose a novel perceptual embedding consistency (PEC) loss that forces the network to learn colour-, contrast- and brightness-invariant features in the latent space, substantially reducing the aforementioned tiling artifact. Our approach results in a more seamless reconstruction of the virtual WSIs. We validate our method quantitatively by comparing the virtually generated images to their corresponding consecutive real stained images. We compare our results to state-of-the-art unsupervised style-transfer methods and to the measures obtained from consecutive real stained tissue slide images. We demonstrate our hypothesis about the effect of the PEC loss by comparing model robustness to colour, contrast and brightness perturbations and by visualizing bottleneck embeddings. We validate the robustness of the bottleneck feature maps by measuring their sensitivity to the different perturbations and by using them in a tumor segmentation task. Additionally, we propose a preliminary validation of the virtual staining application by comparing the interpretations of two pathologists on real and virtual tiles and their inter-pathologist agreement.
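
The PEC idea can be sketched as a loss that penalizes the distance between the embeddings of a tile and a colour/contrast/brightness-perturbed copy of it. The encoder below is a hypothetical stand-in, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder: a fixed linear map plus tanh, standing in for the
# style-transfer GAN's encoder used in the paper.
W = rng.normal(size=(64, 8)) / 8.0

def encode(tile):
    return np.tanh(tile @ W)

def pec_loss(tile, perturbed):
    """Perceptual embedding consistency: mean absolute distance between
    the embeddings of a tile and a perturbed copy of it."""
    return float(np.abs(encode(tile) - encode(perturbed)).mean())

tile = rng.normal(size=64)
print(pec_loss(tile, tile) == 0.0)        # identical tiles: zero loss
print(pec_loss(tile, tile + 0.3) > 0.0)   # brightness shift: penalized
```

Minimizing this term during training pushes the latent features toward invariance to such perturbations, which is what reduces the tiling artifact at inference.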


Subject(s)
Image Processing, Computer-Assisted; Humans
9.
IEEE Trans Med Imaging ; 40(10): 2897-2910, 2021 10.
Article in English | MEDLINE | ID: mdl-33347406

ABSTRACT

This paper addresses digital staining and classification of unstained white blood cell images obtained with a differential contrast microscope. We have data coming from multiple domains that are partially labeled and partially matching across the domains. Using unstained images removes time-consuming staining procedures and could facilitate and automate comprehensive diagnostics. To this aim, we propose a method that translates unstained images to realistic-looking stained images while preserving the inter-cellular structures that are crucial for the medical experts to perform classification. We achieve better structure preservation by adding auxiliary tasks of segmentation and direct reconstruction. Segmentation enforces that the network learns to generate the correct nucleus and cytoplasm shapes, while direct reconstruction enforces reliable translation between the matching images across domains. In addition, we build a robust domain-agnostic latent space by injecting the target domain label directly into the generator, i.e., bypassing the encoder. This allows the encoder to extract features independently of the target domain and enables an automated, domain-invariant classification of the white blood cells. We validated our method on a large dataset composed of leukocytes of 24 patients, achieving state-of-the-art performance on both digital staining and classification tasks.


Subject(s)
Leukocytes; Microscopy; Cytoplasm; Humans; Staining and Labeling
10.
NPJ Digit Med ; 3: 119, 2020.
Article in English | MEDLINE | ID: mdl-33015372

ABSTRACT

Data-driven machine learning (ML) has emerged as a promising approach for building accurate and robust statistical models from medical data, which is collected in huge volumes by modern healthcare systems. Existing medical data is not fully exploited by ML primarily because it sits in data silos and privacy concerns restrict access to this data. However, without access to sufficient data, ML will be prevented from reaching its full potential and, ultimately, from making the transition from research to clinical practice. This paper considers key factors contributing to this issue, explores how federated learning (FL) may provide a solution for the future of digital health and highlights the challenges and considerations that need to be addressed.
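
As a purely illustrative sketch of the federated learning idea discussed above (not an algorithm from the paper), a minimal FedAvg-style loop: three simulated hospitals each take a local gradient step on their own silo, and only model weights, never patient data, are sent to the server for weighted averaging:

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    # One local gradient step on a least-squares objective.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])        # the shared model all silos agree on

silos = []                             # three hospitals with disjoint data
for _ in range(3):
    X = rng.normal(size=(50, 2))
    silos.append((X, X @ true_w + rng.normal(scale=0.01, size=50)))

w = np.zeros(2)
for _ in range(200):                   # communication rounds
    updates = [local_step(w, X, y) for X, y in silos]
    sizes = np.array([len(y) for _, y in silos], dtype=float)
    w = np.average(updates, axis=0, weights=sizes)   # server aggregation

print(np.round(w, 2))                  # approaches the shared model
```

The raw arrays `X, y` stay inside each silo's tuple throughout; only `updates` (weight vectors) cross the trust boundary, which is the core privacy argument of FL.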

11.
Int J Comput Assist Radiol Surg ; 15(5): 847-857, 2020 May.
Article in English | MEDLINE | ID: mdl-32335786

ABSTRACT

PURPOSE: To demonstrate the feasibility of a fully automatic computer-aided diagnosis (CAD) tool, based on deep learning, that localizes and classifies proximal femur fractures on X-ray images according to the AO classification. The proposed framework aims to improve patient treatment planning and provide support for the training of trauma surgeon residents. MATERIAL AND METHODS: A database of 1347 clinical radiographic studies was collected. Radiologists and trauma surgeons annotated all fractures with bounding boxes and provided a classification according to the AO standard. In all experiments, the dataset was split patient-wise into training, validation and test sets with a 70%:10%:20% ratio. ResNet-50 and AlexNet architectures were implemented as the deep learning classification and localization models, respectively. Accuracy, precision, recall and F1-score were reported as classification metrics. Retrieval of similar cases was evaluated in terms of precision and recall. RESULTS: The proposed CAD tool for the classification of radiographs into types "A," "B" and "not-fractured" reaches an F1-score of 87% and an AUC of 0.95. When classifying fractured versus not-fractured cases, these improve up to 94% and 0.98, respectively. Prior localization of the fracture improves over full-image classification. In total, 100% of the predicted centers of the region of interest are contained in the manually provided bounding boxes. The system retrieves on average 9 relevant images (from the same class) out of 10 cases. CONCLUSION: Our CAD scheme localizes, detects and further classifies proximal femur fractures, achieving results comparable to expert-level and state-of-the-art performance. Our auxiliary localization model was highly accurate in predicting the region of interest in the radiograph. We further investigated several strategies of verification for its adoption into the daily clinical routine. A sensitivity analysis of the size of the ROI and image retrieval as a clinical use case were also presented.
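
The patient-wise 70%:10%:20% split described above, which keeps all images of one patient in the same partition to prevent leakage, can be sketched as follows. The record layout and function name are illustrative, not the authors' code:

```python
import random

def patient_wise_split(records, seed=0):
    """Split (patient_id, image_id) records 70/10/20 by PATIENT, so no
    patient's images leak across train/validation/test partitions."""
    patients = sorted({pid for pid, _ in records})
    random.Random(seed).shuffle(patients)
    n = len(patients)
    train_p = set(patients[:int(0.7 * n)])
    val_p = set(patients[int(0.7 * n):int(0.8 * n)])
    split = {"train": [], "val": [], "test": []}
    for pid, img in records:
        key = "train" if pid in train_p else "val" if pid in val_p else "test"
        split[key].append((pid, img))
    return split

# 10 synthetic patients with 3 images each -> 21 / 3 / 6 images.
records = [(p, i) for p in range(10) for i in range(3)]
split = patient_wise_split(records)
print(len(split["train"]), len(split["val"]), len(split["test"]))  # 21 3 6
```

Splitting by image instead of by patient would let near-duplicate radiographs of one patient appear in both train and test sets, inflating the reported metrics.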


Subject(s)
Diagnosis, Computer-Assisted; Femoral Fractures/diagnostic imaging; Databases, Factual; Deep Learning; Femoral Fractures/classification; Femoral Fractures/surgery; Humans; Radiography
12.
Sci Rep ; 10(1): 2748, 2020 02 17.
Article in English | MEDLINE | ID: mdl-32066744

ABSTRACT

We present a comprehensive analysis of the submissions to the first edition of the Endoscopy Artefact Detection challenge (EAD). This crowd-sourced initiative is a step towards understanding the limitations of existing state-of-the-art computer vision methods applied to endoscopy and promoting the development of new approaches suitable for clinical translation. Endoscopy is a routine imaging technique for the detection, diagnosis and treatment of diseases in hollow organs: the esophagus, stomach, colon, uterus and bladder. However, the nature of these organs prevents imaged tissues from being free of imaging artefacts such as bubbles, pixel saturation, organ specularity and debris, all of which pose substantial challenges for any quantitative analysis. Consequently, the potential for improved clinical outcomes through quantitative assessment of abnormal mucosal surfaces observed in endoscopy videos is presently not realized. The EAD challenge promotes awareness of and addresses this key bottleneck by investigating methods that can accurately classify, localize and segment artefacts in endoscopy frames as critical prerequisite tasks. Using a diverse, curated, multi-institutional, multi-modality, multi-organ dataset of video frames, the accuracy and performance of 23 algorithms were objectively ranked for artefact detection and segmentation. The ability of the methods to generalize to unseen datasets was also evaluated. The best-performing methods (top 15%) propose deep learning strategies to reconcile variability in artefact appearance with respect to size, modality, occurrence and organ type. However, no single method outperformed the others across all tasks. Detailed analyses reveal the shortcomings of current training strategies and highlight the need for new optimal metrics to accurately quantify the clinical applicability of methods.


Subject(s)
Algorithms; Artifacts; Endoscopy/standards; Image Interpretation, Computer-Assisted/standards; Imaging, Three-Dimensional/standards; Neural Networks, Computer; Colon/diagnostic imaging; Colon/pathology; Datasets as Topic; Endoscopy/statistics & numerical data; Esophagus/diagnostic imaging; Esophagus/pathology; Female; Humans; Image Interpretation, Computer-Assisted/statistics & numerical data; Imaging, Three-Dimensional/statistics & numerical data; International Cooperation; Male; Stomach/diagnostic imaging; Stomach/pathology; Urinary Bladder/diagnostic imaging; Urinary Bladder/pathology; Uterus/diagnostic imaging; Uterus/pathology
13.
Article in English | MEDLINE | ID: mdl-32078541

ABSTRACT

Chest X-ray radiography is one of the earliest medical imaging technologies and remains one of the most widely used for diagnosis, screening, and treatment follow-up of diseases related to the lungs and heart. The literature in this field reports many interesting studies dealing with the challenging tasks of bone suppression and organ segmentation, but performed separately, limiting any learning that comes with the consolidation of parameters that could optimize both processes. This study introduces, for the first time, a multitask deep learning model that simultaneously generates the bone-suppressed image and the organ-segmented image, enhancing the accuracy of both tasks, minimizing the number of parameters needed by the model and optimizing the processing time, all by exploiting the interplay between the network parameters to benefit the performance of both tasks. The architectural design of this model, which relies on a conditional generative adversarial network, reveals how the well-established pix2pix network (image-to-image network) is modified to fit the need for multitasking, extending it to a new image-to-images architecture. The source code of this multitask model is shared publicly on GitHub as the first attempt at providing a two-task pix2pix extension, a supervised/paired/aligned/registered image-to-images translation which would be useful in many multitask applications. Dilated convolutions are also used to improve the results through a more effective receptive field. A comparison with state-of-the-art algorithms, along with an ablation study and a demonstration video, is provided to evaluate the efficacy and gauge the merits of the proposed approach.

14.
Biomed Phys Eng Express ; 6(1): 015038, 2020 01 30.
Article in English | MEDLINE | ID: mdl-33438626

ABSTRACT

PURPOSE: To evaluate the benefit of the additional information present in spectral CT datasets, as compared to conventional CT datasets, when utilizing convolutional neural networks for fully automatic localisation and classification of liver lesions in CT images. MATERIALS AND METHODS: Conventional and spectral CT images (iodine maps, virtual monochromatic images (VMI)) were obtained from a spectral dual-layer CT system. Patient diagnoses were known from the clinical reports and were classified into healthy, cyst and hypodense metastasis. In order to compare the value of spectral versus conventional datasets when passed as input to machine learning algorithms, we implemented a weakly supervised convolutional neural network (CNN) that learns liver lesion localisation without pixel-level ground-truth annotations. Regions of interest are selected automatically based on the localisation results and are used to train a second CNN for liver lesion classification (healthy, cyst, hypodense metastasis). The accuracy of lesion localisation was evaluated using the Euclidean distances between the ground-truth centres of mass and the predicted centres of mass. Lesion classification was evaluated by precision, recall, accuracy and F1-score. RESULTS: Lesion localisation showed the best results for spectral information, with distances of 8.22 ± 10.72 mm, 8.78 ± 15.21 mm and 8.29 ± 12.97 mm for iodine maps, 40 keV and 70 keV VMIs, respectively. With conventional data, distances of 10.58 ± 17.65 mm were measured. For lesion classification, the 40 keV VMIs achieved the highest overall accuracy of 0.899, compared to 0.854 for conventional data. CONCLUSION: Enhanced localisation and classification are reported for spectral CT data, which demonstrates that combining machine learning technology with spectral CT information may in the future improve both the clinical workflow and diagnostic accuracy.


Subject(s)
Algorithms; Liver Diseases/pathology; Neural Networks, Computer; Radiographic Image Interpretation, Computer-Assisted/methods; Dual-Photon Emission Radiographic Imaging/methods; Signal-To-Noise Ratio; Tomography, X-Ray Computed/methods; Humans; Liver Diseases/classification; Machine Learning
15.
Artif Intell Med ; 109: 101938, 2020 09.
Article in English | MEDLINE | ID: mdl-34756215

ABSTRACT

Generative adversarial networks (GANs) and their extensions have carved open many exciting ways to tackle well-known and challenging medical image analysis problems such as medical image denoising, reconstruction, segmentation, data simulation, detection or classification. Furthermore, their ability to synthesize images at unprecedented levels of realism also gives hope that the chronic scarcity of labeled data in the medical field can be resolved with the help of these generative models. In this review paper, a broad overview of recent literature on GANs for medical applications is given, the shortcomings and opportunities of the proposed methods are thoroughly discussed, and potential future work is elaborated. We review the most relevant papers published until the submission date. For quick access, essential details such as the underlying method, datasets, and performance are tabulated. An interactive visualization that categorizes all papers to keep the review alive is available at http://livingreview.in.tum.de/GANs_for_Medical_Applications/.


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Computer Simulation
16.
Int J Comput Assist Radiol Surg ; 14(7): 1117-1126, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30977093

ABSTRACT

PURPOSE: 2D digital subtraction angiography (DSA) has become an important technique for interventional neuroradiology tasks, such as detection and subsequent treatment of aneurysms. In order to provide high-quality DSA images, usually undiluted contrast agent and a high X-ray dose are used. The iodinated contrast agent puts a burden on the patients' kidneys while the use of high-dose X-rays expose both patients and medical staff to a considerable amount of radiation. Unfortunately, reducing either the X-ray dose or the contrast agent concentration usually results in a sacrifice of image quality. MATERIALS AND METHODS: To denoise a frame, the proposed spatiotemporal denoising method utilizes the low-rank nature of a spatially aligned temporal sequence where variation is introduced by the flow of contrast agent through a vessel tree of interest. That is, a constrained weighted rank-1 approximation of the stack comprising the frame to be denoised and its temporal neighbors is computed where the weights are used to prevent the contribution of non-similar pixels toward the low-rank approximation. The method has been evaluated using a vascular flow phantom emulating cranial arteries into which contrast agent can be manually injected (Vascular Simulations Replicator, Vascular Simulations, Stony Brook NY, USA). For the evaluation, image sequences acquired at different dose levels as well as different contrast agent concentrations have been used. RESULTS: Qualitative and quantitative analyses have shown that with the proposed approach, the dose and the concentration of the contrast agent could both be reduced by about 75%, while maintaining the required image quality. Most importantly, it has been observed that the DSA images obtained using the proposed method have the closest resemblance to typical DSA images, i.e., they preserve the typical image characteristics best. 
CONCLUSION: Using the proposed denoising approach, it is possible to improve the image quality of low-dose DSA images. This improvement could enable both a reduction in contrast agent and radiation dose when acquiring DSA images, thereby benefiting patients as well as clinicians. Since the resulting images are free from artifacts and as the inherent characteristics of the images are also preserved, the proposed method seems to be well suited for clinical images as well.
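
The low-rank idea at the heart of the method above can be illustrated with a plain (unweighted) rank-1 SVD approximation of a synthetic frame stack; the paper's constrained pixel weighting, which suppresses non-similar pixels, is omitted from this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# A temporal stack of 5 spatially aligned frames: identical underlying
# "vessel" signal plus independent noise per frame (synthetic stand-in).
signal = np.sin(np.linspace(0, 4 * np.pi, 256))
stack = np.stack([signal + rng.normal(scale=0.5, size=256)
                  for _ in range(5)])

# Rank-1 approximation via SVD: keep only the dominant singular component,
# which captures the structure shared across the temporal neighbors.
U, s, Vt = np.linalg.svd(stack, full_matrices=False)
rank1 = s[0] * np.outer(U[:, 0], Vt[0])

noisy_err = np.abs(stack[0] - signal).mean()
denoised_err = np.abs(rank1[0] - signal).mean()
print(denoised_err < noisy_err)   # rank-1 frame is closer to the signal
```

Because the noise is independent across frames while the vessel signal repeats, the dominant singular component retains the signal and discards most of the noise.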


Subject(s)
Angiography, Digital Subtraction/methods; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Arteries; Artifacts; Contrast Media; Humans
17.
IEEE Pulse ; 9(5): 21, 2018.
Article in English | MEDLINE | ID: mdl-30273139

ABSTRACT

One of the major challenges currently facing researchers in applying deep learning (DL) models to medical image analysis is the limited amount of annotated data. Collecting such ground-truth annotations requires domain knowledge, cost, and time, making it infeasible for large-scale databases. Albarqouni et al. [S5] presented a novel concept for learning DL models from noisy annotations collected through crowdsourcing platforms (e.g., Amazon Mechanical Turk and Crowdflower) by introducing a robust aggregation layer into the convolutional neural networks (Figure S2). Their proposed method was validated on a publicly available database of breast cancer histology images, with the robust aggregation method performing markedly better than the majority-voting baseline. In follow-up work, Albarqouni et al. [S6] introduced the novel concept of translating a biomedical image into a video game object. This technique allows medical images to be represented as star-shaped objects that can be easily embedded into a readily available game canvas, reducing the domain knowledge needed for annotation. Promising results were reported compared to conventional crowdsourcing platforms.


Subject(s)
Crowdsourcing; Image Processing, Computer-Assisted/methods; Machine Learning; Models, Theoretical; Humans
18.
Int J Comput Assist Radiol Surg ; 13(8): 1221-1231, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29779153

ABSTRACT

PURPOSE: Fusion of preoperative data with intraoperative X-ray images has shown the potential to reduce radiation exposure and contrast agent use, especially for complex endovascular aortic repair (EVAR). Due to patient movement and introduced devices that deform the vasculature, the fusion can become inaccurate. This is usually detected by comparing the preoperative information with the contrasted vessel. To avoid repeated use of iodine, comparison with an implanted stent can be used to adjust the fusion. However, detecting the stent automatically without the use of contrast is challenging, as only thin stent wires are visible. METHOD: We propose a fast, learning-based method to segment aortic stents in single uncontrasted X-ray images. To this end, we employ a fully convolutional network with residual units. Additionally, we investigate whether the incorporation of prior knowledge improves the segmentation. RESULTS: We use 36 X-ray images acquired during EVAR for training and evaluate the segmentation on 27 additional images. We achieve a Dice coefficient of 0.933 (AUC 0.996) when using X-ray alone, and 0.918 (AUC 0.993) and 0.888 (AUC 0.99) when adding the preoperative model and information about the expected wire width, respectively. CONCLUSION: The proposed method is fully automatic, fast, and segments aortic stent grafts in fluoroscopic images with high accuracy. The quality and performance of the segmentation will allow for an intraoperative comparison with the preoperative information to assess the accuracy of the fusion.
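
The Dice coefficient used to evaluate the segmentation above is straightforward to compute; a minimal sketch on toy binary masks (the masks and sizes are illustrative only):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice coefficient between binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return (2.0 * np.logical_and(pred, gt).sum() + eps) / \
           (pred.sum() + gt.sum() + eps)

gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1                      # 16-pixel ground-truth mask
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1                    # prediction shifted down by one row

# Overlap is 12 pixels, both masks have 16 -> 2*12 / (16+16) = 0.75.
print(round(float(dice(pred, gt)), 3))  # -> 0.75
```

The small `eps` keeps the ratio defined when both masks are empty, a common convention in segmentation code.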


Subject(s)
Aorta/diagnostic imaging; Aorta/surgery; Blood Vessel Prosthesis; Endovascular Procedures/methods; Stents; Animals; Fluoroscopy/methods; Humans; Tomography, X-Ray Computed; Treatment Outcome
19.
Int J Comput Assist Radiol Surg ; 13(6): 847-854, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29637486

ABSTRACT

PURPOSE: Clinical procedures that make use of fluoroscopy may expose patients, as well as the clinical staff (throughout their careers), to non-negligible doses of radiation. The potential consequences of such exposure fall under two categories, namely stochastic (mostly cancer) and deterministic risks (skin injury). According to the "as low as reasonably achievable" principle, the radiation dose can be lowered only if the necessary image quality can be maintained. METHODS: Our work improves upon existing patch-based denoising algorithms by utilizing a more sophisticated noise model to better exploit non-local self-similarity, which in turn improves the performance of low-rank approximation. The novelty of the proposed approach lies in its properly designed and parameterized noise model and the elimination of initial estimates, which reduces the computational cost significantly. RESULTS: The algorithm has been evaluated on 500 clinical images (7 patients, 20 sequences, 3 clinical sites) taken at ultra-low dose levels, i.e., 50% of the standard low-dose level, during electrophysiology procedures. An average improvement in the contrast-to-noise ratio (CNR) by a factor of around 3.5 has been found, which corresponds to an image quality achieved at around 12 (the square of 3.5) times the ultra-low dose level. Qualitative evaluation by X-ray image quality experts suggests that the method produces denoised images that comply with the required image quality criteria. CONCLUSION: The results are consistent with the number of patches used, and they demonstrate that it is possible to use motion estimation techniques and "recycle" photons from previous frames to improve the image quality of the current frame. Our results are comparable in terms of CNR to Video Block Matching 3D, a state-of-the-art denoising method, but qualitative analysis by experts confirms that the denoised ultra-low dose X-ray images obtained using our method are more realistic in appearance.
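
The contrast-to-noise ratio (CNR) reported above can be computed as shown below. The image, masks, and the crude "denoiser" (a flat 50% background-noise reduction) are synthetic illustrations, not the paper's pipeline:

```python
import numpy as np

def cnr(img, signal_mask, background_mask):
    """Contrast-to-noise ratio: |mean(signal) - mean(bg)| / std(bg)."""
    s = img[signal_mask].mean()
    b = img[background_mask]
    return abs(s - b.mean()) / b.std()

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))          # unit-variance noise background
img[20:30, 20:30] += 4.0                 # bright catheter-like region

sig = np.zeros((64, 64), dtype=bool)
sig[20:30, 20:30] = True
bg = ~sig

noisy_cnr = cnr(img, sig, bg)
denoised = img.copy()
denoised[bg] *= 0.5                      # toy denoiser: halve background noise
print(cnr(denoised, sig, bg) > noisy_cnr)   # CNR improves after denoising
```

Halving the background noise roughly doubles the CNR, which is why CNR gains are the natural yardstick for dose-reduction claims like the factor of 3.5 above.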


Subject(s)
Algorithms; Phantoms, Imaging; Radiography/methods; Surgery, Computer-Assisted/methods; Humans; Photons; Radiation Dosage; Signal-To-Noise Ratio; X-Rays
20.
IEEE Trans Med Imaging ; 35(8): 1962-71, 2016 08.
Article in English | MEDLINE | ID: mdl-27164577

ABSTRACT

Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques that are used for natural images fail to utilize structural properties of stained tissue samples and produce undesirable color distortions. The stain concentration cannot be negative. Tissue samples are stained with only a few stains and most tissue regions are characterized by at most one effective stain. We model these physical phenomena that define the tissue structure by first decomposing images in an unsupervised manner into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with stain color basis of a pathologist-preferred target image, thus altering only its color while preserving its structure described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method when compared to other alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample instead of using the entire image to compute the stain color basis.
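
The decomposition described above rests on non-negative matrix factorization of optical-density pixels into stain density maps and stain color bases. A minimal sketch using plain multiplicative updates on synthetic data follows; the paper's sparsity constraint and the RGB-to-optical-density conversion are omitted, and the "stain" bases are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic optical-density pixels V = density @ colors: two "stains"
# mixing into three channels, plus a little noise.
true_colors = np.array([[0.65, 0.70, 0.29],    # hematoxylin-like basis
                        [0.07, 0.99, 0.11]])   # eosin-like basis
density = rng.random(size=(500, 2))
V = density @ true_colors + 0.01 * rng.random(size=(500, 3))

# Plain NMF by multiplicative updates: V ≈ W H with W, H >= 0, so the
# recovered stain densities W can never go negative.
W = rng.random(size=(500, 2))
H = rng.random(size=(2, 3))
for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

err = np.abs(V - W @ H).mean() / np.abs(V).mean()
print(err < 0.05)    # the non-negative factorization fits the mixture
```

For normalization, the recovered density maps `W` would then be recombined with the stain color basis of a pathologist-preferred target image, changing color while preserving structure, as the abstract describes.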


Subject(s)
Coloring Agents/chemistry; Color; Microscopy; Software; Staining and Labeling