Results 1 - 12 of 12
2.
IEEE J Biomed Health Inform ; 28(3): 1161-1172, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37878422

ABSTRACT

We introduce LYSTO, the Lymphocyte Assessment Hackathon, which was held in conjunction with the MICCAI 2019 Conference in Shenzhen (China). The competition required participants to automatically assess the number of lymphocytes, in particular T-cells, in images of colon, breast, and prostate cancer stained with CD3 and CD8 immunohistochemistry. Unlike other challenges in medical image analysis, LYSTO gave participants only a few hours to address the problem. In this paper, we describe the goal and the multi-phase organization of the hackathon, the proposed methods, and the on-site results. Additionally, we present post-competition results showing how the presented methods perform on an independent set of lung cancer slides, which was not part of the initial competition, as well as a comparison of lymphocyte assessment between the presented methods and a panel of pathologists. We show that some of the participants were capable of achieving pathologist-level performance at lymphocyte assessment. After the hackathon, LYSTO remains available as a lightweight plug-and-play benchmark dataset on the grand-challenge website, together with an automatic evaluation platform.


Subjects
Benchmarking, Prostatic Neoplasms, Male, Humans, Lymphocytes, Breast, China
3.
Radiol Artif Intell ; 3(3): e190169, 2021 May.
Article in English | MEDLINE | ID: mdl-34136814

ABSTRACT

PURPOSE: To develop an unsupervised deep learning model on MR images of normal brain anatomy to automatically detect deviations indicative of pathologic states on abnormal MR images. MATERIALS AND METHODS: In this retrospective study, spatial autoencoders with skip-connections (which can learn to compress and reconstruct data) were leveraged to learn the normal variability of the brain from MR scans of healthy individuals. A total of 100 normal, in-house MR scans were used for training. Subsequently, as the model was unable to reconstruct anomalies well, this characteristic was exploited for detecting and delineating various diseases by computing the difference between the input data and their reconstruction. The unsupervised model was compared with a supervised U-Net- and threshold-based classifier trained on data from 50 patients with multiple sclerosis (in-house dataset) and 50 patients from The Cancer Imaging Archive. Both the unsupervised and supervised U-Net models were tested on five different datasets containing MR images of microangiopathy, glioblastoma, and multiple sclerosis. Precision-recall statistics and derivations thereof (mean area under the precision-recall curve, Dice score) were used to quantify lesion detection and segmentation performance. RESULTS: The unsupervised approach outperformed the naive thresholding approach in lesion detection (mean F1 scores ranging from 17% to 62% vs 6.4% to 15% across the five different datasets) and performed similarly to the supervised U-Net (20%-64%) across a variety of pathologic conditions. This outperformance was mostly driven by improved precision compared with the thresholding approach (mean precisions, 15%-59% vs 3.4%-10%). The model was also developed to create an anomaly heatmap display. CONCLUSION: The unsupervised deep learning model was able to automatically detect anomalies on brain MR images with high performance. Supplemental material is available for this article. 
Keywords: Brain/Brain Stem Computer Aided Diagnosis (CAD), Convolutional Neural Network (CNN), Experimental Investigations, Head/Neck, MR-Imaging, Quantification, Segmentation, Stacked Auto-Encoders, Technology Assessment, Tissue Characterization © RSNA, 2021.
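The reconstruct-subtract-threshold principle described in this abstract is simple enough to sketch. The following toy NumPy example (synthetic data, not the authors' model or dataset) stands in a perfect reconstruction of normal anatomy for the autoencoder output and flags pixels with large residuals as anomaly candidates:

```python
import numpy as np

def anomaly_map(image, reconstruction, threshold=0.2):
    """Pixel-wise residual between an input image and its autoencoder
    reconstruction; large residuals flag regions the model could not
    reconstruct, i.e. candidate anomalies."""
    residual = np.abs(image - reconstruction)
    mask = residual > threshold
    return residual, mask

# Toy example: a "healthy" background the model reconstructs well,
# plus a bright synthetic lesion the model fails to reproduce.
rng = np.random.default_rng(0)
image = rng.normal(0.5, 0.02, size=(64, 64))
image[20:30, 20:30] += 0.5               # synthetic lesion
reconstruction = np.full((64, 64), 0.5)  # model reproduces only normal anatomy

residual, mask = anomaly_map(image, reconstruction)
print(mask[20:30, 20:30].mean())  # ≈ 1.0: the lesion is flagged
```

The residual image doubles as the anomaly heatmap mentioned in the abstract; the detection threshold here is arbitrary.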

4.
IEEE J Biomed Health Inform ; 25(2): 403-411, 2021 02.
Article in English | MEDLINE | ID: mdl-32086223

ABSTRACT

Stain virtualization, the simulation of stained tissue images, is an application of growing interest in digital pathology, as it saves laboratory and tissue resources. Thanks to the success of Generative Adversarial Networks (GANs) and the progress of unsupervised learning, unsupervised style transfer GANs have been used to generate realistic, clinically meaningful and interpretable images. The large size of high-resolution Whole Slide Images (WSIs) presents an additional computational challenge, making tile-wise processing necessary during training and inference of deep learning networks. Instance normalization has a substantial positive effect in style transfer GAN applications, but with tile-wise inference it tends to cause a tiling artifact in reconstructed WSIs. In this paper we propose a novel perceptual embedding consistency (PEC) loss that forces the network to learn color-, contrast- and brightness-invariant features in the latent space, substantially reducing the aforementioned tiling artifact. Our approach results in more seamless reconstruction of the virtual WSIs. We validate our method quantitatively by comparing the virtually generated images to their corresponding consecutive real stained images, and we compare our results to state-of-the-art unsupervised style transfer methods and to the measures obtained from consecutive real stained tissue slide images. We support our hypothesis about the effect of the PEC loss by comparing model robustness to color, contrast and brightness perturbations and by visualizing bottleneck embeddings. We validate the robustness of the bottleneck feature maps by measuring their sensitivity to the different perturbations and by using them in a tumor segmentation task. Additionally, we propose a preliminary validation of the virtual staining application by comparing the interpretations of two pathologists on real and virtual tiles and their inter-pathologist agreement.
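The core idea of a perceptual embedding consistency loss — penalizing the distance between the latent embeddings of an image and a photometrically perturbed copy of it — can be illustrated with a toy sketch. The "encoders", the brightness shift, and the L1 norm below are illustrative stand-ins, not the paper's actual network or hyperparameters:

```python
import numpy as np

def pec_loss(embed, x, x_perturbed):
    """L1 distance between the latent embeddings of an image and a
    photometrically perturbed copy of it; minimizing this pushes the
    encoder toward color/contrast/brightness-invariant features."""
    return np.abs(embed(x) - embed(x_perturbed)).mean()

rng = np.random.default_rng(0)
x = rng.random((8, 8))
x_bright = x + 0.1                         # brightness-perturbed copy

mean_free = lambda img: img - img.mean()   # crude brightness-invariant "encoder"
identity = lambda img: img                 # raw pixels as the "embedding"

print(pec_loss(mean_free, x, x_bright))    # ~0: invariant embedding
print(pec_loss(identity, x, x_bright))     # ~0.1: raw pixels are not invariant
```

In the paper this term is applied to the GAN bottleneck during training; here it only demonstrates that the loss is small exactly when the embedding ignores the perturbation.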


Subjects
Image Processing, Computer-Assisted, Humans
5.
Int J Comput Assist Radiol Surg ; 15(5): 847-857, 2020 May.
Article in English | MEDLINE | ID: mdl-32335786

ABSTRACT

PURPOSE: Demonstrate the feasibility of a fully automatic computer-aided diagnosis (CAD) tool, based on deep learning, that localizes and classifies proximal femur fractures on X-ray images according to the AO classification. The proposed framework aims to improve patient treatment planning and to support the training of trauma surgery residents. MATERIAL AND METHODS: A database of 1347 clinical radiographic studies was collected. Radiologists and trauma surgeons annotated all fractures with bounding boxes and provided a classification according to the AO standard. In all experiments, the dataset was split patient-wise into training, validation and test sets with a 70%:10%:20% ratio. ResNet-50 and AlexNet architectures were implemented as the deep learning classification and localization models, respectively. Accuracy, precision, recall and F1-score were reported as classification metrics. Retrieval of similar cases was evaluated in terms of precision and recall. RESULTS: The proposed CAD tool for the classification of radiographs into types "A," "B" and "not-fractured" reaches an F1-score of 87% and an AUC of 0.95. When classifying fractured versus not-fractured cases, these improve up to 94% and 0.98, respectively. Prior localization of the fracture improves performance with respect to full-image classification. In total, 100% of the predicted centers of the region of interest are contained in the manually provided bounding boxes. The system retrieves on average 9 relevant images (from the same class) out of 10 cases. CONCLUSION: Our CAD scheme localizes, detects and further classifies proximal femur fractures, achieving results comparable to expert-level and state-of-the-art performance. Our auxiliary localization model was highly accurate in predicting the region of interest in the radiograph. We further investigated several verification strategies for its adoption into the daily clinical routine, and we present a sensitivity analysis of the ROI size and image retrieval as a clinical use case.


Subjects
Diagnosis, Computer-Assisted, Femoral Fractures/diagnostic imaging, Databases, Factual, Deep Learning, Femoral Fractures/classification, Femoral Fractures/surgery, Humans, Radiography
6.
Biomed Phys Eng Express ; 6(1): 015038, 2020 01 30.
Article in English | MEDLINE | ID: mdl-33438626

ABSTRACT

PURPOSE: To evaluate the benefit of the additional information present in spectral CT datasets, compared with conventional CT datasets, when using convolutional neural networks for fully automatic localisation and classification of liver lesions in CT images. MATERIALS AND METHODS: Conventional and spectral CT images (iodine maps, virtual monochromatic images (VMI)) were obtained from a spectral dual-layer CT system. Patient diagnoses were known from the clinical reports and were classified into healthy, cyst and hypodense metastasis. In order to compare the value of spectral versus conventional datasets as input to machine learning algorithms, we implemented a weakly supervised convolutional neural network (CNN) that learns liver lesion localisation without pixel-level ground-truth annotations. Regions of interest are selected automatically based on the localisation results and are used to train a second CNN for liver lesion classification (healthy, cyst, hypodense metastasis). The accuracy of lesion localisation was evaluated using the Euclidean distances between the ground-truth centres of mass and the predicted centres of mass. Lesion classification was evaluated by precision, recall, accuracy and F1-score. RESULTS: Lesion localisation showed the best results for spectral information, with distances of 8.22 ± 10.72 mm, 8.78 ± 15.21 mm and 8.29 ± 12.97 mm for iodine maps, 40 keV VMIs and 70 keV VMIs, respectively. With conventional data, distances of 10.58 ± 17.65 mm were measured. For lesion classification, the 40 keV VMIs achieved the highest overall accuracy of 0.899, compared with 0.854 for conventional data. CONCLUSION: Enhanced localisation and classification are reported for spectral CT data, which demonstrates that combining machine learning technology with spectral CT information may in the future improve both the clinical workflow and diagnostic accuracy.
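The localisation metric used here — the Euclidean distance between predicted and ground-truth lesion centres of mass — is easy to reproduce. A generic NumPy sketch (the `spacing_mm` argument is an assumed isotropic pixel spacing, not a value from the paper):

```python
import numpy as np

def centre_of_mass(mask):
    """Centre of mass (row, col) of a binary lesion mask, in pixels."""
    rows, cols = np.nonzero(mask)
    return np.array([rows.mean(), cols.mean()])

def localisation_error(pred_mask, gt_mask, spacing_mm=1.0):
    """Euclidean distance between predicted and ground-truth centres of
    mass, scaled by the pixel spacing to give millimetres."""
    d = centre_of_mass(pred_mask) - centre_of_mass(gt_mask)
    return spacing_mm * np.linalg.norm(d)

gt = np.zeros((64, 64), dtype=bool)
gt[30:40, 30:40] = True                  # ground-truth lesion
pred = np.zeros((64, 64), dtype=bool)
pred[33:43, 30:40] = True                # prediction shifted by 3 rows

print(localisation_error(pred, gt))      # 3.0
```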


Subjects
Algorithms, Liver Diseases/pathology, Neural Networks, Computer, Radiographic Image Interpretation, Computer-Assisted/methods, Radiography, Dual-Energy Scanned Projection/methods, Signal-To-Noise Ratio, Tomography, X-Ray Computed/methods, Humans, Liver Diseases/classification, Machine Learning
7.
Int J Comput Assist Radiol Surg ; 14(7): 1117-1126, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30977093

ABSTRACT

PURPOSE: 2D digital subtraction angiography (DSA) has become an important technique for interventional neuroradiology tasks, such as the detection and subsequent treatment of aneurysms. In order to provide high-quality DSA images, undiluted contrast agent and a high X-ray dose are usually used. The iodinated contrast agent puts a burden on the patients' kidneys, while the use of high-dose X-rays exposes both patients and medical staff to a considerable amount of radiation. Unfortunately, reducing either the X-ray dose or the contrast agent concentration usually sacrifices image quality. MATERIALS AND METHODS: To denoise a frame, the proposed spatiotemporal denoising method exploits the low-rank nature of a spatially aligned temporal sequence in which variation is introduced by the flow of contrast agent through the vessel tree of interest. That is, a constrained weighted rank-1 approximation is computed of the stack comprising the frame to be denoised and its temporal neighbors, where the weights prevent non-similar pixels from contributing to the low-rank approximation. The method has been evaluated using a vascular flow phantom emulating cranial arteries into which contrast agent can be manually injected (Vascular Simulations Replicator, Vascular Simulations, Stony Brook, NY, USA). For the evaluation, image sequences acquired at different dose levels as well as different contrast agent concentrations have been used. RESULTS: Qualitative and quantitative analyses have shown that with the proposed approach, the dose and the concentration of the contrast agent could both be reduced by about 75% while maintaining the required image quality. Most importantly, the DSA images obtained using the proposed method have the closest resemblance to typical DSA images, i.e., they best preserve the typical image characteristics.
CONCLUSION: Using the proposed denoising approach, it is possible to improve the image quality of low-dose DSA images. This improvement could enable both a reduction in contrast agent and radiation dose when acquiring DSA images, thereby benefiting patients as well as clinicians. Since the resulting images are free from artifacts and as the inherent characteristics of the images are also preserved, the proposed method seems to be well suited for clinical images as well.
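The low-rank idea at the heart of the method — approximate a stack of aligned frames by its leading singular component and read off the denoised frame — can be sketched with a plain SVD. This is a simplified, unweighted illustration on synthetic data; the paper's constrained weighted rank-1 formulation is more involved:

```python
import numpy as np

def rank1_denoise(stack):
    """Denoise the centre frame of a spatially aligned temporal stack
    (T, H, W): flatten frames into a T x (H*W) matrix, keep only the
    leading singular component, and read off the centre row."""
    t, h, w = stack.shape
    m = stack.reshape(t, h * w)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    approx = s[0] * np.outer(u[:, 0], vt[0])  # rank-1 approximation
    return approx[t // 2].reshape(h, w)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.1, 1.0, 32), (32, 1))    # static vessel background
stack = clean[None] + rng.normal(0, 0.1, (5, 32, 32))  # 5 noisy aligned frames

denoised = rank1_denoise(stack)
noise_before = np.abs(stack[2] - clean).mean()
noise_after = np.abs(denoised - clean).mean()
print(noise_after < noise_before)  # rank-1 averaging suppresses the noise
```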


Subjects
Angiography, Digital Subtraction/methods, Image Processing, Computer-Assisted/methods, Phantoms, Imaging, Arteries, Artifacts, Contrast Media, Humans
8.
IEEE Pulse ; 9(5): 21, 2018.
Article in English | MEDLINE | ID: mdl-30273139

ABSTRACT

One of the major challenges currently facing researchers applying deep learning (DL) models to medical image analysis is the limited amount of annotated data. Collecting ground-truth annotations requires domain knowledge, cost, and time, making it infeasible for large-scale databases. Albarqouni et al. [S5] presented a novel concept for learning DL models from noisy annotations collected through crowdsourcing platforms (e.g., Amazon Mechanical Turk and Crowdflower) by introducing a robust aggregation layer into the convolutional neural network (Figure S2). Their proposed method was validated on a publicly available database of breast cancer histology images, with the robust aggregation method clearly outperforming the majority-voting baseline. In follow-up work, Albarqouni et al. [S6] introduced the novel concept of translating biomedical images into video game objects. This technique represents medical images as star-shaped objects that can be easily embedded into a readily available game canvas, reducing the domain knowledge needed for annotation. Promising results were reported compared with conventional crowdsourcing platforms.


Subjects
Crowdsourcing, Image Processing, Computer-Assisted/methods, Machine Learning, Models, Theoretical, Humans
9.
Int J Comput Assist Radiol Surg ; 13(8): 1221-1231, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29779153

ABSTRACT

PURPOSE: Fusion of preoperative data with intraoperative X-ray images has shown the potential to reduce radiation exposure and contrast agent use, especially for complex endovascular aortic repair (EVAR). Due to patient movement and introduced devices that deform the vasculature, the fusion can become inaccurate. This is usually detected by comparing the preoperative information with the contrasted vessel. To avoid repeated use of iodine, comparison with an implanted stent can be used to adjust the fusion. However, detecting the stent automatically without the use of contrast is challenging, as only thin stent wires are visible. METHOD: We propose a fast, learning-based method to segment aortic stents in single uncontrasted X-ray images. To this end, we employ a fully convolutional network with residual units. Additionally, we investigate whether incorporating prior knowledge improves the segmentation. RESULTS: We use 36 X-ray images acquired during EVAR for training and evaluate the segmentation on 27 additional images. We achieve a Dice coefficient of 0.933 (AUC 0.996) when using X-ray alone, and 0.918 (AUC 0.993) and 0.888 (AUC 0.99) when adding the preoperative model and information about the expected wire width, respectively. CONCLUSION: The proposed method is fully automatic, fast, and segments aortic stent grafts in fluoroscopic images with high accuracy. The quality and performance of the segmentation will allow for an intraoperative comparison with the preoperative information to assess the accuracy of the fusion.
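The Dice coefficient reported above is a standard overlap measure between binary masks; a generic NumPy sketch (toy masks, not the stent data):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (perfect agreement)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True                 # 16-pixel ground-truth mask
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True               # same size, shifted by one row

print(dice(pred, gt))  # 2 * 12 / (16 + 16) = 0.75
```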


Subjects
Aorta/diagnostic imaging, Aorta/surgery, Blood Vessel Prosthesis, Endovascular Procedures/methods, Stents, Animals, Fluoroscopy/methods, Humans, Tomography, X-Ray Computed, Treatment Outcome
10.
Int J Comput Assist Radiol Surg ; 13(6): 847-854, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29637486

ABSTRACT

PURPOSE: Clinical procedures that make use of fluoroscopy may expose patients, as well as the clinical staff over their careers, to non-negligible doses of radiation. The potential consequences of such exposure fall under two categories: stochastic risks (mostly cancer) and deterministic risks (skin injury). According to the "as low as reasonably achievable" principle, the radiation dose can be lowered only if the necessary image quality can be maintained. METHODS: Our work improves upon existing patch-based denoising algorithms by utilizing a more sophisticated noise model to better exploit non-local self-similarity, which in turn improves the performance of the low-rank approximation. The novelty of the proposed approach lies in its properly designed and parameterized noise model and in the elimination of initial estimates, which reduces the computational cost significantly. RESULTS: The algorithm has been evaluated on 500 clinical images (7 patients, 20 sequences, 3 clinical sites) taken at ultra-low dose levels, i.e., 50% of the standard low-dose level, during electrophysiology procedures. An average improvement in the contrast-to-noise ratio (CNR) by a factor of around 3.5 has been found, which corresponds to an image quality achieved at around 12 (the square of 3.5) times the ultra-low dose level. Qualitative evaluation by X-ray image quality experts suggests that the method produces denoised images that comply with the required image quality criteria. CONCLUSION: The results are consistent across the number of patches used, and they demonstrate that it is possible to use motion estimation techniques to "recycle" photons from previous frames and improve the image quality of the current frame. Our results are comparable in terms of CNR to Video Block Matching 3D, a state-of-the-art denoising method, but qualitative analysis by experts confirms that the denoised ultra-low dose X-ray images obtained using our method have a more realistic appearance.
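The dose argument in the results rests on standard CNR arithmetic: under quantum-limited (Poisson) X-ray noise, CNR scales roughly with the square root of dose, so a 3.5x CNR gain is equivalent to about a 3.5² ≈ 12x higher dose level. A minimal NumPy sketch of the CNR measurement itself (synthetic image and illustrative region masks, not the clinical data):

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio: contrast between a signal region and the
    background, divided by the background noise level (its std)."""
    bg = image[background_mask]
    return abs(image[signal_mask].mean() - bg.mean()) / bg.std()

rng = np.random.default_rng(0)
image = rng.normal(100.0, 5.0, size=(64, 64))  # noisy background, sigma = 5
image[20:30, 20:30] += 20.0                    # signal region, contrast = 20

sig = np.zeros((64, 64), dtype=bool)
sig[20:30, 20:30] = True

print(cnr(image, sig, ~sig))  # ≈ 20 / 5 = 4
print(3.5 ** 2)               # 12.25: dose factor equivalent to a 3.5x CNR gain
```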


Subjects
Algorithms, Phantoms, Imaging, Radiography/methods, Surgery, Computer-Assisted/methods, Humans, Photons, Radiation Dosage, Signal-To-Noise Ratio, X-Rays
11.
Int J Comput Assist Radiol Surg ; 11(6): 873-80, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26984555

ABSTRACT

PURPOSE: X-ray imaging is widely used for guiding minimally invasive surgeries. Despite ongoing efforts toward advanced visualization incorporating mixed reality concepts, correct depth perception from X-ray imaging is still hampered by its projective nature. METHODS: In this paper, we introduce a new concept for predicting depth information from single-view X-ray images. Patient-specific training data for depth and corresponding X-ray attenuation information are constructed using readily available preoperative 3D image information. The corresponding depth model is learned using a novel label-consistent dictionary learning method that incorporates atlas and spatial prior constraints to allow for efficient reconstruction performance. RESULTS: We validated our algorithm on patient data acquired for two anatomical regions (abdomen and thorax). Of the 100 image pairs in each of 6 experimental instances, 80 images were used for training and 20 for testing. Depth estimation results were compared to ground-truth depth values. CONCLUSION: We achieved around [Formula: see text] and [Formula: see text] mean squared error on the abdomen and thorax datasets, respectively, and the visual results of our proposed method are very promising. We have therefore presented a new concept for enhancing depth perception for image-guided interventions.


Subjects
Imaging, Three-Dimensional/methods, Minimally Invasive Surgical Procedures/methods, Radiography, Abdominal/methods, Radiography, Thoracic/methods, Surgery, Computer-Assisted/methods, Abdomen, Algorithms, Humans
12.
IEEE Trans Med Imaging ; 35(5): 1313-21, 2016 05.
Article in English | MEDLINE | ID: mdl-26891484

ABSTRACT

The lack of publicly available ground-truth data has been identified as the major challenge for transferring recent developments in deep learning to the biomedical imaging domain. Though crowdsourcing has enabled annotation of large-scale databases for real-world images, its application for biomedical purposes requires a deeper understanding, and hence a more precise definition, of the actual annotation task. The fact that expert tasks are being outsourced to non-expert users may lead to noisy annotations introducing disagreement between users. Despite being a valuable resource for learning annotation models from crowdsourcing, conventional machine learning methods may have difficulties dealing with noisy annotations during training. In this manuscript, we present a new concept for learning from crowds that handles data aggregation directly as part of the learning process of the convolutional neural network (CNN), via an additional crowdsourcing layer (AggNet). In addition, we present an experimental study on learning from crowds designed to answer the following questions: 1) Can a deep CNN be trained with data collected from crowdsourcing? 2) How can the CNN be adapted to train on multiple types of annotation datasets (ground truth and crowd-based)? 3) How does the choice of annotation and aggregation affect the accuracy? Our experimental setup involved Annot8, a self-implemented web platform based on the Crowdflower API that realizes image annotation tasks for a publicly available biomedical image database. Our results give valuable insights into the functionality of deep CNN learning from crowd annotations and prove the necessity of integrating data aggregation.
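To make the aggregation problem concrete, here is a toy reliability-weighted vote over noisy binary crowd labels. This is illustrative only: plain majority voting is the special case of uniform weights, whereas AggNet learns the aggregation jointly with the CNN rather than using fixed per-annotator weights:

```python
import numpy as np

def weighted_aggregate(votes, reliability):
    """Aggregate noisy binary crowd labels. `votes` is (annotators, samples)
    in {0, 1}; `reliability` is a per-annotator weight. Scores are the
    reliability-weighted fraction of positive votes, thresholded at 0.5."""
    scores = reliability @ votes / reliability.sum()
    return (scores >= 0.5).astype(int)

votes = np.array([
    [1, 0, 1, 1],   # reliable annotator
    [0, 0, 1, 0],   # noisy annotator
    [0, 1, 1, 0],   # noisy annotator
])
reliability = np.array([1.0, 0.2, 0.2])

print(weighted_aggregate(votes, reliability))  # [1 0 1 1]: follows the reliable annotator
print(weighted_aggregate(votes, np.ones(3)))   # [0 0 1 0]: plain majority disagrees
```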


Subjects
Breast Neoplasms/diagnostic imaging, Crowdsourcing/methods, Histocytochemistry, Image Interpretation, Computer-Assisted/methods, Mitosis/physiology, Neural Networks, Computer, Female, Humans, Internet, Machine Learning, Video Games