Results 1 - 20 of 37
1.
Phys Med Biol ; 69(11)2024 May 20.
Article in English | MEDLINE | ID: mdl-38697200

ABSTRACT

Minimally invasive ablation techniques for renal cancer are becoming more popular due to their low complication rate and rapid recovery period. Despite excellent visualisation, one drawback of the use of computed tomography (CT) in these procedures is the requirement for iodine-based contrast agents, which are associated with adverse reactions and require a higher x-ray dose. The purpose of this work is to examine the use of time information to generate synthetic contrast enhanced images at arbitrary points after contrast agent injection from non-contrast CT images acquired during renal cryoablation cases. To achieve this, we propose a new method of conditioning generative adversarial networks with normalised time stamps and demonstrate that the use of a HyperNetwork is feasible for this task, generating images of competitive quality compared to standard generative modelling techniques. We also show that reducing the receptive field can help tackle challenges in interventional CT data, offering significantly better image quality as well as better performance when generating images for a downstream segmentation task. Lastly, we show that all proposed models are robust enough to perform inference on unseen intra-procedural data, while also improving needle artefacts and generalising contrast enhancement to other clinically relevant regions and features.
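A minimal sketch of the conditioning idea, assuming a PyTorch implementation: a small HyperNetwork maps the normalised time stamp to the weights of one generator convolution, so a single network can synthesise enhancement at arbitrary post-injection times. Layer sizes, names and shapes are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperConv2d(nn.Module):
    """One conv layer whose weights are generated from a scalar time stamp."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        n_weights = out_ch * in_ch * k * k + out_ch   # kernel + bias
        self.hyper = nn.Sequential(                   # small MLP conditioned on t
            nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, n_weights))

    def forward(self, x, t):
        # t: (B, 1) normalised time stamp; one weight set per batch element
        params = self.hyper(t)
        w_end = self.out_ch * self.in_ch * self.k * self.k
        outs = []
        for b in range(x.shape[0]):
            w = params[b, :w_end].view(self.out_ch, self.in_ch, self.k, self.k)
            bias = params[b, w_end:]
            outs.append(F.conv2d(x[b:b + 1], w, bias, padding=self.k // 2))
        return torch.cat(outs, dim=0)

layer = HyperConv2d(1, 16)
ct = torch.randn(2, 1, 64, 64)          # non-contrast CT slices (toy data)
t = torch.tensor([[0.2], [0.9]])        # normalised post-injection times
print(layer(ct, t).shape)               # torch.Size([2, 16, 64, 64])
```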


Subjects
Contrast Media , Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Humans , Image Processing, Computer-Assisted/methods , Time Factors , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/surgery
2.
Med Image Anal ; 95: 103181, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38640779

ABSTRACT

Supervised machine learning-based medical image computing applications necessitate expert label curation, while unlabelled image data might be relatively abundant. Active learning methods aim to prioritise a subset of available image data for expert annotation, for label-efficient model training. We develop a controller neural network that measures the priority of images in a sequence of batches, as in batch-mode active learning, for multi-class segmentation tasks. The controller is optimised by rewarding positive task-specific performance gain, within a Markov decision process (MDP) environment that also optimises the task predictor. In this work, the task predictor is a segmentation network. A meta-reinforcement learning algorithm is proposed with multiple MDPs, such that the pre-trained controller can be adapted to a new MDP that contains data from different institutes and/or requires segmentation of different organs or structures within the abdomen. We present experimental results using multiple CT datasets from more than one thousand patients, with segmentation tasks of nine different abdominal organs, to demonstrate the efficacy of the learnt prioritisation controller function and its cross-institute and cross-organ adaptability. We show that the proposed adaptable prioritisation metric yields converging segmentation accuracy for a new kidney segmentation task, unseen in training, using approximately 40%-60% of the labels otherwise required with other heuristic or random prioritisation metrics. For clinical datasets of limited size, the proposed adaptable prioritisation offers a performance improvement of 22.6% and 10.2% in Dice score, for tasks of kidney and liver vessel segmentation, respectively, compared to random prioritisation and alternative active sampling strategies.
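As an illustration of the controller's role, the sketch below scores a pool of image embeddings, samples a batch for annotation, and reinforces selections that improved the segmenter. This is a hedged REINFORCE-style stand-in for the meta-RL algorithm described; the summed log-probability is a common approximation for without-replacement sampling, and all sizes and the reward value are assumptions.

```python
import torch
import torch.nn as nn

class Controller(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def select(self, features, k):
        logits = self.score(features).squeeze(-1)           # (N,) priorities
        probs = torch.softmax(logits, dim=0)
        idx = torch.multinomial(probs, k, replacement=False)
        # sum of log-probs of chosen images: an approximate surrogate for
        # the probability of the without-replacement batch
        log_prob = torch.log(probs[idx] + 1e-8).sum()
        return idx, log_prob

controller = Controller()
opt = torch.optim.Adam(controller.parameters(), lr=1e-4)

features = torch.randn(100, 128)   # embeddings of the unlabelled pool
idx, log_prob = controller.select(features, k=8)
reward = 0.03                      # e.g. validation Dice gain after annotating idx
loss = -reward * log_prob          # REINFORCE: reinforce profitable selections
opt.zero_grad(); loss.backward(); opt.step()
```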


Subjects
Algorithms , Humans , Tomography, X-Ray Computed , Neural Networks, Computer , Machine Learning , Markov Chains , Supervised Machine Learning , Radiography, Abdominal/methods
3.
Int J Comput Assist Radiol Surg ; 19(6): 1003-1012, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38451359

ABSTRACT

PURPOSE: Magnetic resonance (MR) imaging targeted prostate cancer (PCa) biopsy enables precise sampling of MR-detected lesions, establishing its importance in recommended clinical practice. Planning for the ultrasound-guided procedure involves pre-selecting needle sampling positions. However, performing this procedure is subject to a number of factors, including MR-to-ultrasound registration error, intra-procedure patient movement and soft-tissue motion. When a fixed pre-procedure plan is carried out without intra-procedure adaptation, these factors lead to sampling errors which can cause false positives and false negatives. Reinforcement learning (RL) has been proposed for procedure planning in similar applications, because intelligent agents can be trained for both pre-procedure and intra-procedure planning. However, it is not clear whether RL is beneficial in addressing these intra-procedure errors. METHODS: In this work, we develop and compare imitation learning (IL), supervised by demonstrations of a predefined sampling strategy, and RL approaches, under varying degrees of intra-procedure motion and registration error, to represent sources of targeting error likely to occur in an intra-operative procedure. RESULTS: Based on results using imaging data from 567 PCa patients, we demonstrate the efficacy and value of adopting RL algorithms to provide intelligent intra-procedure action suggestions, compared to IL-based planning supervised by commonly adopted policies. CONCLUSIONS: The improvement in biopsy sampling performance from intra-procedure planning was not observed in experiments with pre-procedure planning alone. These findings suggest a strong role for RL in future prospective studies which adopt intra-procedure planning. Our open-source code implementation is available here.
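For contrast, a minimal behaviour-cloning (IL) baseline of the kind compared against RL here can be sketched as a policy supervised with demonstrations of a predefined sampling strategy; observation and action dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

# policy maps an observation (e.g. registered lesion geometry) to one of
# 10 discrete needle-position choices; all sizes are toy assumptions
policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

obs = torch.randn(256, 32)                    # observed procedure states
demo_actions = torch.randint(0, 10, (256,))   # demonstrated expert choices
loss = ce(policy(obs), demo_actions)          # imitate the demonstrated policy
opt.zero_grad(); loss.backward(); opt.step()
```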


Subjects
Image-Guided Biopsy , Prostatic Neoplasms , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Prostatic Neoplasms/surgery , Image-Guided Biopsy/methods , Magnetic Resonance Imaging/methods , Prostate/diagnostic imaging , Prostate/pathology , Prostate/surgery , Ultrasonography, Interventional/methods , Machine Learning
4.
Med Image Anal ; 91: 103030, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37995627

ABSTRACT

One of the distinct characteristics of radiologists reading multiparametric prostate MR scans, using reporting systems such as PI-RADS v2.1, is that they score individual MR modalities, including T2-weighted, diffusion-weighted, and dynamic contrast-enhanced images, and then combine these image-modality-specific scores using standardised decision rules to predict the likelihood of clinically significant cancer. This work aims to demonstrate that low-dimensional parametric models can model such decision rules in the proposed Combiner networks, without compromising the accuracy of predicting radiologic labels. First, we demonstrate that either a linear mixture model or a nonlinear stacking model is sufficient to model PI-RADS decision rules for localising prostate cancer. Second, parameters of these combining models are proposed as hyperparameters, weighing independent representations of individual image modalities in the Combiner network training, as opposed to end-to-end modality ensemble. A HyperCombiner network is developed to train a single image segmentation network that can be conditioned on these hyperparameters during inference for much-improved efficiency. Experimental results based on 751 cases from 651 patients compare the proposed rule-modelling approaches with other commonly adopted end-to-end networks, in this downstream application of automating radiologist labelling on multiparametric MR. By acquiring and interpreting the modality-combining rules, specifically the linear weights or odds ratios associated with individual image modalities, three clinical applications are quantitatively presented and contextualised in the prostate cancer segmentation application: modality availability assessment, importance quantification and rule discovery.
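A linear mixture Combiner of the kind described can be sketched as a convex weighting of per-modality probability maps, where the learned weights play the role of the decision-rule parameters; the modality order and tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class LinearCombiner(nn.Module):
    def __init__(self, n_modalities=3):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_modalities))

    def forward(self, per_modality_probs):      # (B, M, H, W), values in [0, 1]
        w = torch.softmax(self.logits, dim=0)   # convex mixture weights
        return torch.einsum("m,bmhw->bhw", w, per_modality_probs)

combiner = LinearCombiner()
probs = torch.rand(2, 3, 128, 128)  # e.g. T2w, DWI, DCE predictions (assumed order)
print(combiner(probs).shape)        # torch.Size([2, 128, 128])
```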


Subjects
Prostatic Neoplasms , Radiology , Male , Humans , Prostatic Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Prostate , Multimodal Imaging
5.
IEEE Trans Biomed Eng ; PP, 2023 Oct 19.
Article in English | MEDLINE | ID: mdl-37856260

ABSTRACT

OBJECTIVE: Reconstructing freehand ultrasound in 3D without any external tracker has been a long-standing challenge in ultrasound-assisted procedures. We aim to define new ways of parameterising long-term dependencies and to evaluate their performance. METHODS: First, long-term dependency is encoded by transformation positions within a frame sequence. This is achieved by combining a sequence model with a multi-transformation prediction. Second, two dependency factors, anatomical image content and scanning protocol, are proposed as contributors to accurate reconstruction. Each factor is quantified experimentally by reducing the respective training variance. RESULTS: 1) Adding long-term dependency up to 400 frames at 20 frames per second (fps) indeed improved reconstruction, with up to 82.4% lower accumulated error compared with the baseline performance. The improvement was found to be dependent on sequence length, transformation interval and scanning protocol and, unexpectedly, not on the use of recurrent networks with long short-term memory modules; 2) Decreasing either anatomical or protocol variance in training led to poorer reconstruction accuracy. Interestingly, greater performance was gained from representative protocol patterns than from representative anatomical features. CONCLUSION: The proposed algorithm uses hyperparameter tuning to effectively utilise long-term dependency. The proposed dependency factors are of practical significance in collecting diverse training data, regulating scanning protocols and developing efficient networks. SIGNIFICANCE: The proposed methodology, with publicly available volunteer data and code, parameterises long-term dependency as an experimentally validated source of performance improvement, which could potentially lead to better model development and practical optimisation of the reconstruction application.
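The transformation-chaining step can be illustrated as below: each frame pose is the running product of per-interval homogeneous transforms, so interval errors accumulate along the sweep, which is why long-term dependency matters. The 4-DoF pose parameterisation and the values are toy assumptions, not the paper's parameterisation.

```python
import numpy as np

def to_matrix(tx, ty, tz, rz):
    """Toy 4-DoF pose (translation + rotation about z) -> 4x4 homogeneous matrix."""
    c, s = np.cos(rz), np.sin(rz)
    m = np.eye(4)
    m[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    m[:3, 3] = [tx, ty, tz]
    return m

# stand-in for per-interval network predictions over a 400-frame sweep
predicted_intervals = [(0.1, 0.0, 1.2, 0.02)] * 399
poses = [np.eye(4)]                  # frame 0 defines the world frame
for params in predicted_intervals:   # chain frame-to-frame transforms
    poses.append(poses[-1] @ to_matrix(*params))
print(poses[-1][:3, 3])              # accumulated probe translation
```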

6.
Med Image Anal ; 90: 102935, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37716198

ABSTRACT

The property that makes few-shot learning desirable in medical image analysis is the efficient use of the support image data, which are labelled to classify or segment new classes, a task that otherwise requires substantially more training images and expert annotations. This work describes a fully 3D prototypical few-shot segmentation algorithm, such that the trained networks can be effectively adapted to clinically interesting structures that are absent in training, using only a few labelled images from a different institute. First, to compensate for the widely recognised spatial variability between institutions, in episodic adaptation of novel classes, a novel spatial registration mechanism is integrated into prototypical learning, consisting of a segmentation head and a spatial alignment module. Second, to assist the training with observed imperfect alignment, a support mask conditioning module is proposed to further utilise the annotation available from the support images. Extensive experiments are presented in an application of segmenting eight anatomical structures important for interventional planning, using a dataset of 589 pelvic T2-weighted MR images acquired at seven institutes. The results demonstrate the efficacy of each of the 3D formulation, the spatial registration, and the support mask conditioning, all of which made positive contributions independently or collectively. Compared with the previously proposed 2D alternatives, the few-shot segmentation performance was improved with statistical significance, regardless of whether the support data come from the same or a different institute.
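The prototypical step at the core of such an algorithm can be sketched as masked average pooling of support features followed by cosine-similarity scoring of query voxels; the 3D tensor shapes and temperature are assumptions, and the registration and conditioning modules are omitted.

```python
import torch
import torch.nn.functional as F

def prototype_predict(support_feat, support_mask, query_feat, tau=20.0):
    # support_feat/query_feat: (B, C, D, H, W); support_mask: (B, 1, D, H, W)
    # masked average pooling: one prototype vector per support volume
    proto = (support_feat * support_mask).sum(dim=(2, 3, 4)) / (
        support_mask.sum(dim=(2, 3, 4)) + 1e-8)               # (B, C)
    proto_map = proto[:, :, None, None, None].expand_as(query_feat)
    sim = F.cosine_similarity(query_feat, proto_map, dim=1)  # (B, D, H, W)
    return torch.sigmoid(tau * sim)  # foreground probability per voxel

fg = prototype_predict(torch.randn(1, 32, 16, 64, 64),
                       torch.rand(1, 1, 16, 64, 64).round(),
                       torch.randn(1, 32, 16, 64, 64))
```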

7.
Med Phys ; 50(9): 5489-5504, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36938883

ABSTRACT

BACKGROUND: Targeted prostate biopsy guided by multiparametric magnetic resonance imaging (mpMRI) detects more clinically significant lesions than conventional systematic biopsy. Lesion segmentation is required for planning MRI-targeted biopsies. The requirement for integrating image features available in T2-weighted and diffusion-weighted images poses a challenge in prostate lesion segmentation from mpMRI. PURPOSE: A flexible and efficient multistream fusion encoder is proposed in this work to facilitate the multiscale fusion of features from multiple imaging streams. A patch-based loss function is introduced to improve the accuracy in segmenting small lesions. METHODS: The proposed multistream encoder fuses features extracted in the three imaging streams at each layer of the network, thereby allowing improved feature maps to propagate downstream and benefit segmentation performance. The fusion is achieved through a spatial attention map generated by optimally weighting the contribution of the convolution outputs from each stream. This design provides flexibility for the network to highlight image modalities according to their relative influence on the segmentation performance. The encoder also performs multiscale integration by highlighting the input feature maps (low-level features) with the spatial attention maps generated from convolution outputs (high-level features). The Dice similarity coefficient (DSC), serving as a cost function, is less sensitive to incorrect segmentation for small lesions. We address this issue by introducing a patch-based loss function that provides an average of the DSCs obtained from local image patches. This local average DSC is equally sensitive to large and small lesions, as the patch-based DSCs associated with small and large lesions have equal weights in this average DSC. RESULTS: The framework was evaluated in 931 sets of images acquired in several clinical studies at two centers in Hong Kong and the United Kingdom. In particular, the training, validation, and test sets contain 615, 144, and 172 sets of images, respectively. The proposed framework outperformed single-stream networks and three recently proposed multistream networks, attaining F1 scores of 82.2% and 87.6% at the lesion and patient levels, respectively. The average inference time for an axial image was 11.8 ms. CONCLUSION: The accuracy and efficiency afforded by the proposed framework would accelerate the MRI interpretation workflow of MRI-targeted biopsy and focal therapies.
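A patch-based Dice loss of the kind described can be sketched as below: the volume-level DSC is replaced by an average of patch-level DSCs, so small lesions carry the same weight as large ones. The patch size and the choice to skip patches without ground-truth foreground are assumptions.

```python
import torch

def patch_dice_loss(pred, target, patch=32, eps=1e-6):
    # pred, target: (B, 1, H, W), pred in [0, 1]; H and W divisible by `patch`
    p = pred.unfold(2, patch, patch).unfold(3, patch, patch)
    t = target.unfold(2, patch, patch).unfold(3, patch, patch)
    inter = (p * t).sum(dim=(-1, -2))
    sums = p.sum(dim=(-1, -2)) + t.sum(dim=(-1, -2))
    dice = (2 * inter + eps) / (sums + eps)   # one Dice per local patch
    has_fg = t.sum(dim=(-1, -2)) > 0          # average only over patches with
    return 1 - dice[has_fg].mean()            # foreground (assumes >= 1 exists)

loss = patch_dice_loss(torch.rand(2, 1, 128, 128),
                       (torch.rand(2, 1, 128, 128) > 0.7).float())
```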


Subjects
Prostatic Neoplasms , Male , Humans , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Magnetic Resonance Imaging/methods , Prostate/pathology , Algorithms , Biopsy , Image Processing, Computer-Assisted/methods
8.
Int J Comput Assist Radiol Surg ; 18(8): 1437-1449, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36790674

ABSTRACT

PURPOSE: Minimally invasive treatments for renal carcinoma offer a low rate of complications and quick recovery. One drawback of the use of computed tomography (CT) for needle guidance is the use of iodinated contrast agents, which require an increased X-ray dose and can potentially cause adverse reactions. The purpose of this work is to generalise the problem of synthetic contrast enhancement to allow the generation of multiple phases on non-contrast CT data from a real-world, clinical dataset without training multiple convolutional neural networks. METHODS: A framework for switching between contrast phases by conditioning the network on the phase information is proposed and compared with separately trained networks. We then examine how the degree of supervision affects the generated contrast by evaluating three established architectures: U-Net (fully supervised), Pix2Pix (adversarial with supervision), and CycleGAN (fully adversarial). RESULTS: We demonstrate that there is no performance loss when testing the proposed method against separately trained networks. Of the training paradigms investigated, the fully adversarial CycleGAN performed the worst, while the fully supervised U-Net generated more realistic voxel intensities and performed better than Pix2Pix in generating contrast images for use in a downstream segmentation task. Lastly, two models are shown to generalise to intra-procedural data not seen during the training process, also enhancing features such as needles and ice balls relevant to interventional radiological procedures. CONCLUSION: The proposed contrast switching framework is a feasible option for generating multiple contrast phases without the overhead of training multiple neural networks, while also being robust towards unseen data and enhancing contrast in features relevant to clinical practice.
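The phase-switching idea can be sketched as a single generator that receives the target contrast phase as extra conditioning channels, instead of training one network per phase; the embedding-based encoding and the toy stand-in for the generator are assumptions.

```python
import torch
import torch.nn as nn

class PhaseConditionedNet(nn.Module):
    def __init__(self, n_phases=3):
        super().__init__()
        self.embed = nn.Embedding(n_phases, 8)   # learned code per phase
        self.net = nn.Sequential(                # stand-in for a U-Net generator
            nn.Conv2d(1 + 8, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, ct, phase):                # ct: (B,1,H,W); phase: (B,)
        e = self.embed(phase)[:, :, None, None]  # (B,8,1,1)
        e = e.expand(-1, -1, ct.shape[2], ct.shape[3])
        return self.net(torch.cat([ct, e], dim=1))

net = PhaseConditionedNet()
fake = net(torch.randn(2, 1, 64, 64), torch.tensor([0, 2]))  # two phases, one net
```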


Subjects
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Radiography , Cryotherapy
9.
IEEE Trans Med Imaging ; 42(3): 823-833, 2023 03.
Article in English | MEDLINE | ID: mdl-36322502

ABSTRACT

We present a meta-learning framework for interactive medical image registration. Our proposed framework comprises three components: a learning-based medical image registration algorithm, a form of user interaction that refines registration at inference, and a meta-learning protocol that learns a rapidly adaptable network initialization. This paper describes a specific algorithm that implements the registration, interaction and meta-learning protocol for our exemplar clinical application: registration of magnetic resonance (MR) imaging to interactively acquired, sparsely-sampled transrectal ultrasound (TRUS) images. Our approach obtains comparable registration error (4.26 mm) to the best-performing non-interactive learning-based 3D-to-3D method (3.97 mm) while requiring only a fraction of the data and running in real time during acquisition. Applying sparsely sampled data to non-interactive methods yields higher registration errors (6.26 mm), demonstrating the effectiveness of interactive MR-TRUS registration, which may be applied intraoperatively given the real-time nature of the adaptation process.
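One plausible reading of the meta-learning protocol, sketched here as a Reptile-style update under stated assumptions: adapt a copy of the weights on one interaction episode, then move the shared initialization toward the adapted weights, yielding a network that fine-tunes rapidly at inference. The loss function, step counts and learning rates are illustrative.

```python
import copy
import torch

def reptile_step(model, episode_loss_fn, inner_steps=5, inner_lr=1e-3,
                 meta_lr=0.1):
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):           # inner loop: fit one episode
        loss = episode_loss_fn(adapted)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                  # meta-update: move the initialization
        for p, q in zip(model.parameters(), adapted.parameters()):
            p += meta_lr * (q - p)

model = torch.nn.Linear(4, 1)              # stand-in for a registration network
data, target = torch.randn(8, 4), torch.randn(8, 1)
reptile_step(model, lambda m: ((m(data) - target) ** 2).mean())
```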


Subjects
Imaging, Three-Dimensional , Prostate , Male , Humans , Prostate/diagnostic imaging , Imaging, Three-Dimensional/methods , Ultrasonography/methods , Algorithms , Magnetic Resonance Imaging
10.
Med Image Anal ; 82: 102620, 2022 11.
Article in English | MEDLINE | ID: mdl-36148705

ABSTRACT

Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique and a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve the segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0±0.03 and Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0±0.03; HD95: 3.7 mm and Dice: 82.0±0.03; HD95: 7.1 mm). We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.
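The knowledge-distillation component can be sketched as a finetuning loss that supervises the student on the new institution's labels while penalising divergence from a frozen teacher (the originally trained model), limiting the performance drop on previously learned data; the two-class logits, temperature and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def kd_finetune_loss(student_logits, teacher_logits, target, alpha=0.5, T=2.0):
    # student/teacher logits: (B, 2, H, W); target: (B, H, W) class indices
    seg = F.cross_entropy(student_logits, target)        # fit the new labels
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T         # stay near the teacher
    return (1 - alpha) * seg + alpha * kd

s, t = torch.randn(2, 2, 64, 64), torch.randn(2, 2, 64, 64)
y = torch.randint(0, 2, (2, 64, 64))
loss = kd_finetune_loss(s, t.detach(), y)  # teacher is frozen (no gradient)
```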


Subjects
Neural Networks, Computer , Prostate , Humans , Male , Prostate/diagnostic imaging , Ultrasonography , Magnetic Resonance Imaging/methods , Pelvis
11.
IEEE Trans Med Imaging ; 41(11): 3421-3431, 2022 11.
Article in English | MEDLINE | ID: mdl-35788452

ABSTRACT

In this work, we consider the task of pairwise cross-modality image registration, which may benefit from exploiting additional images available only at training time from an additional modality that is different to those being registered. As an example, we focus on aligning intra-subject multiparametric Magnetic Resonance (mpMR) images, between T2-weighted (T2w) scans and diffusion-weighted scans with a high b-value (DWI-high-b). For the application of localising tumours in mpMR images, diffusion scans with zero b-value (DWI-b0) are considered easier to register to T2w due to the availability of corresponding features. We propose a learning-from-privileged-modality algorithm, using the training-only imaging modality DWI-b0, to support the challenging multi-modality registration problem. We present experimental results based on 369 sets of 3D multiparametric MRI images from 356 prostate cancer patients and report, with statistical significance, a lowered median target registration error of 4.34 mm when registering the holdout DWI-high-b and T2w image pairs, compared with 7.96 mm before registration. Results also show that the proposed learning-based registration networks enabled efficient registration with comparable or better accuracy, compared with a classical iterative algorithm and other tested learning-based methods with/without the additional modality. These compared algorithms also failed to produce any significantly improved alignment between DWI-high-b and T2w in this challenging application.


Subjects
Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Diffusion Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Algorithms
12.
Med Image Anal ; 78: 102427, 2022 05.
Article in English | MEDLINE | ID: mdl-35344824

ABSTRACT

In this paper, we consider image quality assessment (IQA) as a measure of how amenable images are to a given downstream task, i.e., their task amenability. When the task is performed using machine learning algorithms, such as a neural-network-based task predictor for image classification or segmentation, the performance of the task predictor provides an objective estimate of task amenability. In this work, we use an IQA controller to predict the task amenability which, itself being parameterised by neural networks, can be trained simultaneously with the task predictor. We further develop a meta-reinforcement learning framework to improve the adaptability of both IQA controllers and task predictors, such that they can be fine-tuned efficiently on new datasets or meta-tasks. We demonstrate the efficacy of the proposed task-specific, adaptable IQA approach using two clinical applications: ultrasound-guided prostate intervention and pneumonia detection on X-ray images.


Subjects
Machine Learning , Neural Networks, Computer , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Male , Ultrasonography
13.
IEEE Trans Med Imaging ; 41(6): 1311-1319, 2022 06.
Article in English | MEDLINE | ID: mdl-34962866

ABSTRACT

Ultrasound imaging is a commonly used technology for visualising patient anatomy in real-time during diagnostic and therapeutic procedures. High operator dependency and low reproducibility make ultrasound imaging and interpretation challenging, with a steep learning curve. Automatic image classification using deep learning has the potential to overcome some of these challenges by supporting ultrasound training in novices, as well as aiding ultrasound image interpretation in patients with complex pathology for more experienced practitioners. However, the use of deep learning methods requires a large amount of data in order to provide accurate results. Labelling large ultrasound datasets is a challenging task because labels are retrospectively assigned to 2D images, without the 3D spatial context available in vivo or that would be inferred while visually tracking structures between frames during the procedure. In this work, we propose a multi-modal convolutional neural network (CNN) architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure. We use a CNN composed of two branches, one for voice data and another for image data, which are joined to predict image labels from the spoken names of anatomical landmarks. The network was trained using recorded verbal comments from expert operators. Our results show a prediction accuracy of 76% at the image level on a dataset with 5 different labels. We conclude that the addition of spoken commentaries can increase the performance of ultrasound image classification, and eliminate the burden of manually labelling large EUS datasets necessary for deep learning applications.
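A two-branch fusion network of the kind described can be sketched as below, with one CNN branch for the EUS image and one for the spoken comment; representing the voice input as a precomputed MFCC matrix, and all layer sizes, are assumptions.

```python
import torch
import torch.nn as nn

class VoiceImageNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.img = nn.Sequential(   # image branch
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 16, 64))
        self.voice = nn.Sequential( # voice branch: MFCCs as a 1-channel "image"
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 16, 64))
        self.head = nn.Linear(128, n_classes)  # joint landmark classifier

    def forward(self, image, mfcc):
        return self.head(torch.cat([self.img(image), self.voice(mfcc)], dim=1))

net = VoiceImageNet()
logits = net(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 40, 200))
```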


Subjects
Neural Networks, Computer , Humans , Reproducibility of Results , Retrospective Studies , Ultrasonography
14.
Med Image Anal ; 74: 102231, 2021 12.
Article in English | MEDLINE | ID: mdl-34583240

ABSTRACT

We present Free Point Transformer (FPT) - a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods. FPT is designed to accept unordered and unstructured point-sets with a variable number of points and uses a "model-free" approach without heuristic constraints. Training FPT is flexible and involves minimizing an intuitive unsupervised loss function, but supervised, semi-supervised, and partially- or weakly-supervised training are also supported. This flexibility makes FPT amenable to multimodal image registration problems where the ground-truth deformations are difficult or impossible to measure. In this paper, we demonstrate the application of FPT to non-rigid registration of prostate magnetic resonance (MR) imaging and sparsely-sampled transrectal ultrasound (TRUS) images. The registration errors were 4.71 mm and 4.81 mm for complete TRUS imaging and sparsely-sampled TRUS imaging, respectively. The results indicate superior accuracy to the alternative rigid and non-rigid registration algorithms tested and substantially lower computation time. The rapid inference possible with FPT makes it particularly suitable for applications where real-time registration is beneficial.
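The FPT design can be sketched, under stated assumptions, as a PointNet-style global feature extraction module followed by a per-point displacement MLP; note that no neighbourhood or vicinity constraint appears anywhere in the computation, matching the "model-free" description. All sizes are illustrative, not the published architecture.

```python
import torch
import torch.nn as nn

class GlobalEncoder(nn.Module):
    """Order-invariant encoding of an unordered point-set."""
    def __init__(self, out=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, out))

    def forward(self, pts):                      # (B, N, 3)
        return self.mlp(pts).max(dim=1).values   # max-pool over points

class PointDisplacer(nn.Module):
    def __init__(self, feat=256):
        super().__init__()
        self.enc = GlobalEncoder()
        self.disp = nn.Sequential(nn.Linear(3 + feat, 128), nn.ReLU(),
                                  nn.Linear(128, 3))

    def forward(self, moving, fixed):            # (B, N, 3), (B, M, 3)
        g = torch.cat([self.enc(moving), self.enc(fixed)], dim=1)   # (B, 256)
        g = g[:, None, :].expand(-1, moving.shape[1], -1)
        return moving + self.disp(torch.cat([moving, g], dim=2))

net = PointDisplacer()
warped = net(torch.randn(1, 500, 3), torch.randn(1, 800, 3))  # (1, 500, 3)
```

Because the point-sets are processed independently and pooled, the two inputs may have different, variable numbers of points, as the abstract requires.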


Subjects
Prostatic Neoplasms , Algorithms , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Prostate/diagnostic imaging , Ultrasonography
15.
Med Image Anal ; 58: 101558, 2019 12.
Article in English | MEDLINE | ID: mdl-31526965

ABSTRACT

Convolutional neural networks (CNNs) have recently led to significant advances in automatic segmentation of anatomical structures in medical images, and a wide variety of network architectures are now available to the research community. For applications such as segmentation of the prostate in magnetic resonance images (MRI), the results of the PROMISE12 online algorithm evaluation platform have demonstrated differences between the best-performing segmentation algorithms in terms of numerical accuracy using standard metrics such as the Dice score and boundary distance. These small differences in the segmented regions/boundaries output by different algorithms may have an insubstantial impact on the results of downstream image analysis tasks, such as estimating organ volume and multimodal image registration, which inform clinical decisions. This impact has not been previously investigated. In this work, we quantified the accuracy of six different CNNs in segmenting the prostate in 3D patient T2-weighted MRI scans and compared the accuracy of organ volume estimation and MRI-ultrasound (US) registration errors using the prostate segmentations produced by the different networks. Networks were trained and tested using a set of 232 patient MRIs with labels provided by experienced clinicians. A statistically significant difference was found among the Dice scores and boundary distances produced by these networks in a non-parametric analysis of variance (p < 0.001 and p < 0.001, respectively), and subsequent multiple-comparison tests revealed that the statistically significant differences in segmentation errors were attributable to at least one tested network. Gland volume errors (GVEs) and target registration errors (TREs) were then estimated using the CNN-generated segmentations. Interestingly, no statistically significant difference was found in either GVEs or TREs among the different networks (p = 0.34 and p = 0.26, respectively). This result provides a real-world example in which networks with different segmentation performance may provide indistinguishably adequate registration accuracies to assist prostate cancer imaging applications. We conclude by recommending that, when selecting between different network architectures, the differences in the accuracy of downstream image analysis tasks that use the outputs of automatic segmentation methods, such as CNNs, within a clinical pipeline should be taken into account, in addition to the reported segmentation accuracy.
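The non-parametric analysis of variance reported here corresponds to a Kruskal-Wallis test across networks; a minimal sketch with placeholder (not study) data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# per-case Dice scores for six networks (random placeholders, not study data)
dice_per_network = [rng.uniform(0.80, 0.95, size=50) for _ in range(6)]
h, p = stats.kruskal(*dice_per_network)
print(f"H = {h:.2f}, p = {p:.3g}")  # p < 0.05 -> at least one network differs
```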


Subjects
Magnetic Resonance Imaging , Neural Networks, Computer , Pattern Recognition, Automated/methods , Prostatic Neoplasms/diagnostic imaging , Ultrasonography , Humans , Male , Tumor Burden
16.
Med Image Anal ; 49: 1-13, 2018 10.
Article in English | MEDLINE | ID: mdl-30007253

ABSTRACT

One of the fundamental challenges in supervised learning for multimodal image registration is the lack of ground-truth for voxel-level spatial correspondence. This work describes a method to infer voxel-level transformation from higher-level correspondence information contained in anatomical labels. We argue that such labels are more reliable and practical to obtain for reference sets of image pairs than voxel-level correspondence. Typical anatomical labels of interest may include solid organs, vessels, ducts, structure boundaries and other subject-specific ad hoc landmarks. The proposed end-to-end convolutional neural network approach aims to predict displacement fields to align multiple labelled corresponding structures for individual image pairs during training, while only unlabelled image pairs are used as the network input for inference. We highlight the versatility of the proposed strategy, for training, utilising diverse types of anatomical labels, which need not be identifiable across all training image pairs. At inference, the resulting 3D deformable image registration algorithm runs in real-time and is fully automated, without requiring any anatomical labels or initialisation. Several network architecture variants are compared for registering T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients. A median target registration error of 3.6 mm on landmark centroids and a median Dice of 0.87 on prostate glands are achieved from cross-validation experiments, in which 108 pairs of multimodal images from 76 patients were tested with high-quality anatomical labels.
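The label-driven training signal can be sketched as below: the predicted dense displacement field (DDF) warps the moving image's anatomical label and a soft Dice loss compares it with the fixed label, so labels supervise training while inference needs only images. The 2D setting and the normalised displacement convention are assumptions for brevity.

```python
import torch
import torch.nn.functional as F

def warp(label, ddf):
    # label: (B,1,H,W); ddf: (B,2,H,W), displacements in grid_sample's
    # normalised [-1,1] coordinates, channel order (x, y)
    B, _, H, W = label.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack([xs, ys], dim=-1)[None].expand(B, -1, -1, -1)
    grid = base + ddf.permute(0, 2, 3, 1)
    return F.grid_sample(label, grid, align_corners=True)

def soft_dice_loss(a, b, eps=1e-6):
    inter = (a * b).sum()
    return 1 - (2 * inter + eps) / (a.sum() + b.sum() + eps)

moving_label = (torch.rand(1, 1, 64, 64) > 0.5).float()
fixed_label = (torch.rand(1, 1, 64, 64) > 0.5).float()
ddf = torch.zeros(1, 2, 64, 64, requires_grad=True)  # network output stand-in
loss = soft_dice_loss(warp(moving_label, ddf), fixed_label)
loss.backward()  # gradients flow to the DDF, hence to the network
```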


Subjects
Algorithms , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Neural Networks, Computer , Prostatic Neoplasms/diagnostic imaging , Ultrasonography , Anatomic Landmarks , Humans , Imaging, Three-Dimensional , Male , Prostate/anatomy & histology , Prostate/diagnostic imaging
17.
IEEE Trans Med Imaging ; 37(8): 1822-1834, 2018 08.
Article in English | MEDLINE | ID: mdl-29994628

ABSTRACT

Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.


Subjects
Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Abdominal/methods , Tomography, X-Ray Computed/methods , Algorithms , Digestive System/diagnostic imaging , Humans , Kidney/diagnostic imaging , Spleen/diagnostic imaging
18.
Int J Comput Assist Radiol Surg ; 13(6): 875-883, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29663274

ABSTRACT

PURPOSE: Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific, EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. METHODS: A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes, using manually segmented CT images and simulated or retrospective clinical EUS landmarks. RESULTS: The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach. CONCLUSIONS: The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
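The Monte Carlo estimation can be sketched as below: candidate landmarks are perturbed with a localisation-error model, rigidly registered back (SVD/Kabsch), and the induced error at a target is summarised by its 90th percentile. The landmark geometry and 2 mm error model are illustrative assumptions.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    sc, dc = src.mean(0), dst.mean(0)
    u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:            # avoid reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    return r, dc - r @ sc

rng = np.random.default_rng(0)
landmarks = rng.uniform(-40, 40, (5, 3))  # candidate EUS-visible landmarks (mm)
target = np.array([10.0, -5.0, 20.0])     # clinical target position (mm)
tres = []
for _ in range(10_000):
    noisy = landmarks + rng.normal(0, 2.0, landmarks.shape)  # 2 mm SD error
    r, t = rigid_fit(noisy, landmarks)    # registration from noisy picks
    tres.append(np.linalg.norm((r @ target + t) - target))   # TRE at target
print(f"90th percentile TRE: {np.percentile(tres, 90):.2f} mm")
```

Repeating this for each candidate plane and ranking by the 90th percentile gives the robust plane selection the abstract describes.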


Subjects
Endosonography/methods , Imaging, Three-Dimensional/methods , Pancreas/diagnostic imaging , Pancreatectomy/methods , Pancreatic Neoplasms/surgery , Surgery, Computer-Assisted/methods , Humans , Pancreas/surgery , Pancreatic Neoplasms/diagnostic imaging , Retrospective Studies , Tomography, X-Ray Computed/methods
19.
Comput Methods Programs Biomed ; 158: 113-122, 2018 May.
Article in English | MEDLINE | ID: mdl-29544777

ABSTRACT

BACKGROUND AND OBJECTIVES: Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. METHODS: The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. RESULTS: We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. CONCLUSIONS: The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications.


Subjects
Diagnostic Imaging/methods , Machine Learning , Abdomen/diagnostic imaging , Brain/diagnostic imaging , Computer Simulation , Databases, Factual , Diagnostic Imaging/instrumentation , Humans , Image Processing, Computer-Assisted/instrumentation , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Neural Networks, Computer , Ultrasonography
20.
Med Phys ; 45(4): 1408-1414, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29443386

ABSTRACT

PURPOSE: Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. METHODS: A set of nine measures is presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using a MR/US-guided transperineal approach. RESULTS: Using the SmartTarget fusion system, the MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), and the overall system instrument targeting error 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. CONCLUSIONS: The application of a comprehensive, unbiased validation assessment for MR/US-guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behavior of these systems.


Subjects
Biopsy, Needle/instrumentation , Image-Guided Biopsy/instrumentation , Magnetic Resonance Imaging , Prostate/diagnostic imaging , Prostate/pathology , Research Design , Humans , Image Processing, Computer-Assisted , Male , Ultrasonography