1.
J Big Data ; 11(1): 104, 2024.
Article in English | MEDLINE | ID: mdl-39109339

ABSTRACT

The morphology and distribution of airway tree abnormalities enable diagnosis and disease characterisation across a variety of chronic respiratory conditions. In this regard, airway segmentation plays a critical role in delineating the entire airway tree, enabling estimation of disease extent and severity. However, segmenting a complete airway tree is challenging because the intensity, scale/size and shape of airway segments and their walls change across generations. Existing classical techniques provide either an undersegmented or an oversegmented airway tree, and manual intervention is required for optimal airway tree segmentation. Recent deep learning methods provide a fully automatic way of segmenting airway trees; however, they usually require high GPU memory usage and are difficult to implement in low computational resource environments. Therefore, in this study, we propose a data-centric deep learning technique with big interpolated data, Interpolation-Split, to boost the segmentation performance of the airway tree. The proposed technique uses interpolation and image splitting to improve data usefulness and quality, and then applies an ensemble learning strategy to aggregate the segmented airway segments at different scales. In terms of average segmentation performance (dice similarity coefficient, DSC), our method (A) achieves 90.55%, 89.52%, and 85.80%; (B) outperforms the baseline models by 2.89%, 3.86%, and 3.87% on average; and (C) produces maximum segmentation performance gains of 14.11%, 9.28%, and 12.70% for individual cases when (1) nnU-Net with instance normalisation and leaky ReLU, (2) nnU-Net with batch normalisation and ReLU, and (3) a modified dilated U-Net are used, respectively. Our proposed method outperforms state-of-the-art airway segmentation approaches. Furthermore, it has low RAM and GPU memory requirements and is highly flexible, enabling it to be deployed with any 2D deep learning model.
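
To make the core idea concrete, below is a minimal, hypothetical Python sketch of the interpolation-then-split workflow described above: a 2D slice is upsampled by interpolation, segmented tile by tile at two split factors, and the tile predictions are fused as an ensemble before mapping back to the original grid. The segment_tile function, the zoom factor and the split factors are illustrative placeholders, not the authors' implementation, and image sides are assumed divisible by each split factor.

# Hypothetical sketch of the Interpolation-Split idea: upsample a 2D slice,
# split it into tiles, segment each tile, stitch, and ensemble across scales.
import numpy as np
from scipy.ndimage import zoom

def segment_tile(tile):
    # Placeholder for any 2D segmentation network; here a toy intensity threshold.
    return (tile > tile.mean()).astype(np.float32)

def interpolation_split_segment(image, zoom_factor=2.0, n_splits=(2, 4)):
    """Upsample, tile at several split factors, segment tiles, and average."""
    up = zoom(image, zoom_factor, order=1)               # bilinear interpolation
    fused = np.zeros_like(up, dtype=np.float32)
    for n in n_splits:                                   # e.g. 2x2 and 4x4 grids
        pred = np.zeros_like(up, dtype=np.float32)
        h, w = up.shape[0] // n, up.shape[1] // n        # assumes divisibility
        for i in range(n):
            for j in range(n):
                tile = up[i * h:(i + 1) * h, j * w:(j + 1) * w]
                pred[i * h:(i + 1) * h, j * w:(j + 1) * w] = segment_tile(tile)
        fused += pred
    fused /= len(n_splits)                               # ensemble across scales
    return zoom(fused, 1.0 / zoom_factor, order=1)       # back to original grid

if __name__ == "__main__":
    slice_2d = np.random.rand(128, 128).astype(np.float32)
    mask = interpolation_split_segment(slice_2d)
    print(mask.shape)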

2.
Phys Med Biol ; 69(11)2024 May 20.
Article in English | MEDLINE | ID: mdl-38697200

ABSTRACT

Minimally invasive ablation techniques for renal cancer are becoming more popular due to their low complication rates and rapid recovery periods. Despite excellent visualisation, one drawback of using computed tomography (CT) in these procedures is the requirement for iodine-based contrast agents, which are associated with adverse reactions and require a higher X-ray dose. The purpose of this work is to examine the use of time information to generate synthetic contrast-enhanced images at arbitrary points after contrast agent injection from non-contrast CT images acquired during renal cryoablation cases. To achieve this, we propose a new method of conditioning generative adversarial networks on normalised time stamps and demonstrate that the use of a HyperNetwork is feasible for this task, generating images of competitive quality compared to standard generative modelling techniques. We also show that reducing the receptive field can help tackle challenges in interventional CT data, offering significantly better image quality as well as better performance when generating images for a downstream segmentation task. Lastly, we show that all proposed models are robust enough to perform inference on unseen intra-procedural data, while also reducing needle artefacts and generalising contrast enhancement to other clinically relevant regions and features.
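
As an illustration only, the following sketch shows one way a HyperNetwork could condition image generation on a normalised time stamp: a small MLP maps the time stamp to the weights of a convolution applied to the non-contrast image. The HyperConvGenerator class, layer sizes and overall architecture are assumptions for illustration, and the adversarial training described in the abstract is omitted.

# Minimal, hypothetical sketch of time-conditioned generation with a
# HyperNetwork: an MLP maps a normalised time stamp to the weights of a
# convolution applied to the non-contrast image. Layer sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperConvGenerator(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, k=3, hidden=64):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        n_weights = out_ch * in_ch * k * k + out_ch           # conv kernel + bias
        self.hyper = nn.Sequential(                           # time -> conv weights
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, n_weights)
        )

    def forward(self, x, t):
        # x: (B, in_ch, H, W) non-contrast image; t: (B, 1) normalised time in [0, 1]
        out = []
        for xi, ti in zip(x, t):                              # per-sample weights
            params = self.hyper(ti.view(1, 1)).squeeze(0)
            w_end = self.out_ch * self.in_ch * self.k * self.k
            weight = params[:w_end].view(self.out_ch, self.in_ch, self.k, self.k)
            bias = params[w_end:]
            out.append(F.conv2d(xi.unsqueeze(0), weight, bias, padding=self.k // 2))
        return torch.cat(out, dim=0)                          # synthetic contrast phase

gen = HyperConvGenerator()
fake = gen(torch.randn(2, 1, 64, 64), torch.tensor([[0.2], [0.8]]))
print(fake.shape)  # torch.Size([2, 1, 64, 64])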


Subject(s)
Contrast Media , Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Humans , Image Processing, Computer-Assisted/methods , Time Factors , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/surgery
3.
Med Image Anal ; 95: 103181, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38640779

ABSTRACT

Supervised machine learning-based medical image computing applications necessitate expert label curation, while unlabelled image data might be relatively abundant. Active learning methods aim to prioritise a subset of the available image data for expert annotation, enabling label-efficient model training. We develop a controller neural network that measures the priority of images in a sequence of batches, as in batch-mode active learning, for multi-class segmentation tasks. The controller is optimised by rewarding positive task-specific performance gain, within a Markov decision process (MDP) environment that also optimises the task predictor; in this work, the task predictor is a segmentation network. A meta-reinforcement learning algorithm is proposed with multiple MDPs, such that the pre-trained controller can be adapted to a new MDP that contains data from different institutes and/or requires segmentation of different organs or structures within the abdomen. We present experimental results using multiple CT datasets from more than one thousand patients, with segmentation tasks of nine different abdominal organs, to demonstrate the efficacy of the learnt prioritisation controller function and its cross-institute and cross-organ adaptability. We show that the proposed adaptable prioritisation metric yields converging segmentation accuracy for a new kidney segmentation task, unseen in training, using approximately 40% to 60% of the labels otherwise required with other heuristic or random prioritisation metrics. For clinical datasets of limited size, the proposed adaptable prioritisation offers performance improvements of 22.6% and 10.2% in Dice score for kidney and liver vessel segmentation tasks, respectively, compared to random prioritisation and alternative active sampling strategies.
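
The sketch below illustrates, in simplified form, how a prioritisation controller could be trained with a policy-gradient (REINFORCE) update inside an MDP-like loop: the controller scores a pool of candidate images, a batch is sampled for annotation, and the reward is the resulting task-specific performance gain. The feature vectors and the validation_dice_gain function are stand-ins; the paper's meta-reinforcement learning across multiple MDPs is not reproduced here.

# Toy, hypothetical sketch of a prioritisation controller trained with
# policy gradients. The feature extractor and Dice evaluation are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
controller = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimiser = torch.optim.Adam(controller.parameters(), lr=1e-3)

def validation_dice_gain(selected_features):
    # Stand-in for: add the selected images to the labelled set, retrain the
    # segmentation network, and measure the change in validation Dice.
    return selected_features.mean()

for step in range(100):
    pool = torch.randn(64, 16)                     # features of unlabelled images
    scores = controller(pool).squeeze(-1)          # one priority score per image
    probs = torch.softmax(scores, dim=0)
    dist = torch.distributions.Categorical(probs)
    batch = dist.sample((8,))                      # pick 8 images (with replacement)
    reward = validation_dice_gain(pool[batch])     # task-specific performance gain
    loss = -(dist.log_prob(batch).sum() * reward.detach())  # REINFORCE update
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()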


Subject(s)
Algorithms , Humans , Tomography, X-Ray Computed , Neural Networks, Computer , Machine Learning , Markov Chains , Supervised Machine Learning , Radiography, Abdominal/methods
4.
Int J Comput Assist Radiol Surg ; 19(6): 1003-1012, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38451359

ABSTRACT

PURPOSE: Magnetic resonance (MR) imaging-targeted prostate cancer (PCa) biopsy enables precise sampling of MR-detected lesions, establishing its importance in recommended clinical practice. Planning for the ultrasound-guided procedure involves pre-selecting needle sampling positions. However, performing this procedure is subject to a number of factors, including MR-to-ultrasound registration error, intra-procedure patient movement and soft tissue motion. When a fixed pre-procedure plan is carried out without intra-procedure adaptation, these factors lead to sampling errors which could cause false positives and false negatives. Reinforcement learning (RL) has been proposed for procedure planning in similar applications, because intelligent agents can be trained for both pre-procedure and intra-procedure planning. However, it is not clear whether RL is beneficial for addressing these intra-procedure errors. METHODS: In this work, we develop and compare imitation learning (IL), supervised by demonstrations of a predefined sampling strategy, and RL approaches, under varying degrees of intra-procedure motion and registration error, to represent the sources of targeting error likely to occur in an intra-operative procedure. RESULTS: Based on results using imaging data from 567 PCa patients, we demonstrate the efficacy and value of adopting RL algorithms to provide intelligent intra-procedure action suggestions, compared to IL-based planning supervised by commonly adopted policies. CONCLUSIONS: The improvement in biopsy sampling performance seen with intra-procedure planning was not observed in experiments with pre-procedure planning alone. These findings suggest a strong role for RL in future prospective studies that adopt intra-procedure planning. Our open-source code implementation is publicly available.
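
The toy simulation below is not the authors' RL agent; it only illustrates why intra-procedure adaptation can matter. A lesion drifts between needle insertions, a fixed pre-procedure plan keeps its original target, and an adaptive policy re-targets using the latest noisy intra-procedure observation. All magnitudes are invented.

# Toy, hypothetical simulation of fixed versus adaptive targeting under
# registration error and intra-procedure motion. Numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

def run_case(adaptive, n_cores=4, motion_std=2.0, needle_radius=2.5):
    lesion = rng.uniform(-10, 10, size=2)              # true lesion centre (mm)
    planned = lesion + rng.normal(0, motion_std, 2)    # pre-procedure plan w/ error
    hits = 0
    for _ in range(n_cores):
        lesion += rng.normal(0, motion_std, 2)         # intra-procedure motion
        observed = lesion + rng.normal(0, 0.5, 2)      # noisy intra-procedure image
        target = observed if adaptive else planned     # adaptive vs fixed planning
        hits += np.linalg.norm(target - lesion) < needle_radius
    return hits / n_cores

fixed = np.mean([run_case(adaptive=False) for _ in range(2000)])
adapt = np.mean([run_case(adaptive=True) for _ in range(2000)])
print(f"hit rate, fixed plan: {fixed:.2f}; adaptive plan: {adapt:.2f}")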


Subject(s)
Image-Guided Biopsy , Prostatic Neoplasms , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Prostatic Neoplasms/surgery , Image-Guided Biopsy/methods , Magnetic Resonance Imaging/methods , Prostate/diagnostic imaging , Prostate/pathology , Prostate/surgery , Ultrasonography, Interventional/methods , Machine Learning
5.
Med Image Anal ; 94: 103125, 2024 May.
Article in English | MEDLINE | ID: mdl-38428272

ABSTRACT

In this paper, we study pseudo-labelling, which employs raw inferences on unlabelled data as pseudo-labels for self-training. We elucidate the empirical successes of pseudo-labelling by establishing a link between this technique and the Expectation-Maximisation algorithm. Through this, we realise that the original pseudo-labelling serves as an empirical estimation of its more comprehensive underlying formulation. Following this insight, we present a full generalisation of pseudo-labels under Bayes' theorem, termed Bayesian Pseudo Labels. Subsequently, we introduce a variational approach to generate these Bayesian Pseudo Labels, involving the learning of a threshold to automatically select high-quality pseudo-labels. In the remainder of the paper, we showcase the applications of pseudo-labelling and its generalised form, Bayesian Pseudo-Labelling, in the semi-supervised segmentation of medical images. Specifically, we focus on: (1) 3D binary segmentation of lung vessels from CT volumes; (2) 2D multi-class segmentation of brain tumours from MRI volumes; (3) 3D binary segmentation of whole brain tumours from MRI volumes; and (4) 3D binary segmentation of the prostate from MRI volumes. We further demonstrate that pseudo-labels can enhance the robustness of the learned representations. The code is released in the following GitHub repository: https://github.com/moucheng2017/EMSSL.
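
For orientation, the sketch below implements vanilla pseudo-labelling for a toy binary segmentation problem: confident raw inferences on unlabelled images are treated as targets alongside the supervised loss. The network, data and fixed threshold of 0.5 are placeholders; the paper's contribution is precisely to generalise this, learning the threshold variationally through Bayesian pseudo-labels.

# Minimal sketch of vanilla pseudo-labelling for binary segmentation.
# The threshold is fixed here; the paper learns it variationally.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))
optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
threshold = 0.5                                   # fixed confidence threshold

for step in range(10):
    labelled = torch.randn(2, 1, 32, 32)          # placeholder labelled images
    labels = (torch.rand(2, 1, 32, 32) > 0.5).float()
    unlabelled = torch.randn(2, 1, 32, 32)        # placeholder unlabelled images

    with torch.no_grad():                         # raw inference as pseudo-labels
        pseudo = (torch.sigmoid(net(unlabelled)) > threshold).float()

    loss = bce(net(labelled), labels) + 0.5 * bce(net(unlabelled), pseudo)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()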


Subject(s)
Brain Neoplasms , Motivation , Male , Humans , Bayes Theorem , Algorithms , Brain , Image Processing, Computer-Assisted
6.
Med Image Anal ; 93: 103098, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38320370

ABSTRACT

Characterising clinically relevant vascular features, such as vessel density and fractal dimension, can benefit biomarker discovery and disease diagnosis for both ophthalmic and systemic diseases. In this work, we explicitly encode vascular features into an end-to-end loss function for multi-class vessel segmentation, categorising pixels as artery, vein, uncertain, or background. This clinically relevant feature-optimised loss function (CF-Loss) regulates networks to segment accurate multi-class vessel maps that produce precise vascular features. Our experiments first verify that CF-Loss significantly improves both multi-class vessel segmentation and vascular feature estimation, with two standard segmentation networks, on three publicly available datasets. We reveal that pixel-based segmentation performance is not always positively correlated with the accuracy of vascular features, highlighting the importance of optimising vascular features directly via CF-Loss. Finally, we show that the improved vascular features from CF-Loss, used as biomarkers, can yield quantitative improvements in the prediction of ischaemic stroke, a real-world clinical downstream task. The code is available at https://github.com/rmaphoh/feature-loss.
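
The following is a hypothetical sketch of the general idea of a feature-optimised loss: in addition to a per-pixel cross-entropy term, it penalises the difference in a clinically relevant vascular feature (here, per-class vessel density) computed from the soft prediction and from the ground truth. The function names and the single density term are illustrative; CF-Loss as described covers further features such as fractal dimension.

# Hypothetical feature-optimised loss: pixel-wise cross-entropy plus a
# penalty on the per-class vessel density error. Names are illustrative.
import torch
import torch.nn.functional as F

def vessel_density(prob_maps):
    # prob_maps: (B, C, H, W) class probabilities; density = mean per class
    return prob_maps.mean(dim=(2, 3))

def cf_style_loss(logits, target, feature_weight=1.0):
    # logits: (B, C, H, W); target: (B, H, W) with classes
    # {artery, vein, uncertain, background}
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    target_onehot = F.one_hot(target, num_classes=logits.shape[1])
    target_onehot = target_onehot.permute(0, 3, 1, 2).float()
    feat = F.l1_loss(vessel_density(probs), vessel_density(target_onehot))
    return ce + feature_weight * feat

logits = torch.randn(2, 4, 64, 64, requires_grad=True)
target = torch.randint(0, 4, (2, 64, 64))
loss = cf_style_loss(logits, target)
loss.backward()
print(loss.item())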


Subject(s)
Brain Ischemia , Stroke , Humans , Algorithms , Image Processing, Computer-Assisted/methods , Fundus Oculi
7.
Med Image Anal ; 91: 103030, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37995627

ABSTRACT

One of the distinct characteristics of radiologists reading multiparametric prostate MR scans, using reporting systems such as PI-RADS v2.1, is that they score individual MR modalities, including T2-weighted, diffusion-weighted, and dynamic contrast-enhanced imaging, and then combine these image-modality-specific scores using standardised decision rules to predict the likelihood of clinically significant cancer. This work aims to demonstrate that it is feasible for low-dimensional parametric models to model such decision rules in the proposed Combiner networks, without compromising the accuracy of predicting radiologic labels. First, we demonstrate that either a linear mixture model or a nonlinear stacking model is sufficient to model PI-RADS decision rules for localising prostate cancer. Second, the parameters of these combining models are proposed as hyperparameters, weighting independent representations of the individual image modalities during Combiner network training, as opposed to an end-to-end modality ensemble. A HyperCombiner network is developed to train a single image segmentation network that can be conditioned on these hyperparameters during inference, for much-improved efficiency. Experimental results based on 751 cases from 651 patients compare the proposed rule-modelling approaches with other commonly adopted end-to-end networks, in this downstream application of automating radiologist labelling on multiparametric MR. By acquiring and interpreting the modality-combining rules, specifically the linear weights or odds ratios associated with individual image modalities, three clinical applications are quantitatively presented and contextualised in the prostate cancer segmentation application: modality availability assessment, importance quantification and rule discovery.
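
As a rough illustration of the two low-dimensional combining rules mentioned, the sketch below combines per-modality lesion probability maps with a linear mixture model and, alternatively, a logistic (stacking) model whose coefficients play the role of per-modality odds ratios. The weights, coefficients and bias are made-up values, not the fitted parameters reported in the paper.

# Hypothetical combining rules for modality-specific lesion probabilities.
import numpy as np

def linear_mixture(p_t2w, p_dwi, p_dce, weights=(0.3, 0.5, 0.2)):
    # Weighted average of per-voxel probabilities from each modality.
    w = np.asarray(weights) / np.sum(weights)
    return w[0] * p_t2w + w[1] * p_dwi + w[2] * p_dce

def logistic_stacking(p_t2w, p_dwi, p_dce, coeffs=(1.5, 2.0, 0.8), bias=-2.0):
    # Log-odds combination; coefficients act like per-modality odds ratios.
    z = bias + coeffs[0] * p_t2w + coeffs[1] * p_dwi + coeffs[2] * p_dce
    return 1.0 / (1.0 + np.exp(-z))

p_t2w, p_dwi, p_dce = (np.random.rand(128, 128) for _ in range(3))
print(linear_mixture(p_t2w, p_dwi, p_dce).mean(),
      logistic_stacking(p_t2w, p_dwi, p_dce).mean())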


Subject(s)
Prostatic Neoplasms , Radiology , Male , Humans , Prostatic Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Prostate , Multimodal Imaging