Results 1 - 20 of 69
1.
Phys Med Biol ; 69(11), 2024 May 20.
Article in English | MEDLINE | ID: mdl-38697200

ABSTRACT

Minimally invasive ablation techniques for renal cancer are becoming more popular due to their low complication rate and rapid recovery period. Despite excellent visualisation, one drawback of the use of computed tomography (CT) in these procedures is the requirement for iodine-based contrast agents, which are associated with adverse reactions and require a higher x-ray dose. The purpose of this work is to examine the use of time information to generate synthetic contrast enhanced images at arbitrary points after contrast agent injection from non-contrast CT images acquired during renal cryoablation cases. To achieve this, we propose a new method of conditioning generative adversarial networks with normalised time stamps and demonstrate that the use of a HyperNetwork is feasible for this task, generating images of competitive quality compared to standard generative modelling techniques. We also show that reducing the receptive field can help tackle challenges in interventional CT data, offering significantly better image quality as well as better performance when generating images for a downstream segmentation task. Lastly, we show that all proposed models are robust enough to perform inference on unseen intra-procedural data, while also reducing needle artefacts and generalising contrast enhancement to other clinically relevant regions and features.
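The HyperNetwork conditioning described above can be sketched in a few lines: a small network maps the normalised time stamp to the weights of a generator layer, so a single model can synthesise contrast at arbitrary post-injection times. This is an illustrative numpy sketch with made-up layer sizes and random weights, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only; the real generator is a full CNN.
IN_CH, OUT_CH, HIDDEN = 4, 4, 16

# Hypernetwork parameters (fixed random for the sketch).
W1 = rng.standard_normal((HIDDEN, 1)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((OUT_CH * IN_CH, HIDDEN)) * 0.1
b2 = np.zeros(OUT_CH * IN_CH)

def hypernet_weights(t):
    """Map a normalised time stamp t in [0, 1] to generator-layer weights."""
    h = np.tanh(W1 @ np.array([t]) + b1)
    return (W2 @ h + b2).reshape(OUT_CH, IN_CH)

def generate(features, t):
    """Apply the time-conditioned 1x1-convolution-like layer to per-voxel features."""
    Wt = hypernet_weights(t)          # weights depend on the time stamp
    return features @ Wt.T            # (n_voxels, OUT_CH)

feats = rng.standard_normal((100, IN_CH))    # stand-in for CNN feature vectors
early, late = generate(feats, 0.1), generate(feats, 0.9)
```

Because the generator weights themselves are a function of `t`, the same network produces different enhancement for different time points without retraining.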


Subjects
Contrast Media; Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Time Factors; Kidney Neoplasms/diagnostic imaging; Kidney Neoplasms/surgery
2.
Med Image Anal ; 95: 103181, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38640779

ABSTRACT

Supervised machine learning-based medical image computing applications necessitate expert label curation, while unlabelled image data might be relatively abundant. Active learning methods aim to prioritise a subset of available image data for expert annotation, for label-efficient model training. We develop a controller neural network that measures priority of images in a sequence of batches, as in batch-mode active learning, for multi-class segmentation tasks. The controller is optimised by rewarding positive task-specific performance gain, within a Markov decision process (MDP) environment that also optimises the task predictor. In this work, the task predictor is a segmentation network. A meta-reinforcement learning algorithm is proposed with multiple MDPs, such that the pre-trained controller can be adapted to a new MDP that contains data from different institutes and/or requires segmentation of different organs or structures within the abdomen. We present experimental results using multiple CT datasets from more than one thousand patients, with segmentation tasks of nine different abdominal organs, to demonstrate the efficacy of the learnt prioritisation controller function and its cross-institute and cross-organ adaptability. We show that the proposed adaptable prioritisation metric yields converging segmentation accuracy for a new kidney segmentation task, unseen in training, using approximately 40% to 60% of the labels otherwise required with heuristic or random prioritisation metrics. For clinical datasets of limited size, the proposed adaptable prioritisation offers a performance improvement of 22.6% and 10.2% in Dice score, for tasks of kidney and liver vessel segmentation, respectively, compared to random prioritisation and alternative active sampling strategies.
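The batch-mode prioritisation loop above can be caricatured as: score each unlabelled batch with a controller, send the top-scoring batches for annotation, and (in the full method) reward the controller by the validation gain they produce. The scoring function below is a crude variance-based stand-in for the learnt controller network, purely for illustration.

```python
import random

random.seed(1)

def controller_score(batch):
    # Stand-in priority: prefer batches with high feature variance,
    # a rough proxy for informativeness (NOT the paper's learnt controller).
    mean = sum(batch) / len(batch)
    return sum((x - mean) ** 2 for x in batch) / len(batch)

# Four toy "batches" of scalar features with different spreads.
unlabelled = [[random.gauss(0, s) for _ in range(8)] for s in (0.1, 1.0, 2.0, 0.5)]

# Rank batches by controller priority and spend a budget of two annotations.
ranked = sorted(range(len(unlabelled)),
                key=lambda i: controller_score(unlabelled[i]),
                reverse=True)
to_annotate = ranked[:2]   # annotation budget of two batches per round
```

In the actual method, the scores come from a neural controller trained inside the MDP so that selecting a batch is rewarded by the segmentation performance gain it yields.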

3.
Article in English | MEDLINE | ID: mdl-38451359

ABSTRACT

PURPOSE: Magnetic resonance (MR) imaging targeted prostate cancer (PCa) biopsy enables precise sampling of MR-detected lesions, establishing its importance in recommended clinical practice. Planning for the ultrasound-guided procedure involves pre-selecting needle sampling positions. However, performing this procedure is subject to a number of factors, including MR-to-ultrasound registration, intra-procedure patient movement and soft tissue motions. When fixed pre-procedure planning is carried out without intra-procedure adaptation, these factors will lead to sampling errors which could cause false positives and false negatives. Reinforcement learning (RL) has been proposed for procedure planning in similar applications, because intelligent agents can be trained for both pre-procedure and intra-procedure planning. However, it is not clear if RL is beneficial when it comes to addressing these intra-procedure errors. METHODS: In this work, we develop and compare imitation learning (IL), supervised by demonstrations of a predefined sampling strategy, and RL approaches, under varying degrees of intra-procedure motion and registration error, to represent sources of targeting errors likely to occur in an intra-operative procedure. RESULTS: Based on results using imaging data from 567 PCa patients, we demonstrate the efficacy and value of adopting RL algorithms to provide intelligent intra-procedure action suggestions, compared to IL-based planning supervised by commonly adopted policies. CONCLUSIONS: The improvement in biopsy sampling performance for intra-procedure planning was not observed in experiments with only pre-procedure planning. These findings suggest a strong role for RL in future prospective studies which adopt intra-procedure planning. Our open source code implementation is available here.
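The IL-versus-RL contrast can be illustrated with a deliberately tiny toy (not the paper's environment): a lesion planned at one position drifts intra-procedurally, a policy cloned from the fixed pre-procedure plan keeps sampling the planned position, while a simple bandit-style RL agent adapts from hit/miss reward. All positions, rewards and hyperparameters here are invented for illustration.

```python
import random

random.seed(0)
PLANNED, ACTUAL, POSITIONS = 2, 3, 5   # lesion planned at 2, drifted to 3

def imitation_policy(_step):
    return PLANNED                      # clone of the demonstrated, fixed plan

q = [0.0] * POSITIONS                   # action-value estimates
def rl_policy(_step, eps=0.2, lr=0.5):
    # Epsilon-greedy action selection over sampling positions.
    if random.random() < eps:
        a = random.randrange(POSITIONS)
    else:
        a = max(range(POSITIONS), key=q.__getitem__)
    r = 1.0 if a == ACTUAL else 0.0     # reward: did the needle hit the lesion?
    q[a] += lr * (r - q[a])             # incremental value update
    return a

il_hits = sum(imitation_policy(t) == ACTUAL for t in range(200))
rl_hits = sum(rl_policy(t) == ACTUAL for t in range(200))
```

The imitation policy never recovers from the drift, while the reward-driven policy shifts its sampling toward the true lesion position, mirroring the paper's finding that RL helps specifically for intra-procedure adaptation.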

4.
Med Image Anal ; 91: 103030, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37995627

ABSTRACT

One of the distinct characteristics of radiologists reading multiparametric prostate MR scans, using reporting systems like PI-RADS v2.1, is to score individual types of MR modalities, including T2-weighted, diffusion-weighted, and dynamic contrast-enhanced, and then combine these image-modality-specific scores using standardised decision rules to predict the likelihood of clinically significant cancer. This work aims to demonstrate that it is feasible for low-dimensional parametric models to model such decision rules in the proposed Combiner networks, without compromising the accuracy of predicting radiologic labels. First, we demonstrate that either a linear mixture model or a nonlinear stacking model is sufficient to model PI-RADS decision rules for localising prostate cancer. Second, parameters of these combining models are proposed as hyperparameters, weighing independent representations of individual image modalities in the Combiner network training, as opposed to end-to-end modality ensemble. A HyperCombiner network is developed to train a single image segmentation network that can be conditioned on these hyperparameters during inference for much-improved efficiency. Experimental results based on 751 cases from 651 patients compare the proposed rule-modelling approaches with other commonly-adopted end-to-end networks, in this downstream application of automating radiologist labelling on multiparametric MR. By acquiring and interpreting the modality combining rules, specifically the linear-weights or odds ratios associated with individual image modalities, three clinical applications are quantitatively presented and contextualised in the prostate cancer segmentation application, including modality availability assessment, importance quantification and rule discovery.
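The linear mixture variant of the Combiner idea reduces to a weighted sum of per-modality likelihood maps, with the weights exposed as hyperparameters at inference. A minimal numpy sketch, with toy maps and illustrative weights (not the learnt or PI-RADS-derived values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-modality cancer-likelihood maps in [0, 1) for one image slice.
t2w, dwi, dce = (rng.random((8, 8)) for _ in range(3))

def linear_combiner(maps, weights):
    """Convex combination of modality likelihood maps."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalise mixture weights
    return sum(wi * m for wi, m in zip(w, maps))

# Illustrative weights emphasising diffusion, as a stand-in for learnt
# or rule-derived modality importances.
combined = linear_combiner([t2w, dwi, dce], weights=[0.2, 0.5, 0.3])
```

Treating `weights` as a conditioning input (rather than fixed parameters) is what lets a single HyperCombiner-style network sweep modality-combination rules at inference, e.g. setting a weight to zero to assess modality availability.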


Subjects
Prostatic Neoplasms; Radiology; Male; Humans; Prostatic Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Prostate; Multimodal Imaging
5.
IEEE Trans Biomed Eng ; PP, 2023 Oct 19.
Article in English | MEDLINE | ID: mdl-37856260

ABSTRACT

OBJECTIVE: Reconstructing freehand ultrasound in 3D without any external tracker has been a long-standing challenge in ultrasound-assisted procedures. We aim to define new ways of parameterising long-term dependencies, and to evaluate their performance. METHODS: First, long-term dependency is encoded by transformation positions within a frame sequence. This is achieved by combining a sequence model with a multi-transformation prediction. Second, two dependency factors are proposed, anatomical image content and scanning protocol, for contributing towards accurate reconstruction. Each factor is quantified experimentally by reducing respective training variances. RESULTS: 1) The added long-term dependency up to 400 frames at 20 frames per second (fps) indeed improved reconstruction, with an up to 82.4% lowered accumulated error, compared with the baseline performance. The improvement was found to be dependent on sequence length, transformation interval and scanning protocol and, unexpectedly, not on the use of recurrent networks with long short-term memory modules; 2) Decreasing either anatomical or protocol variance in training led to poorer reconstruction accuracy. Interestingly, greater performance was gained from representative protocol patterns than from representative anatomical features. CONCLUSION: The proposed algorithm uses hyperparameter tuning to effectively utilise long-term dependency. The proposed dependency factors are of practical significance in collecting diverse training data, regulating scanning protocols and developing efficient networks. SIGNIFICANCE: The proposed new methodology, with publicly available volunteer data and code, parameterises the long-term dependency, which is experimentally shown to be a valid source of performance improvement and could lead to better model development and practical optimisation of the reconstruction application.
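Why long sequences matter in trackerless reconstruction can be seen from how frame positions are obtained: by composing predicted frame-to-frame transformations, so a small per-frame bias accumulates into a large trajectory error. A 2D homogeneous-coordinate sketch with invented per-frame error values:

```python
import numpy as np

def rigid_2d(tx, ty, theta):
    """2D rigid transform as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

true_step = rigid_2d(1.0, 0.0, 0.0)      # probe moves 1 mm per frame
pred_step = rigid_2d(1.01, 0.0, 0.001)   # slightly biased prediction (illustrative)

T_true, T_pred = np.eye(3), np.eye(3)
for _ in range(400):                      # a 400-frame sequence, as in the paper
    T_true = T_true @ true_step           # compose frame-to-frame transforms
    T_pred = T_pred @ pred_step

# Accumulated end-of-sweep position error.
drift = float(np.linalg.norm(T_pred[:2, 2] - T_true[:2, 2]))
```

A tiny 0.001 rad rotational bias per frame produces a drift of tens of millimetres over 400 frames, which is why predicting transformations between distant frame pairs (long-term dependency) can substantially lower accumulated error.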

6.
Med Image Anal ; 90: 102935, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37716198

ABSTRACT

What makes few-shot learning desirable in medical image analysis is the efficient use of the support image data, which are labelled to classify or segment new classes, a task that otherwise requires substantially more training images and expert annotations. This work describes a fully 3D prototypical few-shot segmentation algorithm, such that the trained networks can be effectively adapted to clinically interesting structures that are absent in training, using only a few labelled images from a different institute. First, to compensate for the widely recognised spatial variability between institutions in episodic adaptation of novel classes, a novel spatial registration mechanism is integrated into prototypical learning, consisting of a segmentation head and a spatial alignment module. Second, to assist the training with observed imperfect alignment, a support mask conditioning module is proposed to further utilise the annotation available from the support images. Extensive experiments are presented in an application of segmenting eight anatomical structures important for interventional planning, using a data set of 589 pelvic T2-weighted MR images, acquired at seven institutes. The results demonstrate the efficacy of each of the 3D formulation, the spatial registration, and the support mask conditioning, all of which made positive contributions independently or collectively. Compared with the previously proposed 2D alternatives, the few-shot segmentation performance was improved with statistical significance, regardless of whether the support data come from the same or different institutes.
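The core prototypical step is simple: class prototypes are masked averages of support-image features, and each query voxel is assigned to its nearest prototype. A hedged numpy sketch with random stand-in features (the real method operates on 3D CNN feature maps, with registration and mask conditioning on top):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4                                            # illustrative feature dimension
support_feats = rng.standard_normal((50, D))     # per-voxel support features
support_mask = rng.random(50) > 0.5              # True = foreground voxel

def prototypes(feats, mask):
    """Masked average pooling: one prototype per class."""
    fg = feats[mask].mean(axis=0)                # foreground prototype
    bg = feats[~mask].mean(axis=0)               # background prototype
    return np.stack([bg, fg])

def segment(query_feats, protos):
    """Assign each query voxel to the nearest prototype."""
    d = np.linalg.norm(query_feats[:, None, :] - protos[None], axis=-1)
    return d.argmin(axis=1)                      # 0 = background, 1 = foreground

protos = prototypes(support_feats, support_mask)
pred = segment(rng.standard_normal((30, D)), protos)
```

Because the prototypes are recomputed from whatever support images are provided, a new structure can be segmented at test time without updating network weights, which is the appeal of the episodic few-shot setting.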

7.
Med Phys ; 50(9): 5489-5504, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36938883

ABSTRACT

BACKGROUND: Targeted prostate biopsy guided by multiparametric magnetic resonance imaging (mpMRI) detects more clinically significant lesions than conventional systemic biopsy. Lesion segmentation is required for planning MRI-targeted biopsies. The requirement for integrating image features available in T2-weighted and diffusion-weighted images poses a challenge in prostate lesion segmentation from mpMRI. PURPOSE: A flexible and efficient multistream fusion encoder is proposed in this work to facilitate the multiscale fusion of features from multiple imaging streams. A patch-based loss function is introduced to improve the accuracy in segmenting small lesions. METHODS: The proposed multistream encoder fuses features extracted in the three imaging streams at each layer of the network, thereby allowing improved feature maps to propagate downstream and benefit segmentation performance. The fusion is achieved through a spatial attention map generated by optimally weighting the contribution of the convolution outputs from each stream. This design provides flexibility for the network to highlight image modalities according to their relative influence on the segmentation performance. The encoder also performs multiscale integration by highlighting the input feature maps (low-level features) with the spatial attention maps generated from convolution outputs (high-level features). The Dice similarity coefficient (DSC), serving as a cost function, is less sensitive to incorrect segmentation for small lesions. We address this issue by introducing a patch-based loss function that provides an average of the DSCs obtained from local image patches. This local average DSC is equally sensitive to large and small lesions, as the patch-based DSCs associated with small and large lesions have equal weights in this average DSC. RESULTS: The framework was evaluated in 931 sets of images acquired in several clinical studies at two centers in Hong Kong and the United Kingdom. 
In particular, the training, validation, and test sets contain 615, 144, and 172 sets of images, respectively. The proposed framework outperformed single-stream networks and three recently proposed multistream networks, attaining F1 scores of 82.2% and 87.6% at the lesion and patient levels, respectively. The average inference time for an axial image was 11.8 ms. CONCLUSION: The accuracy and efficiency afforded by the proposed framework would accelerate the MRI interpretation workflow of MRI-targeted biopsy and focal therapies.
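The patch-based loss idea can be demonstrated directly: a global Dice score is dominated by large lesions, whereas averaging Dice over local patches weighs a missed small lesion as heavily as a found large one. A numpy sketch with an invented patch size and a toy example:

```python
import numpy as np

def dice(pred, gt, eps=1e-6):
    """Standard (smoothed) Dice similarity coefficient."""
    inter = (pred * gt).sum()
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

def patch_dice(pred, gt, patch=8):
    """Average Dice over non-empty local patches (illustrative patch size)."""
    scores = []
    for i in range(0, pred.shape[0], patch):
        for j in range(0, pred.shape[1], patch):
            p = pred[i:i + patch, j:j + patch]
            g = gt[i:i + patch, j:j + patch]
            if g.sum() + p.sum() > 0:     # skip patches with no lesion at all
                scores.append(dice(p, g))
    return float(np.mean(scores))

gt = np.zeros((16, 16))
gt[0:2, 0:2] = 1                          # a small 2x2 lesion, missed below
gt[8:16, 8:16] = 1                        # a large 8x8 lesion, fully found
pred = np.zeros((16, 16))
pred[8:16, 8:16] = 1

global_dsc = float(dice(pred, gt))        # dominated by the large lesion
local_dsc = patch_dice(pred, gt)          # penalises the missed small lesion
```

Here the global Dice is near 0.97 despite an entirely missed lesion, while the patch-based average drops to about 0.5, making the loss equally sensitive to small and large lesions as described above.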


Subjects
Prostatic Neoplasms; Male; Humans; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Magnetic Resonance Imaging/methods; Prostate/pathology; Algorithms; Biopsy; Image Processing, Computer-Assisted/methods
8.
Int J Comput Assist Radiol Surg ; 18(8): 1437-1449, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36790674

ABSTRACT

PURPOSE: Minimally invasive treatments for renal carcinoma offer a low rate of complications and quick recovery. One drawback of the use of computed tomography (CT) for needle guidance is the use of iodinated contrast agents, which require an increased X-ray dose and can potentially cause adverse reactions. The purpose of this work is to generalise the problem of synthetic contrast enhancement to allow the generation of multiple phases on non-contrast CT data from a real-world, clinical dataset without training multiple convolutional neural networks. METHODS: A framework for switching between contrast phases by conditioning the network on the phase information is proposed and compared with separately trained networks. We then examine how the degree of supervision affects the generated contrast by evaluating three established architectures: U-Net (fully supervised), Pix2Pix (adversarial with supervision), and CycleGAN (fully adversarial). RESULTS: We demonstrate that there is no performance loss when testing the proposed method against separately trained networks. Of the training paradigms investigated, the fully adversarial CycleGAN performs the worst, while the fully supervised U-Net generates more realistic voxel intensities and performed better than Pix2Pix in generating contrast images for use in a downstream segmentation task. Lastly, two models are shown to generalise to intra-procedural data not seen during the training process, also enhancing features such as needles and ice balls relevant to interventional radiological procedures. CONCLUSION: The proposed contrast switching framework is a feasible option for generating multiple contrast phases without the overhead of training multiple neural networks, while also being robust towards unseen data and enhancing contrast in features relevant to clinical practice.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Radiography; Cryotherapy
9.
IEEE Trans Med Imaging ; 42(3): 823-833, 2023 03.
Article in English | MEDLINE | ID: mdl-36322502

ABSTRACT

We present a meta-learning framework for interactive medical image registration. Our proposed framework comprises three components: a learning-based medical image registration algorithm, a form of user interaction that refines registration at inference, and a meta-learning protocol that learns a rapidly adaptable network initialization. This paper describes a specific algorithm that implements the registration, interaction and meta-learning protocol for our exemplar clinical application: registration of magnetic resonance (MR) imaging to interactively acquired, sparsely-sampled transrectal ultrasound (TRUS) images. Our approach obtains comparable registration error (4.26 mm) to the best-performing non-interactive learning-based 3D-to-3D method (3.97 mm) while requiring only a fraction of the data, and occurring in real-time during acquisition. Applying sparsely sampled data to non-interactive methods yields higher registration errors (6.26 mm), demonstrating the effectiveness of interactive MR-TRUS registration, which may be applied intraoperatively given the real-time nature of the adaptation process.
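The "rapidly adaptable network initialisation" above is the hallmark of meta-learning. A deliberately minimal Reptile-style toy on 1-D quadratic tasks (illustrative only, nothing like the registration network itself) shows how meta-training over a task family yields an initialisation that a single adaptation step brings close to a new, unseen task:

```python
# Toy Reptile-style meta-learning on 1-D tasks: minimise (theta - target)^2.
# All targets, learning rates and step counts are invented for illustration.

def adapt(theta, target, steps, lr=0.4):
    """Inner-loop gradient descent on one task."""
    for _ in range(steps):
        theta -= lr * 2 * (theta - target)   # gradient of (theta - target)^2
    return theta

theta0, meta_lr = 0.0, 0.5
# Meta-training: a family of tasks whose optima cluster around 5.0.
for target in [4.0, 6.0, 5.0, 4.5, 5.5] * 20:
    adapted = adapt(theta0, target, steps=3)
    theta0 += meta_lr * (adapted - theta0)   # Reptile outer update

# On a new, unseen task, one adaptation step from the meta-learnt
# initialisation beats one step from scratch by a wide margin.
err_meta = abs(adapt(theta0, 5.2, steps=1) - 5.2)
err_scratch = abs(adapt(0.0, 5.2, steps=1) - 5.2)
```

In the paper the analogous inner loop is driven by the interactively acquired sparse TRUS data, so the registration refines in real time during acquisition.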


Subjects
Imaging, Three-Dimensional; Prostate; Male; Humans; Prostate/diagnostic imaging; Imaging, Three-Dimensional/methods; Ultrasonography/methods; Algorithms; Magnetic Resonance Imaging
10.
Med Image Anal ; 82: 102620, 2022 11.
Article in English | MEDLINE | ID: mdl-36148705

ABSTRACT

Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique and a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve the segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0±0.03 and Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0±0.03; HD95: 3.7 mm and Dice: 82.0±0.03; HD95: 7.1 mm). 
We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.
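The knowledge-distillation term mentioned above penalises the finetuned (student) model for drifting from the original (teacher) model's soft predictions, which is what preserves performance on the original training domain. A hedged numpy sketch with illustrative logits and temperature (not the paper's exact formulation):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax (numerically stabilised)."""
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened predictions."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))))

teacher = np.array([2.0, 0.5, -1.0])                 # frozen original model
close = kd_loss(np.array([1.9, 0.6, -1.0]), teacher) # student close to teacher
far = kd_loss(np.array([-1.0, 0.5, 2.0]), teacher)   # student drifted away
```

During finetuning on a new institution's data, this term is added to the segmentation loss, so the total objective trades off fitting the new domain against staying close to previously learned behaviour.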


Subjects
Neural Networks, Computer; Prostate; Humans; Male; Prostate/diagnostic imaging; Ultrasonography; Magnetic Resonance Imaging/methods; Pelvis
11.
IEEE Trans Med Imaging ; 41(11): 3421-3431, 2022 11.
Article in English | MEDLINE | ID: mdl-35788452

ABSTRACT

In this work, we consider the task of pairwise cross-modality image registration, which may benefit from exploiting additional images available only at training time from an additional modality that is different from those being registered. As an example, we focus on aligning intra-subject multiparametric Magnetic Resonance (mpMR) images, between T2-weighted (T2w) scans and high-b-value diffusion-weighted scans (hereafter DWI-HB). For the application of localising tumours in mpMR images, diffusion scans with zero b-value (DWI-B0) are considered easier to register to T2w due to the availability of corresponding features. We propose a learning from privileged modality algorithm, using the training-only imaging modality DWI-B0, to support the challenging multi-modality registration problems. We present experimental results based on 369 sets of 3D multiparametric MRI images from 356 prostate cancer patients and report, with statistical significance, a lowered median target registration error of 4.34 mm, when registering the holdout DWI-HB and T2w image pairs, compared with that of 7.96 mm before registration. Results also show that the proposed learning-based registration networks enabled efficient registration with comparable or better accuracy, compared with a classical iterative algorithm and other tested learning-based methods with/without the additional modality. These compared algorithms also failed to produce any significantly improved alignment between DWI-HB and T2w in this challenging application.


Subjects
Multiparametric Magnetic Resonance Imaging; Prostatic Neoplasms; Male; Humans; Diffusion Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/methods; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Algorithms
12.
BMJ Open ; 12(5): e053204, 2022 05 02.
Article in English | MEDLINE | ID: mdl-35501093

ABSTRACT

INTRODUCTION: Chronic liver disease is a growing cause of morbidity and mortality in the UK. Acute presentation with advanced disease is common, and prioritisation of resources to those at highest risk at earlier disease stages is essential to improving patient outcomes. Existing prognostic tools are of limited accuracy and to date no imaging-based tools are used in clinical practice, despite multiple anatomical imaging features that worsen with disease severity. In this paper, we outline our scoping review protocol that aims to provide an overview of existing prognostic factors and models that link anatomical imaging features with clinical endpoints in chronic liver disease. This will provide a summary of the number, type and methods used by existing imaging-feature-based prognostic studies and indicate whether there are sufficient studies to justify future systematic reviews. METHODS AND ANALYSIS: The protocol was developed in accordance with existing scoping review guidelines. Searches of MEDLINE and Embase will be conducted on OvidSP using titles, abstracts and Medical Subject Headings, restricted to publications after 1980 to ensure imaging-method relevance. Initial screening will be undertaken by two independent reviewers. Full-text data extraction will be undertaken by three pretrained reviewers who have participated in a group data extraction session to ensure reviewer consensus and reduce inter-rater variability. Where needed, data extraction queries will be resolved by reviewer team discussion. Reporting of results will be based on grouping of related factors and their cumulative frequencies. Prognostic anatomical imaging features and clinical endpoints will be reported using descriptive statistics to summarise the number of studies, study characteristics and the statistical methods used. ETHICS AND DISSEMINATION: Ethical approval is not required as this study is based on previously published work.
Findings will be disseminated by peer-reviewed publication and/or conference presentations.


Subjects
Liver Diseases; Research Design; Humans; Liver Diseases/diagnostic imaging; Mass Screening; Review Literature as Topic
13.
Med Image Anal ; 78: 102427, 2022 05.
Article in English | MEDLINE | ID: mdl-35344824

ABSTRACT

In this paper, we consider image quality assessment (IQA) as a measure of how images are amenable with respect to a given downstream task, or task amenability. When the task is performed using machine learning algorithms, such as a neural-network-based task predictor for image classification or segmentation, the performance of the task predictor provides an objective estimate of task amenability. In this work, we use an IQA controller to predict the task amenability which, itself being parameterised by neural networks, can be trained simultaneously with the task predictor. We further develop a meta-reinforcement learning framework to improve the adaptability for both IQA controllers and task predictors, such that they can be fine-tuned efficiently on new datasets or meta-tasks. We demonstrate the efficacy of the proposed task-specific, adaptable IQA approach, using two clinical applications for ultrasound-guided prostate intervention and pneumonia detection on X-ray images.


Subjects
Machine Learning; Neural Networks, Computer; Algorithms; Humans; Image Processing, Computer-Assisted/methods; Male; Ultrasonography
14.
IEEE Trans Med Imaging ; 41(6): 1311-1319, 2022 06.
Article in English | MEDLINE | ID: mdl-34962866

ABSTRACT

Ultrasound imaging is a commonly used technology for visualising patient anatomy in real-time during diagnostic and therapeutic procedures. High operator dependency and low reproducibility make ultrasound imaging and interpretation challenging, with a steep learning curve. Automatic image classification using deep learning has the potential to overcome some of these challenges by supporting ultrasound training in novices, as well as aiding ultrasound image interpretation in patients with complex pathology for more experienced practitioners. However, the use of deep learning methods requires a large amount of data in order to provide accurate results. Labelling large ultrasound datasets is a challenging task because labels are retrospectively assigned to 2D images without the 3D spatial context available in vivo or that would be inferred while visually tracking structures between frames during the procedure. In this work, we propose a multi-modal convolutional neural network (CNN) architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure. We use a CNN composed of two branches, one for voice data and another for image data, which are joined to predict image labels from the spoken names of anatomical landmarks. The network was trained using recorded verbal comments from expert operators. Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels. We conclude that the addition of spoken commentaries can increase the performance of ultrasound image classification, and eliminate the burden of manually labelling large EUS datasets necessary for deep learning applications.


Subjects
Neural Networks, Computer; Humans; Reproducibility of Results; Retrospective Studies; Ultrasonography
15.
Med Image Anal ; 74: 102231, 2021 12.
Article in English | MEDLINE | ID: mdl-34583240

ABSTRACT

We present Free Point Transformer (FPT) - a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods. FPT is designed to accept unordered and unstructured point-sets with a variable number of points and uses a "model-free" approach without heuristic constraints. Training FPT is flexible and involves minimizing an intuitive unsupervised loss function, but supervised, semi-supervised, and partially- or weakly-supervised training are also supported. This flexibility makes FPT amenable to multimodal image registration problems where the ground-truth deformations are difficult or impossible to measure. In this paper, we demonstrate the application of FPT to non-rigid registration of prostate magnetic resonance (MR) imaging and sparsely-sampled transrectal ultrasound (TRUS) images. The registration errors were 4.71 mm and 4.81 mm for complete TRUS imaging and sparsely-sampled TRUS imaging, respectively. The results indicate superior accuracy to the alternative rigid and non-rigid registration algorithms tested and substantially lower computation time. The rapid inference possible with FPT makes it particularly suitable for applications where real-time registration is beneficial.
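The property that lets FPT "accept unordered and unstructured point-sets" is a symmetric (permutation-invariant) pooling over per-point features, as in PointNet-style encoders. The numpy sketch below, with random stand-in weights and invented layer sizes, shows the two-module structure: a global feature from max-pooling, then a per-point displacement conditioned on it. Permuting the input permutes the output identically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in weights for a shared per-point encoder and a
# displacement decoder; the real modules are deeper learned MLPs.
W_enc = rng.standard_normal((8, 3)) * 0.1        # 3-D point -> 8-D feature
W_dec = rng.standard_normal((3, 3 + 8)) * 0.1    # point + global feat -> displacement

def global_feature(points):
    per_point = np.maximum(points @ W_enc.T, 0)  # shared layer + ReLU
    return per_point.max(axis=0)                 # symmetric max-pool: order-invariant

def displace(points):
    g = global_feature(points)
    inp = np.concatenate([points, np.tile(g, (len(points), 1))], axis=1)
    return points + inp @ W_dec.T                # per-point displacement

pts = rng.standard_normal((64, 3))
moved = displace(pts)
perm = rng.permutation(64)
moved_perm = displace(pts[perm])                 # same set, reordered
```

Because the only cross-point operation is the max-pool, the network imposes no vicinity-based constraints and handles a variable number of points, which is the "model-free" property described above.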


Subjects
Prostatic Neoplasms; Algorithms; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Prostate/diagnostic imaging; Ultrasonography
17.
Eur Urol ; 79(1): 20-29, 2021 01.
Article in English | MEDLINE | ID: mdl-33051065

ABSTRACT

BACKGROUND: False positive multiparametric magnetic resonance imaging (mpMRI) phenotypes prompt unnecessary biopsies. The Prostate MRI Imaging Study (PROMIS) provides a unique opportunity to explore such phenotypes in biopsy-naïve men with raised prostate-specific antigen (PSA) and suspected cancer. OBJECTIVE: To compare mpMRI lesions in men with/without significant cancer on transperineal mapping biopsy (TPM). DESIGN, SETTING, AND PARTICIPANTS: PROMIS participants (n=235) underwent mpMRI followed by a combined biopsy procedure at University College London Hospital, including 5-mm TPM as the reference standard. Patients were divided into four mutually exclusive groups according to TPM findings: (1) no cancer, (2) insignificant cancer, (3) definition 2 significant cancer (Gleason ≥3+4 of any length and/or maximum cancer core length ≥4mm of any grade), and (4) definition 1 significant cancer (Gleason ≥4+3 of any length and/or maximum cancer core length ≥6mm of any grade). OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: Index and/or additional lesions present in 178 participants were compared between TPM groups in terms of number, conspicuity, volume, location, and radiological characteristics. RESULTS AND LIMITATIONS: Most lesions were located in the peripheral zone. More men with significant cancer had two or more lesions than those without significant disease (67% vs 37%; p< 0.001). In the former group, index lesions were larger (mean volume 0.68 vs 0.50 ml; p< 0.001, Wilcoxon test), more conspicuous (Likert 4-5: 79% vs 22%; p< 0.001), and diffusion restricted (mean apparent diffusion coefficient [ADC]: 0.73 vs 0.86; p< 0.001, Wilcoxon test). In men with Likert 3 index lesions, log2PSA density and index lesion ADC were significant predictors of definition 1/2 disease in a logistic regression model (mean cross-validated area under the receiver-operator characteristic curve: 0.77 [95% confidence interval: 0.67-0.87]). 
CONCLUSIONS: Significant cancer-associated MRI lesions in biopsy-naïve men show clinical-radiological differences compared with lesions seen in prostates without significant disease. MRI-calculated PSA density and ADC could predict significant cancer in those with indeterminate MRI phenotypes. PATIENT SUMMARY: Magnetic resonance imaging (MRI) lesions that mimic prostate cancer but are, in fact, benign prompt unnecessary biopsies in thousands of men with raised prostate-specific antigen. In this study we found that, on closer look, such false positive lesions have different features from cancerous ones. This means that doctors could potentially develop better tools to identify cancer on MRI and spare some patients from unnecessary biopsies.


Subjects
Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms/diagnostic imaging , Biopsy , False Positive Reactions , Humans , Male , Phenotype , Prostate , Prostate-Specific Antigen , Prostatic Neoplasms/genetics , Prostatic Neoplasms/pathology
18.
J Urol ; 205(4): 1090-1099, 2021 04.
Article in English | MEDLINE | ID: mdl-33315505

ABSTRACT

PURPOSE: We determined the early efficacy of bipolar radiofrequency ablation with a coil design for focal ablation of clinically significant localized prostate cancer visible at multiparametric magnetic resonance imaging. MATERIALS AND METHODS: A prospective IDEAL phase 2 development study (Focal Prostate Radiofrequency Ablation, NCT02294903) recruited treatment-naïve patients with a single focus of significant localized prostate cancer (Gleason 7, or 4 mm or more of Gleason 6) concordant with a lesion visible on multiparametric magnetic resonance imaging. The intervention was focal ablation with a bipolar radiofrequency system (Encage™) encompassing the lesion and a predefined margin, using nonrigid magnetic resonance imaging-ultrasound fusion. The primary outcome was the proportion of men with absence of significant localized disease on biopsy at 6 months. Trial follow-up consisted of serum prostate specific antigen and multiparametric magnetic resonance imaging at 1 week, and at 6 and 12 months post-ablation. Validated patient-reported outcome measures for urinary, erectile, and bowel function were used, and adverse events were monitored. Analyses were done on a per-protocol basis. RESULTS: Of 21 patients recruited, 20 received the intervention. At baseline, median age was 66 years (IQR 63-69) and median preoperative prostate specific antigen was 7.9 ng/ml (IQR 5.3-9.6). A total of 18 patients (90%) had Gleason 7 disease, with a median maximum cancer core length of 7 mm (IQR 5-10) and a median multiparametric magnetic resonance imaging lesion volume of 2.8 cc (IQR 1.4-4.8). Targeted biopsy of the treated area (median number of cores 6, IQR 5-8) showed absence of significant localized prostate cancer in 16/20 men (80%), concordant with multiparametric magnetic resonance imaging. Patient-reported outcome measures showed a low profile of side effects, and there were no serious adverse events.
CONCLUSIONS: Focal therapy of significant localized prostate cancer associated with a magnetic resonance imaging lesion using bipolar radiofrequency showed early efficacy in ablating cancer, with low rates of genitourinary and rectal side effects.
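The primary outcome above is a simple binomial proportion (16/20 men free of significant disease at 6 months). As an aside, one standard way such a proportion could be given a confidence interval is the Wilson score interval; the interval below is not reported in the abstract and is computed here only as an illustration.

```python
# Minimal sketch (not part of the trial's analysis): Wilson score
# confidence interval for the primary-outcome proportion 16/20.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (z = 1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(16, 20)
print(f"80% (95% CI {lo:.0%} to {hi:.0%})")  # prints: 80% (95% CI 58% to 92%)
```

The Wilson interval behaves better than the naive normal approximation at small n and proportions near 0 or 1, which is why it is a common choice for small single-arm cohorts like this one.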


Subjects
Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/surgery , Radiofrequency Ablation/instrumentation , Aged , Biomarkers, Tumor/blood , Biopsy , Equipment Design , Humans , Male , Middle Aged , Neoplasm Grading , Neoplasm Staging , Prospective Studies , Prostate-Specific Antigen/blood , Prostatic Neoplasms/pathology
19.
Med Image Anal ; 58: 101558, 2019 12.
Article in English | MEDLINE | ID: mdl-31526965

ABSTRACT

Convolutional neural networks (CNNs) have recently led to significant advances in automatic segmentation of anatomical structures in medical images, and a wide variety of network architectures are now available to the research community. For applications such as segmentation of the prostate in magnetic resonance images (MRI), the results of the PROMISE12 online algorithm evaluation platform have demonstrated differences between the best-performing segmentation algorithms in terms of numerical accuracy using standard metrics such as the Dice score and boundary distance. These small differences in the segmented regions and boundaries output by different algorithms may have an insubstantial impact on the results of downstream image analysis tasks, such as estimating organ volume and multimodal image registration, which inform clinical decisions; this impact has not previously been investigated. In this work, we quantified the accuracy of six different CNNs in segmenting the prostate in 3D patient T2-weighted MRI scans and compared the accuracy of organ volume estimation and MRI-ultrasound (US) registration using the prostate segmentations produced by the different networks. Networks were trained and tested using a set of 232 patient MRIs with labels provided by experienced clinicians. A statistically significant difference was found among the Dice scores and boundary distances produced by these networks in a non-parametric analysis of variance (p < 0.001 and p < 0.001, respectively), and subsequent multiple-comparison tests revealed that the significant differences in segmentation errors were attributable to at least one of the tested networks. Gland volume errors (GVEs) and target registration errors (TREs) were then estimated using the CNN-generated segmentations. Interestingly, no statistically significant difference was found in either GVEs or TREs among the different networks (p = 0.34 and p = 0.26, respectively). This result provides a real-world example in which networks with different segmentation performance may provide indistinguishably adequate registration accuracy to assist prostate cancer imaging applications. We conclude by recommending that, when selecting between different network architectures, the differences in the accuracy of downstream image analysis tasks that make use of the output of automatic segmentation methods, such as CNNs, within a clinical pipeline should be taken into account, in addition to reporting the segmentation accuracy.
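The kind of non-parametric analysis of variance described above, comparing per-case Dice scores across several networks, can be sketched as follows. The Kruskal-Wallis test is assumed here as the non-parametric ANOVA, and the network names, score distributions, and effect sizes are synthetic stand-ins, not the study's data.

```python
# Illustrative sketch with synthetic Dice scores: a Kruskal-Wallis test
# (a rank-based, non-parametric one-way ANOVA) comparing segmentation
# accuracy across three hypothetical networks.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(42)
# Synthetic per-case Dice scores for three hypothetical networks
dice_scores = {name: rng.normal(mu, 0.02, 50).clip(0, 1)
               for name, mu in [("net_a", 0.90), ("net_b", 0.91), ("net_c", 0.88)]}

stat, p = kruskal(*dice_scores.values())
print(f"H = {stat:.1f}, p = {p:.3g}")
if p < 0.001:
    print("at least one network's Dice distribution differs")
```

A significant omnibus result like this only says some network differs; as in the abstract, follow-up multiple-comparison tests are needed to locate which one.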


Subjects
Magnetic Resonance Imaging , Neural Networks, Computer , Pattern Recognition, Automated/methods , Prostatic Neoplasms/diagnostic imaging , Ultrasonography , Humans , Male , Tumor Burden
20.
J Med Imaging (Bellingham) ; 6(1): 011003, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30840715

ABSTRACT

Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted prostate cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing guidance systems, one that often requires significant manual interaction and is subject to interoperator variability. Automating this step would therefore lead to more acceptable clinical workflows and greater standardization between different operators and hospitals. In this work, a convolutional neural network (CNN) for automatically segmenting the prostate in two-dimensional (2-D) TRUS slices of a 3-D TRUS volume was developed and tested. The network was designed to incorporate 3-D spatial information by taking, in addition to each slice to be segmented, one or more of its neighboring TRUS slices as input. The accuracy of the CNN was evaluated on data from a cohort of 109 patients who had undergone TRUS-guided targeted biopsy (a total of 4034 2-D slices). Segmentation accuracy was measured by calculating 2-D and 3-D Dice similarity coefficients, on the 2-D images and corresponding 3-D volumes, respectively, as well as 2-D boundary distances, in a 10-fold patient-level cross-validation experiment. Incorporating neighboring slices did not improve segmentation performance in five out of six experiments, which varied the number of neighboring slices from one to three on either side of each segmented slice. Adding up-sampling shortcuts to the network architecture reduced overall training time from 253 min to 161 min.
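The Dice similarity coefficient used as the segmentation metric above has a simple closed form. The sketch below, which is not the paper's code, applies the same formula to a toy 3-D volume; it works identically on 2-D slices, since the definition is dimension-agnostic.

```python
# Minimal sketch: Dice similarity coefficient between a predicted and a
# reference binary mask, applicable to 2-D slices or 3-D volumes alike.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks of any dimensionality."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy 3-D volumes: two overlapping 6x6x6 cubes in a 10x10x10 grid
a = np.zeros((10, 10, 10), bool); a[2:8, 2:8, 2:8] = True
b = np.zeros((10, 10, 10), bool); b[3:9, 3:9, 3:9] = True
print(f"3-D Dice: {dice(a, b):.3f}")  # prints: 3-D Dice: 0.579
```

The empty-mask convention (returning 1.0 when both masks are empty) is a common but not universal choice; some evaluations instead treat that case as undefined.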
