Results 1 - 16 of 16
1.
Entropy (Basel); 20(7), 2018 Jun 25.
Article in English | MEDLINE | ID: mdl-33265581

ABSTRACT

Brain networks are widely used models to understand the topology and organization of the brain. These networks can be represented by a graph, where nodes correspond to brain regions and edges to structural or functional connections. Several measures have been proposed to describe the topological features of these networks, but it is still unclear which measures give the best representation of the brain. In this paper, we propose a new set of measures based on information theory. Our approach interprets the brain network as a stochastic process where impulses are modeled as a random walk on the graph nodes. This new interpretation provides a solid theoretical framework from which several global and local measures are derived. Global measures provide quantitative values for whole-brain network characterization and include entropy, mutual information, and erasure mutual information; the latter is a new measure based on mutual information and erasure entropy. Local measures, in turn, are based on different decompositions of the global measures and capture different properties of the nodes: entropic surprise, mutual surprise, mutual predictability, and erasure surprise. The proposed approach is evaluated using synthetic model networks and structural and functional human networks at different scales. Results demonstrate that the global measures can characterize new properties of the topology of a brain network and, in addition, that for a given number of nodes an optimal number of edges is found for small-world networks. Local measures reveal properties of the nodes such as the uncertainty associated with the node or the uniqueness of the path to which the node belongs. Finally, the consistency of the results across healthy subjects demonstrates the robustness of the proposed measures.
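As a minimal sketch of the random-walk interpretation, the entropy rate and the mutual information of the walk can be computed directly from an adjacency matrix; the erasure-based measures and the local decompositions in the paper are not reproduced here, and the function name and toy ring network below are illustrative only.

```python
import numpy as np

def random_walk_measures(A):
    """Global information measures of a random walk on an undirected,
    weighted adjacency matrix A (a sketch; the paper's exact
    definitions, e.g. the erasure measures, may differ)."""
    d = A.sum(axis=1)                      # (weighted) node degrees
    P = A / d[:, None]                     # transition matrix P[i, j]
    pi = d / d.sum()                       # stationary distribution
    logP = np.zeros_like(P)
    np.log2(P, out=logP, where=P > 0)      # 0 * log 0 treated as 0
    H_rate = -(pi[:, None] * P * logP).sum()   # H(X_{t+1} | X_t)
    H_pi = -(pi * np.log2(pi)).sum()           # H(X_t) at stationarity
    MI = H_pi - H_rate                         # I(X_t ; X_{t+1})
    return H_pi, H_rate, MI

# Toy 4-node ring network: H_pi = 2 bits, H_rate = 1 bit, MI = 1 bit
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(random_walk_measures(A))
```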

2.
Int J Comput Assist Radiol Surg; 19(6): 1003-1012, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38451359

ABSTRACT

PURPOSE: Magnetic resonance (MR) imaging-targeted prostate cancer (PCa) biopsy enables precise sampling of MR-detected lesions, establishing its importance in recommended clinical practice. Planning for the ultrasound-guided procedure involves pre-selecting needle sampling positions. However, performing the procedure is subject to a number of factors, including MR-to-ultrasound registration error, intra-procedure patient movement, and soft-tissue motion. When a fixed pre-procedure plan is carried out without intra-procedure adaptation, these factors lead to sampling errors which can cause false positives and false negatives. Reinforcement learning (RL) has been proposed for procedure planning in applications such as this one, because intelligent agents can be trained for both pre-procedure and intra-procedure planning. However, it is not clear whether RL is beneficial in addressing these intra-procedure errors. METHODS: In this work, we develop and compare imitation learning (IL), supervised by demonstrations of a predefined sampling strategy, and RL approaches, under varying degrees of intra-procedure motion and registration error, to represent the sources of targeting error likely to occur in an intra-operative procedure. RESULTS: Based on results using imaging data from 567 PCa patients, we demonstrate the efficacy and value of adopting RL algorithms to provide intelligent intra-procedure action suggestions, compared to IL-based planning supervised by commonly adopted policies. CONCLUSIONS: The improvement in biopsy sampling performance from intra-procedure planning was not observed in experiments with only pre-procedure planning. These findings suggest a strong role for RL in future prospective studies that adopt intra-procedure planning. Our open-source code implementation is available here.
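To illustrate the intra-procedure setting the abstract describes, a toy environment can perturb the lesion with motion and registration noise at each step; the dynamics, noise magnitudes, and the naive re-aiming baseline below are illustrative assumptions, not the authors' environment or policies.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(needle_xy, lesion_xy, action, motion_sd=1.5, reg_sd=1.0):
    """One intra-procedure step: the lesion drifts (patient/tissue
    motion), the observation is corrupted by MR-to-ultrasound
    registration error, and the reward is whether the adjusted needle
    samples the lesion. All magnitudes are illustrative placeholders."""
    lesion_xy = lesion_xy + rng.normal(0, motion_sd, 2)   # soft-tissue drift
    needle_xy = needle_xy + action                        # agent's adjustment
    observed = lesion_xy + rng.normal(0, reg_sd, 2)       # registration error
    reward = 1.0 if np.linalg.norm(needle_xy - lesion_xy) < 2.0 else 0.0
    return needle_xy, lesion_xy, observed, reward

# A naive "re-aim at the latest observation" policy as a simple baseline;
# a trained RL agent would replace this with learnt action suggestions.
needle, lesion = np.zeros(2), np.array([5.0, -3.0])
observed = lesion.copy()
for t in range(6):
    action = observed - needle          # move toward the observed target
    needle, lesion, observed, reward = step(needle, lesion, action)
    print(f"step {t}: reward={reward}")
```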


Subject(s)
Image-Guided Biopsy, Prostatic Neoplasms, Humans, Male, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Prostatic Neoplasms/surgery, Image-Guided Biopsy/methods, Magnetic Resonance Imaging/methods, Prostate/diagnostic imaging, Prostate/pathology, Prostate/surgery, Ultrasonography, Interventional/methods, Machine Learning
3.
Med Image Anal; 95: 103181, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38640779

ABSTRACT

Supervised machine-learning-based medical image computing applications necessitate expert label curation, while unlabelled image data might be relatively abundant. Active learning methods aim to prioritise a subset of available image data for expert annotation, for label-efficient model training. We develop a controller neural network that measures the priority of images in a sequence of batches, as in batch-mode active learning, for multi-class segmentation tasks. The controller is optimised by rewarding positive task-specific performance gain, within a Markov decision process (MDP) environment that also optimises the task predictor; in this work, the task predictor is a segmentation network. A meta-reinforcement learning algorithm is proposed with multiple MDPs, such that the pre-trained controller can be adapted to a new MDP that contains data from different institutes and/or requires segmentation of different organs or structures within the abdomen. We present experimental results using multiple CT datasets from more than one thousand patients, with segmentation tasks for nine different abdominal organs, to demonstrate the efficacy of the learnt prioritisation controller and its cross-institute and cross-organ adaptability. We show that the proposed adaptable prioritisation metric yields converging segmentation accuracy for a new kidney segmentation task, unseen in training, using approximately 40% to 60% of the labels otherwise required with heuristic or random prioritisation metrics. For clinical datasets of limited size, the proposed adaptable prioritisation offers performance improvements of 22.6% and 10.2% in Dice score for kidney and liver vessel segmentation tasks, respectively, compared to random prioritisation and alternative active sampling strategies.
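A rough sketch of one batch-mode prioritisation round follows: a controller scores the still-unlabelled images, the top-k batch is sent for annotation, and the controller is nudged by the observed performance gain. The linear controller, random feature vectors, and the crude REINFORCE-style update are illustrative stand-ins for the networks and policy-gradient machinery in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def al_round(controller_w, feats, labelled, k=8):
    """One batch-mode active-learning round: score the unlabelled pool
    and pick the k highest-priority images for expert annotation."""
    pool = np.flatnonzero(~labelled)
    scores = feats[pool] @ controller_w          # priority per image
    chosen = pool[np.argsort(scores)[-k:]]       # top-k batch
    labelled[chosen] = True
    return chosen

def controller_update(controller_w, feats, chosen, reward, lr=0.1):
    """Crude stand-in for the policy-gradient update: reinforce the
    controller toward batches that yielded a positive performance gain
    (the MDP reward, e.g. a validation Dice improvement)."""
    return controller_w + lr * reward * feats[chosen].mean(axis=0)

feats = rng.normal(size=(100, 16))       # stand-in image features
labelled = np.zeros(100, dtype=bool)
w = np.zeros(16)
for rnd in range(3):
    batch = al_round(w, feats, labelled)
    reward = rng.uniform(-0.1, 0.5)      # stand-in Dice gain after retraining
    w = controller_update(w, feats, batch, reward)
```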


Subject(s)
Algorithms, Humans, Tomography, X-Ray Computed, Neural Networks, Computer, Machine Learning, Markov Chains, Supervised Machine Learning, Radiography, Abdominal/methods
4.
IEEE Trans Med Imaging; 41(6): 1311-1319, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34962866

ABSTRACT

Ultrasound imaging is a commonly used technology for visualising patient anatomy in real time during diagnostic and therapeutic procedures. High operator dependency and low reproducibility make ultrasound imaging and interpretation challenging, with a steep learning curve. Automatic image classification using deep learning has the potential to overcome some of these challenges by supporting ultrasound training for novices, as well as aiding ultrasound image interpretation in patients with complex pathology for more experienced practitioners. However, deep learning methods require a large amount of data to provide accurate results. Labelling large ultrasound datasets is challenging because labels are retrospectively assigned to 2D images without the 3D spatial context available in vivo or that would be inferred while visually tracking structures between frames during the procedure. In this work, we propose a multi-modal convolutional neural network (CNN) architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure. We use a CNN composed of two branches, one for voice data and another for image data, which are joined to predict image labels from the spoken names of anatomical landmarks. The network was trained using recorded verbal comments from expert operators. Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels. We conclude that the addition of spoken commentary can increase the performance of ultrasound image classification and eliminate the burden of manually labelling the large EUS datasets necessary for deep learning applications.
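A minimal two-branch architecture of the kind described might look as follows in PyTorch; the layer sizes, the spectrogram input for the voice branch, and the five-class output are assumptions for illustration, not the authors' network.

```python
import torch
import torch.nn as nn

class VoiceImageNet(nn.Module):
    """Two-branch CNN joining a voice (spectrogram) branch and an
    ultrasound image branch before a shared classifier, along the lines
    the abstract describes; all sizes are illustrative."""
    def __init__(self, n_classes=5):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> 32*4*4 = 512
        self.image_branch = branch()
        self.voice_branch = branch()
        self.classifier = nn.Sequential(
            nn.Linear(512 + 512, 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, image, voice_spec):
        z = torch.cat([self.image_branch(image),
                       self.voice_branch(voice_spec)], dim=1)
        return self.classifier(z)

net = VoiceImageNet()
logits = net(torch.randn(2, 1, 128, 128),   # EUS frames
             torch.randn(2, 1, 64, 64))     # voice spectrograms
print(logits.shape)                          # torch.Size([2, 5])
```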


Subject(s)
Neural Networks, Computer, Humans, Reproducibility of Results, Retrospective Studies, Ultrasonography
5.
Int J Comput Assist Radiol Surg; 17(8): 1461-1468, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35366130

ABSTRACT

PURPOSE: The registration of Laparoscopic Ultrasound (LUS) to CT can enhance the safety of laparoscopic liver surgery by providing the surgeon with awareness of the relative positioning between critical vessels and a tumour. In an effort to provide a translatable solution for this poorly constrained problem, Content-based Image Retrieval (CBIR) based on vessel information has been suggested as a method for obtaining a global coarse registration without using tracking information. However, the performance of these frameworks is limited by the use of non-generalisable handcrafted vessel features. METHODS: We propose the use of a Deep Hashing (DH) network to directly convert vessel images from both LUS and CT into fixed-size hash codes. During training, these codes are learnt from a patient-specific CT scan by supplying the network with triplets of vessel images which include both a registered and a mis-registered pair. Once the hash codes have been learnt, they can be used to perform registration with CBIR methods. RESULTS: We test a CBIR pipeline on 11 sequences of untracked LUS distributed across 5 clinical cases. Compared to a handcrafted-feature approach, our model significantly improves the registration success rate from 48% to 61%, considering a 20 mm error as the threshold for a successful coarse registration. CONCLUSIONS: We present the first DH framework for interventional multi-modal registration tasks. The presented approach is easily generalisable to other registration problems, does not require annotated data for training, and may promote the translation of these techniques.
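The triplet supervision can be sketched as a standard triplet margin loss on the network's continuous outputs, plus a quantisation term pushing values toward binary codes; the loss weighting, code length, and tanh outputs below are illustrative, and the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def triplet_hash_loss(h_anchor, h_pos, h_neg, margin=1.0, beta=0.1):
    """Triplet loss on continuous hash outputs: a CT vessel image
    (anchor) should lie closer to its registered LUS counterpart
    (positive) than to a mis-registered one (negative); a quantisation
    term pushes values toward +/-1 so they binarise cleanly.
    margin and beta are illustrative choices."""
    t = F.triplet_margin_loss(h_anchor, h_pos, h_neg, margin=margin)
    q = ((h_anchor.abs() - 1.0) ** 2).mean()
    return t + beta * q

def to_hash_code(h):
    """Binarise network outputs into fixed-size hash codes for CBIR."""
    return (h > 0).to(torch.uint8)

# Stand-in network outputs: batch of 4 triplets with 64-bit codes
anchor, pos, neg = torch.tanh(torch.randn(3, 4, 64)).unbind(0)
print(triplet_hash_loss(anchor, pos, neg))
print(to_hash_code(anchor)[0])
```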


Subject(s)
Laparoscopy, Tomography, X-Ray Computed, Humans, Laparoscopy/methods, Liver/diagnostic imaging, Tomography, X-Ray Computed/methods, Ultrasonography/methods
6.
Int J Comput Assist Radiol Surg; 15(7): 1075-1084, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32436132

ABSTRACT

PURPOSE: This paper introduces the SciKit-Surgery libraries, designed to enable rapid development of clinical applications for image-guided interventions. SciKit-Surgery implements a family of compact, orthogonal libraries accompanied by robust testing, documentation, and quality control. SciKit-Surgery libraries can be rapidly assembled into testable clinical applications and subsequently translated to production software without the need for software reimplementation. The aim is to support translation from single-surgeon trials to multicentre trials in under 2 years. METHODS: At the time of publication, 13 SciKit-Surgery libraries provided functionality for visualisation and augmented reality in surgery, together with hardware interfaces for video, tracking, and ultrasound sources. The libraries are stand-alone, open source, and provide Python interfaces. This design approach enables fast development of robust applications and subsequent translation. The paper compares the libraries with existing platforms and uses two example applications to show how SciKit-Surgery libraries can be used in practice. RESULTS: Using the number of lines of code and the occurrence of cross-dependencies as proxy measures of code complexity, two example applications using SciKit-Surgery libraries are analysed. The SciKit-Surgery libraries demonstrate the ability to support rapid development of testable clinical applications. By maintaining strict orthogonality between libraries, the number and complexity of dependencies can be reduced. The SciKit-Surgery libraries also demonstrate the potential to support wider dissemination of novel research. CONCLUSION: The SciKit-Surgery libraries utilise the modularity of the Python language and the standard data types of the NumPy package to provide an easy-to-use, well-tested, and extensible set of tools for the development of applications for image-guided interventions. The example application built on SciKit-Surgery has a simpler dependency structure than the same application built using a monolithic platform, making ongoing clinical translation more feasible.
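The design approach, small orthogonal functions that take and return standard NumPy types and ship with their own tests, can be illustrated with a sketch like the following; this shows the style only and is not the actual SciKit-Surgery API.

```python
import numpy as np

def transform_points(points, rotation, translation):
    """Apply a rigid transform to an (N, 3) NumPy array of points.
    Illustrative of the compact, NumPy-in/NumPy-out function style the
    paper describes; not a real SciKit-Surgery function."""
    points = np.asarray(points, dtype=float)
    return points @ np.asarray(rotation, dtype=float).T \
        + np.asarray(translation, dtype=float)

def test_identity_transform():
    # The kind of small unit test that accompanies each library function
    pts = np.array([[1.0, 2.0, 3.0]])
    out = transform_points(pts, np.eye(3), np.zeros(3))
    assert np.allclose(out, pts)

test_identity_transform()
```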


Subject(s)
Augmented Reality, Software, Surgery, Computer-Assisted/methods, Humans
7.
J Med Imaging (Bellingham); 6(1): 011003, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30840715

ABSTRACT

Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted prostate cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing guidance systems, one which often requires significant manual interaction and is subject to interoperator variability. Automating this step would therefore lead to more acceptable clinical workflows and greater standardization between different operators and hospitals. In this work, a convolutional neural network (CNN) for automatically segmenting the prostate in two-dimensional (2-D) TRUS slices of a 3-D TRUS volume was developed and tested. The network was designed to incorporate 3-D spatial information by taking one or more TRUS slices neighboring each slice to be segmented as input, in addition to these slices. The accuracy of the CNN was evaluated on data from a cohort of 109 patients who had undergone TRUS-guided targeted biopsy (a total of 4034 2-D slices). The segmentation accuracy was measured by calculating 2-D and 3-D Dice similarity coefficients, on the 2-D images and corresponding 3-D volumes, respectively, as well as the 2-D boundary distances, using a 10-fold patient-level cross-validation experiment. However, incorporating neighboring slices did not improve the segmentation performance in five out of six experiments, which included varying the number of neighboring slices from 1 to 3 on either side. The up-sampling shortcuts reduced the overall training time of the network to 161 min, compared with 253 min without this architectural addition.
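The reported Dice similarity coefficients follow the standard definition, which applies unchanged to 2-D slices and 3-D volumes because the masks are flattened; a sketch with toy masks:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks; works for
    both the 2-D slice and 3-D volume cases. eps guards the case where
    both masks are empty."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
ref  = np.zeros((64, 64), dtype=bool); ref[15:45, 15:45] = True
print(round(dice(pred, ref), 3))   # 0.694 for these toy masks
```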

8.
Med Image Anal; 58: 101558, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31526965

ABSTRACT

Convolutional neural networks (CNNs) have recently led to significant advances in automatic segmentation of anatomical structures in medical images, and a wide variety of network architectures are now available to the research community. For applications such as segmentation of the prostate in magnetic resonance images (MRI), the results of the PROMISE12 online algorithm evaluation platform have demonstrated differences between the best-performing segmentation algorithms in terms of numerical accuracy using standard metrics such as the Dice score and boundary distance. These small differences in the segmented regions/boundaries output by different algorithms may have an insubstantial impact on the results of downstream image analysis tasks, such as estimating organ volume and multimodal image registration, which inform clinical decisions; this impact has not previously been investigated. In this work, we quantified the accuracy of six different CNNs in segmenting the prostate in 3D patient T2-weighted MRI scans and compared the accuracy of organ volume estimation and MRI-ultrasound (US) registration using the prostate segmentations produced by the different networks. Networks were trained and tested using a set of 232 patient MRIs with labels provided by experienced clinicians. A statistically significant difference was found among the Dice scores and boundary distances produced by these networks in a non-parametric analysis of variance (p < 0.001 and p < 0.001, respectively); subsequent multiple comparison tests revealed that the statistically significant differences in segmentation errors were attributable to at least one of the tested networks. Gland volume errors (GVEs) and target registration errors (TREs) were then estimated using the CNN-generated segmentations. Interestingly, no statistical difference was found in either GVEs or TREs among the different networks (p = 0.34 and p = 0.26, respectively). This result provides a real-world example that networks with different segmentation performance may provide indistinguishably adequate registration accuracy for prostate cancer imaging applications. We conclude by recommending that, when selecting between network architectures, the differences in the accuracy of downstream image analysis tasks that make use of automatic segmentation output within a clinical pipeline should be taken into account, in addition to the segmentation accuracy itself.
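The abstract does not name the exact non-parametric analysis of variance; the Kruskal-Wallis test is the common choice for comparing more than two groups, and a sketch of the comparison on stand-in Dice scores (not the paper's data) is shown below.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in per-case Dice scores for six networks; the paper's values
# are not reproduced here
dice_scores = [rng.beta(8, 2, size=50) for _ in range(6)]

# Non-parametric one-way analysis of variance across the six networks
h, p = stats.kruskal(*dice_scores)
print(f"Kruskal-Wallis H={h:.2f}, p={p:.3g}")

# If p is small, post-hoc pairwise tests (with a multiple-comparison
# correction such as Bonferroni) locate which networks differ
p_pair = stats.mannwhitneyu(dice_scores[0], dice_scores[1]).pvalue
print(f"example pairwise p (uncorrected): {p_pair:.3g}")
```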


Subject(s)
Magnetic Resonance Imaging, Neural Networks, Computer, Pattern Recognition, Automated/methods, Prostatic Neoplasms/diagnostic imaging, Ultrasonography, Humans, Male, Tumor Burden
9.
Eur Urol; 75(5): 733-740, 2019 May.
Article in English | MEDLINE | ID: mdl-30527787

ABSTRACT

BACKGROUND: Multiparametric magnetic resonance imaging (mpMRI)-targeted prostate biopsies can improve detection of clinically significant prostate cancer and decrease the overdetection of insignificant cancers. It is unknown whether visual-registration targeting is sufficient or whether augmentation with image-fusion software is needed. OBJECTIVE: To assess concordance between the two methods. DESIGN, SETTING, AND PARTICIPANTS: We conducted a blinded, within-person randomised, paired validating clinical trial. From 2014 to 2016, 141 men who had undergone a prior (positive or negative) transrectal ultrasound biopsy and had a discrete lesion on mpMRI (score 3-5) requiring targeted transperineal biopsy were enrolled at a UK academic hospital; 129 underwent both biopsy strategies and completed the study. INTERVENTION: The order of performing biopsies using visual registration and a computer-assisted MRI/ultrasound image-fusion system (SmartTarget) on each patient was randomised. The equipment was reset between biopsy strategies to mitigate incorporation bias. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: The proportion of clinically significant prostate cancer (primary outcome: Gleason pattern ≥3+4=7, maximum cancer core length ≥4 mm; secondary outcome: Gleason pattern ≥4+3=7, maximum cancer core length ≥6 mm) detected by each method was compared using McNemar's test of paired proportions. RESULTS AND LIMITATIONS: The two strategies combined detected 93 clinically significant prostate cancers (72% of the cohort). Each strategy detected 80/93 (86%) of these cancers, and each identified 13 cases missed by the other. Three patients experienced adverse events related to biopsy (urinary retention, urinary tract infection, nausea, and vomiting). No difference in urinary symptoms, erectile function, or quality of life between baseline and follow-up (median 10.5 wk) was observed. The key limitations were the lack of parallel-group randomisation and a limit on the number of targeted cores. CONCLUSIONS: The visual-registration and image-fusion targeting strategies combined had the highest detection rate for clinically significant cancers. Targeted prostate biopsy should be performed using both strategies together. PATIENT SUMMARY: We compared two prostate cancer biopsy strategies: visual registration and image fusion. A combination of the two strategies found the most clinically important cancers, and the two should be used together whenever targeted biopsy is performed.
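McNemar's test on the paired detections can be reproduced from the numbers in the abstract: with 13 discordant cases in each direction, the exact test finds no detectable difference between the strategies. A sketch:

```python
from scipy import stats

# Paired detection outcomes from the abstract: of 93 clinically
# significant cancers, each strategy found 80, and each found 13 that
# the other missed (13 discordant pairs in each direction).
b, c = 13, 13   # visual-only detections, fusion-only detections

# Exact McNemar test: under H0 the discordant pairs split 50/50
p = stats.binomtest(b, b + c, 0.5).pvalue
print(f"McNemar exact p = {p:.3f}")   # 1.000: no detectable difference
```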


Subject(s)
Image-Guided Biopsy/methods, Magnetic Resonance Imaging, Multimodal Imaging, Prostatic Neoplasms/pathology, Ultrasonography, Aged, False Negative Reactions, Humans, Male, Middle Aged, Neoplasm Grading, Prospective Studies, Risk Assessment, Single-Blind Method
10.
J Med Imaging (Bellingham); 5(2): 021206, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29340289

ABSTRACT

Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics that are of importance for pelvic floor disorder assessment. We present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a two-dimensional image extracted from a three-dimensional ultrasound volume. In particular, our method uses the recently developed scaled exponential linear unit (SELU) as a nonlinear, self-normalizing activation function, applied here for the first time with a CNN in medical imaging. SELU has important advantages, such as being parameter-free and mini-batch independent, which may help to overcome memory constraints during training. A dataset with 91 images from 35 patients during Valsalva, contraction, and rest, all labeled by three operators, is used for training and evaluation in a leave-one-patient-out cross-validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with performance equivalent to the three operators (with a Williams' index of 1.03), outperforming a U-Net architecture without the need for batch normalization. We conclude that the proposed fully automatic method achieves accuracy in segmenting the pelvic floor levator hiatus equivalent to a previous semiautomatic approach.
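SELU itself is a fixed, parameter-free activation with published constants (Klambauer et al., 2017), which is what "parameter-free and mini-batch independent" refers to; a sketch of the function:

```python
import numpy as np

# SELU constants from Klambauer et al. (2017)
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit: a self-normalising activation
    that drives layer activations toward zero mean and unit variance,
    removing the need for batch normalisation."""
    x = np.asarray(x, dtype=float)
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

print(selu([-2.0, 0.0, 2.0]))
```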

11.
Med Phys; 45(11): 5094-5104, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30247765

ABSTRACT

PURPOSE: In image-guided laparoscopy, optical tracking is commonly employed, but electromagnetic (EM) systems have been proposed in the literature. In this paper, we provide a thorough comparison of EM and optical tracking systems for use in image-guided laparoscopic surgery, together with a feasibility study of a combined, EM-tracked laparoscope and laparoscopic ultrasound (LUS) image guidance system. METHODS: We first assess the tracking accuracy of a laparoscope with two optical trackers tracking retroreflective markers mounted on the shaft, and an EM tracker with the sensor embedded at the proximal end, using a standard evaluation plate. We then use a stylus to test the precision of position measurement and the accuracy of distance measurement for the trackers. Finally, we assess the accuracy of an image guidance system comprising an EM-tracked laparoscope and an EM-tracked LUS probe. RESULTS: In the experiment using the standard evaluation plate, the two optical trackers show less jitter in position and orientation measurement than the EM tracker, and demonstrate better consistency of orientation measurement within the test volume. However, their accuracy in measuring relative positions decreases significantly with longer distances, whereas the EM tracker's performance is stable: at 50 mm distance, the RMS errors for the two optical trackers are 0.210 and 0.233 mm, respectively, versus 0.214 mm for the EM tracker; at 250 mm distance, the RMS errors for the two optical trackers become 1.031 and 1.178 mm, respectively, while the EM tracker's remains 0.367 mm. In the experiment using the stylus, the two optical trackers have RMS errors of 1.278 and 1.555 mm in localizing the stylus tip, versus 1.117 mm for the EM tracker. Our prototype of a combined, EM-tracked laparoscope and LUS system using representative calibration methods showed an RMS point localization error of 3.0 mm for the laparoscope and 1.3 mm for the LUS probe, the larger error of the former being predominantly due to triangulation error when using a narrow-baseline stereo laparoscope. CONCLUSIONS: The errors incurred by optical trackers, due to the lever-arm effect and variation in tracking accuracy in the depth direction, would make EM-tracked solutions preferable if the EM sensor is placed at the proximal end of the laparoscope.
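The quoted figures are RMS errors over repeated 3-D position measurements; a sketch of the metric on synthetic data (the jitter magnitude is illustrative, not a measured value):

```python
import numpy as np

def rms_distance_error(measured, reference):
    """RMS error between measured and reference 3-D positions, the
    metric quoted in the tracker comparisons (arrays of shape (N, 3),
    in mm)."""
    d = np.linalg.norm(np.asarray(measured) - np.asarray(reference), axis=1)
    return np.sqrt(np.mean(d ** 2))

rng = np.random.default_rng(0)
ref = rng.uniform(0, 250, size=(100, 3))          # points across the volume
meas = ref + rng.normal(0, 0.2, size=ref.shape)   # 0.2 mm jitter, illustrative
print(f"RMS error: {rms_distance_error(meas, ref):.3f} mm")
```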


Subject(s)
Electromagnetic Phenomena, Laparoscopes, Optical Phenomena, Surgery, Computer-Assisted/instrumentation, Ultrasonography/instrumentation, Feasibility Studies
12.
Med Phys; 45(4): 1408-1414, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29443386

ABSTRACT

PURPOSE: Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. METHODS: A set of nine measures is presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using an MR/US-guided transperineal approach. RESULTS: Using the SmartTarget fusion system, the MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), and the overall system instrument targeting error to be 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. CONCLUSIONS: The application of a comprehensive, unbiased validation assessment for MR/US-guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can help identify relationships between these errors, providing insight into the technical behavior of these systems.


Subject(s)
Biopsy, Needle/instrumentation, Image-Guided Biopsy/instrumentation, Magnetic Resonance Imaging, Prostate/diagnostic imaging, Prostate/pathology, Research Design, Humans, Image Processing, Computer-Assisted, Male, Ultrasonography
13.
IEEE Trans Med Imaging; 37(8): 1822-1834, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29994628

ABSTRACT

Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.
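Alongside Dice (sketched for an earlier entry), the mean absolute distance can be computed symmetrically between segmentation surfaces via Euclidean distance transforms; a sketch, assuming boundary voxels as the surface and isotropic spacing by default:

```python
import numpy as np
from scipy import ndimage

def mean_absolute_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean absolute distance between the surfaces of two
    binary segmentations, the second metric reported alongside Dice.
    Surfaces are taken as boundary voxels; distances come from a
    Euclidean distance transform with the given voxel spacing."""
    def surface(m):
        return m & ~ndimage.binary_erosion(m)
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    da = ndimage.distance_transform_edt(~sa, sampling=spacing)
    db = ndimage.distance_transform_edt(~sb, sampling=spacing)
    return 0.5 * (db[sa].mean() + da[sb].mean())

a = np.zeros((32, 32, 32), bool); a[8:24, 8:24, 8:24] = True
b = np.zeros((32, 32, 32), bool); b[10:26, 10:26, 10:26] = True
print(f"{mean_absolute_surface_distance(a, b):.2f} voxels")
```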


Subject(s)
Radiographic Image Interpretation, Computer-Assisted/methods, Radiography, Abdominal/methods, Tomography, X-Ray Computed/methods, Algorithms, Digestive System/diagnostic imaging, Humans, Kidney/diagnostic imaging, Spleen/diagnostic imaging
14.
Int J Comput Assist Radiol Surg; 13(6): 875-883, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29663274

ABSTRACT

PURPOSE: Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields of view of ultrasound and optical devices, as well as anatomical variability and the limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific, EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. METHODS: A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provides a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes, using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. RESULTS: The results show a lower 90th-percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). CONCLUSIONS: The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
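The Monte Carlo estimation of predicted TRE can be sketched by repeatedly perturbing the landmarks with localisation noise, re-running a rigid point registration, and taking the 90th percentile of the resulting target displacement; the noise level, landmark layout, and the use of Arun's SVD method below are illustrative assumptions, not the paper's exact simulation.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid registration (Arun's SVD method) mapping the
    source 3-D point set onto the destination point set."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

def predicted_tre(landmarks, target, loc_sd=2.0, n=1000, seed=0):
    """Monte Carlo estimate of the TRE distribution at a target when the
    registration landmarks carry isotropic localisation noise; returns
    the 90th percentile, as used to rank candidate EUS planes.
    loc_sd (mm) and n are illustrative choices."""
    rng = np.random.default_rng(seed)
    tres = []
    for _ in range(n):
        noisy = landmarks + rng.normal(0, loc_sd, landmarks.shape)
        R, t = rigid_register(noisy, landmarks)
        tres.append(np.linalg.norm(R @ target + t - target))
    return np.percentile(tres, 90)

lm = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)
tre90 = predicted_tre(lm, np.array([25.0, 25.0, 25.0]))
print(f"90th-percentile TRE: {tre90:.2f} mm")
```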


Subject(s)
Endosonography/methods, Imaging, Three-Dimensional/methods, Pancreas/diagnostic imaging, Pancreatectomy/methods, Pancreatic Neoplasms/surgery, Surgery, Computer-Assisted/methods, Humans, Pancreas/surgery, Pancreatic Neoplasms/diagnostic imaging, Retrospective Studies, Tomography, X-Ray Computed/methods
15.
Med Image Anal; 49: 1-13, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30007253

ABSTRACT

One of the fundamental challenges in supervised learning for multimodal image registration is the lack of ground truth for voxel-level spatial correspondence. This work describes a method to infer voxel-level transformations from higher-level correspondence information contained in anatomical labels. We argue that such labels are more reliable and practical to obtain for reference sets of image pairs than voxel-level correspondence. Typical anatomical labels of interest may include solid organs, vessels, ducts, structure boundaries, and other subject-specific ad hoc landmarks. The proposed end-to-end convolutional neural network approach aims to predict displacement fields that align multiple labelled corresponding structures for individual image pairs during training, while only unlabelled image pairs are used as the network input for inference. We highlight the versatility of the proposed strategy, which for training utilises diverse types of anatomical labels that need not be identifiable across all training image pairs. At inference, the resulting 3D deformable image registration algorithm runs in real time and is fully automated, without requiring any anatomical labels or initialisation. Several network architecture variants are compared for registering T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients. A median target registration error of 3.6 mm on landmark centroids and a median Dice of 0.87 on prostate glands are achieved in cross-validation experiments, in which 108 pairs of multimodal images from 76 patients were tested with high-quality anatomical labels.
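The label-driven training signal can be sketched as warping the moving image's labels with the predicted displacement field and scoring the overlap with the fixed image's labels via a soft Dice loss; the grid conventions and toy tensors below are illustrative, and the paper's multi-label weighting and regularisation details are not reproduced.

```python
import torch
import torch.nn.functional as F

def warp(label, ddf):
    """Warp a one-channel label volume (N, 1, D, H, W) with a dense
    displacement field ddf (N, D, H, W, 3) given in normalised grid
    units, via trilinear resampling."""
    N = label.shape[0]
    base = F.affine_grid(torch.eye(3, 4).unsqueeze(0).expand(N, -1, -1),
                         label.shape, align_corners=False)
    return F.grid_sample(label, base + ddf, align_corners=False)

def soft_dice_loss(warped, fixed, eps=1e-6):
    """Soft Dice between warped moving labels and fixed labels: the
    training signal standing in for voxel-level correspondence."""
    inter = (warped * fixed).sum(dim=(2, 3, 4))
    denom = warped.sum(dim=(2, 3, 4)) + fixed.sum(dim=(2, 3, 4))
    return 1 - (2 * inter / (denom + eps)).mean()

moving = torch.rand(1, 1, 16, 16, 16)   # e.g. a prostate gland label
fixed = torch.rand(1, 1, 16, 16, 16)
ddf = torch.zeros(1, 16, 16, 16, 3, requires_grad=True)  # network output
loss = soft_dice_loss(warp(moving, ddf), fixed)
loss.backward()   # gradients flow back to the displacement field
```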


Subject(s)
Algorithms, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Neural Networks, Computer, Prostatic Neoplasms/diagnostic imaging, Ultrasonography, Anatomic Landmarks, Humans, Imaging, Three-Dimensional, Male, Prostate/anatomy & histology, Prostate/diagnostic imaging
16.
Comput Methods Programs Biomed; 151: 203-212, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28947002

ABSTRACT

BACKGROUND AND OBJECTIVE: In computational neuroimaging, brain parcellation methods subdivide the brain into individual regions that can be used to build a network to study its structure and function. Using anatomical or functional connectivity, hierarchical clustering methods aim to offer a meaningful parcellation of the brain at each level of granularity. However, some of these methods have only been applied to small regions and depend strongly on the similarity measure used to merge regions. The aim of this work is to present a robust whole-brain hierarchical parcellation that preserves the global structure of the network. METHODS: Brain regions are modeled as a random walk on the connectome. From this model, a Markov process is derived in which the nodes represent brain regions and the structure can be quantified. Functional or anatomical brain regions are clustered using an agglomerative information bottleneck method that minimizes the overall loss of structural information, with mutual information as the similarity measure. RESULTS: The method is tested with synthetic models and with structural and functional human connectomes, and is compared with classic k-means clustering. Results show that the parcellated networks preserve the main network properties and are consistent across subjects. CONCLUSION: This work provides a new framework for studying the human connectome using functional or anatomical connectivity at different levels of granularity.
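The agglomerative information bottleneck merge step has a closed form: the mutual information lost by merging two clusters equals their total weight times the weighted Jensen-Shannon divergence between their conditional distributions (Slonim and Tishby's formulation). A sketch with toy distributions, not connectome data:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence in bits; eps guards zero entries."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    return float(np.sum(p * np.log2(p / q)))

def merge_cost(p_i, p_j, cond_i, cond_j):
    """Information lost by merging clusters i and j: (p_i + p_j) times
    the weighted Jensen-Shannon divergence between p(y|i) and p(y|j).
    Agglomerative clustering repeatedly merges the cheapest pair."""
    w_i, w_j = p_i / (p_i + p_j), p_j / (p_i + p_j)
    mix = w_i * np.asarray(cond_i) + w_j * np.asarray(cond_j)
    js = w_i * kl(cond_i, mix) + w_j * kl(cond_j, mix)
    return (p_i + p_j) * js

# Regions with similar random-walk profiles merge cheaply; dissimilar
# ones are expensive, so the global structure is preserved
print(merge_cost(0.1, 0.1, [0.50, 0.50, 0.0], [0.45, 0.55, 0.0]))  # small
print(merge_cost(0.1, 0.1, [0.90, 0.10, 0.0], [0.10, 0.10, 0.8]))  # large
```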


Subject(s)
Brain/diagnostic imaging, Connectome, Information Theory, Neuroimaging, Cluster Analysis, Humans