Results 1 - 20 of 99
1.
Development ; 148(18)2021 03 12.
Article in English | MEDLINE | ID: mdl-33712441

ABSTRACT

Characterising phenotypes often requires quantification of anatomical shape. Quantitative shape comparison (morphometrics) traditionally uses manually located landmarks and is limited by landmark number and operator accuracy. Here, we apply a landmark-free method to characterise the craniofacial skeletal phenotype of the Dp1Tyb mouse model of Down syndrome and a population of the Diversity Outbred (DO) mouse model, comparing it with a landmark-based approach. We identified cranial dysmorphologies in Dp1Tyb mice, especially smaller size and brachycephaly (front-back shortening), homologous to the human phenotype. Shape variation in the DO mice was partly attributable to allometry (size-dependent shape variation) and sexual dimorphism. The landmark-free method performed as well as, or better than, the landmark-based method but was less labour-intensive, required less user training and, uniquely, enabled fine mapping of local differences as planar expansion or shrinkage. Its higher resolution pinpointed reductions in interior mid-snout structures and occipital bones in both models that were not otherwise apparent. We propose that this landmark-free pipeline could make morphometrics widely accessible beyond its traditional niches in zoology and palaeontology, especially for characterising developmental mutant phenotypes.
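
As a pointer to what the landmark-based baseline involves, here is a minimal sketch of the classical first step of landmark morphometrics (centroid size and Procrustes alignment), assuming NumPy; this is generic textbook material, not the paper's pipeline.

```python
import numpy as np

def centroid_size(landmarks):
    """Root summed squared distance of landmarks to their centroid."""
    centered = landmarks - landmarks.mean(axis=0)
    return np.sqrt((centered ** 2).sum())

def procrustes_align(source, target):
    """Align one landmark set to another after removing position and
    scale (Kabsch/Procrustes; reflection handling omitted for brevity)."""
    src = source - source.mean(axis=0)
    tgt = target - target.mean(axis=0)
    src = src / np.linalg.norm(src)
    tgt = tgt / np.linalg.norm(tgt)
    u, _, vt = np.linalg.svd(src.T @ tgt)   # cross-covariance SVD
    return src @ (u @ vt)                    # rotate src onto tgt

rng = np.random.default_rng(0)
a = rng.random((20, 3))                       # hypothetical 3D landmarks
rot = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
b = a @ rot + 5.0                             # same shape, moved and rotated
ref = (a - a.mean(0)) / np.linalg.norm(a - a.mean(0))
print(centroid_size(a), np.allclose(procrustes_align(b, a), ref, atol=1e-8))
```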


Subject(s)
Anatomic Landmarks/physiopathology , Down Syndrome/physiopathology , Imaging, Three-Dimensional/methods , Animals , Body Weights and Measures/methods , Disease Models, Animal , Female , Male , Mice , Mice, Inbred C57BL , Phenotype , Sex Characteristics , Skull/physiopathology
2.
Prenat Diagn ; 42(1): 49-59, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34648206

ABSTRACT

OBJECTIVE: Advances in artificial intelligence (AI) have demonstrated potential to improve medical diagnosis. We piloted the end-to-end automation of the mid-trimester screening ultrasound scan using AI-enabled tools. METHODS: A prospective method comparison study was conducted. Participants had both standard and AI-assisted ultrasound (US) scans performed. The AI tools automated image acquisition, biometric measurement, and report production. A feedback survey captured the sonographers' perceptions of scanning. RESULTS: Twenty-three subjects were studied. The average time saving per scan was 7.62 min (34.7%) with the AI-assisted method (p < 0.0001). There was no difference in reporting time. There were no clinically significant differences in biometric measurements between the two methods. The AI tools saved a satisfactory view in 93% of cases for the four core views alone, and in 73% of cases for the full 13 views, compared with 98% for both using the manual scan. Survey responses suggest that the AI tools helped sonographers to concentrate on image interpretation by removing disruptive tasks. CONCLUSION: Separating freehand scanning from image capture and measurement resulted in a faster scan and an altered workflow. Removing repetitive tasks may allow more attention to be directed toward identifying fetal malformations. Further work is required to improve the image plane detection algorithm for use in real time.
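
For readers interested in how such a paired method comparison is typically analysed, a minimal sketch with hypothetical timings (assuming NumPy/SciPy; the study's actual data are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant scan durations in minutes (paired design).
standard = np.array([22.1, 25.3, 20.8, 24.0, 21.5])
ai_assisted = np.array([14.9, 16.2, 13.5, 15.8, 14.1])

diff = standard - ai_assisted
t, p = stats.ttest_rel(standard, ai_assisted)   # paired t-test
print(f"mean saving: {diff.mean():.2f} min "
      f"({100 * diff.mean() / standard.mean():.1f}%), p = {p:.4g}")
```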


Subject(s)
Artificial Intelligence/standards , Congenital Abnormalities/diagnosis , Ultrasonography, Prenatal/instrumentation , Adult , Artificial Intelligence/trends , Congenital Abnormalities/diagnostic imaging , Female , Gestational Age , Humans , Pregnancy , Prospective Studies , Reproducibility of Results , Ultrasonography, Prenatal/methods , Ultrasonography, Prenatal/standards
3.
Magn Reson Med ; 77(6): 2414-2423, 2017 06.
Article in English | MEDLINE | ID: mdl-27605429

ABSTRACT

PURPOSE: Fitting tracer kinetic models using linear methods is much faster than using their nonlinear counterparts, although this often comes at the expense of reduced accuracy and precision. The aim of this study was to derive the linear compartmental tissue uptake (CTU) model and compare its performance with that of its nonlinear version with respect to percentage error and precision. THEORY AND METHODS: The linear and nonlinear CTU models were initially compared using simulations with varying noise and temporal sampling. Subsequently, the clinical applicability of the linear model was demonstrated on 14 patients with locally advanced cervical cancer examined with dynamic contrast-enhanced magnetic resonance imaging. RESULTS: Simulations revealed equal percentage error and precision when noise was within clinically achievable ranges (contrast-to-noise ratio >10). The linear method was significantly faster than the nonlinear method, with a minimum speedup of around 230 across all tested sampling rates. Clinical analysis revealed that parameters estimated using the linear and nonlinear CTU models were highly correlated (ρ ≥ 0.95). CONCLUSION: The linear CTU model is computationally more efficient and more stable against temporal downsampling, whereas the nonlinear method is more robust to variations in noise. The two methods may be used interchangeably within clinically achievable ranges of temporal sampling and noise.
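
A toy illustration of why the linear formulation is faster: the same linear-in-parameters model fitted once in closed form and once through a generic iterative optimiser (assuming NumPy/SciPy; this is not the authors' exact CTU equations):

```python
import time
import numpy as np
from scipy.optimize import curve_fit

# Illustrative only: a generic linear-in-parameters kinetic model.
t = np.linspace(0, 5, 50)
aif = t * np.exp(-t)                                  # toy input function
basis = np.column_stack([aif, np.cumsum(aif) * (t[1] - t[0])])
data = basis @ np.array([0.8, 0.3]) + 0.02 * np.random.randn(t.size)

start = time.perf_counter()
p_lin, *_ = np.linalg.lstsq(basis, data, rcond=None)  # closed-form solve
t_lin = time.perf_counter() - start

start = time.perf_counter()
p_nl, _ = curve_fit(lambda tt, a, b: a * basis[:, 0] + b * basis[:, 1],
                    t, data, p0=[0.1, 0.1])           # iterative solve
t_nl = time.perf_counter() - start
print(p_lin, p_nl, f"speedup ~{t_nl / t_lin:.0f}x")
```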


Subject(s)
Contrast Media/pharmacokinetics , Image Interpretation, Computer-Assisted/methods , Linear Models , Magnetic Resonance Imaging/methods , Models, Biological , Neoplasms/metabolism , Nonlinear Dynamics , Computer Simulation , Humans , Metabolic Clearance Rate , Neoplasms/diagnostic imaging , Reproducibility of Results , Sensitivity and Specificity
4.
J Electron Imaging ; 26(6)2017 Oct 04.
Article in English | MEDLINE | ID: mdl-29225433

ABSTRACT

In this work, we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the voxel-wise graph construction, the use of graph cuts in this context has previously been limited mainly to 2D applications. Our work overcomes some of these limitations by posing the problem on a graph created from adjacent supervoxels, reducing the number of nodes from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization, can be applied to 3D data. We further show that applying a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model 'sliding motion'. Applying this method to lung image registration results in highly accurate registration and anatomically plausible estimates of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset (www.dir-lab.com) shows that our new approach compares very favorably with state-of-the-art continuous and discrete image registration methods, achieving a mean target registration error of 1.16 mm per landmark.
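
A minimal sketch of the core data structure, assuming scikit-image (>= 0.19, for `channel_axis`) and NumPy: supervoxels computed with SLIC and an adjacency graph whose nodes are supervoxels rather than voxels. Parameters are illustrative, not the paper's.

```python
import numpy as np
from skimage.segmentation import slic

volume = np.random.rand(40, 40, 40)   # stand-in for a CT volume

# ~500 supervoxels instead of 64,000 voxels -> far smaller graph.
labels = slic(volume, n_segments=500, compactness=0.1, channel_axis=None)

# Edges between supervoxels that touch along any axis.
edges = set()
for axis in range(3):
    a = np.take(labels, range(labels.shape[axis] - 1), axis=axis).ravel()
    b = np.take(labels, range(1, labels.shape[axis]), axis=axis).ravel()
    mask = a != b
    edges.update(zip(np.minimum(a[mask], b[mask]).tolist(),
                     np.maximum(a[mask], b[mask]).tolist()))
print(f"{np.unique(labels).size} nodes, {len(edges)} edges")
```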

5.
Neuroimage ; 84: 225-35, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-23994455

ABSTRACT

In dynamic positron emission tomography (PET) neuroimaging studies, where scan durations often exceed 1h, registration of motion-corrupted dynamic PET images is necessary in order to maintain the integrity of the physiological, pharmacological, or biochemical information derived from the tracer kinetic analysis of the scan. In this work, we incorporate a pharmacokinetic model, which is traditionally used to analyse PET data following any registration, into the registration process itself in order to allow for a groupwise registration of the temporal time frames. The new method is shown to achieve smaller registration errors and improved kinetic parameter estimates on validation data sets when compared with image similarity based registration approaches. When applied to measured clinical data from 10 healthy subjects scanned with [(11)C]-(+)-PHNO (a dopamine D3/D2 receptor tracer), it reduces the intra-class variability on the receptor binding outcome measure, further supporting the improvements in registration accuracy. Our method incorporates a generic tracer kinetic model which makes it applicable to different PET radiotracers to remove motion artefacts and increase the integrity of dynamic PET studies.
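
A heavily simplified 1-D toy of the alternating idea (fit a kinetic model to the frames, then realign each frame to the model's prediction), assuming NumPy/SciPy; the decay model and translation-only motion here are placeholders, not the paper's method:

```python
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import minimize_scalar

x = np.linspace(0, 10, 200)
times = np.arange(1, 6)
profile = lambda t: np.exp(-((x - 5) ** 2)) * np.exp(-0.2 * t)
frames = [shift(profile(t), np.random.uniform(-3, 3)) for t in times]

for _ in range(3):  # alternate: model fit <-> realignment
    # 1) "Kinetic model": a per-position exponential decay with a known
    #    constant -- a placeholder for a real tracer kinetic model fit.
    ref = np.mean([f * np.exp(0.2 * t) for f, t in zip(frames, times)], axis=0)
    model = [ref * np.exp(-0.2 * t) for t in times]
    # 2) Re-align each frame to its model prediction.
    frames = [
        shift(f, minimize_scalar(
            lambda d: np.sum((shift(f, d) - m) ** 2),
            bounds=(-4, 4), method="bounded").x)
        for f, m in zip(frames, model)
    ]
```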


Subject(s)
Brain/metabolism , Imaging, Three-Dimensional/methods , Models, Neurological , Oxazines/pharmacokinetics , Positron-Emission Tomography/methods , Receptors, Dopamine D3/metabolism , Subtraction Technique , Algorithms , Brain/diagnostic imaging , Carbon Isotopes/pharmacokinetics , Computer Simulation , Female , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Male , Neuroimaging/methods , Radiopharmaceuticals/pharmacokinetics , Receptors, Dopamine D3/antagonists & inhibitors , Reproducibility of Results , Sensitivity and Specificity , Spatio-Temporal Analysis , Time Factors , Young Adult
6.
Annu Rev Biomed Eng ; 15: 327-57, 2013.
Article in English | MEDLINE | ID: mdl-23683087

ABSTRACT

The role of breast image analysis in radiologists' interpretation tasks in cancer risk assessment, detection, diagnosis, and treatment continues to expand. Breast image analysis methods include segmentation, feature extraction techniques, classifier design, biomechanical modeling, image registration, motion correction, and rigorous methods of evaluation. We present a review of the current status of these task-based image analysis methods, which are being developed for the various image acquisition modalities of mammography, tomosynthesis, computed tomography, ultrasound, and magnetic resonance imaging. Depending on the task, image-based biomarkers from such quantitative image analysis may include morphological, textural, and kinetic characteristics and may depend on accurate modeling and registration of the breast images. We conclude with a discussion of future directions.


Subject(s)
Breast Neoplasms/diagnosis , Breast Neoplasms/pathology , Breast Neoplasms/therapy , Breast/pathology , Radiographic Image Interpretation, Computer-Assisted , Biomarkers/metabolism , Biomarkers, Tumor , Biomechanical Phenomena , Female , Humans , Image Processing, Computer-Assisted/methods , Kinetics , Mammography/methods , Motion , Multimodal Imaging/methods , Phenotype , Risk , Risk Assessment
7.
IEEE Trans Med Imaging ; 43(2): 846-859, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37831582

ABSTRACT

Motion represents one of the major challenges in magnetic resonance imaging (MRI). Since the MR signal is acquired in frequency space, any motion of the imaged object leads to complex artefacts in the reconstructed image, in addition to other MR imaging artefacts. Deep learning has been frequently proposed for motion correction at several stages of the reconstruction process. The wide range of MR acquisition sequences, anatomies and pathologies of interest, and motion patterns (rigid vs. deformable, random vs. regular) makes a comprehensive solution unlikely. To facilitate the transfer of ideas between different applications, this review provides a detailed overview of proposed methods for learning-based motion correction in MRI, together with their common challenges and potential. This review identifies differences and synergies in underlying data usage, architectures, and training and evaluation strategies. We critically discuss general trends and outline future directions, with the aim of enhancing interaction between different application areas and research fields.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Retrospective Studies , Motion , Magnetic Resonance Imaging/methods , Artifacts
8.
Nat Rev Cardiol ; 21(1): 51-64, 2024 01.
Article in English | MEDLINE | ID: mdl-37464183

ABSTRACT

Artificial intelligence (AI) is likely to revolutionize the way medical images are analysed and has the potential to improve the identification and analysis of vulnerable or high-risk atherosclerotic plaques in coronary arteries, leading to advances in the treatment of coronary artery disease. However, coronary plaque analysis is challenging owing to cardiac and respiratory motion, as well as the small size of cardiovascular structures. Moreover, the analysis of coronary imaging data is time-consuming, can be performed only by clinicians with dedicated cardiovascular imaging training, and is subject to considerable interreader and intrareader variability. AI has the potential to improve the assessment of images of vulnerable plaque in coronary arteries, but requires robust development, testing and validation. Combining human expertise with AI might facilitate the reliable and valid interpretation of images obtained using CT, MRI, PET, intravascular ultrasonography and optical coherence tomography. In this Roadmap, we review existing evidence on the application of AI to the imaging of vulnerable plaque in coronary arteries and provide consensus recommendations developed by an interdisciplinary group of experts on AI and non-invasive and invasive coronary imaging. We also outline future requirements of AI technology to address bias, uncertainty, explainability and generalizability, which are all essential for the acceptance of AI and its clinical utility in handling the anticipated growing volume of coronary imaging procedures.


Subject(s)
Coronary Artery Disease , Plaque, Atherosclerotic , Humans , Plaque, Atherosclerotic/diagnostic imaging , Artificial Intelligence , Coronary Vessels/diagnostic imaging , Coronary Artery Disease/diagnostic imaging , Tomography, Optical Coherence/methods , Coronary Angiography
9.
Int J Cardiovasc Imaging ; 39(7): 1405-1419, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37103667

ABSTRACT

Extended reality (XR), which encompasses virtual, augmented and mixed reality, is an emerging medical imaging display platform that enables intuitive and immersive interaction in a three-dimensional space. This technology holds the potential to enhance understanding of complex spatial relationships when planning and guiding cardiac procedures in congenital and structural heart disease, moving beyond conventional 2D and 3D image displays. A systematic review of the literature demonstrates a rapid increase in publications describing adoption of this technology. At least 33 XR systems have been described, many demonstrating proof of concept (including in some prospective studies), but with no specific mention of regulatory approval. Validation remains limited, and true clinical benefit difficult to measure. This review describes and critically appraises the range of XR technologies and their applications for procedural planning and guidance in structural heart disease, while discussing the challenges that need to be overcome in future studies to achieve safe and effective clinical adoption.


Subject(s)
Augmented Reality , Heart Diseases , Humans , Heart Diseases/diagnostic imaging , Heart Diseases/therapy , Imaging, Three-Dimensional/methods , Predictive Value of Tests , Prospective Studies
10.
Article in English | MEDLINE | ID: mdl-38083521

ABSTRACT

Colorimetric sensors represent an accessible and sensitive nanotechnology for rapid measurement of a substance's properties (e.g., analyte concentration) via color changes. Although colorimetric sensors are widely used in healthcare and laboratories, interpretation of their output is performed either by visual inspection or using cameras in highly controlled illumination set-ups, limiting their usage in end-user applications with lower resolutions and altered light conditions. To that end, we implement a set of image processing and deep learning (DL) methods that correct for non-uniform illumination alterations and accurately read the target variable from the color response of the sensor. Methods that perform the two tasks independently vs. jointly in a multi-task model are evaluated. Video recordings of colorimetric sensors measuring temperature conditions were collected to build an experimental reference dataset. Sensor images were augmented with non-uniform color alterations. The best-performing DL architecture disentangles the luminance, chrominance, and noise via separate decoders and integrates a regression task in the latent space to predict the sensor readings, achieving a mean squared error (MSE) of 0.811 ± 0.074 °C and r² = 0.930 ± 0.007 under strong color perturbations, an improvement of 1.26 °C over the MSE of the best-performing method with independent denoising and regression tasks. Clinical Relevance: The proposed methodology aims to improve the accuracy of colorimetric sensor reading and the sensors' large-scale accessibility as point-of-care diagnostic and continuous health monitoring devices under altered illumination conditions.
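
A rough PyTorch sketch of the multi-task idea (layer sizes, depths and losses are guesses, not the paper's architecture): a shared encoder with separate decoders for illumination components and a regression head attached to the latent space.

```python
import torch
import torch.nn as nn

class MultiTaskColorNet(nn.Module):
    """Shared encoder; separate decoders for luminance/chrominance;
    a regressor on the latent code predicts the sensor reading."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        def decoder():
            return nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
            )
        self.luminance, self.chrominance = decoder(), decoder()
        self.regressor = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.luminance(z), self.chrominance(z), self.regressor(z)

lum, chroma, reading = MultiTaskColorNet()(torch.randn(2, 3, 64, 64))
```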


Subject(s)
Deep Learning , Colorimetry , Lighting , Image Processing, Computer-Assisted/methods , Physical Examination
11.
Med Image Anal ; 83: 102639, 2023 01.
Article in English | MEDLINE | ID: mdl-36257132

ABSTRACT

Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placenta appearance, (ii) the restricted image quality of US, which results in highly variable reference annotations, and (iii) the limited field-of-view of US, prohibiting whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, in particular under limited training data conditions. With this approach we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance as compared to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion and image segmentation. This results in high-quality segmentation of larger structures, such as the placenta, that extend beyond the field-of-view of single probes, with reduced image artifacts.
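
For reference, the Dice overlap quoted above is straightforward to compute; a minimal NumPy sketch with hypothetical masks:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2 * np.logical_and(a, b).sum() / denom if denom else 1.0

# E.g., an automatic placenta mask vs. one rater's annotation.
auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
rater = np.zeros((64, 64), bool); rater[12:42, 12:42] = True
print(f"Dice = {dice(auto, rater):.2f}")
```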


Subject(s)
Placenta , Humans , Female , Pregnancy , Placenta/diagnostic imaging
12.
Med Image Anal ; 89: 102793, 2023 10.
Article in English | MEDLINE | ID: mdl-37482034

ABSTRACT

The diagnostic value of ultrasound images may be limited by the presence of artefacts, notably acoustic shadows, lack of contrast and localised signal dropout. Some of these artefacts are dependent on probe orientation and scan technique, with each image giving a distinct, partial view of the imaged anatomy. In this work, we propose a novel method to fuse the partially imaged fetal head anatomy, acquired from numerous views, into a single coherent 3D volume of the full anatomy. Firstly, a stream of freehand 3D US images is acquired using a single probe, capturing as many different views of the head as possible. The imaged anatomy at each time-point is then independently aligned to a canonical pose using a recurrent spatial transformer network, making our approach robust to fast fetal and probe motion. Secondly, images are fused by averaging only the most consistent and salient features from all images, producing a more detailed compounding while minimising artefacts. We evaluated our method quantitatively and qualitatively using image quality metrics and expert ratings, yielding state-of-the-art performance in terms of image quality and robustness to misalignments. Being online, fast and fully automated, our method shows promise for clinical use and deployment as a real-time tool in the fetal screening clinic, where it may enable unparalleled insight into the shape and structure of the face, skull and brain.
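
A crude sketch of saliency-weighted fusion of already-aligned volumes, assuming NumPy/SciPy; local gradient magnitude stands in here for the paper's learned consistency/saliency criterion:

```python
import numpy as np
from scipy import ndimage

def fuse(volumes, eps=1e-6):
    """Fuse co-aligned volumes, favouring locally salient voxels."""
    stack = np.stack(volumes)
    weights = np.stack([ndimage.gaussian_gradient_magnitude(v, sigma=1.0)
                        for v in volumes]) + eps
    return (weights * stack).sum(0) / weights.sum(0)

views = [np.random.rand(32, 32, 32) for _ in range(4)]  # aligned US sweeps
fused = fuse(views)
```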


Subject(s)
Fetus , Imaging, Three-Dimensional , Humans , Ultrasonography , Imaging, Three-Dimensional/methods , Fetus/diagnostic imaging , Brain/diagnostic imaging , Brain/anatomy & histology , Head/diagnostic imaging , Image Processing, Computer-Assisted/methods
13.
Cancer Cell ; 41(9): 1650-1661.e4, 2023 09 11.
Article in English | MEDLINE | ID: mdl-37652006

ABSTRACT

Deep learning (DL) can accelerate the prediction of prognostic biomarkers from routine pathology slides in colorectal cancer (CRC). However, current approaches rely on convolutional neural networks (CNNs) and have mostly been validated on small patient cohorts. Here, we develop a new transformer-based pipeline for end-to-end biomarker prediction from pathology slides by combining a pre-trained transformer encoder with a transformer network for patch aggregation. Our transformer-based approach substantially improves the performance, generalizability, data efficiency, and interpretability as compared with current state-of-the-art algorithms. After training and evaluating on a large multicenter cohort of over 13,000 patients from 16 colorectal cancer cohorts, we achieve a sensitivity of 0.99 with a negative predictive value of over 0.99 for prediction of microsatellite instability (MSI) on surgical resection specimens. We demonstrate that resection specimen-only training reaches clinical-grade performance on endoscopic biopsy tissue, solving a long-standing diagnostic problem.
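
A minimal PyTorch sketch of transformer-based patch aggregation (dimensions, depth and the class-token design are illustrative assumptions, not the published model):

```python
import torch
import torch.nn as nn

class PatchAggregator(nn.Module):
    """Patch embeddings from a pre-trained encoder are aggregated with
    self-attention; a learnable class token yields the slide-level logit."""
    def __init__(self, dim=384, heads=6, classes=2):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, classes)

    def forward(self, patches):           # (batch, n_patches, dim)
        cls = self.cls.expand(patches.size(0), -1, -1)
        tokens = self.blocks(torch.cat([cls, patches], dim=1))
        return self.head(tokens[:, 0])    # predict from the class token

logits = PatchAggregator()(torch.randn(1, 500, 384))  # e.g. MSI vs. MSS
```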


Subject(s)
Algorithms , Colorectal Neoplasms , Humans , Biomarkers , Biopsy , Microsatellite Instability , Colorectal Neoplasms/genetics
14.
Neuroimage ; 59(3): 2438-51, 2012 Feb 01.
Article in English | MEDLINE | ID: mdl-21939772

ABSTRACT

A long-standing issue in non-rigid image registration is the choice of the level of regularisation. Regularisation is necessary to preserve the smoothness of the registration and to penalise unnecessary complexity. The vast majority of existing registration methods use a fixed level of regularisation, typically hand-tuned by a user to provide "nice" results. However, the optimal level of regularisation depends on the data being processed: lower signal-to-noise ratios require higher regularisation to avoid registering image noise as well as features, and different pairs of images require registrations of varying complexity depending on their anatomical similarity. In this paper we present a probabilistic registration framework that infers the level of regularisation from the data. An additional benefit of this framework is that estimates of the registration uncertainty are obtained. The framework has been implemented using a free-form deformation transformation model, although it is generically applicable to a range of transformation models. We demonstrate our registration framework on inter-subject brain registration of healthy control subjects from the NIREP database. Our results show that the framework appropriately adapts the level of regularisation in the presence of noise, and that inferring regularisation on an individual basis reduces model over-fitting, as measured by image folding, while providing a similar level of overlap.
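
The image-folding measure mentioned in the results is easy to compute from a displacement field; a 2-D NumPy sketch (the paper's registrations are not reproduced here):

```python
import numpy as np

def folding_fraction(disp):
    """Fraction of pixels where the map (identity + displacement)
    folds, i.e. its Jacobian determinant is non-positive."""
    u0, u1 = disp[..., 0], disp[..., 1]
    du0_d0, du0_d1 = np.gradient(u0)
    du1_d0, du1_d1 = np.gradient(u1)
    det = (1 + du0_d0) * (1 + du1_d1) - du0_d1 * du1_d0
    return float(np.mean(det <= 0))

smooth = np.zeros((64, 64, 2))                 # identity: no folding
rough = 3.0 * np.random.randn(64, 64, 2)       # noisy field: heavy folding
print(folding_fraction(smooth), folding_fraction(rough))
```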


Subject(s)
Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/statistics & numerical data , Algorithms , Bayes Theorem , Brain/anatomy & histology , Humans , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Models, Statistical , Normal Distribution , Pattern Recognition, Automated , Signal-To-Noise Ratio
15.
IEEE Trans Radiat Plasma Med Sci ; 6(5): 552-563, 2022 May.
Article in English | MEDLINE | ID: mdl-35664091

ABSTRACT

We propose a new version of the forward-backward splitting expectation-maximisation network (FBSEM-Net) along with a new memory-efficient training method enabling the training of fully unrolled implementations of 3D FBSEM-Net. FBSEM-Net unfolds the maximum a posteriori expectation-maximisation algorithm and replaces the regularisation step by a residual convolutional neural network. Both the gradient of the prior and the regularisation strength are learned from training data. In this new implementation, three modifications of the original framework are included. First, iteration-dependent networks are used to have a customised regularisation at each iteration. Second, iteration-dependent targets and losses are introduced so that the regularised reconstruction matches the reconstruction of noise-free data at every iteration. Third, sequential training is performed, making training of large unrolled networks far more memory efficient and feasible. Since sequential training permits unrolling a high number of iterations, there is no need for artificial use of the regularisation step as a leapfrogging acceleration. The results obtained on 2D and 3D simulated data show that FBSEM-Net using iteration-dependent targets and losses improves the consistency in the optimisation of the network parameters over different training runs. We also found that using iteration-dependent targets increases the generalisation capabilities of the network. Furthermore, unrolled networks using iteration-dependent regularisation allowed a slight reduction in reconstruction error compared to using a fixed regularisation network at each iteration. Finally, we demonstrate that sequential training successfully addresses potentially serious memory issues during the training of deep unrolled networks. In particular, it enables the training of 3D fully unrolled FBSEM-Net, not previously feasible, by reducing the memory usage by up to 98% compared to a conventional end-to-end training. We also note that the truncation of the backpropagation (due to sequential training) does not notably impact the network's performance compared to conventional training with a full backpropagation through the entire network.
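
A stripped-down PyTorch sketch of the sequential-training idea: each unrolled block has its own regulariser network and is trained on its own, with the previous block's output detached so only one block's graph is ever held in memory (shapes, losses and block design are illustrative, not FBSEM-Net itself):

```python
import torch
import torch.nn as nn

blocks = [nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv3d(8, 1, 3, padding=1)) for _ in range(10)]
x = torch.randn(1, 1, 16, 16, 16)       # stand-in for an EM update
targets = [torch.randn_like(x) for _ in range(10)]  # stand-ins for the
                                        # noise-free reconstruction targets

for block, target in zip(blocks, targets):
    opt = torch.optim.Adam(block.parameters())
    for _ in range(5):                  # a few steps per block, for brevity
        opt.zero_grad()
        out = x + block(x)              # residual regularisation step
        loss = nn.functional.mse_loss(out, target)
        loss.backward()
        opt.step()
    x = (x + block(x)).detach()         # truncate backpropagation here
```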

16.
Med Image Anal ; 77: 102360, 2022 04.
Article in English | MEDLINE | ID: mdl-35124370

ABSTRACT

Late gadolinium enhancement magnetic resonance imaging (LGE MRI) is commonly used to visualize and quantify left atrial (LA) scars. The position and extent of LA scars provide important information on the pathophysiology and progression of atrial fibrillation (AF). Hence, LA LGE MRI computing and analysis are essential for computer-assisted diagnosis and treatment stratification of AF patients. Since manual delineations can be time-consuming and subject to intra- and inter-expert variability, automating this computing is highly desired, which nevertheless is still challenging and under-researched. This paper aims to provide a systematic review on computing methods for LA cavity, wall, scar, and ablation gap segmentation and quantification from LGE MRI, and the related literature for AF studies. Specifically, we first summarize AF-related imaging techniques, particularly LGE MRI. Then, we review the methodologies of the four computing tasks in detail and summarize the validation strategies applied in each task as well as state-of-the-art results on public datasets. Finally, the possible future developments are outlined, with a brief survey on the potential clinical applications of the aforementioned methods. The review indicates that the research into this topic is still in the early stages. Although several methods have been proposed, especially for the LA cavity segmentation, there is still a large scope for further algorithmic developments due to performance issues related to the high variability of enhancement appearance and differences in image acquisition.


Subject(s)
Atrial Fibrillation , Gadolinium , Cicatrix , Contrast Media , Heart Atria/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods
17.
Med Image Anal ; 76: 102303, 2022 02.
Article in English | MEDLINE | ID: mdl-34875581

ABSTRACT

Left atrial (LA) and atrial scar segmentation from late gadolinium enhanced magnetic resonance imaging (LGE MRI) is an important task in clinical practice. Automatic segmentation is however still challenging due to poor image quality, variable LA shapes, the thin atrial wall, and the surrounding enhanced regions. Previous methods normally solved the two tasks independently, ignoring the intrinsic spatial relationship between LA and scars. In this work, we develop a new framework, namely AtrialJSQnet, where LA segmentation, scar projection onto the LA surface, and scar quantification are performed simultaneously in an end-to-end style. We propose a mechanism of shape attention (SA) via an implicit surface projection to utilize the inherent correlation between the LA cavity and scars. Specifically, the SA scheme is embedded into a multi-task architecture to perform joint LA segmentation and scar quantification. Besides, a spatial encoding (SE) loss is introduced to incorporate continuous spatial information of the target, in order to reduce noisy patches in the predicted segmentation. We evaluated the proposed framework on 60 post-ablation LGE MRIs from the MICCAI 2018 Atrial Segmentation Challenge. Moreover, we explored the domain generalization ability of AtrialJSQnet on 40 pre-ablation LGE MRIs from this challenge and 30 post-ablation multi-center LGE MRIs from another challenge (the ISBI 2012 Left Atrium Fibrosis and Scar Segmentation Challenge). Extensive experiments on public datasets demonstrated the effectiveness of the proposed AtrialJSQnet, which achieved competitive performance compared with the state-of-the-art. The relatedness between LA segmentation and scar quantification was explicitly explored and showed significant performance improvements for both tasks. The code has been released via https://zmiclab.github.io/projects.html.
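
A rough sketch of a distance-map-based spatial-encoding loss, assuming NumPy/SciPy; the exact formulation in AtrialJSQnet may differ:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def spatial_encoding_loss(prob, target):
    """Penalise predicted probability in proportion to its (signed)
    distance from the target region, discouraging isolated noisy patches:
    negative inside the target, positive outside."""
    dist = distance_transform_edt(~target) - distance_transform_edt(target)
    return float(np.mean(prob * dist))

target = np.zeros((64, 64), bool); target[20:44, 20:44] = True
prob = np.random.rand(64, 64)                 # a noisy "prediction"
print(spatial_encoding_loss(prob, target))
```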


Subject(s)
Atrial Fibrillation , Cicatrix , Cicatrix/diagnostic imaging , Gadolinium , Heart Atria/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods
18.
J Imaging ; 8(11)2022 Nov 08.
Article in English | MEDLINE | ID: mdl-36354877

ABSTRACT

This study aimed to evaluate the accuracy and reliability of a virtual reality (VR) system line measurement tool using phantom data across three cardiac imaging modalities: three-dimensional echocardiography (3DE), computed tomography (CT) and magnetic resonance imaging (MRI). The same phantoms were also measured using industry-standard image visualisation software packages. Two participants performed blinded measurements on volume-rendered images of standard phantoms both in VR and on an industry-standard image visualisation platform. The intra- and interrater reliability of the VR measurement method was evaluated by intraclass correlation coefficient (ICC) and coefficient of variation (CV). Measurement accuracy was analysed using Bland-Altman analysis and mean absolute percentage error (MAPE). VR measurements showed good intra- and interobserver reliability (ICC ≥ 0.99, p < 0.05; CV < 10%) across all imaging modalities. MAPE for VR measurements compared to ground truth was 1.6%, 1.6% and 7.7% in MRI, CT and 3DE datasets, respectively. Bland-Altman analysis demonstrated no systematic measurement bias in CT or MRI data in VR compared to ground truth. A small bias toward smaller measurements in 3DE data was seen in both VR (mean -0.52 mm [-0.16 to -0.88]) and the standard platform (mean -0.22 mm [-0.03 to -0.40]) when compared to ground truth. Limits of agreement for measurements across all modalities were similar in VR and the standard software. This study has shown good measurement accuracy and reliability of VR in CT and MRI data, with a higher MAPE for 3DE data; this may relate to the overall smaller measurement dimensions within the 3DE phantom. Further evaluation of all modalities is required for the assessment of measurements <10 mm.
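
For reference, the agreement statistics used here are simple to compute; a NumPy sketch with hypothetical measurements (not the study's data):

```python
import numpy as np

def bland_altman(measured, truth):
    """Bias and 95% limits of agreement between two measurement sets."""
    diff = np.asarray(measured) - np.asarray(truth)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def mape(measured, truth):
    measured, truth = np.asarray(measured), np.asarray(truth)
    return 100 * np.mean(np.abs(measured - truth) / truth)

# Hypothetical phantom line measurements (mm): VR vs. ground truth.
vr = np.array([9.8, 20.3, 29.6, 40.4, 50.1])
gt = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
print(bland_altman(vr, gt), f"MAPE = {mape(vr, gt):.1f}%")
```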

19.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 8766-8778, 2022 12.
Article in English | MEDLINE | ID: mdl-32886606

ABSTRACT

We introduce a method for training neural networks to perform image or volume segmentation in which prior knowledge about the topology of the segmented object can be explicitly provided and then incorporated into the training process. By using the differentiable properties of persistent homology, a concept from topological data analysis, we can specify the desired topology of segmented objects in terms of their Betti numbers and then drive the proposed segmentations to contain the specified topological features. Importantly, this process does not require any ground-truth labels, just prior knowledge of the topology of the structure being segmented. We demonstrate our approach in four experiments: one on MNIST image denoising and digit recognition, one on left ventricular myocardium segmentation from magnetic resonance imaging data from the UK Biobank, one on the ACDC public challenge dataset, and one on placenta segmentation from 3D ultrasound. We find that embedding explicit prior knowledge in neural network segmentation tasks is most beneficial when the segmentation task is especially challenging, and that the approach can be used in either a semi-supervised or post-processing context to extract a useful training gradient from images without pixelwise labels.
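
A forward (non-differentiable) computation of Betti numbers from a segmentation mask, assuming the gudhi package for cubical persistent homology; the paper's differentiable loss requires considerably more machinery than this:

```python
import numpy as np
import gudhi

# A ring-shaped mask: one component (b0 = 1), one hole (b1 = 1).
mask = np.zeros((32, 32))
mask[4:28, 4:28] = 1.0
mask[12:20, 12:20] = 0.0

# Sublevel-set filtration of the negated mask enters foreground first.
cc = gudhi.CubicalComplex(top_dimensional_cells=-mask)
cc.persistence()
b0, b1 = (len(cc.persistence_intervals_in_dimension(d)) for d in (0, 1))
print(b0, b1)   # counts of persistent features in dimensions 0 and 1
```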


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Algorithms , Neural Networks, Computer , Magnetic Resonance Imaging/methods
20.
Phys Med Biol ; 67(9)2022 04 27.
Article in English | MEDLINE | ID: mdl-35395657

ABSTRACT

Objective. In clinical positron emission tomography (PET) imaging, quantification of radiotracer uptake in tumours is often performed using semi-quantitative measurements such as the standardised uptake value (SUV). For small objects, the accuracy of SUV estimates is limited by the noise properties of PET images and the partial volume effect. There is a need for methods that provide more accurate and reproducible quantification of radiotracer uptake. Approach. In this work, we present a deep learning approach with the aim of improving quantification of lung tumour radiotracer uptake and tumour shape definition. A set of simulated tumours, assigned 'ground truth' radiotracer distributions, is used to generate realistic PET raw data which are then reconstructed into PET images. The ground truth images are generated by placing simulated tumours characterised by different sizes and activity distributions in the left lung of an anthropomorphic phantom. These images are then used as input to an analytical simulator to simulate realistic raw PET data. The PET images reconstructed from the simulated raw data and the corresponding ground truth images are used to train a 3D convolutional neural network. Results. When tested on an unseen set of reconstructed PET phantom images, the network yields improved estimates of the corresponding ground truth. The same network is then applied to reconstructed PET data generated with different point spread functions. Overall, the network is able to recover better-defined tumour shapes and improved estimates of tumour maximum and median activities. Significance. Our results suggest that the proposed approach, trained on data simulated with one scanner geometry, has the potential to restore PET data acquired with different scanners.
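
A minimal PyTorch sketch of the supervised setup described (network size and data are placeholders): a 3-D CNN trained to map reconstructed patches to their simulated ground truth.

```python
import torch
import torch.nn as nn

net = nn.Sequential(                      # toy 3-D CNN, not the paper's
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
recon = torch.randn(4, 1, 32, 32, 32)     # stand-in noisy reconstructions
truth = torch.randn(4, 1, 32, 32, 32)     # matched ground-truth activity
loss = nn.functional.mse_loss(net(recon), truth)
loss.backward()
opt.step()                                # one illustrative training step
```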


Subject(s)
Deep Learning , Lung Neoplasms , Humans , Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Phantoms, Imaging , Positron-Emission Tomography