Results 1 - 20 of 99
1.
IEEE Trans Med Imaging ; 43(2): 846-859, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37831582

ABSTRACT

Motion represents one of the major challenges in magnetic resonance imaging (MRI). Since the MR signal is acquired in frequency space, any motion of the imaged object leads to complex artefacts in the reconstructed image in addition to other MR imaging artefacts. Deep learning has been frequently proposed for motion correction at several stages of the reconstruction process. The wide range of MR acquisition sequences, anatomies and pathologies of interest, and motion patterns (rigid vs. deformable and random vs. regular) makes a comprehensive solution unlikely. To facilitate the transfer of ideas between different applications, this review provides a detailed overview of proposed methods for learning-based motion correction in MRI, together with their common challenges and potential. This review identifies differences and synergies in underlying data usage, architectures, and training and evaluation strategies. We critically discuss general trends and outline future directions, with the aim of enhancing interaction between different application areas and research fields.
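The frequency-space origin of these artefacts can be illustrated with a short, self-contained sketch (our own illustration, not code from the review): by the Fourier shift theorem, a rigid in-plane translation of the object corresponds to a linear phase ramp applied to its k-space data.

```python
import numpy as np

def shift_via_kspace(img, dy, dx):
    """Translate a 2D image by (dy, dx) pixels by applying a linear
    phase ramp in k-space (Fourier shift theorem)."""
    ny, nx = img.shape
    ky = np.fft.fftfreq(ny)[:, None]   # cycles per sample along y
    kx = np.fft.fftfreq(nx)[None, :]   # cycles per sample along x
    kspace = np.fft.fft2(img)
    phase_ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.fft.ifft2(kspace * phase_ramp).real

img = np.zeros((8, 8))
img[2, 3] = 1.0                        # a point object
moved = shift_via_kspace(img, 1, 2)    # translate down 1, right 2
```

If only some k-space lines carry this phase ramp (i.e. motion occurs part-way through the acquisition), the phase inconsistency appears as ghosting rather than a clean shift, which is the corruption that learning-based correction methods attempt to undo.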


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Retrospective Studies , Motion (Physics) , Magnetic Resonance Imaging/methods , Artifacts
2.
Nat Rev Cardiol ; 21(1): 51-64, 2024 01.
Article in English | MEDLINE | ID: mdl-37464183

ABSTRACT

Artificial intelligence (AI) is likely to revolutionize the way medical images are analysed and has the potential to improve the identification and analysis of vulnerable or high-risk atherosclerotic plaques in coronary arteries, leading to advances in the treatment of coronary artery disease. However, coronary plaque analysis is challenging owing to cardiac and respiratory motion, as well as the small size of cardiovascular structures. Moreover, the analysis of coronary imaging data is time-consuming, can be performed only by clinicians with dedicated cardiovascular imaging training, and is subject to considerable interreader and intrareader variability. AI has the potential to improve the assessment of images of vulnerable plaque in coronary arteries, but requires robust development, testing and validation. Combining human expertise with AI might facilitate the reliable and valid interpretation of images obtained using CT, MRI, PET, intravascular ultrasonography and optical coherence tomography. In this Roadmap, we review existing evidence on the application of AI to the imaging of vulnerable plaque in coronary arteries and provide consensus recommendations developed by an interdisciplinary group of experts on AI and non-invasive and invasive coronary imaging. We also outline future requirements of AI technology to address bias, uncertainty, explainability and generalizability, which are all essential for the acceptance of AI and its clinical utility in handling the anticipated growing volume of coronary imaging procedures.


Subject(s)
Coronary Artery Disease , Atherosclerotic Plaque , Humans , Atherosclerotic Plaque/diagnostic imaging , Artificial Intelligence , Coronary Vessels/diagnostic imaging , Coronary Artery Disease/diagnostic imaging , Optical Coherence Tomography/methods , Coronary Angiography
3.
Article in English | MEDLINE | ID: mdl-38083521

ABSTRACT

Colorimetric sensors are an accessible and sensitive nanotechnology for rapid measurement of a substance's properties (e.g., analyte concentration) via color changes. Although colorimetric sensors are widely used in healthcare and laboratories, interpretation of their output is performed either by visual inspection or using cameras in highly controlled illumination set-ups, limiting their usage in end-user applications with lower resolutions and altered light conditions. For that purpose, we implement a set of image processing and deep-learning (DL) methods that correct for non-uniform illumination alterations and accurately read the target variable from the color response of the sensor. Methods that perform both tasks independently vs. jointly in a multi-task model are evaluated. Video recordings of colorimetric sensors measuring temperature conditions were collected to build an experimental reference dataset. Sensor images were augmented with non-uniform color alterations. The best-performing DL architecture disentangles the luminance, chrominance, and noise via separate decoders and integrates a regression task in the latent space to predict the sensor readings, achieving a mean squared error (MSE) of 0.811 ± 0.074 °C and r² = 0.930 ± 0.007 under strong color perturbations, an improvement of 1.26 °C over the MSE of the best-performing method with independent denoising and regression tasks. Clinical Relevance: The proposed methodology aims to improve the accuracy of colorimetric sensor reading, and their large-scale accessibility as point-of-care diagnostic and continuous health monitoring devices, in altered illumination conditions.
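A minimal sketch of the two evaluation metrics quoted in this abstract (MSE and r² for a sensor-temperature regression), using made-up readings rather than the paper's data:

```python
import numpy as np

def mse(pred, true):
    """Mean squared error of the regressed sensor reading."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.mean((pred - true) ** 2))

def r2(pred, true):
    """Coefficient of determination (1 - residual SS / total SS)."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    ss_res = np.sum((true - pred) ** 2)
    ss_tot = np.sum((true - true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

t_true = [20.0, 25.0, 30.0, 35.0, 40.0]   # hypothetical temperatures (degrees C)
t_pred = [20.5, 24.6, 30.3, 35.2, 39.1]   # hypothetical model readings
```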


Subject(s)
Deep Learning , Colorimetry , Lighting , Image Processing, Computer-Assisted/methods , Physical Examination
4.
Cancer Cell ; 41(9): 1650-1661.e4, 2023 09 11.
Article in English | MEDLINE | ID: mdl-37652006

ABSTRACT

Deep learning (DL) can accelerate the prediction of prognostic biomarkers from routine pathology slides in colorectal cancer (CRC). However, current approaches rely on convolutional neural networks (CNNs) and have mostly been validated on small patient cohorts. Here, we develop a new transformer-based pipeline for end-to-end biomarker prediction from pathology slides by combining a pre-trained transformer encoder with a transformer network for patch aggregation. Our transformer-based approach substantially improves the performance, generalizability, data efficiency, and interpretability as compared with current state-of-the-art algorithms. After training and evaluating on a large multicenter cohort of over 13,000 patients from 16 colorectal cancer cohorts, we achieve a sensitivity of 0.99 with a negative predictive value of over 0.99 for prediction of microsatellite instability (MSI) on surgical resection specimens. We demonstrate that resection specimen-only training reaches clinical-grade performance on endoscopic biopsy tissue, solving a long-standing diagnostic problem.


Subject(s)
Algorithms , Colorectal Neoplasms , Humans , Biomarkers , Biopsy , Microsatellite Instability , Colorectal Neoplasms/genetics
5.
Med Image Anal ; 89: 102793, 2023 10.
Article in English | MEDLINE | ID: mdl-37482034

ABSTRACT

The diagnostic value of ultrasound images may be limited by the presence of artefacts, notably acoustic shadows, lack of contrast and localised signal dropout. Some of these artefacts are dependent on probe orientation and scan technique, with each image giving a distinct, partial view of the imaged anatomy. In this work, we propose a novel method to fuse the partially imaged fetal head anatomy, acquired from numerous views, into a single coherent 3D volume of the full anatomy. Firstly, a stream of freehand 3D US images is acquired using a single probe, capturing as many different views of the head as possible. The imaged anatomy at each time-point is then independently aligned to a canonical pose using a recurrent spatial transformer network, making our approach robust to fast fetal and probe motion. Secondly, images are fused by averaging only the most consistent and salient features from all images, producing a more detailed compounding while minimising artefacts. We evaluated our method quantitatively and qualitatively, using image quality metrics and expert ratings, yielding state-of-the-art performance in terms of image quality and robustness to misalignments. Being online, fast and fully automated, our method shows promise for clinical use and deployment as a real-time tool in the fetal screening clinic, where it may enable unparalleled insight into the shape and structure of the face, skull and brain.


Subject(s)
Fetus , Imaging, Three-Dimensional , Humans , Ultrasonography , Imaging, Three-Dimensional/methods , Fetus/diagnostic imaging , Brain/diagnostic imaging , Brain/anatomy & histology , Head/diagnostic imaging , Image Processing, Computer-Assisted/methods
6.
Int J Cardiovasc Imaging ; 39(7): 1405-1419, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37103667

ABSTRACT

Extended reality (XR), which encompasses virtual, augmented and mixed reality, is an emerging medical imaging display platform which enables intuitive and immersive interaction in a three-dimensional space. This technology holds the potential to enhance understanding of complex spatial relationships when planning and guiding cardiac procedures in congenital and structural heart disease, moving beyond conventional 2D and 3D image displays. A systematic review of the literature demonstrates a rapid increase in publications describing adoption of this technology. At least 33 XR systems have been described, many demonstrating proof of concept, including some prospective studies, but with no specific mention of regulatory approval. Validation remains limited, and true clinical benefit difficult to measure. This review describes and critically appraises the range of XR technologies and their applications for procedural planning and guidance in structural heart disease, while discussing the challenges that need to be overcome in future studies to achieve safe and effective clinical adoption.


Subject(s)
Augmented Reality , Heart Diseases , Humans , Heart Diseases/diagnostic imaging , Heart Diseases/therapy , Imaging, Three-Dimensional/methods , Predictive Value of Tests , Prospective Studies
7.
Med Image Anal ; 83: 102639, 2023 01.
Article in English | MEDLINE | ID: mdl-36257132

ABSTRACT

Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placenta appearance, (ii) the restricted image quality of US, resulting in highly variable reference annotations, and (iii) the limited field-of-view of US, prohibiting whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, in particular in limited training set conditions. With this approach we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance as compared to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion and image segmentation. This results in high-quality segmentation of larger structures, such as the placenta, that extend beyond the field-of-view of single probes, with reduced image artifacts.


Subject(s)
Placenta , Humans , Female , Pregnancy , Placenta/diagnostic imaging
8.
J Imaging ; 8(11)2022 Nov 08.
Article in English | MEDLINE | ID: mdl-36354877

ABSTRACT

This study aimed to evaluate the accuracy and reliability of a virtual reality (VR) system line measurement tool using phantom data across three cardiac imaging modalities: three-dimensional echocardiography (3DE), computed tomography (CT) and magnetic resonance imaging (MRI). The same phantoms were also measured using industry-standard image visualisation software packages. Two participants performed blinded measurements on volume-rendered images of standard phantoms both in VR and on an industry-standard image visualisation platform. The intra- and interrater reliability of the VR measurement method was evaluated by intraclass correlation coefficient (ICC) and coefficient of variance (CV). Measurement accuracy was analysed using Bland–Altman and mean absolute percentage error (MAPE). VR measurements showed good intra- and interobserver reliability (ICC ≥ 0.99, p < 0.05; CV < 10%) across all imaging modalities. MAPE for VR measurements compared to ground truth was 1.6%, 1.6% and 7.7% in MRI, CT and 3DE datasets, respectively. Bland–Altman analysis demonstrated no systematic measurement bias in CT or MRI data in VR compared to ground truth. A small bias toward smaller measurements in 3DE data was seen in both VR (mean −0.52 mm [−0.16 to −0.88]) and the standard platform (mean −0.22 mm [−0.03 to −0.40]) when compared to ground truth. Limits of agreement for measurements across all modalities were similar in VR and standard software. This study has shown good measurement accuracy and reliability of VR in CT and MRI data with a higher MAPE for 3DE data. This may relate to the overall smaller measurement dimensions within the 3DE phantom. Further evaluation is required of all modalities for assessment of measurements <10 mm.
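The two agreement statistics used in this study can be sketched as follows (illustrative values, not the study's data): MAPE against ground truth, and the Bland–Altman bias together with the half-width of the 95% limits of agreement.

```python
import numpy as np

def mape(measured, truth):
    """Mean absolute percentage error vs. ground truth."""
    measured, truth = np.asarray(measured, float), np.asarray(truth, float)
    return float(100.0 * np.mean(np.abs((measured - truth) / truth)))

def bland_altman(a, b):
    """Bias (mean difference) and half-width of the 95% limits of agreement."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return float(d.mean()), float(1.96 * d.std(ddof=1))

vr = [10.1, 20.3, 29.8, 40.4]    # illustrative VR measurements (mm)
ref = [10.0, 20.0, 30.0, 40.0]   # phantom ground-truth lengths (mm)
bias, loa_half_width = bland_altman(vr, ref)
```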

9.
Oncogene ; 41(46): 5032-5045, 2022 11.
Article in English | MEDLINE | ID: mdl-36241867

ABSTRACT

Metastatic tumour progression is facilitated by tumour-associated macrophages (TAMs) that enforce pro-tumour mechanisms and suppress immunity. In pulmonary metastases, it is unclear whether TAMs comprise tissue-resident or infiltrating, recruited macrophages; and the different expression patterns of these TAMs are not well established. Using the mouse melanoma B16F10 model of experimental pulmonary metastasis, we show that infiltrating macrophages (IM) change their gene expression from an early pro-inflammatory to a later tumour-promoting profile as the lesions grow. In contrast, resident alveolar macrophages (AM) maintain expression of crucial pro-inflammatory/anti-tumour genes with time. During metastatic growth, the pool of macrophages, which initially contains mainly alveolar macrophages, increasingly consists of infiltrating macrophages, potentially facilitating metastasis progression. Blocking chemokine-receptor-mediated macrophage infiltration in the lung revealed a prominent role for CCR2 in Ly6C+ pro-inflammatory monocyte/macrophage recruitment during metastasis progression, while inhibition of CCR2 signalling led to increased metastatic colony burden. CCR1 blockade, in contrast, suppressed late-phase pro-tumour MR+Ly6C- monocyte/macrophage infiltration, accompanied by expansion of the alveolar macrophage compartment and accumulation of NK cells, leading to reduced metastatic burden. These data indicate that IM have greater plasticity and higher phenotypic responsiveness to tumour challenge than AM. A considerable difference is also confirmed between CCR1 and CCR2 with regard to the recruited IM subsets, with CCR1 presenting a potential therapeutic target in pulmonary metastasis from melanoma.


Subject(s)
Macrophages, Alveolar , Melanoma , Mice , Animals , Macrophages, Alveolar/metabolism , Macrophages/metabolism , Melanoma/pathology , Receptors, Chemokine , Disease Models, Animal , Lung/metabolism , Receptors, CCR2/genetics , Receptors, CCR2/metabolism , Receptors, CCR1/genetics , Receptors, CCR1/metabolism
10.
IEEE Trans Radiat Plasma Med Sci ; 6(5): 552-563, 2022 May.
Article in English | MEDLINE | ID: mdl-35664091

ABSTRACT

We propose a new version of the forward-backward splitting expectation-maximisation network (FBSEM-Net) along with a new memory-efficient training method enabling the training of fully unrolled implementations of 3D FBSEM-Net. FBSEM-Net unfolds the maximum a posteriori expectation-maximisation algorithm and replaces the regularisation step by a residual convolutional neural network. Both the gradient of the prior and the regularisation strength are learned from training data. In this new implementation, three modifications of the original framework are included. First, iteration-dependent networks are used to have a customised regularisation at each iteration. Second, iteration-dependent targets and losses are introduced so that the regularised reconstruction matches the reconstruction of noise-free data at every iteration. Third, sequential training is performed, making training of large unrolled networks far more memory efficient and feasible. Since sequential training permits unrolling a high number of iterations, there is no need for artificial use of the regularisation step as a leapfrogging acceleration. The results obtained on 2D and 3D simulated data show that FBSEM-Net using iteration-dependent targets and losses improves the consistency in the optimisation of the network parameters over different training runs. We also found that using iteration-dependent targets increases the generalisation capabilities of the network. Furthermore, unrolled networks using iteration-dependent regularisation allowed a slight reduction in reconstruction error compared to using a fixed regularisation network at each iteration. Finally, we demonstrate that sequential training successfully addresses potentially serious memory issues during the training of deep unrolled networks. In particular, it enables the training of 3D fully unrolled FBSEM-Net, not previously feasible, by reducing the memory usage by up to 98% compared to conventional end-to-end training. We also note that the truncation of the backpropagation (due to sequential training) does not notably impact the network's performance compared to conventional training with a full backpropagation through the entire network.
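The sequential-training idea can be sketched with a toy linear "unrolled network" (our own simplification with our own names; FBSEM-Net's actual modules are residual CNNs with a data-consistency step): each unrolled iteration gets its own regulariser, iterations are trained one at a time against iteration-dependent targets, and the state is passed forward without a gradient path, so only one module's computation is ever held in memory.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_iteration(W, x, target, lr=0.01, steps=50):
    """Fit one iteration's linear 'regulariser' W by gradient descent on
    that iteration's target (a stand-in for training the residual CNN)."""
    for _ in range(steps):
        pred = x + x @ W                                   # residual update
        W -= lr * (2.0 / len(x)) * x.T @ (pred - target)   # gradient of squared error (up to a constant)
    return W

x = rng.standard_normal((8, 4))                            # toy reconstruction state
targets = [rng.standard_normal((8, 4)) for _ in range(3)]  # one target per unrolled iteration
Ws = [np.zeros((4, 4)) for _ in range(3)]

for k in range(3):                 # sequential training: one module at a time
    Ws[k] = train_iteration(Ws[k], x, targets[k])
    x = x + x @ Ws[k]              # state passed forward; no gradient flows back
```

Because each module is optimised in isolation, memory cost scales with one iteration rather than the full unroll; the price is the truncated backpropagation that the abstract reports as having little impact on performance.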

11.
Phys Med Biol ; 67(9)2022 04 27.
Article in English | MEDLINE | ID: mdl-35395657

ABSTRACT

Objective. In clinical positron emission tomography (PET) imaging, quantification of radiotracer uptake in tumours is often performed using semi-quantitative measurements such as the standardised uptake value (SUV). For small objects, the accuracy of SUV estimates is limited by the noise properties of PET images and the partial volume effect. There is a need for methods that provide more accurate and reproducible quantification of radiotracer uptake. Approach. In this work, we present a deep learning approach with the aim of improving quantification of lung tumour radiotracer uptake and tumour shape definition. A set of simulated tumours, assigned with 'ground truth' radiotracer distributions, are used to generate realistic PET raw data which are then reconstructed into PET images. In this work, the ground truth images are generated by placing simulated tumours characterised by different sizes and activity distributions in the left lung of an anthropomorphic phantom. These images are then used as input to an analytical simulator to simulate realistic raw PET data. The PET images reconstructed from the simulated raw data and the corresponding ground truth images are used to train a 3D convolutional neural network. Results. When tested on an unseen set of reconstructed PET phantom images, the network yields improved estimates of the corresponding ground truth. The same network is then applied to reconstructed PET data generated with different point spread functions. Overall the network is able to recover better defined tumour shapes and improved estimates of tumour maximum and median activities. Significance. Our results suggest that the proposed approach, trained on data simulated with one scanner geometry, has the potential to restore PET data acquired with different scanners.
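The standardised uptake value mentioned in the objective is a simple ratio: tissue activity concentration divided by injected dose per unit body weight. A hypothetical sketch (illustrative numbers only, not from the paper):

```python
def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight-normalised SUV: tissue activity concentration divided
    by injected dose per gram of body weight (assuming 1 ml of tissue ~ 1 g)."""
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return activity_kbq_per_ml / (dose_kbq / weight_g)

# e.g. a voxel at 5 kBq/ml after a 370 MBq injection in a 70 kg patient
reading = suv(5.0, 370.0, 70.0)
```

For small tumours, image noise and the partial volume effect bias such voxel-wise readings, which is the motivation for the learned restoration this abstract describes.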


Subject(s)
Deep Learning , Lung Neoplasms , Humans , Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Phantoms, Imaging , Positron-Emission Tomography
12.
Med Image Anal ; 77: 102360, 2022 04.
Article in English | MEDLINE | ID: mdl-35124370

ABSTRACT

Late gadolinium enhancement magnetic resonance imaging (LGE MRI) is commonly used to visualize and quantify left atrial (LA) scars. The position and extent of LA scars provide important information on the pathophysiology and progression of atrial fibrillation (AF). Hence, LA LGE MRI computing and analysis are essential for computer-assisted diagnosis and treatment stratification of AF patients. Since manual delineations can be time-consuming and subject to intra- and inter-expert variability, automating this computing is highly desired, which nevertheless is still challenging and under-researched. This paper aims to provide a systematic review on computing methods for LA cavity, wall, scar, and ablation gap segmentation and quantification from LGE MRI, and the related literature for AF studies. Specifically, we first summarize AF-related imaging techniques, particularly LGE MRI. Then, we review the methodologies of the four computing tasks in detail and summarize the validation strategies applied in each task as well as state-of-the-art results on public datasets. Finally, the possible future developments are outlined, with a brief survey on the potential clinical applications of the aforementioned methods. The review indicates that the research into this topic is still in the early stages. Although several methods have been proposed, especially for the LA cavity segmentation, there is still a large scope for further algorithmic developments due to performance issues related to the high variability of enhancement appearance and differences in image acquisition.


Subject(s)
Atrial Fibrillation , Gadolinium , Cicatrix , Contrast Media , Heart Atria/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods
13.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 8766-8778, 2022 12.
Article in English | MEDLINE | ID: mdl-32886606

ABSTRACT

We introduce a method for training neural networks to perform image or volume segmentation in which prior knowledge about the topology of the segmented object can be explicitly provided and then incorporated into the training process. By using the differentiable properties of persistent homology, a concept used in topological data analysis, we can specify the desired topology of segmented objects in terms of their Betti numbers and then drive the proposed segmentations to contain the specified topological features. Importantly this process does not require any ground-truth labels, just prior knowledge of the topology of the structure being segmented. We demonstrate our approach in four experiments: one on MNIST image denoising and digit recognition, one on left ventricular myocardium segmentation from magnetic resonance imaging data from the UK Biobank, one on the ACDC public challenge dataset and one on placenta segmentation from 3-D ultrasound. We find that embedding explicit prior knowledge in neural network segmentation tasks is most beneficial when the segmentation task is especially challenging and that it can be used in either a semi-supervised or post-processing context to extract a useful training gradient from images without pixelwise labels.
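As an illustration of the topological prior (ours, not the paper's differentiable persistent-homology machinery), Betti-0 of a binary segmentation is simply its number of connected components, which can be checked against a prior such as "exactly one component" for a ring-shaped myocardium mask:

```python
def betti0(mask):
    """Betti-0 (number of 4-connected foreground components) of a binary grid."""
    cells = {(i, j) for i, row in enumerate(mask)
             for j, v in enumerate(row) if v}
    seen, count = set(), 0
    for start in cells:
        if start in seen:
            continue
        count += 1                      # found a new component; flood-fill it
        stack = [start]
        while stack:
            i, j = stack.pop()
            if (i, j) in seen:
                continue
            seen.add((i, j))
            stack.extend(n for n in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))
                         if n in cells)
    return count

ring = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 0, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]   # e.g. a ring-shaped myocardium mask
components = betti0(ring)
```

The paper's contribution is making such topological quantities differentiable via persistent homology so they can drive a training loss; this sketch only shows the discrete quantity being constrained.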


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Algorithms , Neural Networks, Computer , Magnetic Resonance Imaging/methods
14.
Prenat Diagn ; 42(1): 49-59, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34648206

ABSTRACT

OBJECTIVE: Advances in artificial intelligence (AI) have demonstrated potential to improve medical diagnosis. We piloted the end-to-end automation of the mid-trimester screening ultrasound scan using AI-enabled tools. METHODS: A prospective method comparison study was conducted. Participants had both standard and AI-assisted US scans performed. The AI tools automated image acquisition, biometric measurement, and report production. A feedback survey captured the sonographers' perceptions of scanning. RESULTS: Twenty-three subjects were studied. The average time saving per scan was 7.62 min (34.7%) with the AI-assisted method (p < 0.0001). There was no difference in reporting time. There were no clinically significant differences in biometric measurements between the two methods. The AI tools saved a satisfactory view in 93% of cases for the four core views, and in 73% for the full 13 views, compared to 98% for both using the manual scan. Survey responses suggest that the AI tools helped sonographers to concentrate on image interpretation by removing disruptive tasks. CONCLUSION: Separating freehand scanning from image capture and measurement resulted in a faster scan and altered workflow. Removing repetitive tasks may allow more attention to be directed toward identifying fetal malformation. Further work is required to improve the image plane detection algorithm for use in real time.


Subject(s)
Artificial Intelligence/standards , Congenital Abnormalities/diagnosis , Ultrasonography, Prenatal/instrumentation , Adult , Artificial Intelligence/trends , Congenital Abnormalities/diagnostic imaging , Female , Gestational Age , Humans , Pregnancy , Prospective Studies , Reproducibility of Results , Ultrasonography, Prenatal/methods , Ultrasonography, Prenatal/standards
15.
Med Image Anal ; 76: 102303, 2022 02.
Article in English | MEDLINE | ID: mdl-34875581

ABSTRACT

Left atrial (LA) and atrial scar segmentation from late gadolinium enhanced magnetic resonance imaging (LGE MRI) is an important task in clinical practice. The automatic segmentation is however still challenging due to the poor image quality, the various LA shapes, the thin wall, and the surrounding enhanced regions. Previous methods normally solved the two tasks independently and ignored the intrinsic spatial relationship between LA and scars. In this work, we develop a new framework, namely AtrialJSQnet, where LA segmentation, scar projection onto the LA surface, and scar quantification are performed simultaneously in an end-to-end style. We propose a mechanism of shape attention (SA) via an implicit surface projection to utilize the inherent correlation between the LA cavity and scars. Specifically, the SA scheme is embedded into a multi-task architecture to perform joint LA segmentation and scar quantification. Besides, a spatial encoding (SE) loss is introduced to incorporate continuous spatial information of the target in order to reduce noisy patches in the predicted segmentation. We evaluated the proposed framework on 60 post-ablation LGE MRIs from the MICCAI2018 Atrial Segmentation Challenge. Moreover, we explored the domain generalization ability of the proposed AtrialJSQnet on 40 pre-ablation LGE MRIs from this challenge and 30 post-ablation multi-center LGE MRIs from another challenge (ISBI2012 Left Atrium Fibrosis and Scar Segmentation Challenge). Extensive experiments on public datasets demonstrated the effectiveness of the proposed AtrialJSQnet, which achieved competitive performance over the state-of-the-art. The relatedness between LA segmentation and scar quantification was explicitly explored and has shown significant performance improvements for both tasks. The code has been released via https://zmiclab.github.io/projects.html.


Subject(s)
Atrial Fibrillation , Cicatrix , Cicatrix/diagnostic imaging , Gadolinium , Heart Atria/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods
16.
J Imaging ; 7(8)2021 Aug 19.
Article in English | MEDLINE | ID: mdl-34460787

ABSTRACT

The intricate nature of congenital heart disease requires, for successful outcomes from surgical and interventional procedures, an understanding of the complex, patient-specific, three-dimensional dynamic anatomy of the heart from imaging data such as three-dimensional echocardiography. Conventional clinical systems use flat screens, so the display remains two-dimensional, which undermines full understanding of the three-dimensional dynamic data. Additionally, controlling three-dimensional visualisation with two-dimensional tools is often difficult, so it is typically used only by imaging specialists. In this paper, we describe a virtual reality system for immersive surgery planning using dynamic three-dimensional echocardiography, which enables fast prototyping for visualisation such as volume rendering, multiplanar reformatting and flow visualisation, and advanced interaction such as three-dimensional cropping, windowing, measurement, haptic feedback, automatic image orientation and multiuser interactions. The available features were evaluated by imaging and nonimaging clinicians, showing that the virtual reality system can help improve the understanding and communication of three-dimensional echocardiography imaging and potentially benefit congenital heart disease treatment.

17.
JTCVS Tech ; 7: 269-277, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34100000

ABSTRACT

OBJECTIVES: To investigate how virtual reality (VR) imaging impacts decision-making in atrioventricular valve surgery. METHODS: This was a single-center retrospective study involving 15 children and adolescents, median age 6 years (range, 0.33-16), requiring surgical repair of the atrioventricular valves between the years 2016 and 2019. The patients' preoperative 3-dimensional (3D) echocardiographic data were used to create 3D visualization in a VR application. Five pediatric cardiothoracic surgeons completed a questionnaire formulated to compare their surgical decisions regarding the cases after reviewing conventionally presented 2-dimensional and 3D echocardiographic images and again after visualization of 3D echocardiograms using the VR platform. Finally, intraoperative findings were shared with surgeons to confirm assessment of the pathology. RESULTS: In 67% of cases presented with VR, surgeons reported having "more" or "much more" confidence in their understanding of each patient's pathology and their surgical approach. In all but one case, surgeons were at least as confident after reviewing the VR compared with standard imaging. The case where surgeons reported to be least confident on VR had the worst technical quality of data used. After viewing patient cases on VR, surgeons reported that they would have made minor modifications to surgical approach in 53% and major modifications in 7% of cases. CONCLUSIONS: The main impact of viewing imaging on VR is the improved clarity of the anatomical structures. Surgeons reported that this would have impacted the surgical approach in the majority of cases. Poor-quality 3D echocardiographic data were associated with a negative impact of VR visualization; thus, quality assessment of imaging is necessary before projecting in a VR format.

18.
J Nucl Med ; 2021 May 28.
Article in English | MEDLINE | ID: mdl-34049978

ABSTRACT

Simultaneous PET-MR imaging has shown potential for the comprehensive assessment of myocardial health from a single examination. MR-derived respiratory motion information has been shown to improve PET image quality when incorporated into the PET image reconstruction. Separately, MR-based anatomically guided PET image reconstruction performs effective denoising, but this has so far been demonstrated mainly in brain imaging. To date, the combined benefits of motion compensation and anatomical guidance have not been demonstrated for myocardial PET-MR imaging. This work addresses this gap by proposing a single cardiac PET-MR image reconstruction framework that fully utilises MR-derived information to provide both motion compensation and anatomical guidance within the reconstruction.
Methods: Fifteen patients underwent an 18F-FDG cardiac PET-MR scan with a previously introduced acquisition framework. The MR data processing and image reconstruction pipeline produces respiratory motion fields and a high-resolution, respiratory motion-corrected MR image with good tissue contrast. This MR-derived information was then included in a respiratory motion-corrected, cardiac-gated, anatomically guided reconstruction of the simultaneously acquired PET data. Reconstructions were evaluated by measuring myocardial contrast and noise and were compared with images from intermediate methods that use the components of the proposed framework separately.
Results: Including respiratory motion correction, cardiac gating and anatomical guidance significantly increased contrast. In particular, myocardium-to-blood-pool contrast increased by 143% on average (p<0.0001) compared with conventional uncorrected, non-guided PET images. Furthermore, anatomical guidance significantly reduced image noise compared with non-guided reconstruction, by 16.1% (p<0.0001).
Conclusion: The proposed framework for MR-derived motion compensation and anatomical guidance of cardiac PET data significantly improved image quality compared with alternative reconstruction methods. Each component of the reconstruction pipeline had a positive impact on the final image quality. These improvements have the potential to improve clinical interpretability and diagnosis based on cardiac PET-MR images.
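The ingredients of such a pipeline can be sketched on toy data. The sketch below is an illustration, not the authors' implementation: it uses a 1-D activity profile, a Gaussian blur as a stand-in for the PET forward projector, known circular shifts as stand-ins for MR-derived respiratory motion fields, and a Bowsher-flavoured neighbour selection as a minimal form of anatomical guidance. All names and parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "activity": a hot myocardium-like region on a cold background.
n = 64
x_true = np.full(n, 0.1)
x_true[20:30] = 1.0

# System model: Gaussian blur as a stand-in for the PET forward projector.
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 1.5) ** 2)
A /= A.sum(axis=1, keepdims=True)

# Respiratory "motion": a known circular shift per gate (MR-derived in the paper).
shifts = [0, 2, 4]
warp = lambda v, s: np.roll(v, s)      # motion operator M_g
warp_T = lambda v, s: np.roll(v, -s)   # its adjoint / inverse

# Gated Poisson data y_g ~ Poisson(counts * A M_g x) / counts
counts = 2000.0
y = [rng.poisson(counts * (A @ warp(x_true, s))) / counts for s in shifts]

# Motion-compensated MLEM (MCIR): every gate feeds one reconstruction.
x_mcir = np.ones(n)
sens = sum(warp_T(A.T @ np.ones(n), s) for s in shifts)
for _ in range(50):
    back = sum(warp_T(A.T @ (yg / np.maximum(A @ warp(x_mcir, s), 1e-9)), s)
               for yg, s in zip(y, shifts))
    x_mcir *= back / sens

# Naive baseline: pool the gates and ignore motion entirely.
y_pool = np.mean(y, axis=0)
x_naive = np.ones(n)
sens1 = A.T @ np.ones(n)
for _ in range(50):
    x_naive *= (A.T @ (y_pool / np.maximum(A @ x_naive, 1e-9))) / sens1

# Anatomical guidance (Bowsher-flavoured, purely illustrative): average each
# voxel only with neighbours sharing its MR tissue label, so smoothing never
# crosses the myocardium/blood-pool boundary.
mr = ((idx >= 20) & (idx < 30)).astype(float)  # toy MR-derived tissue label
x_guided = x_mcir.copy()
for i in range(1, n - 1):
    nb = [x_mcir[j] for j in (i - 1, i + 1) if mr[j] == mr[i]]
    x_guided[i] = np.mean([x_mcir[i]] + nb)

# Motion compensation sharpens the hot region that pooling smears out.
hot = slice(20, 30)
contrast_mcir = x_mcir[hot].mean() / x_mcir[:15].mean()
contrast_naive = x_naive[hot].mean() / x_naive[:15].mean()
```

On this toy problem the motion-compensated reconstruction yields visibly higher myocardium-to-background contrast than the motion-ignoring baseline, mirroring the trend the abstract reports for real data.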

19.
Development ; 148(18)2021 03 12.
Article in English | MEDLINE | ID: mdl-33712441

ABSTRACT

Characterising phenotypes often requires quantification of anatomical shape. Quantitative shape comparison (morphometrics) traditionally uses manually located landmarks and is limited by landmark number and operator accuracy. Here, we apply a landmark-free method to characterise the craniofacial skeletal phenotype of the Dp1Tyb mouse model of Down syndrome and of a population of Diversity Outbred (DO) mice, comparing it with a landmark-based approach. We identified cranial dysmorphologies in Dp1Tyb mice, notably smaller size and brachycephaly (front-to-back shortening), homologous to the human phenotype. Shape variation in the DO mice was partly attributable to allometry (size-dependent shape variation) and sexual dimorphism. The landmark-free method performed as well as, or better than, the landmark-based method but was less labour-intensive, required less user training and, uniquely, enabled fine mapping of local differences as planar expansion or shrinkage. Its higher resolution pinpointed reductions in interior mid-snout structures and occipital bones in both models that were not otherwise apparent. We propose that this landmark-free pipeline could make morphometrics widely accessible beyond its traditional niches in zoology and palaeontology, especially for characterising developmental mutant phenotypes.
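The "planar expansion or shrinkage" readout of a landmark-free comparison can be illustrated with a synthetic displacement field: the determinant of the Jacobian of the mapping gives a per-location area-change map, with values below 1 marking local shrinkage. The field below (a global allometric scaling plus a local "mid-snout" contraction) is entirely made up for illustration; it is not the paper's data or pipeline.

```python
import numpy as np

# Toy 2-D displacement field mapping a "control" grid to a "mutant" grid:
# uniform 5% shrinkage (allometry-like) plus an extra local contraction.
ny, nx = 32, 32
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing='ij')

scale = 0.95
ux = (scale - 1) * (xx - nx / 2)
uy = (scale - 1) * (yy - ny / 2)

# Extra contraction in a hypothetical "mid-snout" patch.
patch = (slice(10, 16), slice(10, 16))
ux[patch] -= 0.3 * (xx[patch] - 13)
uy[patch] -= 0.3 * (yy[patch] - 13)

# Local area change = det of the Jacobian of phi(p) = p + u(p).
duy_dy, duy_dx = np.gradient(uy)
dux_dy, dux_dx = np.gradient(ux)
jac = (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx

# jac < 1 marks shrinkage, jac > 1 expansion; the median reflects the
# global allometric scaling, while the patch shows a stronger local effect.
global_jac = np.median(jac)
local_jac = jac[12, 12]
```

Here `global_jac` sits near (0.95)^2 ≈ 0.90, while the patch value is markedly lower, exactly the kind of fine local mapping that landmark-based methods cannot resolve.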


Subject(s)
Anatomic Landmarks/physiopathology , Down Syndrome/physiopathology , Imaging, Three-Dimensional/methods , Animals , Body Weights and Measures/methods , Disease Models, Animal , Female , Male , Mice , Mice, Inbred C57BL , Phenotype , Sex Characteristics , Skull/physiopathology
20.
IEEE Trans Med Imaging ; 39(12): 4001-4010, 2020 12.
Article in English | MEDLINE | ID: mdl-32746141

ABSTRACT

Segmenting anatomical structures in medical images has been successfully addressed with deep learning methods for a range of applications. However, this success depends heavily on the quality of the image being segmented. A commonly neglected point in the medical image analysis community is the vast number of clinical images with severe artefacts due to organ motion, patient movement and/or acquisition-related issues. In this paper, we discuss the implications of motion artefacts for cardiac MR segmentation and compare a variety of approaches for jointly correcting artefacts and segmenting the cardiac cavity. Our approach builds on our recently developed joint artefact detection and reconstruction method, which reconstructs high-quality MR images from k-space using a joint loss function and essentially converts the artefact correction task into an under-sampled image reconstruction task by enforcing a data consistency term. Here, we couple a segmentation network with this method in an end-to-end framework. Training optimises three tasks: 1) image artefact detection, 2) artefact correction and 3) image segmentation. We train the reconstruction network to automatically correct motion-related artefacts using synthetically corrupted cardiac MR k-space data and the uncorrected reconstructed images. On a test set of 500 2D+time cine MR acquisitions from the UK Biobank data set, we achieve good image quality and high segmentation accuracy in the presence of synthetic motion artefacts, outperforming various image correction architectures.
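The core idea of converting artefact correction into under-sampled reconstruction with a data-consistency term can be sketched in a few lines: flagged k-space lines are discarded, and reconstruction then alternates a prior step with re-imposing the trusted measurements. The 1-D sketch below is a minimal illustration, with a simple intensity-range projection standing in for the learned network; the paper's CNN, detection network and joint loss are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "image" and its k-space.
img = np.zeros(64)
img[24:40] = 1.0
ksp = np.fft.fft(img)

# Motion-like corruption: random phase errors on some k-space lines,
# applied to conjugate-symmetric pairs so all images stay real-valued.
pairs = rng.choice(np.arange(1, 32), size=8, replace=False)
corrupt = np.concatenate([pairs, 64 - pairs])
phases = rng.uniform(-np.pi, np.pi, 8)
ksp_bad = ksp.copy()
ksp_bad[pairs] *= np.exp(1j * phases)
ksp_bad[64 - pairs] *= np.exp(-1j * phases)

# "Detection" flags the corrupted lines; discarding them turns correction
# into under-sampled reconstruction with a data-consistency (DC) constraint.
mask = np.ones(64, bool)
mask[corrupt] = False

def data_consistency(x):
    # Keep the reconstruction's guesses only where data were discarded;
    # re-impose the trusted measured samples everywhere else.
    k = np.fft.fft(x)
    k[mask] = ksp_bad[mask]
    return np.real(np.fft.ifft(k))

# Stand-in for the learned prior: project intensities onto the known
# range [0, 1]; alternate prior and DC steps (a POCS-style iteration).
x = np.real(np.fft.ifft(np.where(mask, ksp_bad, 0)))  # zero-filled start
err_zf = np.linalg.norm(x - img)
for _ in range(100):
    x = data_consistency(np.clip(x, 0.0, 1.0))

err_corrupt = np.linalg.norm(np.real(np.fft.ifft(ksp_bad)) - img)
err_dc = np.linalg.norm(x - img)
```

Because both the intensity constraint and the DC constraint contain the true image, each projection can only move the estimate closer to it; in the full method, the clipping step is replaced by a trained network and the whole loop is optimised end-to-end together with the segmentation loss.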


Subject(s)
Artifacts , Deep Learning , Image Processing, Computer-Assisted , Heart/diagnostic imaging , Humans , Motion (Physics)