1.
Int J Cardiovasc Imaging ; 39(7): 1405-1419, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37103667

ABSTRACT

Extended reality (XR), which encompasses virtual, augmented and mixed reality, is an emerging medical imaging display platform that enables intuitive and immersive interaction in a three-dimensional space. This technology holds the potential to enhance understanding of complex spatial relationships when planning and guiding cardiac procedures in congenital and structural heart disease, moving beyond conventional 2D and 3D image displays. A systematic review of the literature demonstrates a rapid increase in publications describing adoption of this technology. At least 33 XR systems have been described; many demonstrate proof of concept, but none, including those evaluated in prospective studies, specifically mention regulatory approval. Validation remains limited, and true clinical benefit is difficult to measure. This review describes and critically appraises the range of XR technologies and their applications for procedural planning and guidance in structural heart disease, and discusses the challenges that must be overcome in future studies to achieve safe and effective clinical adoption.


Subject(s)
Augmented Reality, Heart Diseases, Humans, Heart Diseases/diagnostic imaging, Heart Diseases/therapy, Imaging, Three-Dimensional/methods, Predictive Value of Tests, Prospective Studies
2.
Med Image Anal ; 83: 102639, 2023 01.
Article in English | MEDLINE | ID: mdl-36257132

ABSTRACT

Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placental appearance, (ii) the restricted image quality of US, which leads to highly variable reference annotations, and (iii) the limited field-of-view of US, which prevents whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, particularly under limited training set conditions. With this approach we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance as compared to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion and image segmentation. This yields high-quality segmentation of larger structures, such as the placenta, that extend beyond the field-of-view of a single probe, with reduced image artifacts.


Subject(s)
Placenta, Humans, Female, Pregnancy, Placenta/diagnostic imaging
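The segmentation quality in the abstract above is reported as a Dice overlap score (0.86 for anterior, 0.83 for posterior placentas). A minimal sketch of that metric, using made-up binary masks rather than the study's data:

```python
# Dice coefficient between two binary segmentation masks, here
# represented as flat lists of 0/1 values. The masks are illustrative
# placeholders, not data from the placenta study.

def dice(pred, ref):
    """Dice overlap: 2 * |intersection| / (|pred| + |ref|)."""
    inter = sum(p & r for p, r in zip(pred, ref))
    total = sum(pred) + sum(ref)
    return 2.0 * inter / total if total else 1.0

pred = [1, 1, 1, 0, 0, 1]
ref  = [1, 1, 0, 0, 1, 1]
print(dice(pred, ref))  # 0.75
```

A Dice of 1.0 means perfect overlap; the reported 0.86/0.83 fall within the intra- and inter-observer range quoted in the abstract.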
3.
J Imaging ; 8(11)2022 Nov 08.
Article in English | MEDLINE | ID: mdl-36354877

ABSTRACT

This study aimed to evaluate the accuracy and reliability of a virtual reality (VR) system line measurement tool using phantom data across three cardiac imaging modalities: three-dimensional echocardiography (3DE), computed tomography (CT) and magnetic resonance imaging (MRI). The same phantoms were also measured using industry-standard image visualisation software packages. Two participants performed blinded measurements on volume-rendered images of standard phantoms both in VR and on an industry-standard image visualisation platform. The intra- and interrater reliability of the VR measurement method was evaluated by intraclass correlation coefficient (ICC) and coefficient of variance (CV). Measurement accuracy was analysed using Bland-Altman analysis and mean absolute percentage error (MAPE). VR measurements showed good intra- and interobserver reliability (ICC ≥ 0.99, p < 0.05; CV < 10%) across all imaging modalities. MAPE for VR measurements compared to ground truth was 1.6%, 1.6% and 7.7% for the MRI, CT and 3DE datasets, respectively. Bland-Altman analysis demonstrated no systematic measurement bias in CT or MRI data in VR compared to ground truth. A small bias toward smaller measurements in 3DE data was seen in both VR (mean −0.52 mm [−0.16 to −0.88]) and the standard platform (mean −0.22 mm [−0.03 to −0.40]) when compared to ground truth. Limits of agreement for measurements across all modalities were similar in VR and standard software. This study has shown good measurement accuracy and reliability of VR in CT and MRI data, with a higher MAPE for 3DE data. This may relate to the overall smaller measurement dimensions within the 3DE phantom. Further evaluation of all modalities is required for measurements <10 mm.
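The two accuracy metrics named in the abstract above, MAPE against ground truth and Bland-Altman bias with 95% limits of agreement, can be sketched as follows; the paired measurements here are invented for illustration, not the phantom data:

```python
# MAPE and Bland-Altman statistics for paired measurements, e.g. a
# VR tool's line measurements versus known phantom (ground-truth)
# dimensions. The sample values below are made up.

from statistics import mean, stdev

def mape(measured, truth):
    """Mean absolute percentage error versus ground truth."""
    return 100.0 * mean(abs(m - t) / t for m, t in zip(measured, truth))

def bland_altman(measured, truth):
    """Bias (mean difference) and 95% limits of agreement."""
    diffs = [m - t for m, t in zip(measured, truth)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)
    return bias, (bias - half_width, bias + half_width)

truth    = [10.0, 20.0, 30.0, 40.0]   # phantom dimensions (mm)
measured = [10.2, 19.8, 30.3, 39.9]   # tool's readings (mm)

print(f"MAPE: {mape(measured, truth):.2f}%")
bias, (lo, hi) = bland_altman(measured, truth)
print(f"bias {bias:.3f} mm, LoA [{lo:.3f}, {hi:.3f}] mm")
```

A bias near zero with narrow limits of agreement corresponds to the "no systematic measurement bias" finding reported for CT and MRI.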

4.
SoftwareX ; 17: 100959, 2022 Jan.
Article in English | MEDLINE | ID: mdl-36619798

ABSTRACT

We present PRETUS - a Plugin-based Real Time UltraSound software platform for live ultrasound image analysis and operator support. The software is lightweight; functionality is provided by independent plug-ins that can be arranged in sequence. The software captures the real-time stream of ultrasound images from virtually any ultrasound machine, applies computational methods and visualizes the results on-the-fly. Plug-ins can run concurrently without blocking each other. They can be implemented in C++ and Python. A graphical user interface can be implemented for each plug-in and presented to the user in a compact way. The software is free and open source, and allows for rapid prototyping and testing of real-time ultrasound imaging methods in a manufacturer-agnostic fashion. The software is provided with input, output and processing plug-ins, as well as with tutorials to illustrate how to develop new plug-ins for PRETUS.
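As a rough illustration of the plug-in pipeline idea described above (independent plug-ins arranged in sequence over a live image stream), here is a hypothetical sketch; the class and method names are assumptions for illustration only, not the actual PRETUS C++/Python API:

```python
# Hypothetical plug-in chain: each plug-in transforms a frame and
# passes it on, mirroring the "plug-ins arranged in sequence" design.
# A frame is modelled as a flat list of pixel intensities.

class Plugin:
    def process(self, frame):
        raise NotImplementedError

class Normalise(Plugin):
    """Rescale intensities to [0, 1]."""
    def process(self, frame):
        lo, hi = min(frame), max(frame)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in frame]

class Threshold(Plugin):
    """Binarise at a fixed level, e.g. as a crude segmentation step."""
    def __init__(self, level=0.5):
        self.level = level
    def process(self, frame):
        return [1 if v >= self.level else 0 for v in frame]

def run_pipeline(plugins, frame):
    for plugin in plugins:
        frame = plugin.process(frame)
    return frame

print(run_pipeline([Normalise(), Threshold()], [10, 20, 30, 40]))
# [0, 0, 1, 1]
```

In the real platform each stage would run concurrently on the live stream; this sketch only shows the sequential composition of independent processing units.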

5.
Prenat Diagn ; 42(1): 49-59, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34648206

ABSTRACT

OBJECTIVE: Advances in artificial intelligence (AI) have demonstrated potential to improve medical diagnosis. We piloted the end-to-end automation of the mid-trimester screening ultrasound scan using AI-enabled tools. METHODS: A prospective method comparison study was conducted. Participants had both standard and AI-assisted US scans performed. The AI tools automated image acquisition, biometric measurement, and report production. A feedback survey captured the sonographers' perceptions of scanning. RESULTS: Twenty-three subjects were studied. The average time saving per scan was 7.62 min (34.7%) with the AI-assisted method (p < 0.0001). There was no difference in reporting time. There were no clinically significant differences in biometric measurements between the two methods. The AI tools saved a satisfactory view in 93% of cases for the four core views alone, and in 73% for the full 13 views, compared to 98% for both using the manual scan. Survey responses suggest that the AI tools helped sonographers to concentrate on image interpretation by removing disruptive tasks. CONCLUSION: Separating freehand scanning from image capture and measurement resulted in a faster scan and altered workflow. Removing repetitive tasks may allow more attention to be directed toward identifying fetal malformation. Further work is required to improve the image plane detection algorithm for use in real time.


Subject(s)
Artificial Intelligence/standards, Congenital Abnormalities/diagnosis, Ultrasonography, Prenatal/instrumentation, Adult, Artificial Intelligence/trends, Congenital Abnormalities/diagnostic imaging, Female, Gestational Age, Humans, Pregnancy, Prospective Studies, Reproducibility of Results, Ultrasonography, Prenatal/methods, Ultrasonography, Prenatal/standards
6.
J Imaging ; 7(8)2021 Aug 19.
Article in English | MEDLINE | ID: mdl-34460787

ABSTRACT

The intricate nature of congenital heart disease requires an understanding of the complex, patient-specific, three-dimensional dynamic anatomy of the heart, derived from imaging data such as three-dimensional echocardiography, for successful surgical and interventional outcomes. Conventional clinical systems use flat screens, so the display remains two-dimensional, which undermines full understanding of the three-dimensional dynamic data. Additionally, controlling a three-dimensional visualisation with two-dimensional tools is often difficult, so such tools are used only by imaging specialists. In this paper, we describe a virtual reality system for immersive surgery planning using dynamic three-dimensional echocardiography, which enables fast prototyping of visualisation features such as volume rendering, multiplanar reformatting and flow visualisation, and of advanced interactions such as three-dimensional cropping, windowing, measurement, haptic feedback, automatic image orientation and multiuser interaction. The available features were evaluated by imaging and nonimaging clinicians, showing that the virtual reality system can help improve the understanding and communication of three-dimensional echocardiography imaging and potentially benefit congenital heart disease treatment.

7.
JTCVS Tech ; 7: 269-277, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34100000

ABSTRACT

OBJECTIVES: To investigate how virtual reality (VR) imaging impacts decision-making in atrioventricular valve surgery. METHODS: This was a single-center retrospective study involving 15 children and adolescents, median age 6 years (range, 0.33-16), requiring surgical repair of the atrioventricular valves between the years 2016 and 2019. The patients' preoperative 3-dimensional (3D) echocardiographic data were used to create 3D visualization in a VR application. Five pediatric cardiothoracic surgeons completed a questionnaire formulated to compare their surgical decisions regarding the cases after reviewing conventionally presented 2-dimensional and 3D echocardiographic images, and again after visualization of 3D echocardiograms using the VR platform. Finally, intraoperative findings were shared with surgeons to confirm assessment of the pathology. RESULTS: In 67% of cases presented with VR, surgeons reported having "more" or "much more" confidence in their understanding of each patient's pathology and their surgical approach. In all but one case, surgeons were at least as confident after reviewing the VR compared with standard imaging. The case where surgeons reported being least confident on VR had the worst technical quality of data used. After viewing patient cases on VR, surgeons reported that they would have made minor modifications to the surgical approach in 53% and major modifications in 7% of cases. CONCLUSIONS: The main impact of viewing imaging on VR is the improved clarity of the anatomical structures. Surgeons reported that this would have impacted the surgical approach in the majority of cases. Poor-quality 3D echocardiographic data were associated with a negative impact of VR visualization; thus, quality assessment of imaging is necessary before projecting in a VR format.

8.
Healthc Technol Lett ; 6(6): 220-225, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32038861

ABSTRACT

Virtual reality (VR) has the potential to aid in the understanding of complex volumetric medical images, by providing an immersive and intuitive experience accessible to both experts and non-imaging specialists. A key feature of any clinical image analysis tool is measurement of clinically relevant anatomical structures. However, this feature has been largely neglected in VR applications. The authors propose a Unity-based system to carry out linear measurements on three-dimensional (3D) images, purposefully designed for the measurement of 3D echocardiographic images. The proposed system is compared to commercially available, widely used image analysis packages that feature both 2D (multi-planar reconstruction) and 3D (volume rendering) measurement tools. The results indicate that the proposed system provides statistically equivalent measurements compared to the reference 2D system, while being more accurate than the commercial 3D system.
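The core operation behind a linear measurement tool like the one described above is the Euclidean distance between two user-picked points in the image's physical (millimetre) coordinate space; a minimal sketch, with invented coordinates standing in for points picked on a volume rendering:

```python
# Straight-line distance between two 3D points in physical (mm)
# coordinates, the basic computation behind a VR "ruler" tool.
# The points below are invented for illustration.

import math

def linear_measurement(p1, p2):
    """Euclidean distance between two 3D points, in mm."""
    return math.dist(p1, p2)

# Two points picked on a volume-rendered structure (mm).
a = (12.0, 5.0, 3.0)
b = (15.0, 9.0, 3.0)
print(f"{linear_measurement(a, b):.1f} mm")  # 5.0 mm
```

In practice the picked points come from controller ray-casts into the rendered volume, and the voxel-to-millimetre scaling must already be applied, which is where the calibration evaluated in the study matters.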

9.
Healthc Technol Lett ; 5(5): 148-153, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30800321

ABSTRACT

The authors present a method to interconnect the Visualisation Toolkit (VTK) and Unity. This integration enables them to exploit the visualisation capabilities of VTK with Unity's widespread support of virtual, augmented, and mixed reality displays, and interaction and manipulation devices, for the development of medical image applications for virtual environments. The proposed method utilises OpenGL context sharing between Unity and VTK to render VTK objects into the Unity scene via a Unity native plugin. The proposed method is demonstrated in a simple Unity application that performs VTK volume rendering to display thoracic computed tomography and cardiac magnetic resonance images. Quantitative measurements of the achieved frame rates show that this approach provides over 90 fps using standard hardware, which is suitable for current augmented reality/virtual reality display devices.
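A frame-rate figure such as the reported >90 fps is typically obtained by timing a fixed number of render calls and inverting the mean per-frame time; a hedged sketch of that measurement harness, with a no-op callable standing in for the actual Unity/VTK draw call:

```python
# Measure average frames per second over a fixed number of render
# calls. The `render` argument is a stand-in for the real draw call;
# here it does nothing, so this only demonstrates the harness itself.

import time

def measure_fps(render, n_frames=100):
    """Average fps over n_frames invocations of render()."""
    start = time.perf_counter()
    for _ in range(n_frames):
        render()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

fps = measure_fps(lambda: None, n_frames=10)
print(f"{fps:.0f} fps")
```

Averaging over a window rather than timing single frames smooths out scheduler jitter, which matters when checking a rendering pipeline against a VR headset's refresh-rate budget.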
