1.
Eur J Obstet Gynecol Reprod Biol ; 298: 13-17, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38705008

ABSTRACT

INTRODUCTION: This study aims to investigate probe motion during full mid-trimester anomaly scans. METHODS: We undertook a prospective, observational study of obstetric sonographers at a UK University Teaching Hospital. We prospectively collected full-length video recordings of routine second-trimester anomaly scans, synchronized with probe trajectory tracking data acquired during the scan. Videos were reviewed and trajectories analyzed using duration, path metrics (path length, velocity, acceleration, jerk, and volume) and angular metrics (spectral arc, angular area, angular velocity, angular acceleration, and angular jerk). These trajectories were then compared according to the participants' level of expertise, fetal presentation, and patient BMI. RESULTS: A total of 17 anomaly scans were recorded. The average probe velocity was 12.9 ± 3.4 mm/s for the consultants versus 24.6 ± 5.7 mm/s for the fellows (p = 0.02), the average acceleration 170.4 ± 26.3 mm/s² versus 328.9 ± 62.7 mm/s² (p = 0.02), the average jerk 7491.7 ± 1056.1 mm/s³ versus 14944.1 ± 3146.3 mm/s³ (p = 0.02), and the working volume 9 × 10⁶ ± 4 × 10⁶ mm³ versus 29 × 10⁶ ± 11 × 10⁶ mm³ (p = 0.03), respectively. The angular metrics did not differ significantly according to the participants' level of expertise, fetal presentation, or patient BMI. CONCLUSION: Some differences in the probe path metrics (velocity, acceleration, jerk, and working volume) were observed according to the operator's level of expertise.
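
Path metrics like those above can be derived from tracked probe positions by finite differencing. Below is a minimal illustrative sketch in Python/NumPy, assuming an (N, 3) array of positions in mm sampled at a fixed interval; the function name, the bounding-box approximation of working volume, and all other details are assumptions, not the study's actual pipeline.

```python
import numpy as np

def path_metrics(positions: np.ndarray, dt: float) -> dict:
    """Path metrics for a tracked probe trajectory (illustrative only).

    positions: (N, 3) array of probe positions in mm, sampled every dt seconds.
    """
    velocity = np.gradient(positions, dt, axis=0)      # mm/s
    acceleration = np.gradient(velocity, dt, axis=0)   # mm/s^2
    jerk = np.gradient(acceleration, dt, axis=0)       # mm/s^3

    speed = np.linalg.norm(velocity, axis=1)
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))

    # Working volume approximated here by the axis-aligned bounding box
    # of the path; the study's exact definition may differ.
    extent = positions.max(axis=0) - positions.min(axis=0)

    return {
        "path_length_mm": float(path_length),
        "mean_velocity_mm_s": float(speed.mean()),
        "mean_acceleration_mm_s2": float(np.linalg.norm(acceleration, axis=1).mean()),
        "mean_jerk_mm_s3": float(np.linalg.norm(jerk, axis=1).mean()),
        "working_volume_mm3": float(np.prod(extent)),
    }
```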


Subject(s)
Pregnancy Trimester, Second , Ultrasonography, Prenatal , Humans , Female , Pregnancy , Prospective Studies , Ultrasonography, Prenatal/methods , Video Recording , Adult , Congenital Abnormalities/diagnostic imaging
2.
Ann Surg Open ; 4(2): e275, 2023 May 23.
Article in English | MEDLINE | ID: mdl-37342255

ABSTRACT

Introduction: 3D models produced from medical imaging can be used to plan treatment, design prostheses, teach, and communicate. Despite the clinical benefit, few clinicians have experience of how 3D models are produced. This is the first study evaluating a training tool to teach clinicians to produce 3D models and reporting the perceived impact on their clinical practice. Method: Following ethical approval, 10 clinicians completed a bespoke training tool comprising written and video material alongside online support. Each clinician, and 2 technicians included as controls, were sent 3 CT scans and asked to produce 6 fibula 3D models using open-source software (3Dslicer). The models produced by the clinicians were compared with those produced by the technicians using Hausdorff distance calculation. Thematic analysis was used to study the post-intervention questionnaire. Results: The mean Hausdorff distance between the final models produced by the clinicians and the technicians was 0.65 mm (SD 0.54 mm). The first model made by the clinicians took a mean time of 1 h 25 min; the final model took a mean of 16 min 4 s (range 5:00-46:00 min). All learners reported finding the training tool useful and said they would employ it in future practice. Discussion: The training tool described in this paper successfully trains clinicians to produce fibula models from CT scans. Learners were able to produce models comparable to the technicians' within an acceptable timeframe. This does not replace technicians; however, the learners perceived that the training will allow them to use this technology in more cases, with appropriate case selection, and they appreciate the limits of the technology.
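
A model comparison of this kind can be computed with SciPy's directed_hausdorff. A minimal sketch, assuming both models are available as (N, 3) and (M, 3) vertex arrays in mm in the same coordinate frame (e.g., exported from 3Dslicer and sampled with a mesh library); the function name is hypothetical:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(model_a: np.ndarray, model_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two surface models.

    model_a, model_b: arrays of surface vertex coordinates in mm,
    assumed to be already aligned in the same coordinate frame.
    """
    d_ab = directed_hausdorff(model_a, model_b)[0]  # farthest a-to-b distance
    d_ba = directed_hausdorff(model_b, model_a)[0]  # farthest b-to-a distance
    return max(d_ab, d_ba)
```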

3.
Br J Oral Maxillofac Surg ; 61(1): 19-27, 2023 01.
Article in English | MEDLINE | ID: mdl-36513525

ABSTRACT

Augmented-reality (AR) head-mounted devices (HMDs) allow the wearer to have digital images superimposed onto their field of vision. They are being used to superimpose annotations onto the surgical field, akin to a navigation system. This review examines published validation studies of HMD-AR systems, their reported protocols, and their outcomes. The aim was to establish commonalities and an acceptable registration outcome. Multiple databases were systematically searched for relevant articles published between January 2015 and January 2021. Studies that examined the registration of AR content using an HMD to guide surgery were eligible for inclusion. The country of origin, year of publication, medical specialty, HMD device, software, and method of registration were recorded. A meta-analysis of the mean registration error was conducted. A total of 4784 papers were identified, of which 23 met the inclusion criteria. They included studies using the HoloLens (Microsoft) (n = 22) and the nVisor ST60 (NVIS Inc) (n = 1). Sixty-six per cent of the studies were in hard tissue specialties. Eleven studies reported registration errors using pattern markers (mean (SD) 2.6 (1.8) mm), four reported registration errors using surface markers (mean (SD) 3.8 (3.7) mm), and three reported registration errors using manual alignment (mean (SD) 2.2 (1.3) mm). The majority of studies in this review used in-house software with a variety of registration methods and reported errors. The mean registration error calculated in this study can be considered a minimum acceptable standard and should be taken into consideration when procedural applications are selected.
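
For context on how a marker-based registration error like those pooled above can be quantified: one common approach is a least-squares rigid fit of paired marker points (Kabsch/Umeyama, no scaling) followed by the mean residual. The sketch below is illustrative only, not the method of any reviewed system; the function names are hypothetical.

```python
import numpy as np

def rigid_register(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid fit of paired (N, 3) marker points src -> dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

def registration_error_mm(src: np.ndarray, dst: np.ndarray) -> float:
    """Mean residual error (mm) after the rigid fit -- one common way a
    registration error is reported."""
    R, t = rigid_register(src, dst)
    residuals = np.linalg.norm((src @ R.T + t) - dst, axis=1)
    return float(residuals.mean())
```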


Subject(s)
Surgery, Computer-Assisted , Humans , Surgery, Computer-Assisted/methods , Software , Equipment Design
4.
IEEE Trans Med Imaging ; 41(7): 1677-1687, 2022 07.
Article in English | MEDLINE | ID: mdl-35108200

ABSTRACT

Automatically recognising surgical gestures from surgical data is an important building block of automated activity recognition and analytics, technical skill assessment, intra-operative assistance and, eventually, robotic automation. The complexity of articulated instrument trajectories and the inherent variability due to surgical style and patient anatomy make analysis and fine-grained segmentation of surgical motion patterns from robot kinematics alone very difficult. Surgical video provides crucial information from the surgical site, giving context for the kinematic data and the interaction between the instruments and tissue. Yet sensor fusion between the robot data and the surgical video stream is non-trivial, because the data differ in frequency, dimensionality, and discriminative capability. In this paper, we integrate multimodal attention mechanisms into a two-stream temporal convolutional network to compute relevance scores and weight kinematic and visual feature representations dynamically in time, aiming to aid multimodal network training and achieve effective sensor fusion. We report the results of our system on the JIGSAWS benchmark dataset and on a new in vivo dataset of suturing segments from robotic prostatectomy procedures. Our results are promising: we obtain multimodal prediction sequences with higher accuracy and better temporal structure than the corresponding unimodal solutions. Visualization of the attention scores also gives physically interpretable insight into how the network weighs the strengths and weaknesses of each sensor.
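
A minimal sketch of the general idea of attention-weighted sensor fusion: per-time-step relevance scores for each modality are softmax-normalized and used to weight a sum of the two feature streams. This is a NumPy toy, not the paper's two-stream temporal convolutional network; the scoring vectors stand in for parameters a real network would learn.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(kin_feat, vis_feat, w_kin, w_vis):
    """Time-varying attention fusion of two feature streams.

    kin_feat, vis_feat: (T, D) feature sequences already projected to a
    common dimension D. w_kin, w_vis: (D,) scoring vectors (hypothetical
    stand-ins for learned parameters).
    Returns fused (T, D) features and (T, 2) per-modality attention scores.
    """
    scores = np.stack([kin_feat @ w_kin, vis_feat @ w_vis], axis=1)  # (T, 2)
    alpha = softmax(scores, axis=1)          # relevance of each modality per step
    fused = alpha[:, :1] * kin_feat + alpha[:, 1:] * vis_feat
    return fused, alpha
```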


Subject(s)
Robotic Surgical Procedures , Robotics , Biomechanical Phenomena , Gestures , Humans , Motion , Robotics/methods
5.
J Robot Surg ; 6(1): 23-31, 2012 Mar.
Article in English | MEDLINE | ID: mdl-27637976

ABSTRACT

Robotic partial nephrectomy is presently the fastest-growing robotic surgical procedure, and in comparison with traditional techniques it offers reduced tissue trauma and a lower likelihood of post-operative infection, while shortening recovery time and improving cosmesis. It is also an ideal candidate for image-guidance technology, since soft-tissue deformation, while still present, is localised and less problematic than in other surgical procedures. This work describes the implementation and ongoing development of an effective image-guidance system that aims to address some of the remaining challenges in this area. Specific innovations include the introduction of an intuitive, partially automated registration interface, and the use of a hardware platform that makes sophisticated augmented-reality overlays practical in real time. Results and examples of image augmentation are presented from both retrospective and live cases. Quantitative analysis of registration error verifies that the proposed registration technique is appropriate for the chosen image-guidance targets.
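
At its core, the overlay step of such a system reduces to projecting the registered preoperative model into the calibrated endoscope image. A minimal pinhole-camera sketch, assuming a known rigid registration (R, t) and an intrinsic matrix K; this is illustrative, not the system described in the paper:

```python
import numpy as np

def project_overlay(model_pts, R, t, K):
    """Project registered 3D model points into the camera image.

    model_pts: (N, 3) model vertices in mm (preoperative image frame).
    R, t: rigid transform from the preoperative frame to the camera frame
    (hypothetically, the output of the registration step).
    K: (3, 3) intrinsic matrix of the calibrated endoscope camera.
    Returns (N, 2) pixel coordinates for drawing the overlay.
    Note: points behind the camera are not handled in this sketch.
    """
    cam = model_pts @ R.T + t       # into camera coordinates
    px = cam @ K.T                  # pinhole projection
    return px[:, :2] / px[:, 2:3]   # perspective divide
```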

6.
Stud Health Technol Inform ; 132: 378-83, 2008.
Article in English | MEDLINE | ID: mdl-18391325

ABSTRACT

An interactive finite element simulation of the beating heart is described, in which the intrinsic motion is derived from preoperative 4D tomographic scan data. The equations of motion are reversed such that, given changes in node displacements over time, the node forces that produce those changes are recovered. Subsequently, these forces are resolved from the global coordinate system into systems local to each mesh element, such that, at each simulation time step, the collection of node forces can be expressed as simple weighted sums of the current node positions. This facilitates combining the intrinsic forces with extrinsic forces such as those due to tool-tissue interactions, gravity, insufflation of the thoracic cavity, and left-lung deflation. The method has been applied initially to volumetric images of a pneumatically operated beating heart phantom.
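
The "reversed" equations of motion amount to evaluating f = M u'' + C u' + K u at each time step, with velocities and accelerations obtained by differencing the observed displacements. A minimal sketch, assuming the mass, damping, and stiffness matrices are known; the paper additionally resolves the recovered forces into per-element local frames, which is omitted here:

```python
import numpy as np

def recover_node_forces(u, M, C, K, dt):
    """Reverse the equations of motion f = M u'' + C u' + K u.

    u: (T, N) node displacement time series (one row per time step,
    N degrees of freedom), e.g. extracted from a 4D scan.
    M, C, K: (N, N) mass, damping, and stiffness matrices (assumed known).
    Returns (T, N) node forces that reproduce the observed motion.
    """
    v = np.gradient(u, dt, axis=0)   # velocities by central differences
    a = np.gradient(v, dt, axis=0)   # accelerations
    return a @ M.T + v @ C.T + u @ K.T
```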


Subject(s)
Finite Element Analysis , Heart/physiology , Robotics , Thoracic Surgery , User-Computer Interface , Humans , London , Surgery, Computer-Assisted