ABSTRACT
Background: Telementoring technologies enable a remote mentor to guide a mentee in real time during surgical procedures. This addresses challenges such as a lack of expertise and limited surgical training/education opportunities in remote locations. This review aims to provide a comprehensive account of these technologies tailored for open surgery. Methods: A comprehensive scoping review of the scientific literature was conducted using the PubMed, ScienceDirect, ACM Digital Library, and IEEE Xplore databases. Broad and inclusive searches were done to identify articles reporting telementoring or teleguidance technologies in open surgery. Results: Screening of the search results yielded 43 articles describing surgical telementoring for the open approach. The studies were categorized based on the type of open surgery (surgical specialty, surgical procedure, and stage of clinical trial), the telementoring technology used (information transferred between mentor and mentee, devices used for rendering the information), and assessment of the technology (experience level of mentor and mentee, study design, and assessment criteria). The majority of the telementoring technologies focused on trauma-related surgeries, and mixed reality headsets were commonly used for rendering information (telestrations, surgical tools, or hand gestures) to the mentee. These technologies were primarily assessed on high-fidelity synthetic phantoms. Conclusions: Despite longer operative times, these telementoring technologies demonstrated clinical viability during open surgeries through improved performance and confidence of the mentee. In general, the usage of immersive devices and annotations appears to be promising, although further clinical trials will be required to thoroughly assess their benefits.
Subject(s)
Mentoring , Telemedicine , Humans , Mentoring/methods , Surgical Procedures, Operative/education , Surgical Procedures, Operative/methods , Mentors
ABSTRACT
BACKGROUND: A variety of human-computer interfaces are used by robotic surgical systems to control and actuate camera scopes during minimally invasive surgery. The purpose of this review is to examine the different user interfaces used in both commercial systems and research prototypes. METHODS: A comprehensive scoping review of the scientific literature was conducted using the PubMed and IEEE Xplore databases to identify user interfaces used in commercial products and research prototypes of robotic surgical systems and robotic scope holders. Papers related to actuated scopes with human-computer interfaces were included. Several aspects of user interfaces for scope manipulation in commercial and research systems were reviewed. RESULTS: Scope assistance was classified into robotic surgical systems (for multiple-port, single-port, and natural-orifice procedures) and robotic scope holders (for rigid, articulated, and flexible endoscopes). The benefits and drawbacks of control by different user interfaces, such as foot, hand, voice, head, eye, and tool tracking, were outlined. In the review, it was observed that hand control, with its familiarity and intuitiveness, is the most widely used interface in commercially available systems. Control by foot, head tracking, and tool tracking is increasingly used to address limitations, such as interruptions to surgical workflow, caused by using a hand interface. CONCLUSION: Integrating a combination of different user interfaces for scope manipulation may provide maximum benefit for surgeons. However, smooth transitions between interfaces might pose a challenge while combining controls.
Subject(s)
Robotic Surgical Procedures , Robotics , Humans , Endoscopes , User-Computer Interface , Minimally Invasive Surgical Procedures
ABSTRACT
OBJECTIVE: The objective of this feasibility study was to develop and assess a tele-ultrasound system that would enable an expert sonographer (situated at the remote site) to provide real-time guidance to an operator (situated at the imaging site) using a mixed-reality environment. METHODS: An architecture, along with the operational workflow of the system, was designed, and a prototype was developed that enables guidance in the form of audiovisual cues. The visual cues comprise holograms (of the ultrasound images and ultrasound probe) and are rendered to the operator using a head-mounted display device. The position and orientation of the ultrasound probe's hologram are remotely controlled by the expert sonographer and guide the placement of a physical ultrasound probe at the imaging site. The developed prototype was evaluated for its performance on a network. In addition, a user study (with 12 participants) was conducted to assess the operator's ability to align the probe under different guidance modes. RESULTS: The network performance evaluation revealed that the view of the imaging site and the ultrasound images were transferred to the remote site in 233 ± 42 ms and 158 ± 38 ms, respectively. The expert sonographer was able to transfer, to the imaging site, data related to the position and orientation of the ultrasound probe's hologram in 78 ± 13 ms. The user study indicated that the audiovisual cues are sufficient for an operator to position and orient a physical probe for accurate depiction of the targeted tissue (p < 0.001). The probe placement's translational and rotational errors were 1.4 ± 0.6 mm and 5.4 ± 2.2°, respectively. CONCLUSION: The work illustrates the feasibility of using a mixed-reality environment for effective communication between an expert sonographer (ultrasound physician) and an operator. Further studies are required to determine its applicability in a clinical setting during tele-ultrasound.
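The translational and rotational errors reported above compare the pose of the hologram (the expert's target) with the pose of the physical probe placed by the operator. The abstract does not specify how these errors were computed; a minimal sketch, assuming each pose is given as a 3-D position plus a unit quaternion in (w, x, y, z) order, could look like this (all values are hypothetical):

```python
import math

def translational_error(p_target, p_actual):
    """Euclidean distance between target and actual probe positions (same units as input)."""
    return math.dist(p_target, p_actual)

def rotational_error(q_target, q_actual):
    """Angle in degrees between two unit quaternions (w, x, y, z).

    The absolute value of the dot product handles the double-cover of
    rotations (q and -q represent the same orientation).
    """
    dot = abs(sum(a * b for a, b in zip(q_target, q_actual)))
    dot = min(1.0, dot)  # guard against floating-point overshoot
    return math.degrees(2.0 * math.acos(dot))

# Hypothetical poses: hologram target vs. operator's physical probe (mm)
target_pos = (10.0, 20.0, 30.0)
actual_pos = (11.0, 20.5, 30.2)

target_q = (1.0, 0.0, 0.0, 0.0)  # identity orientation
half = math.radians(5.0) / 2.0
actual_q = (math.cos(half), math.sin(half), 0.0, 0.0)  # 5 deg rotation about x-axis

print(round(translational_error(target_pos, actual_pos), 2))  # -> 1.14 (mm)
print(round(rotational_error(target_q, actual_q), 1))         # -> 5.0 (deg)
```

Averaging these two quantities over all trials and participants would yield summary statistics of the form reported in the Results (mean ± standard deviation).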