Depth over RGB: automatic evaluation of open surgery skills using depth camera.
Zuckerman, Ido; Werner, Nicole; Kouchly, Jonathan; Huston, Emma; DiMarco, Shannon; DiMusto, Paul; Laufer, Shlomi.
Affiliation
  • Zuckerman I; Faculty of Data and Decision Sciences, Technion - Israel Institute of Technology, Haifa, 3200003, Israel. ido.z@campus.technion.ac.il.
  • Werner N; Department of Surgery, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Ave, Madison, WI, 53792, USA.
  • Kouchly J; Faculty of Data and Decision Sciences, Technion - Israel Institute of Technology, Haifa, 3200003, Israel.
  • Huston E; Clinical Simulation Program, University of Wisconsin Hospitals and Clinics, 600 Highland Ave, Madison, WI, 53792, USA.
  • DiMarco S; Clinical Simulation Program, University of Wisconsin Hospitals and Clinics, 600 Highland Ave, Madison, WI, 53792, USA.
  • DiMusto P; Department of Surgery, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Ave, Madison, WI, 53792, USA.
  • Laufer S; Faculty of Data and Decision Sciences, Technion - Israel Institute of Technology, Haifa, 3200003, Israel.
Int J Comput Assist Radiol Surg ; 19(7): 1349-1357, 2024 Jul.
Article in En | MEDLINE | ID: mdl-38748053
ABSTRACT

PURPOSE:

In this paper, we present a novel approach to the automatic evaluation of open surgery skills using depth cameras. This work aims to show that depth cameras achieve results comparable to those of RGB cameras, the modality commonly used for automatic evaluation of open surgery skills. Moreover, depth cameras offer advantages such as robustness to lighting variations and camera positioning, simplified data compression, and enhanced privacy, making them a promising alternative to RGB cameras.

METHODS:

Expert and novice surgeons completed two open suturing simulators. We focused on hand and tool detection and on action segmentation in suturing procedures. YOLOv8 was used for tool detection in both RGB and depth videos, and UVAST and MSTCN++ were used for action segmentation. Our study includes the collection and annotation of a dataset recorded with an Azure Kinect.
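
The abstract does not describe the detection pipeline in detail; the following is a minimal sketch, assuming the ultralytics YOLOv8 API and a depth map rendered to an 8-bit image before detection. The weights file, file names, and depth clipping range are hypothetical, not details from the paper.

```python
# Minimal sketch: YOLOv8 tool detection on a depth frame rendered as an
# 8-bit, 3-channel image. Weights path, file names, and the clipping range
# are assumptions for illustration only.
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n_tools.pt")  # hypothetical weights fine-tuned on surgical tools

def depth_to_image(depth_mm: np.ndarray) -> np.ndarray:
    """Normalize a 16-bit Azure Kinect depth map to a 3-channel 8-bit image."""
    clipped = np.clip(depth_mm, 300, 1500)                 # assumed working range in mm
    norm = ((clipped - 300) / 1200 * 255).astype(np.uint8)
    return cv2.cvtColor(norm, cv2.COLOR_GRAY2BGR)          # replicate to 3 channels

depth_frame = np.load("frame_0001_depth.npy")              # hypothetical depth frame
results = model(depth_to_image(depth_frame))
for box in results[0].boxes:                               # class id, confidence, bbox
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```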

RESULTS:

We demonstrated that object detection and action segmentation on depth video achieve results comparable to those on RGB video. Furthermore, we analyzed 3D hand path length, revealing significant differences between expert and novice surgeons and underscoring the potential of depth cameras for capturing surgical skill. We also investigated the influence of camera angle on measurement accuracy, highlighting the advantage of 3D cameras in providing a more accurate representation of hand movements.
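
For reference, the 3D hand path length metric mentioned above reduces to the cumulative Euclidean distance traveled by a tracked hand keypoint. The sketch below assumes an N x 3 array of hand positions in metres; the array layout, units, and synthetic trajectory are assumptions.

```python
# Minimal sketch of a 3D hand path length computation: the sum of Euclidean
# distances between consecutive tracked hand positions. Units and the example
# trajectory are illustrative assumptions.
import numpy as np

def path_length_3d(positions: np.ndarray) -> float:
    """Total path length of an (N, 3) sequence of 3D hand positions."""
    steps = np.diff(positions, axis=0)                 # (N-1, 3) displacement vectors
    return float(np.linalg.norm(steps, axis=1).sum())  # sum of per-step lengths

# Example: a synthetic, slightly noisy hand trajectory.
traj = np.cumsum(np.random.normal(0.0, 0.005, size=(100, 3)), axis=0)
print(f"3D hand path length: {path_length_3d(traj):.3f} m")
```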

CONCLUSION:

Our research contributes to advancing the field of surgical skill assessment by leveraging depth cameras for more reliable and privacy-preserving evaluations. The findings suggest that depth cameras can be valuable in assessing surgical skills and provide a foundation for future research in this area.

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Video Recording / Clinical Competence Limits: Humans Language: En Journal: Int J Comput Assist Radiol Surg / Int. j. comput. assist. radiol. surg. (Internet) / International journal of computer assisted radiology and surgery (Internet) Year: 2024 Document type: Article