Convolutional Neural Network-Based Deep Learning Engine for Mastoidectomy Instrument Recognition and Movement Tracking.
Raymond, Mallory J; Biswal, Biswajit; Pipaliya, Royal M; Rowley, Mark A; Meyer, Ted A.
Affiliations
  • Raymond MJ; Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Jacksonville, USA.
  • Biswal B; Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic Florida, Jacksonville, Florida, USA.
  • Pipaliya RM; Computer Science and Mathematics, South Carolina State University, Orangeburg, South Carolina, USA.
  • Rowley MA; Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Jacksonville, USA.
  • Meyer TA; Department of Otolaryngology-Head and Neck Surgery, University of Arizona, Tucson, Arizona, USA.
Otolaryngol Head Neck Surg; 170(6): 1555-1560, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38520201
ABSTRACT

OBJECTIVE:

To develop a convolutional neural network-based computer vision model to recognize and track 2 mastoidectomy surgical instruments, the drill and the suction-irrigator, from intraoperative video recordings of mastoidectomies.

STUDY DESIGN:

Technological development and model validation.

SETTING:

Academic center.

METHODS:

Ten 1-minute videos of mastoidectomies performed for cochlear implantation by resident surgeons of varying training levels were collected. For each video, consisting of 900 frames, an open-access computer vision annotation tool was used to annotate the drill and suction-irrigator classes with bounding boxes. A mastoidectomy instrument tracking module, which extracts the center coordinates of the bounding boxes, was developed using a feature pyramid network and layered with Detectron, an open-access faster region-based convolutional neural network (Faster R-CNN). Eight videos were used to train the model, and 2 were used for testing. Outcome measures included the Intersection over Union (IoU) ratio, accuracy, and average precision.
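As a rough illustration of the tracking module's bookkeeping, the sketch below runs a Faster R-CNN detector over video frames and reduces each detection to the center of its bounding box. It assumes a Detectron2-style API; the configuration file, weight path, class indices, score threshold, and file names are illustrative placeholders, not details taken from the study.

```python
# Minimal sketch (assumed Detectron2-style setup); class indices for
# "drill" and "suction-irrigator" and all file names are placeholders.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2            # drill, suction-irrigator (assumed)
cfg.MODEL.WEIGHTS = "mastoid_model_final.pth"  # hypothetical fine-tuned weights
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5    # assumed confidence cutoff
predictor = DefaultPredictor(cfg)

tracks = {0: [], 1: []}                        # per-class list of (frame, cx, cy)
cap = cv2.VideoCapture("mastoidectomy_clip.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    instances = predictor(frame)["instances"].to("cpu")
    boxes = instances.pred_boxes.tensor.numpy()    # (N, 4) as x1, y1, x2, y2
    classes = instances.pred_classes.numpy()
    for (x1, y1, x2, y2), cls in zip(boxes, classes):
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # bounding-box center
        tracks[int(cls)].append((frame_idx, cx, cy))
    frame_idx += 1
cap.release()
```

The per-class lists of center coordinates are what a downstream module would turn into stroke direction and distance maps.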

RESULTS:

At an IoU threshold of 0.5, the mean average precision was 99% for the drill and 86% for the suction-irrigator. The model proved capable of generating maps of drill and suction-irrigator stroke direction and distance for the entirety of each video.
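For reference, the IoU criterion used to score detections is a simple ratio computable from two axis-aligned boxes; the plain-Python sketch below illustrates that ratio and the 0.5 cutoff, and the example boxes are made up for illustration.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Illustrative predicted vs. annotated boxes; True means the overlap clears
# the 0.5 threshold at which average precision was reported.
print(iou((10, 10, 50, 50), (15, 15, 55, 55)) >= 0.5)
```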

CONCLUSIONS:

This computer vision model can identify and track the drill and suction-irrigator in intraoperative videos of mastoidectomies performed by residents with excellent precision. It can now be employed to retrospectively study objective mastoidectomy measures of expert and resident surgeons, such as drill and suction-irrigator stroke concentration, economy of motion, speed, and coordination, setting the stage for the characterization of objective expectations for safe and efficient mastoidectomies.
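As a hypothetical follow-on, the objective measures named above can be derived from the per-frame center coordinates the model produces. The sketch below computes path length, mean speed, and a simple economy-of-motion ratio; the track format, pixel units, and the 15 fps frame rate (900 frames per 1-minute video) are assumptions, not the study's actual data structure.

```python
import math

def stroke_metrics(track, fps=15.0):
    """Summarize instrument motion from a list of (frame, cx, cy) centers.

    Returns total path length (pixels), mean speed (pixels/second), and an
    economy-of-motion ratio (net displacement / path length). Units and frame
    rate are illustrative assumptions.
    """
    if len(track) < 2:
        return {"path_length": 0.0, "mean_speed": 0.0, "economy": 1.0}
    path = 0.0
    for (f0, x0, y0), (f1, x1, y1) in zip(track, track[1:]):
        path += math.hypot(x1 - x0, y1 - y0)        # segment-by-segment distance
    duration = (track[-1][0] - track[0][0]) / fps   # elapsed time in seconds
    net = math.hypot(track[-1][1] - track[0][1], track[-1][2] - track[0][2])
    return {
        "path_length": path,
        "mean_speed": path / duration if duration > 0 else 0.0,
        "economy": net / path if path > 0 else 1.0,
    }

# Hypothetical drill track: the center drifts rightward across three frames.
print(stroke_metrics([(0, 100.0, 200.0), (1, 104.0, 201.0), (2, 109.0, 203.0)]))
```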

Full text: 1 | Database: MEDLINE | Main subjects: Video Recording / Neural Networks, Computer / Mastoidectomy / Deep Learning | Limits: Humans | Language: English | Publication year: 2024 | Document type: Article
