Results 1 - 11 of 11
1.
Artif Intell Med; 144: 102641, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37783536

ABSTRACT

Pedicle drilling is a complex and critical spinal surgery task. Detecting breach or penetration of the surgical tool through the cortical wall during pilot-hole drilling is essential to avoid damage to vital anatomical structures adjacent to the pedicle, such as the spinal cord, blood vessels, and nerves. Currently, pedicle drilling is guided by image-based methods that are radiation intensive and limited to preoperative information. This work proposes a new radiation-free breach detection algorithm that leverages a non-visual sensor setup in combination with a deep learning approach. Multiple vibroacoustic sensors, including a contact microphone, a free-field microphone, a tri-axial accelerometer, and a uni-axial accelerometer, were integrated into the setup together with an optical tracking system. Data were collected on four cadaveric human spines, ranging from L5 to T10. An experienced spine surgeon drilled the pedicles relying on optical navigation. A new automatic labeling method based on the tracking data was introduced. The labeled data were subsequently converted to mel-spectrograms and fed to the network, which classified each recording as breach or non-breach (see the sketch after this record). Different sensor types, sensor positions, and their combinations were evaluated. The best breach recall for individual sensors was achieved using contact microphones attached to the dorsal skin (85.8%) and uni-axial accelerometers clamped to the spinous process of the drilled vertebra (81.0%). The best-performing data fusion model combined the latter two sensors and reached a breach recall of 98%. The proposed method shows the great potential of non-visual sensor fusion for avoiding screw misplacement and accidental bone breaches during pedicle drilling, and it could be extended to further surgical applications.


Subjects
Spinal Fusion; Humans; Spinal Fusion/methods; Bone Screws; Neurosurgical Procedures; Tomography, X-Ray Computed/methods
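The core of the pipeline described above, converting sensor windows to mel-spectrograms and classifying them as breach or non-breach, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sample rate, STFT parameters, mel band count, and network architecture are all assumptions.

```python
# Minimal sketch (not the paper's implementation): mel-spectrogram
# extraction plus a tiny CNN for breach / non-breach classification.
# Sample rate, STFT parameters, mel bands, and architecture are assumed.
import librosa
import numpy as np
import torch
import torch.nn as nn

SAMPLE_RATE = 44_100  # assumed sensor sampling rate
N_MELS = 64           # assumed number of mel bands

def to_mel_spectrogram(signal: np.ndarray) -> torch.Tensor:
    """Convert a 1-D sensor window to a log-scaled mel-spectrogram tensor."""
    mel = librosa.feature.melspectrogram(
        y=signal, sr=SAMPLE_RATE, n_fft=1024, hop_length=256, n_mels=N_MELS
    )
    log_mel = librosa.power_to_db(mel, ref=np.max)
    return torch.from_numpy(log_mel).float().unsqueeze(0)  # (1, mels, frames)

class BreachClassifier(nn.Module):
    """Tiny CNN mapping a mel-spectrogram to breach / non-breach logits."""
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # class 0: non-breach, class 1: breach

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))
```

Inference on one window would then be `BreachClassifier()(to_mel_spectrogram(window).unsqueeze(0))`, with the extra `unsqueeze` adding a batch dimension.
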
2.
Sci Rep; 13(1): 5930, 2023 Apr 12.
Article in English | MEDLINE | ID: mdl-37045878

ABSTRACT

Despite the undeniable accuracy advantages of image-guided surgical assistance systems, such systems have not yet fully met surgeons' needs or expectations regarding usability, time efficiency, and integration into the surgical workflow. On the other hand, perceptual studies have shown that presenting independent but causally correlated information via multimodal feedback involving different sensory modalities can improve task performance. This article investigates an alternative method for computer-assisted surgical navigation, introduces a novel sonification methodology for navigated pedicle screw placement, and discusses advanced solutions based on multisensory feedback. The proposed method sonifies alignment tasks in four degrees of freedom (DOF) using frequency modulation synthesis (see the sketch after this record). We compared the accuracy and execution time of the proposed sonification method with visual navigation, which is currently considered the state of the art. We conducted a phantom study in which 17 surgeons executed the pedicle screw placement task in the lumbar spine, guided by either the proposed sonification-based method or the traditional visual navigation method. The results demonstrated that the proposed method is as accurate as the state of the art while reducing the surgeon's need to focus on visual navigation displays, allowing a natural focus on surgical tools and targeted anatomy during task execution.


Subjects
Pedicle Screws; Spinal Fusion; Surgery, Computer-Assisted; Spinal Fusion/methods; Lumbar Vertebrae/surgery; Surgery, Computer-Assisted/methods; Phantoms, Imaging
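The sonification idea, frequency modulation synthesis whose parameters encode alignment error, can be sketched for a single degree of freedom as below. The mapping from deviation to modulation index, the carrier and modulator frequencies, and the normalization are illustrative assumptions; the paper's actual four-DOF mapping is not reproduced here.

```python
# Minimal sketch of FM-synthesis sonification for one degree of freedom:
# the modulation index grows with the alignment deviation, so the tone
# gets audibly rougher the further the tool is off target. The mapping,
# frequencies, and normalization are assumptions, not the paper's values.
import numpy as np

SAMPLE_RATE = 44_100

def fm_tone(deviation: float, duration: float = 0.2,
            carrier_hz: float = 440.0, modulator_hz: float = 110.0) -> np.ndarray:
    """Render y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)) with I from deviation."""
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    index = 10.0 * min(abs(deviation), 1.0)  # assumes deviation normalized to [0, 1]
    return np.sin(2 * np.pi * carrier_hz * t
                  + index * np.sin(2 * np.pi * modulator_hz * t))
```

In a four-DOF setting, each degree of freedom could drive a separate synthesis parameter (for example carrier pitch, modulation index, amplitude, or stereo panning); those assignments are hypothetical, as the abstract does not specify them.
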
3.
J Imaging; 9(2), 2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36826963

ABSTRACT

Translational research aims to turn discoveries from basic science into results that advance patient treatment. The translation of technical solutions into clinical use is a complex, iterative process that involves different stages of design, development, and validation, such as the identification of unmet clinical needs, technical conception, development, verification and validation, regulatory matters, and ethics. For this reason, many promising technical developments at the interface of technology, informatics, and medicine remain research prototypes without finding their way into clinical practice. Augmented reality is a technology that is now making its breakthrough into patient care, even though it has been available for decades. In this work, we explain the translational process for Medical AR devices and present associated challenges and opportunities. To the best of the authors' knowledge, this concept paper is the first to present a guideline for the translation of medical AR research into clinical practice.

4.
J Imaging; 8(11), 2022 Nov 06.
Article in English | MEDLINE | ID: mdl-36354875

ABSTRACT

Correct positioning of the endoscope is crucial for successful hip arthroscopy. Only with adequate alignment can the anatomical target area be visualized and the procedure be successfully performed. Conventionally, surgeons rely on anatomical landmarks such as bone structure, and on intraoperative X-ray imaging, to correctly place the surgical trocar and insert the endoscope to gain access to the surgical site. One factor complicating the placement is deformable soft tissue, as it can obscure important anatomical landmarks. In addition, the commonly used endoscopes with an angled camera complicate hand-eye coordination and, thus, navigation to the target area. Adjusting for an incorrectly positioned endoscope prolongs surgery time, requires a further incision, and increases the radiation exposure as well as the risk of infection. In this work, we propose an augmented reality system to support endoscope placement during arthroscopy. Our method augments a tracked endoscope with a virtual frustum that indicates the reachable working volume (sketched below). This is further combined with an in situ visualization of the patient anatomy to improve perception of the target area. For this purpose, we highlight the anatomy that is visible in the endoscopic camera frustum and use an automatic colorization method to improve spatial perception. Our system was implemented and visualized on a head-mounted display. The results of our user study indicate the benefit of the proposed system compared to baseline positioning without additional support, including faster alignment, lower positioning error, and reduced mental effort. The proposed approach might aid in the positioning of an angled endoscope, and may result in better access to the surgical area, reduced surgery time, less patient trauma, and less X-ray exposure during surgery.
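A minimal sketch of the frustum geometry behind such a visualization follows: the far-plane corners of a pinhole-camera frustum are computed from intrinsics and rotated to model the angled endoscope optics. The 30-degree tilt and the pinhole model are assumptions for illustration; the paper's actual rendering pipeline is not shown.

```python
# Minimal sketch, assuming a pinhole camera model: far-plane corners of
# the endoscope camera frustum, rotated to model the angled optics. The
# intrinsics and the 30-degree tilt are illustrative assumptions.
import numpy as np

def frustum_corners(fx: float, fy: float, cx: float, cy: float,
                    width: int, height: int, depth: float,
                    tilt_deg: float = 30.0) -> np.ndarray:
    """Return the four far-plane frustum corners in the camera frame."""
    tilt = np.radians(tilt_deg)
    # Rotation about the x-axis stands in for the angled endoscope optics.
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(tilt), -np.sin(tilt)],
                    [0.0, np.sin(tilt), np.cos(tilt)]])
    corners = []
    for u, v in [(0, 0), (width, 0), (width, height), (0, height)]:
        ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # back-projected pixel
        corners.append(rot @ (ray * depth))
    return np.stack(corners)  # connect these to the origin to draw the frustum
```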

5.
J Imaging; 8(11), 2022 Nov 09.
Article in English | MEDLINE | ID: mdl-36354878

ABSTRACT

Ultrasound education traditionally involves theoretical and practical training on patients or on simulators; however, difficulty accessing training equipment during the COVID-19 pandemic has highlighted the need for home-based training systems. Because of the prohibitive cost of ultrasound probes, few medical students have access to the equipment required for at-home training. Our proof-of-concept study focused on the development and assessment of the technical feasibility and training performance of an at-home training solution to teach the basics of interpreting and generating ultrasound data. The training solution relies on monitor-based augmented reality for displaying virtual content and requires only a marker printed on paper and a computer with a webcam. From the webcam video, we performed body pose estimation to track the student's limbs and used surface tracking of printed fiducials to track the position of a simulated ultrasound probe. The novelty of our work lies in its combination of printed markers with marker-free body pose tracking. In a small user study, four ultrasound lecturers evaluated the training quality with a questionnaire and indicated the potential of our system. The strength of our method is that it allows students to learn the manipulation of an ultrasound probe through the simulated probe combined with the tracking system, and to learn how to read ultrasound images in B-mode and Doppler mode.
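A minimal sketch of the printed-fiducial tracking component might look as follows, using OpenCV's ArUco module to recover the simulated probe's pose from a webcam frame. This assumes the classic ArUco API of opencv-contrib-python (pre-4.7); the dictionary, marker size, and camera intrinsics are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: pose of a printed fiducial (simulated probe) from a
# webcam frame. Assumes the classic ArUco API of opencv-contrib-python
# (pre-4.7); marker dictionary, size, and intrinsics are assumptions.
import cv2
import numpy as np

MARKER_SIZE = 0.05  # assumed marker edge length in meters
ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def probe_pose(frame, camera_matrix, dist_coeffs):
    """Return (rvec, tvec) of the first detected marker, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
    if ids is None:
        return None
    h = MARKER_SIZE / 2.0
    # Marker corners in the marker frame, ordered as ArUco reports them.
    object_pts = np.array([[-h, h, 0], [h, h, 0], [h, -h, 0], [-h, -h, 0]],
                          dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_pts, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```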

6.
J Imaging; 9(1), 2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36662102

ABSTRACT

Three decades after the first work on Medical Augmented Reality (MAR) was presented to the international community, and ten years after the deployment of the first MAR solutions into operating rooms, its exact definition, basic components, systematic design, and validation still lack a detailed discussion. This paper defines the basic components of any Augmented Reality (AR) solution and extends them to exemplary Medical Augmented Reality Systems (MARS). We use some of the original MARS applications developed at the Chair for Computer Aided Medical Procedures, deployed into medical schools for teaching anatomy and into operating rooms for telemedicine and surgical guidance over the last decades, to identify the corresponding basic components. The paper does not discuss all past or existing solutions; it only aims to define the principal components, discuss the particular domain modeling for MAR and its design-development-validation process, and provide exemplary cases through past in-house developments of such solutions.

7.
Int J Comput Assist Radiol Surg; 16(5): 799-808, 2021 May.
Article in English | MEDLINE | ID: mdl-33881732

ABSTRACT

PURPOSE: Tracking of tools and surgical activity is becoming increasingly important in the context of computer-assisted surgery. In this work, we present a data generation framework, a dataset, and baseline methods to facilitate further research on markerless hand and instrument pose estimation in realistic surgical scenarios. METHODS: We developed a rendering pipeline to create inexpensive and realistic synthetic data for model pretraining. Subsequently, we propose a pipeline to capture and label real data with hand and object pose ground truth in an experimental setup to gather high-quality real data. We furthermore present three state-of-the-art RGB-based pose estimation baselines. RESULTS: We evaluate three baseline models on the proposed datasets. The best-performing baseline achieves an average tool 3D vertex error of 16.7 mm on synthetic data and 13.8 mm on real data (the metric is sketched after this record), which is comparable to the state of the art in RGB-based hand/object pose estimation. CONCLUSION: To the best of our knowledge, we propose the first synthetic and real data generation pipelines to generate hand and object pose labels for open surgery. We present three baseline models for RGB-based object and hand pose estimation. Our realistic synthetic data generation pipeline may help overcome the data bottleneck in the surgical domain and can easily be transferred to other medical applications.


Subjects
Deep Learning; Hand/diagnostic imaging; Imaging, Three-Dimensional/methods; Surgery, Computer-Assisted/methods; Algorithms; Calibration; Humans; Operating Rooms; Orthopedics/methods; Reproducibility of Results
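The headline metric of this record, the average 3D vertex error, is simply the mean Euclidean distance between corresponding predicted and ground-truth vertices; a minimal sketch, assuming pre-aligned (N, 3) vertex arrays in millimeters:

```python
# Minimal sketch of the reported metric: mean Euclidean distance between
# corresponding predicted and ground-truth 3D vertices, in millimeters.
import numpy as np

def mean_vertex_error_mm(pred_vertices: np.ndarray,
                         gt_vertices: np.ndarray) -> float:
    """pred_vertices, gt_vertices: aligned (N, 3) arrays in millimeters."""
    return float(np.linalg.norm(pred_vertices - gt_vertices, axis=1).mean())
```
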
9.
Sci Rep; 11(1): 3993, 2021 Feb 17.
Article in English | MEDLINE | ID: mdl-33597615

ABSTRACT

In this work, we developed and validated a computer method capable of robustly detecting drill breakthrough events, showing the potential of deep learning-based acoustic sensing for surgical error prevention. Bone drilling is an essential part of orthopedic surgery and carries a high risk of injuring vital structures when over-drilling into adjacent soft tissue. We acquired a dataset of structure-borne audio recordings of drill breakthrough sequences with custom piezo contact microphones in an experimental setup using six human cadaveric hip specimens. We then developed a deep learning-based method for the automated detection of drill breakthrough events in a fast and accurate fashion (a windowed streaming detection loop is sketched after this record). We evaluated the proposed network regarding breakthrough detection sensitivity and latency. The best-performing variant yields a sensitivity of [Formula: see text]% for drill breakthrough detection with a total execution time of 139.29 [Formula: see text]. The validation and performance evaluation of our solution demonstrates promising results for surgical error prevention through automated acoustic-based drill breakthrough detection in a realistic experiment, while being many times faster than a surgeon's reaction time. Furthermore, our proposed method represents an important step toward the translation of acoustic-based breakthrough detection to surgical use.


Subjects
Artificial Intelligence; Image Processing, Computer-Assisted/methods; Minimally Invasive Surgical Procedures/methods; Orthopedic Procedures/methods; Surgery, Computer-Assisted/methods; Animals; Bone and Bones/surgery; Cadaver; Humans; Microscopy, Acoustic; Models, Biological; Orthopedics
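A windowed streaming detection loop of the kind this record implies can be sketched as follows. The window and hop sizes, the decision threshold, and the definition of detection time are assumptions for illustration; `classify` stands in for any trained model.

```python
# Minimal sketch of streaming breakthrough detection: a fixed-length
# window slides over the audio stream and each window is classified;
# latency is the gap between the true event and the first positive
# window. Window/hop sizes and the 0.5 threshold are assumptions.
from typing import Callable, Optional
import numpy as np

SAMPLE_RATE = 44_100
WINDOW = int(0.05 * SAMPLE_RATE)   # assumed 50 ms analysis window
HOP = int(0.01 * SAMPLE_RATE)      # assumed 10 ms hop

def detect_breakthrough(stream: np.ndarray,
                        classify: Callable[[np.ndarray], float]) -> Optional[float]:
    """Return the time (s) of the first positive window, or None."""
    for start in range(0, len(stream) - WINDOW + 1, HOP):
        if classify(stream[start:start + WINDOW]) > 0.5:
            return (start + WINDOW) / SAMPLE_RATE
    return None
```
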
10.
Int J Comput Assist Radiol Surg; 15(5): 771-779, 2020 May.
Article in English | MEDLINE | ID: mdl-32323212

ABSTRACT

PURPOSE: Minimally invasive surgery (MIS) has become the standard for many surgical procedures, as it minimizes trauma, reduces infection rates, and shortens hospitalization. However, the manipulation of objects in the surgical workspace can be difficult due to the unintuitive handling of instruments and a limited range of motion. Apart from the advantages of robot-assisted systems, such as an augmented view or improved dexterity, both robotic and MIS techniques introduce drawbacks such as limited haptic perception and a heavy reliance on visual perception. METHODS: To address the above-mentioned limitations, a perception study was conducted to investigate whether the transmission of intra-abdominal acoustic signals can potentially improve perception during MIS. To investigate whether these acoustic signals can serve as a basis for further automated analysis, a large audio dataset capturing the application of electrosurgery on different types of porcine tissue was acquired. A sliding-window technique was applied to compute log-mel spectrograms, which were fed to a pre-trained convolutional neural network for feature extraction. A fully connected layer was trained on the intermediate feature representation to classify instrument-tissue interaction (see the sketch after this record). RESULTS: The perception study revealed that acoustic feedback has potential to improve perception during MIS and to serve as a basis for further automated analysis. The proposed classification pipeline yielded excellent performance for four types of instrument-tissue interaction (muscle, fascia, liver, and fatty tissue), achieving top-1 accuracies of up to 89.9%. Moreover, our model is able to distinguish electrosurgical operation modes with an overall classification accuracy of 86.40%. CONCLUSION: Our proof of principle indicates great application potential for guidance systems in MIS, such as controlled tissue resection. Supported by a pilot perception study with surgeons, we believe that utilizing audio signals as an additional information channel has great potential to improve surgical performance and to partly compensate for the loss of haptic feedback.


Subjects
Acoustics; Minimally Invasive Surgical Procedures/methods; Robotic Surgical Procedures/methods; Animals; Feedback; Liver/surgery; Muscle, Skeletal/surgery; Neural Networks, Computer; Swine
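The classification pipeline described above, a frozen pre-trained CNN as feature extractor plus a trainable fully connected layer, can be sketched as follows. The ResNet-18 backbone, feature dimensionality, and three-channel tiling of the spectrogram are assumptions; the abstract does not name the backbone used.

```python
# Minimal sketch, assuming a ResNet-18 backbone (torchvision >= 0.13):
# the pretrained CNN is frozen as a feature extractor and only a fully
# connected layer is trained on the 512-dim features.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # muscle, fascia, liver, fatty tissue (per the abstract)

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()      # expose 512-dim features instead of logits
backbone.eval()                  # keep batch-norm statistics fixed
for p in backbone.parameters():
    p.requires_grad = False      # freeze the pretrained extractor

classifier = nn.Linear(512, NUM_CLASSES)  # the only trained component

def predict(log_mel: torch.Tensor) -> torch.Tensor:
    """log_mel: (B, 3, H, W) spectrograms tiled to three channels."""
    with torch.no_grad():
        features = backbone(log_mel)
    return classifier(features)
```
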
11.
Int J Comput Assist Radiol Surg; 15(6): 973-980, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32342258

ABSTRACT

PURPOSE: We propose a novel methodology for generating synthetic X-rays from 2D RGB images. This method creates accurate simulations for use in non-diagnostic visualization problems where the only input comes from a generic camera. Traditional methods are restricted to running simulation algorithms on 3D computer models. To overcome this restriction, we propose a method for synthetic X-ray generation using conditional generative adversarial networks (CGANs). METHODS: We create a custom synthetic X-ray dataset generator that produces image triplets of X-ray images, pose images, and RGB images of natural hand poses sampled from the NYU hand pose dataset. This dataset is used to train two general-purpose CGAN networks, pix2pix and CycleGAN, as well as our novel architecture, pix2xray, which expands upon the pix2pix architecture to feed the hand pose into the network (the conditional-GAN objective is sketched after this record). RESULTS: Our results demonstrate that our pix2xray architecture outperforms both pix2pix and CycleGAN in producing higher-quality X-ray images. Our approach achieves higher similarity metrics, with pix2pix coming in second and CycleGAN producing the worst results. Our network performs better in difficult cases involving heavy occlusion due to occluded poses or large rotations. CONCLUSION: Overall, our work establishes a baseline showing that synthetic X-rays can be simulated from 2D RGB input. We establish the need for additional data, such as the hand pose, to produce clearer results, and show that future research must focus on more specialized architectures to improve overall image clarity and structure.


Subjects
Image Processing, Computer-Assisted/methods; Radiography/methods; X-Rays; Algorithms; Computer Simulation; Humans
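The pix2pix-style conditional-GAN objective underlying these models can be sketched as follows: the generator is trained with an adversarial term plus an L1 reconstruction term against the paired ground-truth X-ray. The architectures are omitted, and the L1 weight of 100 is the default from the original pix2pix paper, assumed rather than taken from this work.

```python
# Minimal sketch of a pix2pix-style generator objective: adversarial
# loss plus an L1 term against the paired target X-ray. LAMBDA_L1 = 100
# is the original pix2pix default, assumed rather than taken from this work.
import torch
import torch.nn as nn

adversarial = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
LAMBDA_L1 = 100.0

def generator_loss(disc_logits_fake: torch.Tensor,
                   fake_xray: torch.Tensor,
                   real_xray: torch.Tensor) -> torch.Tensor:
    """Fool the discriminator while staying close to the paired target."""
    gan_term = adversarial(disc_logits_fake, torch.ones_like(disc_logits_fake))
    return gan_term + LAMBDA_L1 * l1(fake_xray, real_xray)
```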