Results 1 - 6 of 6

1.
Plast Reconstr Surg Glob Open; 12(7): e5940, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38957720

ABSTRACT

We introduce a novel technique using augmented reality (AR) on smartphones and tablets, making it possible for surgeons to review perforator anatomy in three dimensions on the go. Autologous breast reconstruction with abdominal flaps remains challenging due to the highly variable anatomy of the deep inferior epigastric artery. Computed tomography angiography has mitigated some, but not all, of these challenges. Previously, volume rendering and various headsets were used to enable better three-dimensional (3D) review for surgeons; however, surgeons have depended on others to provide the 3D imaging data. Leveraging the ubiquity of Apple devices, our approach permits surgeons to review 3D models of deep inferior epigastric artery anatomy, segmented from abdominal computed tomography angiography, directly on their iPhone or iPad. Segmentation can be performed in common radiology software. The models are converted to the Universal Scene Description zipped (USDZ) format, which allows immediate use on Apple devices without third-party software, and can be easily shared through the secure, Health Insurance Portability and Accountability Act-compliant sharing services already provided by most hospitals. Surgeons can simply open the file on their mobile device and explore the model in 3D using the native "object mode," or switch to AR mode to pin the model in their real-world surroundings for intuitive exploration. We believe patient-specific 3D anatomy models are a powerful tool for intuitive understanding and communication of complex perforator anatomy and would be a valuable addition to routine clinical practice and education. With this simple-to-implement, one-click solution on existing devices, we hope to streamline the adoption of AR models by plastic surgeons.
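As a rough illustration of the conversion step, a minimal sketch is shown below. It assumes Apple's `usdzconvert` command-line tool (distributed with Apple's USD Python tools) is installed, and that the segmented model has already been exported from the radiology software as a mesh; the file names are hypothetical placeholders, not from the paper.

```python
import subprocess
from pathlib import Path

# Hypothetical mesh exported from the radiology segmentation software.
src = Path("diea_perforator_model.obj")
dst = src.with_suffix(".usdz")

# usdzconvert packages the mesh into a USDZ archive that an iPhone/iPad can
# open natively in object mode or AR mode, with no third-party app required.
subprocess.run(["usdzconvert", str(src), str(dst)], check=True)
print(f"Wrote {dst}; share it via the hospital's HIPAA-compliant file service.")
```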

2.
Surg Innov; 15533506241262946, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38905568

ABSTRACT

Plastic surgeons routinely use 3D-models in their clinical practice, from 3D-photography and surface imaging to 3D-segmentations from radiological scans. However, these models continue to be viewed on flat 2D screens, which do not enable an intuitive understanding of 3D-relationships and make collaboration with colleagues difficult. The Metaverse has been proposed as a new generation of applications, built on modern Mixed Reality headset technology, that allows remote collaboration on virtual 3D-models in a shared physical-virtual space in real time. We demonstrate the first use of the Metaverse in the context of reconstructive surgery, focusing on preoperative planning discussions and trainee education. Using a HoloLens headset with the Microsoft Mesh application, we performed planning sessions for 4 DIEP-flaps in our reconstructive metaverse on virtual patient-models segmented from routine CT angiography. In these sessions, surgeons discuss perforator anatomy and perforator selection strategies while comprehensively assessing the respective models. We demonstrate the workflow for a one-on-one interaction between an attending surgeon and a trainee in a video featuring both viewpoints as seen through the headset. We believe the Metaverse will provide novel opportunities to use the 3D-models already created in everyday plastic surgery practice in a more collaborative, immersive, accessible, and educational manner.

3.
Plast Reconstr Surg; 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38351515

ABSTRACT

Preoperative CT angiography (CTA) is increasingly performed prior to perforator flap-based reconstruction. However, radiological 2D thin-slices do not allow for intuitive interpretation and translation to intraoperative findings. 3D volume rendering has been used to alleviate the need for mental 2D-to-3D abstraction. Although volume rendering allows for a much easier understanding of anatomy, its utility is currently limited because the skin obstructs the view of critical structures. Using free, open-source software, we introduce a new skin-masking technique that allows surgeons to easily create a segmentation mask of the skin, which can later be used to toggle the skin on and off; the mask can also be used in other rendering applications. We use Cinematic Anatomy for photorealistic volume rendering and interactive exploration of the CTA with and without skin. We present results from using this technique to investigate perforator anatomy in deep inferior epigastric perforator flaps and demonstrate that the skin-masking workflow takes less than 5 minutes. In Cinematic Anatomy, the view onto the abdominal wall, and especially onto the perforators, becomes significantly sharper and more detailed once no longer obstructed by the skin. We perform a virtual, partial muscle dissection to show the intramuscular and submuscular course of the perforators. The skin-masking workflow lets surgeons quickly and easily improve arterial and perforator detail in volume renderings by removing the skin, and the entire workflow can be performed solely with free, open-source software. It can be extended to other perforator flaps without modification.
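As a rough sketch of how such a skin mask can be produced programmatically, the example below uses SimpleITK, a free, open-source library (not necessarily the software used in the paper); the threshold values and file names are hypothetical.

```python
import SimpleITK as sitk

# Hypothetical input: an abdominal CTA volume in NIfTI format.
cta = sitk.ReadImage("cta_abdomen.nii.gz")

# Everything denser than air (roughly -300 HU and above) is body or table.
body = sitk.BinaryThreshold(cta, lowerThreshold=-300.0, upperThreshold=3000.0,
                            insideValue=1, outsideValue=0)

# Keep only the largest connected component (the patient) to drop the table.
body = sitk.RelabelComponent(sitk.ConnectedComponent(body),
                             sortByObjectSize=True) == 1

# Fill internal air so the body mask is solid, then erode it by a few voxels;
# the difference between the solid mask and its eroded version is a skin shell.
body = sitk.BinaryFillhole(body)
inner = sitk.BinaryErode(body, kernelRadius=(3, 3, 3))
skin = sitk.And(body, sitk.Not(inner))

# The mask can now be loaded into the renderer and toggled on and off.
sitk.WriteImage(skin, "skin_mask.nii.gz")
```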

4.
Otol Neurotol; 44(8): e602-e609, 2023 Sep 1.
Article in English | MEDLINE | ID: mdl-37464458

ABSTRACT

OBJECTIVE: To objectively evaluate vestibular schwannomas (VSs) and their spatial relationships with the ipsilateral inner ear (IE) in magnetic resonance imaging (MRI) using deep learning. STUDY DESIGN: Cross-sectional study. PATIENTS: A total of 490 adults with VS, high-resolution MRI scans, and no previous neurotologic surgery. INTERVENTIONS: MRI studies of VS patients were split into training (390 patients) and test (100 patients) sets. A three-dimensional convolutional neural network model was trained to segment VS and IE structures using contrast-enhanced T1-weighted and T2-weighted sequences, respectively. Manual segmentations were used as ground truths. Model performance was evaluated on the test set and on an external set of 100 VS patients from a public data set (Vestibular-Schwannoma-SEG). MAIN OUTCOME MEASURES: Dice score, relative volume error, average symmetric surface distance, 95th-percentile Hausdorff distance, and centroid locations. RESULTS: Dice scores for VS and IE volume segmentations were 0.91 and 0.90, respectively. On the public data set, the model segmented VS tumors with a Dice score of 0.89 ± 0.06 (mean ± standard deviation), relative volume error of 9.8 ± 9.6%, average symmetric surface distance of 0.31 ± 0.22 mm, and 95th-percentile Hausdorff distance of 1.26 ± 0.76 mm. Predicted VS segmentations overlapped with ground truth segmentations in all test subjects. Mean errors of predicted VS volume, VS centroid location, and IE centroid location were 0.05 cm³, 0.52 mm, and 0.85 mm, respectively. CONCLUSIONS: A deep learning system can segment VS and IE structures in high-resolution MRI scans with excellent accuracy. This technology offers promise to improve the clinical workflow for assessing VS radiomics and enhance the management of VS patients.
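The overlap and volume metrics reported here are standard in segmentation evaluation. For reference, a minimal sketch (not the authors' code) of how Dice score and relative volume error are typically computed from binary masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def relative_volume_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Relative volume error as a percentage of the ground-truth volume."""
    v_pred, v_gt = pred.astype(bool).sum(), gt.astype(bool).sum()
    return 100.0 * abs(v_pred - v_gt) / v_gt

# Toy example on random 3D masks; real masks would come from the MRI volumes.
rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.5
gt = rng.random((64, 64, 64)) > 0.5
print(f"Dice: {dice_score(pred, gt):.2f}, "
      f"RVE: {relative_volume_error(pred, gt):.1f}%")
```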


Subjects
Ear, Inner; Neuroma, Acoustic; Adult; Humans; Artificial Intelligence; Neuroma, Acoustic/diagnostic imaging; Cross-Sectional Studies; Magnetic Resonance Imaging/methods
5.
Int J Comput Assist Radiol Surg; 18(11): 2033-2041, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37450175

ABSTRACT

PURPOSE: Middle and inner ear procedures target hearing loss, infections, and tumors of the temporal bone and lateral skull base. Despite advances in surgical techniques, these procedures remain challenging due to limited haptic and visual feedback. Augmented reality (AR) may improve operative safety by allowing the 3D visualization of anatomical structures from preoperative computed tomography (CT) scans on the real intraoperative microscope video feed. The purpose of this work was to develop a real-time CT-augmented stereo microscope system using camera calibration and electromagnetic (EM) tracking. METHODS: A 3D-printed and electromagnetically tracked calibration board was used to compute the intrinsic and extrinsic parameters of the surgical stereo microscope. These parameters were used to establish a transformation between the EM tracker coordinate system and the stereo microscope image space such that any tracked 3D point can be projected onto the left and right images of the microscope video stream. This allowed the augmentation of the microscope feed of a 3D-printed temporal bone with its corresponding CT-derived virtual model. Finally, the calibration board was also used to evaluate the accuracy of the calibration. RESULTS: We evaluated the accuracy of the system by calculating the registration error (RE) in 2D and 3D in a microsurgical laboratory setting. Our calibration workflow achieved an RE of 0.11 ± 0.06 mm in 2D and 0.98 ± 0.13 mm in 3D. In addition, we overlaid a 3D CT model on the microscope feed of a 3D resin-printed model of a segmented temporal bone. The system exhibited low latency and good registration accuracy. CONCLUSION: We present the calibration of an electromagnetically tracked surgical stereo microscope for augmented reality visualization. The calibration method achieved accuracy within a range suitable for otologic procedures. The AR overlay enhances visualization of the surgical field while preserving depth perception.
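To illustrate the core projection step described here, the sketch below uses OpenCV to map an EM-tracked 3D point onto one microscope channel. Every numeric value is a hypothetical placeholder; in the paper, the intrinsics and extrinsics come from the board-based calibration.

```python
import cv2
import numpy as np

# Hypothetical intrinsics of the left microscope channel (from calibration).
K_left = np.array([[2400.0, 0.0, 960.0],
                   [0.0, 2400.0, 540.0],
                   [0.0, 0.0, 1.0]])
dist_left = np.zeros(5)  # assume negligible lens distortion for this sketch

# Hypothetical extrinsics: rigid transform from the EM tracker coordinate
# system into the left camera frame (rotation R, translation t in mm).
R = np.eye(3)
t = np.array([[0.0], [0.0], [150.0]])

# An EM-tracked 3D point on the CT-derived model, in tracker coordinates.
pt_em = np.array([[10.0, -5.0, 120.0]], dtype=np.float64)

rvec, _ = cv2.Rodrigues(R)
pixels, _ = cv2.projectPoints(pt_em, rvec, t, K_left, dist_left)
print(pixels.squeeze())  # 2D pixel location on the left video frame
```

Repeating the projection with the right channel's parameters yields the overlay for both eyes of the stereo feed.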

6.
Int J Comput Assist Radiol Surg; 18(1): 85-93, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35933491

ABSTRACT

PURPOSE: Virtual reality (VR) simulation has the potential to advance surgical education, procedural planning, and intraoperative guidance. "SurgiSim" is a VR platform developed for the rehearsal of complex procedures using patient-specific anatomy, high-fidelity stereoscopic graphics, and haptic feedback. SurgiSim is the first VR simulator to include a virtual operating room microscope. We describe the process of designing and refining the VR microscope user experience (UX) and user interaction (UI) to optimize surgical rehearsal and education. METHODS: Human-centered VR design principles were applied in the design of the SurgiSim microscope to optimize the user's sense of presence. Throughout the UX's development, the team of developers met regularly with surgeons to gather end-user feedback. Supplemental testing was performed on four participants. RESULTS: Through observation and participant feedback, we made iterative design upgrades to the SurgiSim platform. We identified the following key characteristics of the VR microscope UI: overall appearance, hand controller interface, and microscope movement. CONCLUSION: Our design process identified challenges arising from the disparity between VR and physical environments that pertain to microscope education and deployment. These roadblocks were addressed using creative solutions. Future studies will investigate the efficacy of VR surgical microscope training on real-world microscope skills as assessed by validated performance metrics.


Subjects
Simulation Training; Surgeons; Virtual Reality; Humans; Computer Simulation; Surgeons/education; Operating Rooms; Simulation Training/methods; Clinical Competence; User-Computer Interface