Results 1 - 16 of 16
1.
Sensors (Basel) ; 24(2)2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38257585

ABSTRACT

This paper proposes a method for generating dynamic virtual fixtures with real-time 3D image feedback to facilitate human-robot collaboration in medical robotics. Seamless shared control in a dynamic environment, like that of a surgical field, remains challenging despite extensive research on collaborative control and planning. To address this problem, our method dynamically creates virtual fixtures to guide the manipulation of a trocar-placing robot arm using the force field generated by point cloud data from an RGB-D camera. Additionally, the "view scope" concept selectively determines the region for computational points, thereby reducing computational load. In a phantom experiment for robot-assisted port incision in minimally invasive thoracic surgery, our method demonstrates substantially improved accuracy for port placement, reducing error and completion time by 50% (p = 1.06 × 10^-2) and 35% (p = 3.23 × 10^-2), respectively. These results suggest that our proposed approach is promising in improving surgical human-robot collaboration.
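The force-field fixture described in this abstract can be illustrated with a toy sketch: a repulsive force on the tool tip, summed from nearby RGB-D point cloud samples. This is a hedged illustration only — the function name, the linear falloff, and the `radius`/`gain` parameters are assumptions, not the authors' formulation:

```python
import numpy as np

def repulsive_force(tool_tip, cloud, radius=0.05, gain=1.0):
    """Sum repulsive contributions from cloud points within `radius`
    of the tool tip; magnitude decays linearly with distance
    (0 at the boundary, `gain` at contact)."""
    diffs = tool_tip - cloud                        # vectors from points to tip
    dists = np.linalg.norm(diffs, axis=1)
    near = (dists > 1e-9) & (dists < radius)
    if not np.any(near):
        return np.zeros(3)
    dirs = diffs[near] / dists[near, None]          # unit vectors pushing tip away
    mags = gain * (radius - dists[near]) / radius   # linear falloff
    return (dirs * mags[:, None]).sum(axis=0)
```

A "view scope" as described in the abstract would then amount to restricting `cloud` to points inside a region of interest before calling this function, which is what reduces the computational load.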


Subject(s)
Robotics , Thoracic Surgery , Humans , Feedback , Minimally Invasive Surgical Procedures , Imaging Phantoms
2.
Sensors (Basel) ; 23(24)2023 Dec 16.
Article in English | MEDLINE | ID: mdl-38139718

ABSTRACT

Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functionality breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could improve their performance in IGS several-fold. The goal of this narrative review is to organize the key components of IGS in the aspects of medical image processing and visualization with new perspectives and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field, up until mid-2022. This survey systematically summarizes the basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. Further, we hope that this survey will shed some light on the future of IGS in the face of the challenges and opportunities facing research in medical image processing and visualization.


Subject(s)
Augmented Reality , Computer-Assisted Surgery , Virtual Reality , Computer-Assisted Surgery/methods , Computer-Assisted Image Processing
3.
Sensors (Basel) ; 23(20)2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37896597

ABSTRACT

Microsurgical techniques have been widely utilized in various surgical specialties, such as ophthalmology, neurosurgery, and otolaryngology, which require intricate and precise surgical tool manipulation on a small scale. In microsurgery, operations on delicate vessels or tissues demand exceptional skill from surgeons, which leads to a steep learning curve and lengthy training before surgeons can perform microsurgical procedures with quality outcomes. The microsurgery robot (MSR), which can improve surgeons' operative skills through various functions, has received extensive research attention over the past three decades. Many review papers have summarized the research on MSR for specific surgical specialties; however, an in-depth review of the relevant technologies used in MSR systems is limited in the literature. This review details the technical challenges in microsurgery and systematically summarizes the key technologies in MSR from a developmental perspective: from basic structural mechanism design, to perception and human-machine interaction methods, and further to the ability to achieve a certain level of autonomy. By presenting and comparing the methods and technologies in this cutting-edge research, this paper aims to provide readers with a comprehensive understanding of the current state of MSR research and to identify potential directions for future development.


Subject(s)
Neurosurgery , Robotics , Humans , Robotics/methods , Microsurgery/education , Neurosurgical Procedures , Clinical Competence
4.
Surg Endosc ; 30(9): 4136-49, 2016 09.
Article in English | MEDLINE | ID: mdl-26659243

ABSTRACT

BACKGROUND: Surgical navigation technology directed at fetoscopic procedures is relatively underdeveloped compared with other forms of endoscopy. The narrow fetoscopic field of view and the vast vascular network on the placenta make examination and photocoagulation treatment of twin-to-twin transfusion syndrome challenging. Though ultrasonography is used for intraoperative guidance, its navigational ability is not fully exploited. This work aims to integrate 3D ultrasound imaging and endoscopic vision seamlessly for placental vasculature mapping through a self-contained framework without external navigational devices. METHODS: This is achieved through the development, integration, and experimental evaluation of novel navigational modules. Firstly, a framework design that addresses the current limitations based on identified gaps is conceptualized. Secondly, navigational modules are integrated, including (1) ultrasound-based localization, (2) image alignment, and (3) vision-based tracking to update the scene texture map. This updated texture map is projected onto an ultrasound-constructed 3D model for photorealistic texturing of the 3D scene, creating a panoramic view from the moving fetoscope. In addition, a collaborative scheme for the integration of the modular workflow system is proposed to schedule updates in a systematic fashion. Finally, experiments are carried out to evaluate each modular variation and the integrated collaborative scheme of the framework. RESULTS: The modules and the collaborative scheme are evaluated through a series of phantom experiments with controlled trajectories for repeatability. The collaborative framework demonstrated the best accuracy (5.2% RMS error) compared with all three single-module variations. Validation on an ex vivo monkey placenta shows visual continuity of the freehand fetoscopic panorama.
CONCLUSIONS: The proposed collaborative framework and the evaluation study of its variations provide analytical insights for effective integration of ultrasonography and endoscopy. This contributes to the development of navigation techniques in fetoscopic procedures and can potentially be extended to other applications in intraoperative imaging.


Subject(s)
Fetoscopy/methods , Three-Dimensional Imaging/methods , Imaging Phantoms , Placenta/blood supply , Placenta/diagnostic imaging , Computer-Assisted Surgery/methods , Prenatal Ultrasonography/methods , Endoscopes , Female , Humans , Pregnancy
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3697-3700, 2021 11.
Article in English | MEDLINE | ID: mdl-34892039

ABSTRACT

Cell morphological analysis has a great impact on our understanding of cell biology. It is, however, technically challenging to capture the complete process of the cell cycle under microscope inspection. Using convolutional long short-term memory (ConvLSTM) networks, this paper proposes a comprehensive visualization method for cell cycles by retro-reconstructing the preceding frames that were not captured. Results suggest that this method has the potential to overcome existing technical bottlenecks in image acquisition of cellular processes and hence facilitate cell analysis. Clinical Relevance - This model allows back-tracing to complete the visualization of cellular processes from a short segment of microscope-acquired cellular changes, providing a starting point for exploring applications in predicting or backtracking unknown cellular processes.


Subject(s)
Long-Term Memory , Neural Networks (Computer) , Cell Physiological Phenomena
6.
Zookeys ; 1021: 19-35, 2021.
Article in English | MEDLINE | ID: mdl-33716540

ABSTRACT

A new species, Euxaldar daweishanensis Yang, Chang & Chen, sp. nov., is described and illustrated from southwestern China. The female genitalia of the genus Euxaldar are described and presented for the first time. A checklist and key to the known species of the genus are provided. A revised molecular phylogenetic analysis of the family Issidae, based on combined partial sequences of 18S, 28S, COI, and Cytb, is provided using both maximum likelihood and Bayesian inference analyses.

7.
Mitochondrial DNA B Resour ; 5(3): 2619-2620, 2020 Jul 07.
Article in English | MEDLINE | ID: mdl-33457883

ABSTRACT

In this study, we sequenced and annotated the complete mitochondrial genome (mitogenome) of Hemisphaerius rufovarius (Hemiptera: Fulgoroidea: Issidae) for the first time. The mitogenome is 15,955 bp (GenBank No. MT210096) and includes 13 PCGs, 2 rRNAs, 22 tRNAs, and one putative control region (D-loop). The AT content of this mitogenome is 78.3% (A 47.7%, T 30.6%, C 13.3%, and G 8.4%). Most of the PCGs start with ATN or TTG (nad5) and end with TAN or a single T. The phylogenetic tree showed a close relationship among the families Issidae, Flatidae, and Ricaniidae.
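The reported base composition (and hence the AT bias) is a simple per-nucleotide tally; a minimal sketch is shown below. The input sequence is illustrative only, not the MT210096 record:

```python
def base_composition(seq):
    """Return the percentage of each nucleotide in `seq`,
    rounded to one decimal place, as reported in mitogenome papers."""
    seq = seq.upper()
    n = len(seq)
    return {base: round(100 * seq.count(base) / n, 1) for base in "ATCG"}
```

For the published figures, `base_composition` over the full 15,955 bp sequence would yield A 47.7 and T 30.6, summing to the stated 78.3% AT content.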

8.
Insects ; 11(12)2020 Dec 17.
Article in English | MEDLINE | ID: mdl-33348760

ABSTRACT

Although many hypotheses have been proposed to explain the mechanisms underlying large-scale richness patterns, their environmental determinants are still poorly understood, particularly in insects. Here, we tested the relative contributions of seven hypotheses previously proposed to explain planthopper richness patterns in China. The richness patterns were visualized at a 1° × 1° grid size, using 14,722 distribution records for 1335 planthoppers. We used ordinary least squares and spatial error simultaneous autoregressive models to examine the relationships between richness and single environmental variables, and employed model averaging to assess the relative roles of the environmental variables. Species richness was unevenly distributed, with high species numbers occurring in the central and southern mountainous areas. The change in mean annual temperature since the Last Glacial Maximum was the most important factor for richness patterns, followed by mean annual temperature and net primary productivity. Therefore, the historical climate stability, ambient energy, and productivity hypotheses were strongly supported, but orogenic processes and geological isolation may also play a vital role.
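The abstract pairs ordinary least squares with spatial error simultaneous autoregressive (SAR) models; only the plain OLS half can be sketched compactly. The spatial autoregressive error term is omitted here, and the variable setup is hypothetical:

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares: return [intercept, slopes...] for y ~ X,
    where X is (n_gridcells, n_environmental_variables)."""
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

In a study like this, each row of `X` would hold one grid cell's environmental variables (e.g. temperature change since the Last Glacial Maximum) and `y` the cell's species richness; the SAR model additionally models spatial autocorrelation in the residuals, which plain OLS ignores.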

9.
Zookeys ; 861: 29-41, 2019.
Article in English | MEDLINE | ID: mdl-31333322

ABSTRACT

Two new species of the tribe Hemisphaeriini, Ceratogergithus brachyspinus Yang & Chen, sp. nov. (Yunnan) and Neohemisphaerius clavatus Yang & Chen, sp. nov. (Guizhou), are described and illustrated. A checklist of Hemisphaeriini genera is provided. The generic characteristics of the genera Ceratogergithus Gnezdilov, 2017 and Neohemisphaerius Chen, Zhang & Chang, 2014 are redefined. Checklists and keys to the species of each genus are given.

10.
Int J Med Robot ; 13(2)2017 Jun.
Article in English | MEDLINE | ID: mdl-27283505

ABSTRACT

BACKGROUND: Oral and maxillofacial surgery has not benefited from image-guidance techniques owing to limitations in image registration. METHODS: A real-time markerless image registration method is proposed by integrating a shape-matching method into a 2D tracking framework. The image registration is performed by matching the patient's teeth model with intraoperative video to obtain its pose. The resulting pose is used to overlay relevant models from the same CT space on the camera video for augmented reality. RESULTS: The proposed system was evaluated on mandible/maxilla phantoms, a volunteer, and clinical data. Experimental results show that the target overlay error is about 1 mm, and the registration update rate is 3-5 frames per second with a 4K camera. CONCLUSIONS: The significance of this work lies in its simplicity in the clinical setting and its seamless integration into the current medical procedure with satisfactory response time and overlay accuracy. Copyright © 2016 John Wiley & Sons, Ltd.
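A target overlay error like the ~1 mm figure reported here is commonly computed as a mean distance between projected model landmarks and their detected counterparts in the image or scene. The following is a generic sketch of such a metric, not the paper's exact evaluation protocol:

```python
import numpy as np

def overlay_error(projected, detected):
    """Mean Euclidean distance (same units as input, e.g. mm) between
    projected model landmarks and their detected counterparts.
    Both arrays are (n_landmarks, dim) in corresponding order."""
    return float(np.mean(np.linalg.norm(projected - detected, axis=1)))
```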


Subject(s)
Jaws/diagnostic imaging , Oral Surgical Procedures/methods , Orthognathic Surgical Procedures , Subtraction Technique , Computer-Assisted Surgery/methods , X-Ray Computed Tomography/methods , User-Computer Interface , Humans , Imaging Phantoms , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity , Video Recording/methods
11.
Int J Med Robot ; 12(3): 375-86, 2016 Sep.
Article in English | MEDLINE | ID: mdl-26443691

ABSTRACT

BACKGROUND: Visualization of the vast placental vasculature is crucial in fetoscopic laser photocoagulation for twin-to-twin transfusion syndrome treatment. However, vasculature mosaicking is challenging due to the fluctuating imaging conditions during fetoscopic surgery. METHOD: A scene-adaptive, feature-based approach for image correspondence in free-hand endoscopic placental video is proposed. It contributes to existing techniques by introducing a failure detection method based on statistical attributes of the feature distribution, and an updating mechanism that self-tunes parameters to recover from registration failures. RESULTS: Validations on endoscopic image sequences of a phantom and a monkey placenta were carried out to demonstrate mismatch recovery. In two 100-frame sequences, automatic self-tuned results improved by 8% compared with manual experience-based tuning, with only a slight 2.5% deterioration against exhaustive tuning (the gold standard). CONCLUSION: This scene-adaptive image correspondence approach, which is not restricted to a set of generalized parameters, is suitable for applications associated with dynamically changing imaging conditions. Copyright © 2015 John Wiley & Sons, Ltd.
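A failure detector built on "statistical attributes of the feature distribution" might look like the sketch below. The specific attributes (match count, spatial spread) and the thresholds are illustrative assumptions; the abstract does not state which statistics the authors actually used:

```python
import numpy as np

def registration_failed(matches, min_count=20, min_spread=10.0):
    """Flag a likely registration failure when too few feature matches
    survive, or when the surviving matches cluster too tightly to
    constrain the transform. `matches` is an (N, 2) array of matched
    keypoint positions in pixels."""
    if len(matches) < min_count:
        return True                        # too few correspondences
    spread = matches.std(axis=0)           # per-axis spatial spread
    return bool(np.any(spread < min_spread))
```

A self-tuning mechanism of the kind described would then react to a `True` result by relaxing detector or matcher parameters and retrying, rather than propagating a bad registration into the mosaic.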


Subject(s)
Fetoscopy/methods , Placenta/blood supply , Computer-Assisted Surgery , Animals , Female , Haplorhini , Humans , Pregnancy
12.
Comput Med Imaging Graph ; 40: 147-59, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25465067

ABSTRACT

Autostereoscopic 3D image overlay for augmented reality (AR)-based surgical navigation has been studied and reported extensively. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate a 3D image's geometric accuracy, have not been addressed. This paper proposes a graphics processing unit (GPU)-based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of 3D image rendering performance at a 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability.


Subject(s)
Computer-Assisted Image Interpretation/methods , Three-Dimensional Imaging/methods , Oral Surgical Procedures/methods , Automated Pattern Recognition/methods , Computer-Assisted Surgery/methods , User-Computer Interface , Algorithms , Calibration/standards , Computer Systems , Humans , Image Enhancement/methods , Image Enhancement/standards , Computer-Assisted Image Interpretation/standards , Three-Dimensional Imaging/standards , Oral Surgical Procedures/standards , Automated Pattern Recognition/standards , Reproducibility of Results , Sensitivity and Specificity , Computer-Assisted Surgery/standards
13.
Int J Med Robot ; 11(2): 223-34, 2015 Jun.
Article in English | MEDLINE | ID: mdl-24801120

ABSTRACT

BACKGROUND: This study presents a tracker-less image-mapping framework for surgical navigation, motivated by the clinical need for intuitive visual guidance during minimally invasive fetoscopic surgery. METHODS: A navigation framework mapping 2D endoscopic vision to a 3D ultrasound image model is proposed. It maps an endoscopic image onto a 3D placenta model through a one-time ultrasound image-based localization method, followed by a series of concurrent image alignments and texture mapping of the untracked endoscopic video stream. RESULTS: The mean absolute error of our ultrasound image-based localization method was (1.63 mm, 0.93°). Simulation analysis reveals an upper-bound mapping performance with a mean error of 1.53 mm. In a phantom experiment, the overall mapping performance approaches this accuracy, achieving a mean absolute error of 2 mm, thereby supporting the feasibility of the method. CONCLUSION: This novel integration of intraoperative visual guidance can contribute to innovative fusions of image-guidance techniques for effective navigation in minimally invasive fetoscopic surgery.


Subject(s)
Fetoscopy/methods , Three-Dimensional Imaging , Placenta/diagnostic imaging , Placenta/surgery , Computer Simulation , Female , Fetofetal Transfusion/diagnostic imaging , Fetofetal Transfusion/surgery , Humans , Minimally Invasive Surgical Procedures/methods , Anatomic Models , Pregnancy , Ultrasonography
14.
IEEE Trans Biomed Eng ; 61(4): 1295-304, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24658253

ABSTRACT

Computer-assisted oral and maxillofacial surgery (OMS) has been evolving rapidly over the past decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and the instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial or reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror, using image registration and IP-camera registration, to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid on the real one for an augmented display. The 3-D images present both stereo and motion parallax, from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm.


Subject(s)
Operative Dentistry/methods , Three-Dimensional Imaging/methods , User-Computer Interface , Head/anatomy & histology , Humans , Three-Dimensional Imaging/instrumentation , Biological Models , Imaging Phantoms , Dental Photography
15.
Ann Acad Med Singap ; 40(5): 231-6, 2011 May.
Article in English | MEDLINE | ID: mdl-21678014

ABSTRACT

Radiofrequency ablation is the most widely used local ablative therapy for both primary and metastatic liver tumours. However, it has limited application in the treatment of large tumours (>3 cm) and multicentric tumours. In recent years, many strategies have been developed to extend the application of radiofrequency ablation to large tumours. A promising approach is to take advantage of the rapid advancement of imaging and robotic technologies to construct an integrated surgical navigation and medical robotic system. This paper presents a review of existing surgical navigation methods and medical robots. We also introduce our current prototype, the Transcutaneous Robot-assisted Ablation-device Insertion Navigation System (TRAINS). The clinical viability of this integrated navigation and robotic system for large and multicentric tumours is demonstrated in animal experiments.


Subject(s)
Catheter Ablation/instrumentation , Liver Neoplasms/surgery , Liver , Robotics , Computer-Assisted Surgery/instrumentation , Operative Surgical Procedures/methods , Catheter Ablation/methods , Humans , Three-Dimensional Imaging , Liver Neoplasms/pathology , Liver Neoplasms/therapy , Computer-Assisted Surgery/methods
16.
Article in English | MEDLINE | ID: mdl-22255346

ABSTRACT

Laparoscopic surgery poses significant hand-eye coordination challenges to the surgeon. To improve their proficiency beyond the limited exposure available in the operating theatre, surgeons need to practice on laparoscopic trainers. We constructed a robotic laparoscopic trainer with the same degrees of freedom and range of motion as a conventional laparoscopic instrument. We hypothesized that active robotic assistance through a laparoscopic trainer improves training efficacy compared with autonomous practice. To test this hypothesis, we divided the subjects into two groups. The control group practiced two laparoscopic tasks manually, without feedback or supervision; the other group practiced the same tasks with robotic assistance. Results from the robot-assisted group show that tool orientation (pitch and yaw joint motion) in the pointing task improved by more than 15%.


Subject(s)
Laparoscopy/instrumentation , Learning , Motor Skills , Robotics , Adult , Humans