Results 1 - 20 of 21
1.
Magn Reson Med ; 92(3): 1149-1161, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38650444

ABSTRACT

PURPOSE: To improve image quality and mitigate quantification biases and variations for free-breathing liver proton density fat fraction (PDFF) and R2* quantification accelerated by radial k-space undersampling. METHODS: A free-breathing multi-echo stack-of-radial MRI method was developed with compressed sensing with multidimensional regularization. It was validated in motion phantoms with reference acquisitions without motion and in 11 subjects (6 patients with nonalcoholic fatty liver disease) with reference breath-hold Cartesian acquisitions. Images, PDFF, and R2* maps were reconstructed using different radial view k-space sampling factors and reconstruction settings. Results were compared with reference-standard results using Bland-Altman analysis. Using linear mixed-effects model fitting (p < 0.05 considered significant), mean and SD were evaluated for biases and variations of PDFF and R2*, respectively, and coefficient of variation on the first echo image was evaluated as a surrogate for image quality. RESULTS: Using the empirically determined optimal sampling factor of 0.25 in the accelerated in vivo protocols, mean differences and limits of agreement for the proposed method were [-0.5; -33.6, 32.7] s-1 for R2* and [-1.0%; -5.8%, 3.8%] for PDFF, close to those of a previous self-gating method using fully sampled radial views: [-0.1; -27.1, 27.0] s-1 for R2* and [-0.4%; -4.5%, 3.7%] for PDFF. The proposed method had a significantly lower coefficient of variation than other methods (p < 0.001). Effective acquisition times of 64 s or 59 s were achieved, compared with 171 s or 153 s for two baseline protocols with different radial views corresponding to a sampling factor of 1.0.
CONCLUSION: This proposed method may allow accelerated free-breathing liver PDFF and R2* mapping with reduced biases and variations.
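The agreement figures above are Bland-Altman statistics: the mean of the paired differences (bias) and the 95% limits of agreement at bias ± 1.96 SD of the differences. A minimal stdlib sketch of that computation, using made-up paired PDFF values rather than data from the study:

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Mean difference (bias) and 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical PDFF values (%) from an accelerated scan vs. a breath-hold reference
accelerated = [5.2, 11.8, 7.9, 22.4, 3.1]
reference   = [5.6, 12.5, 8.3, 23.0, 3.9]
bias, lo, hi = bland_altman(accelerated, reference)
```

The same routine applies to the R2* comparisons; only the units change.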


Subjects
Image Processing, Computer-Assisted; Liver; Magnetic Resonance Imaging; Phantoms, Imaging; Humans; Liver/diagnostic imaging; Magnetic Resonance Imaging/methods; Retrospective Studies; Female; Male; Image Processing, Computer-Assisted/methods; Middle Aged; Respiration; Algorithms; Adult; Reproducibility of Results; Non-alcoholic Fatty Liver Disease/diagnostic imaging; Motion; Adipose Tissue/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Aged
2.
Biomimetics (Basel) ; 8(6)2023 Oct 02.
Article in English | MEDLINE | ID: mdl-37887602

ABSTRACT

As human-robot interaction becomes more prevalent in industrial and clinical settings, detecting changes in human posture has become increasingly crucial. While recognizing human actions has been extensively studied, the transition between different postures or movements has been largely overlooked. This study explores using two deep-learning methods, a linear feedforward neural network (FNN) and a long short-term memory (LSTM) network, to detect changes in human posture among three different movements: standing, walking, and sitting. To explore the possibility of rapid posture-change detection upon human intention, the authors introduced transition stages as distinct features for the identification. During the experiment, the subject wore an inertial measurement unit (IMU) on their right leg to measure joint parameters. The measurement data were used to train the two machine learning networks, and their performances were tested. This study also examined the effect of the sampling rate on the LSTM network. The results indicate that both methods achieved high detection accuracies, but the LSTM model outperformed the FNN in terms of speed and accuracy, achieving 91% and 95% accuracy for data sampled at 25 Hz and 100 Hz, respectively. Additionally, the network trained on one test subject was able to detect posture changes in other subjects, demonstrating the feasibility of personalized or generalized deep learning models for detecting human intentions. The posture transition time and identification accuracy at a sampling rate of 100 Hz were 0.17 s and 94.44%, respectively. In summary, this study achieved promising results and lays a foundation for the engineering application of digital twins, exoskeletons, and human intention control.
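The FNN branch of this comparison is, structurally, a fully connected network mapping IMU-derived joint features to posture-class probabilities. A pure-Python sketch of the forward pass — the layer sizes, weights, and feature choice below are illustrative stand-ins, not the paper's trained model:

```python
import math

def dense(x, weights, biases):
    # One row of weights per output unit
    return [sum(w * v for w, v in zip(row, x)) + b for row, b in zip(weights, biases)]

def relu(x):
    return [max(0.0, v) for v in x]

def softmax(x):
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    s = sum(exps)
    return [e / s for e in exps]

def fnn_classify(features, params):
    """Forward pass: input features -> hidden ReLU layer -> class probabilities."""
    hidden = relu(dense(features, params["w1"], params["b1"]))
    return softmax(dense(hidden, params["w2"], params["b2"]))

# Toy parameters: 2 input features, 2 hidden units, 3 classes
# (standing, walking, sitting) -- purely illustrative values
params = {
    "w1": [[1.0, 0.0], [0.0, 1.0]], "b1": [0.0, 0.0],
    "w2": [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]], "b2": [0.0, 0.0, 0.0],
}
probs = fnn_classify([2.0, 0.0], params)
```

In practice the weights come from training on the labeled IMU windows; the LSTM variant replaces the hidden layer with a recurrent cell that carries state across time steps.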

3.
Sensors (Basel) ; 23(16)2023 Aug 16.
Article in English | MEDLINE | ID: mdl-37631740

ABSTRACT

A gait pattern imposed by exoskeleton control that conflicts with the human operator's (the pilot's) intention may cause awkward maneuvering or even injury. Therefore, helping to select the proper gait operation has been the focus of many studies. However, the timing of the recognition plays a crucial role in the operation: delayed detection of the pilot's intent can be equally undesirable for exoskeleton operation. Instead of recognizing the motion itself, this study examines the possibility of identifying the transition between gaits to achieve in-time detection. The study used data from IMU sensors with future mobile applications in mind, and tested two machine learning networks: a linear feedforward neural network and a long short-term memory network. Gait data from five subjects were used for training and testing. The results show that: 1. The networks can successfully separate the transition period from the motion periods. 2. The detection of a gait change from walking to sitting can be as fast as 0.17 s, which is adequate for future control applications; however, detecting the transition from standing to walking can take as long as 1.2 s. 3. The network trained for one person can also detect movement changes for different persons without deteriorating performance.


Subjects
Intention; Movement; Humans; Motion; Gait; Machine Learning
4.
IEEE Robot Autom Lett ; 6(3): 5261-5268, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34621980

ABSTRACT

The overarching goal of this work is to demonstrate the feasibility of using optical coherence tomography (OCT) to guide a robotic system to extract lens fragments from ex vivo pig eyes. A convolutional neural network (CNN) was developed to semantically segment four intraocular structures (lens material, capsule, cornea, and iris) from OCT images. The neural network was trained on images from ten pig eyes, validated on images from eight different eyes, and tested on images from another ten eyes. This segmentation algorithm was incorporated into the Intraocular Robotic Interventional Surgical System (IRISS) to realize semi-automated detection and extraction of lens material. To demonstrate the system, the semi-automated detection and extraction task was performed on seven separate ex vivo pig eyes. The developed neural network achieved a mean intersection over union of 78.20% on the validation set and 83.89% on the test set. Successful implementation and efficacy of the developed method were confirmed by comparing the preoperative and postoperative OCT volume scans from the seven experiments.
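Mean intersection over union, the segmentation metric reported here, is computed per class as |prediction ∩ ground truth| / |prediction ∪ ground truth| and then averaged over classes. A small stdlib sketch over flattened label masks (the toy labels are illustrative, not the paper's data):

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection over union for integer-labeled segmentation masks.

    `pred` and `target` are flattened label arrays; classes absent from both
    masks are skipped so they do not distort the mean.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 2-class example: class 0 = background, class 1 = lens material
pred   = [0, 0, 1, 1, 1, 0]
target = [0, 1, 1, 1, 0, 0]
score = mean_iou(pred, target, num_classes=2)
```

For the paper's four intraocular structures the same loop simply runs over five labels (background plus lens material, capsule, cornea, and iris).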

5.
Sci Robot ; 6(53)2021 04 07.
Article in English | MEDLINE | ID: mdl-34043561

ABSTRACT

Mimicking the sensory motion of biological neuromuscular systems requires the unification of sensing and actuation in a single artificial muscle material, which must not only actuate but also sense its own motion. These functionalities would be of great value for soft robots that seek to achieve multifunctionality and local sensing capabilities approaching those of natural organisms. Here, we report a soft somatosensitive actuating material using an electrically conductive and photothermally responsive hydrogel, which combines the functions of piezoresistive strain/pressure sensing and photo/thermal actuation in a single material. Synthesized through an unconventional ice-templated ultraviolet-cryo-polymerization technique, the homogeneous tough conductive hydrogel exhibited a densified conducting network and highly porous microstructure, achieving a unique combination of ultrahigh conductivity (36.8 millisiemens per centimeter, a 10³-fold enhancement) and mechanical robustness, featuring high stretchability (170%), large volume shrinkage (49%), and 30-fold faster response than conventional hydrogels. With the unique compositional homogeneity of the monolithic material, our hydrogels overcame a limitation of conventional physically integrated sensory actuator systems with interface constraints and predefined functions. The two-in-one functional hydrogel demonstrated both exteroception to perceive the environment and proprioception to kinesthetically sense its deformations in real time, while actuating with near-infinite degrees of freedom. We demonstrated a variety of light-driven locomotion modes, including contraction, bending, shape recognition, object grasping, and transporting, with simultaneous self-monitoring. When connected to a control circuit, the muscle-like material achieved closed-loop, feedback-controlled, reversible step motion. This material design can also be applied to liquid crystal elastomers.


Subjects
Biomimetic Materials; Biomimetics; Robotics; Smart Materials; Acrylic Resins; Aniline Compounds; Animals; Artificial Organs; Electric Conductivity; Feedback, Sensory; Hydrogels; Light; Mechanical Phenomena; Muscles; Octopodiformes/physiology; Porosity; Proof of Concept Study; Proprioception; Sensation; Temperature; Tensile Strength
6.
Int J Med Robot ; 17(3): e2248, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33638592

ABSTRACT

BACKGROUND: In cataract surgery, polishing of the posterior capsule (PC) can lead to improved surgical outcomes but is currently avoided due to its high-risk nature. This work developed a robotic system capable of performing PC polishing on ex vivo pig eyes using optical coherence tomography (OCT) guidance. METHODS: The lenses of five ex vivo pig eyes were extracted and a thin layer of glue deposited onto the PC. Transpupillary OCT scans of the anterior segment were used to generate a PC-polishing trajectory. During polishing, OCT B-scans tracked the tool tip and were displayed to the operator. RESULTS: Complete removal of the glue was accomplished in all five trials with no PC rupture reported. CONCLUSIONS: The feasibility of using a robotic system guided by OCT to perform PC polishing on a biological model was demonstrated. Contributions include modelling of the PC anatomy, intraoperative OCT visualization, and automated tool-tip motion with scheduled aspiration pressures.


Subjects
Robotic Surgical Procedures; Animals; Cataract Extraction; Swine; Tomography, Optical Coherence
7.
IEEE ASME Trans Mechatron ; 26(5): 2758-2769, 2021 Oct.
Article in English | MEDLINE | ID: mdl-35528629

ABSTRACT

Retinal vein occlusion is one of the most common causes of vision loss, occurring when a blood clot or other obstruction occludes a retinal vein. A potential remedy for retinal vein occlusion is retinal vein cannulation, a surgical procedure that involves infusing the occluded vein with a fibrinolytic drug to restore blood flow through the vascular lumen. This work presents an image-guided robotic system capable of performing automated cannulation on silicone retinal vein phantoms. The system is integrated with an optical coherence tomography probe and camera to provide visual feedback to guide the robotic system. Through automation, the developed system targets a vein phantom to within 20 µm and automatically cannulates and infuses the vascular lumen with dyed water. The system was evaluated through 30 experimental trials and shown to be capable of performing automated cannulation of retinal vein phantoms with no reported cases of failure.

8.
J Magn Reson Imaging ; 53(1): 118-129, 2021 01.
Article in English | MEDLINE | ID: mdl-32478915

ABSTRACT

BACKGROUND: Stack-of-radial multiecho gradient-echo MRI is promising for free-breathing liver R2* quantification and may benefit children. PURPOSE: To validate stack-of-radial MRI with self-gating motion compensation in phantoms, and to evaluate it in children. STUDY TYPE: Prospective. PHANTOMS: Four vials with different R2* driven by a motion stage. SUBJECTS: Sixteen pediatric patients with suspected nonalcoholic fatty liver disease or steatohepatitis (five females, 13 ± 4 years, body mass index 29.2 ± 8.6 kg/m²). FIELD STRENGTH/SEQUENCES: Stack-of-radial, and 2D and 3D Cartesian multiecho gradient-echo sequences at 3T. ASSESSMENT: Ungated and gated stack-of-radial proton density fat fraction (PDFF) and R2* maps were reconstructed without and with self-gating motion compensation. Stack-of-radial R2* measurements of phantoms without and with motion were validated against reference 2D Cartesian results of phantoms without motion. In subjects, free-breathing stack-of-radial and reference breath-hold 3D Cartesian were acquired. Subject inclusion for statistical analysis and region of interest placement were determined independently by two observers. STATISTICAL TESTS: Phantom results were fitted with a weighted linear model. Demographic differences between excluded and included subjects were tested by multivariate analysis of variance. PDFF and R2* measurements were compared using Bland-Altman analysis. Interobserver agreement was assessed by the intraclass correlation coefficient (ICC). RESULTS: Ungated stack-of-radial R2* inside moving phantom vials showed a significant positive bias of 64.3 s-1 (P < 0.00001), unlike gated results (P > 0.31). Subject inclusion decisions for statistical analysis from two observers were consistent. No significant differences were found between the four excluded and 12 included subjects (P = 0.14). Compared to breath-hold Cartesian, ungated and gated free-breathing stack-of-radial exhibited mean R2* differences of 18.5 s-1 and 3.6 s-1.
Mean PDFF differences were 1.1% and 1.0% for ungated and gated measurements, respectively. Interobserver agreement was excellent (ICC for PDFF = 0.99, ICC for R2* = 0.90; P < 0.0003). DATA CONCLUSION: Stack-of-radial MRI with self-gating motion compensation seems to allow free-breathing liver R2* and PDFF quantification in children. LEVEL OF EVIDENCE: 2 TECHNICAL EFFICACY STAGE: 2.
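The R2* values in both radial studies come from fitting the multiecho signal decay S(TE) = S0 · exp(−R2* · TE). A minimal log-linear least-squares version of that fit in stdlib Python — the actual reconstructions also model fat-water separation to obtain PDFF and handle noise, which this sketch ignores:

```python
import math

def fit_r2star(echo_times_s, magnitudes):
    """Estimate R2* (in s^-1) by a log-linear least-squares fit of
    S(TE) = S0 * exp(-R2* * TE). Ignores noise floor and fat-water modeling."""
    ys = [math.log(m) for m in magnitudes]
    n = len(echo_times_s)
    mx = sum(echo_times_s) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(echo_times_s, ys))
    den = sum((x - mx) ** 2 for x in echo_times_s)
    return -num / den  # slope of log-signal vs. TE is -R2*

# Synthetic 6-echo decay: S0 = 100, true R2* = 50 s^-1
tes = [0.0012 * (k + 1) for k in range(6)]
signal = [100.0 * math.exp(-50.0 * te) for te in tes]
r2star = fit_r2star(tes, signal)
```

On noiseless synthetic data the fit recovers the true R2* exactly; with real magnitude data, weighted or complex-domain fitting is preferred.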


Subjects
Magnetic Resonance Imaging; Protons; Child; Female; Humans; Liver/diagnostic imaging; Motion; Prospective Studies
9.
Int J Comput Assist Radiol Surg ; 15(10): 1673-1684, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32676870

ABSTRACT

PURPOSE: Accurate needle tracking provides essential information for MRI-guided percutaneous interventions. Passive needle tracking using MR images is challenged by variations of the needle-induced signal void feature in different situations. This work aimed to develop an automatic needle tracking algorithm for MRI-guided interventions based on the Mask Region Proposal-Based Convolutional Neural Network (Mask R-CNN). METHODS: Mask R-CNN was adapted and trained to segment the needle feature using 250 intra-procedural images from 85 MRI-guided prostate biopsy cases and 180 real-time images from MRI-guided needle insertion in ex vivo tissue. The segmentation masks were passed into the needle feature localization algorithm to extract the needle feature tip location and axis orientation. The proposed algorithm was tested using 208 intra-procedural images from 40 MRI-guided prostate biopsy cases, and 3 real-time MRI datasets in ex vivo tissue. The algorithm results were compared with human-annotated references. RESULTS: In prostate datasets, the proposed algorithm achieved needle feature tip localization error with median Euclidean distance (dxy) of 0.71 mm and median difference in axis orientation angle (dθ) of 1.28°, respectively. In 3 real-time MRI datasets, the proposed algorithm achieved consistent dynamic needle feature tracking performance with processing time of 75 ms/image: (a) median dxy = 0.90 mm, median dθ = 1.53°; (b) median dxy = 1.31 mm, median dθ = 1.9°; (c) median dxy = 1.09 mm, median dθ = 0.91°. CONCLUSIONS: The proposed algorithm using Mask R-CNN can accurately track the needle feature tip and axis on MR images from in vivo intra-procedural prostate biopsy cases and ex vivo real-time MRI experiments with a range of different conditions. The algorithm achieved pixel-level tracking accuracy in real time and has potential to assist MRI-guided percutaneous interventions.
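The reported dxy and dθ are the Euclidean distance between tip positions and the unsigned angle between needle axes; since an axis is a line rather than a direction, angles past 90° fold back. A stdlib sketch of both metrics, with hypothetical coordinates:

```python
import math

def tip_error(p_pred, p_ref):
    """Euclidean distance between predicted and reference tip positions (d_xy)."""
    return math.dist(p_pred, p_ref)

def axis_angle_deg(v_pred, v_ref):
    """Unsigned angle between two needle-axis vectors (d_theta), in degrees.

    A needle axis has no preferred direction, so angles past 90 degrees fold back.
    """
    dot = sum(a * b for a, b in zip(v_pred, v_ref))
    norm = math.hypot(*v_pred) * math.hypot(*v_ref)
    c = max(-1.0, min(1.0, dot / norm))
    ang = math.degrees(math.acos(c))
    return min(ang, 180.0 - ang)

d_xy = tip_error((12.3, 45.1), (12.8, 45.7))       # mm, hypothetical tip positions
d_theta = axis_angle_deg((0.02, 1.0), (0.0, 1.0))  # degrees, hypothetical axes
```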


Subjects
Image-Guided Biopsy/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Prostate/pathology; Algorithms; Humans; Male; Needles
10.
Int J Med Robot ; 16(2): e2041, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31674721

ABSTRACT

BACKGROUND: Magnetic resonance imaging (MRI) has unique advantages for guiding interventions, but the narrow space is a major challenge. This study evaluates the feasibility of a remote-controlled hydrostatic actuator system for MRI-guided targeted needle placement. METHODS: The effects of the hydrostatic actuator system on MR image quality were evaluated. Using a reference step-and-shoot method (SS) and the proposed actuator-assisted method (AA), two operators performed MRI-guided needle placement in targets (n = 12) in a motion phantom. RESULTS: The hydrostatic actuator system exhibited negligible impact on MR image quality. In dynamic targets, AA was significantly more accurate and precise than SS, with mean ± SD needle-to-target error of 1.8 ± 1.0 mm (operator 1) and 1.3 ± 0.5 mm (operator 2). AA reduced the insertion time by 50% to 80% and total procedure time by 25%, compared to SS. CONCLUSIONS: The proposed hydrostatic actuator system may improve accuracy and reduce procedure time for MRI-guided targeted needle placement during motion.


Subjects
Interventional Magnetic Resonance Imaging/methods; Surgery, Computer-Assisted/methods; Algorithms; Equipment Design; Humans; Motion; Needles; Phantoms, Imaging; Reproducibility of Results; Respiration; Robotics; Signal-To-Noise Ratio
11.
IEEE Trans Biomed Eng ; 67(6): 1727-1738, 2020 06.
Article in English | MEDLINE | ID: mdl-31567071

ABSTRACT

OBJECTIVE: Magnetic resonance imaging (MRI) can provide guidance for interventions in organs affected by respiration (e.g., liver). This study aims to: 1) investigate image-based and surrogate-based motion tracking methods using real-time golden-angle radial MRI; and 2) propose and evaluate a new fusion-based respiratory motion prediction framework with multi-rate Kalman filtering. METHODS: Images with different temporal footprints were reconstructed from the same golden-angle radial MRI data stream to simultaneously enable image-based and surrogate-based tracking at 10 Hz. A custom software pipeline was constructed to perform online tracking and calibrate tracking error and latency using a programmable motion phantom. A fusion-based motion prediction method was developed to combine the lower tracking error of image-based tracking with the lower latency of surrogate-based tracking. The fusion-based method was evaluated in retrospective studies using in vivo real-time free-breathing liver MRI. RESULTS: Phantom experiments confirmed that the median online tracking error of image-based tracking was lower than surrogate-based methods, however, with higher median system latency (350 ms vs. 150 ms). In retrospective in vivo studies, 75 respiratory waveforms of target features from eight subjects were analyzed. The median root-mean-squared prediction error (RMSE) for the fusion-based method (0.97 mm) was reduced (Wilcoxon signed rank test p < 0.05) compared to surrogate-based (1.18 mm) and image-based (1.3 mm) methods. CONCLUSION: The proposed fusion-based respiratory motion prediction framework using golden-angle radial MRI can achieve low-latency feedback with improved accuracy. SIGNIFICANCE: Respiratory motion prediction using the fusion-based method can overcome system latency to provide accurate feedback information for MRI-guided interventions.
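The fusion idea can be illustrated with a one-dimensional Kalman filter in which both streams update the same state, each with its own measurement variance: frequent surrogate samples get a large variance, the slower but more accurate image-based samples a small one. This is a toy random-walk version for illustration, not the paper's multi-rate formulation:

```python
class ScalarKalman:
    """1D random-walk Kalman filter: x[k+1] = x[k] + process noise."""

    def __init__(self, x0=0.0, p0=1.0, q=0.05):
        self.x, self.p, self.q = x0, p0, q

    def predict(self):
        self.p += self.q  # state uncertainty grows between measurements
        return self.x

    def update(self, z, r):
        """Fuse measurement z with variance r; smaller r = more trusted stream."""
        k = self.p / (self.p + r)
        self.x += k * (z - self.x)
        self.p *= 1.0 - k
        return self.x

# Hypothetical usage: fast, noisy surrogate samples plus a sparse image-based fix
kf = ScalarKalman(x0=0.0, p0=1.0)
for z in [4.8, 5.3, 4.9]:  # surrogate stream, large variance
    kf.predict()
    kf.update(z, r=1.0)
kf.predict()
kf.update(5.0, r=0.05)     # image-based measurement, small variance
```

The low-variance image update pulls the estimate strongly toward the accurate stream, while the surrogate stream keeps the estimate current between image frames, mirroring the latency/accuracy trade-off described above.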


Subjects
Magnetic Resonance Imaging; Respiration; Humans; Motion; Phantoms, Imaging; Retrospective Studies
12.
Int J Med Robot ; 16(2): e2040, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31597000

ABSTRACT

A laparoscopic surgical training system, the LapaRobot, is introduced. The system is composed of an expert station and a trainee station connected through the Internet. Embedded actuators allow the trainee station to be driven by an expert surgeon so that a trainee learns proper technique through physical feedback. The surgical-tool trajectory and video feed can be recorded and later "played back" to a trainee to hone operative skills through guided repetition without the need for expert supervision. The system is designed to create a high-fidelity approximation of the intracorporeal workspace, incorporate commercially available surgical instruments, and provide a wealth of high-resolution data for quantitative analysis and feedback. Experimental evaluation demonstrated a 55% improvement in surgical performance with use of our system. In this paper, we introduce the details of the design and fabrication of the LapaRobot, illustrate the mechatronics and software-control schemes, and evaluate the system in a study.


Subjects
Laparoscopy/education; Robotic Surgical Procedures/methods; Telemedicine/methods; Biomechanical Phenomena; Clinical Competence; Computer Simulation; Computer-Assisted Instruction/methods; Equipment Design; Humans; Laparoscopy/methods; Mentors; Software
13.
J Cataract Refract Surg ; 45(11): 1665-1669, 2019 11.
Article in English | MEDLINE | ID: mdl-31706519

ABSTRACT

PURPOSE: To evaluate semiautomated surgical lens extraction procedures using the optical coherence tomography (OCT)-integrated Intraocular Robotic Interventional Surgical System. SETTING: Stein Eye Institute and Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, USA. DESIGN: Experimental study. METHODS: Semiautomated lens extraction was performed on postmortem pig eyes using a robotic platform integrated with an OCT imaging system. Lens extraction was performed using a series of automated steps including robot-to-eye alignment, irrigation/aspiration (I/A) handpiece insertion, anatomic modeling, surgical path planning, and I/A handpiece navigation. Intraoperative surgical supervision and human intervention were enabled by real-time OCT image feedback to the surgeon via a graphical user interface. Manual preparation of the pig-eye models, including the corneal incision and capsulorhexis, was performed by a trained cataract surgeon before the semiautomated lens extraction procedures. A scoring system was used to assess surgical complications in a postoperative evaluation. RESULTS: Complete lens extraction was achieved in 25 of 30 eyes. In the remaining 5 eyes, small lens pieces (≤1.0 mm3) were detected near the lens equator, where transpupillary OCT could not image. No posterior capsule rupture or corneal leakage occurred. The mean surgical duration was 277 seconds ± 42 (SD). Based on a 3-point scale (0 = no damage), damage to the iris was 0.33 ± 0.20, damage to the cornea was 1.47 ± 0.20 (due to tissue dehydration), and stress at the incision was 0.97 ± 0.11. CONCLUSIONS: No posterior capsule rupture was reported. Complete lens removal was achieved in 25 trials without significant surgical complications. Refinements to the procedures are required before fully automated lens extraction can be realized.


Subjects
Lens, Crystalline/surgery; Phacoemulsification/methods; Robotic Surgical Procedures; Surgery, Computer-Assisted/methods; Tomography, Optical Coherence/methods; Animals; Capsulorhexis; Intraoperative Complications; Models, Animal; Operative Time; Swine
14.
Transl Vis Sci Technol ; 8(4): 2, 2019.
Article in English | MEDLINE | ID: mdl-31293821

ABSTRACT

PURPOSE: We determine whether haptic feedback improves surgical performance and outcome during a simulated preretinal membrane peeling procedure. METHODS: A haptic-enabled virtual reality preretinal membrane peeling simulator was developed using a surgical cockpit with two multifinger haptic devices. Six subjects (three trained retina surgeons and three nonsurgeons) performed the preretinal membrane peeling surgical procedure using two modes of operation: visual and haptic feedback, and visual feedback only. RESULTS: Task completion time, tool tip path trajectory, tool-retina collision force, and retinal damage were all reduced when haptic feedback was enabled compared to when it was disabled. CONCLUSIONS: Haptic feedback improves efficiency and safety during preretinal membrane peeling simulation. TRANSLATIONAL RELEVANCE: These findings highlight the potential benefit of haptic feedback for improving the performance and safety of vitreoretinal surgery.

15.
Int J Med Robot ; 14(6): e1949, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30152081

ABSTRACT

BACKGROUND: With the development of laser-assisted platforms, the outcomes of cataract surgery have been improved by automating several procedures. The cataract-extraction step continues to be manually performed, but due to deficiencies in sensing capabilities, surgical complications such as posterior capsule rupture and incomplete cataract removal remain. METHODS: An optical coherence tomography (OCT) system is integrated into our intraocular robotic interventional surgical system (IRISS) robot. The OCT images are used for preoperative planning and intraoperative intervention in a series of automated procedures. Real-time intervention allows surgeons to evaluate the progress and override the operation. RESULTS: The developed system was validated by performing lens extraction on 30 postmortem pig eyes. Complete lens extraction was achieved on 25 eyes, and "almost complete" extraction was achieved on the remainder due to an inability to image small lens particles behind the iris. No capsule rupture was found. CONCLUSION: The IRISS successfully demonstrated semiautomated OCT-guided lens removal with real-time supervision and intervention.


Subjects
Cataract Extraction/instrumentation; Cataract; Tomography, Optical Coherence/instrumentation; Animals; Automation; Cataract Extraction/methods; Equipment Design; Humans; Robotic Surgical Procedures; Software; Swine; Tomography, Optical Coherence/methods
16.
Int J Med Robot ; 14(1)2018 Feb.
Article in English | MEDLINE | ID: mdl-28762253

ABSTRACT

BACKGROUND: Since the advent of robotic-assisted surgery, the value of using robotic systems to assist in surgical procedures has been repeatedly demonstrated. However, existing technologies are unable to perform complete, multi-step procedures from start to finish. Many intraocular surgical steps continue to be manually performed. METHODS: An intraocular robotic interventional surgical system (IRISS) capable of performing various intraocular surgical procedures was designed, fabricated, and evaluated. Methods were developed to evaluate the performance of the remote centers of motion (RCMs) using a stereo-camera setup and to assess the accuracy and precision of positioning the tool tip using an optical coherence tomography (OCT) system. RESULTS: The IRISS can simultaneously manipulate multiple surgical instruments, change between mounted tools using an onboard tool-change mechanism, and visualize the otherwise invisible RCMs to facilitate alignment of the RCM to the surgical incision. The accuracy of positioning the tool tip was measured to be 0.205±0.003 mm. The IRISS was evaluated by trained surgeons in a remote surgical theatre using post-mortem pig eyes and shown to be effective in completing many key steps in a variety of intraocular surgical procedures as well as being capable of performing an entire cataract extraction from start to finish. CONCLUSIONS: The IRISS represents a necessary step towards fully automated intraocular surgery and demonstrated accurate and precise master-slave manipulation for cataract removal and-through visual feedback-retinal vein cannulation.


Subjects
Cataract Extraction/instrumentation; Robotic Surgical Procedures/instrumentation; Animals; Biomechanical Phenomena; Calibration; Cataract Extraction/methods; Computer-Aided Design; Computers; Equipment Design; Humans; Motion; Reproducibility of Results; Robotic Surgical Procedures/methods; Software; Stress, Mechanical; Surgery, Computer-Assisted/methods; Swine; Tomography, Optical Coherence
17.
J Dyn Syst Meas Control ; 138(1): 0110011-1100111, 2016 Jan.
Article in English | MEDLINE | ID: mdl-27222600

ABSTRACT

Rotor unbalance, a common phenomenon in rotational systems, manifests itself as a periodic disturbance synchronized with the rotor's angular velocity. In active magnetic bearing (AMB) systems, feedback control is required to stabilize the open-loop-unstable electromagnetic levitation. Further, feedback action can be added to suppress the repeatable runout while maintaining closed-loop stability. In this paper, a plug-in time-varying resonator is designed by inverting cascaded notch filters. This formulation allows flexibility in designing the internal model for appropriate disturbance rejection. The plug-in structure ensures that stability can be maintained for varying rotor speeds. Experimental results on an AMB-rotor system are presented.
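Per the internal model principle, rejecting a sinusoidal runout at the rotor frequency requires the loop to contain a marginally stable resonator at that frequency. A discrete-time sketch of such a resonator; the paper constructs its time-varying version by inverting cascaded notch filters, so this fixed-frequency recurrence is only the basic building block:

```python
import math

class Resonator:
    """Second-order recurrence y[n] = 2*cos(w0)*y[n-1] - y[n-2] + u[n].

    Its transfer function 1 / (1 - 2*cos(w0) z^-1 + z^-2) has poles on the
    unit circle at the disturbance frequency w0 (rad/sample), providing the
    infinite loop gain there that the internal model principle requires.
    """

    def __init__(self, w0):
        self.a = 2.0 * math.cos(w0)
        self.y1 = 0.0  # y[n-1]
        self.y2 = 0.0  # y[n-2]

    def step(self, u):
        y = self.a * self.y1 - self.y2 + u
        self.y2, self.y1 = self.y1, y
        return y

# Impulse response at w0 = pi/2 oscillates indefinitely: 1, 0, -1, 0, 1, ...
res = Resonator(math.pi / 2)
impulse_response = [res.step(u) for u in [1.0, 0.0, 0.0, 0.0, 0.0]]
```

In a plug-in arrangement, w0 is retuned as the measured rotor speed changes, which is where the time-varying design and its stability argument come in.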

18.
IEEE ASME Trans Mechatron ; 20(4): 1616-1623, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26478693

ABSTRACT

This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying the vein location is difficult, and manual injections usually result in poor repeatability. To improve injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in noise rejection for vein detection, robustness of needle tracking, and integration of visual servoing with the mechatronics system.

19.
Ophthalmic Res ; 46(1): 25-30, 2011.
Article in English | MEDLINE | ID: mdl-21109761

ABSTRACT

PURPOSE: Robotic intraocular microsurgery requires a remote center of motion (RCM) at the site of ocular penetration. We designed and tested the Hexapod Surgical System (HSS), a microrobot mounted on the da Vinci macrorobot for intraocular microsurgery. MATERIAL AND METHODS: Translations and rotations of the HSS were tested for range of motion and stability. Precision and dexterity were assessed by pointing and inserting a coupled probe into holes of various sizes. The stability of a nonmechanical RCM was quantified. HSS functionalities were observed on porcine eyes. RESULTS: The HSS maximal translations were 10 (x and y axes) and 5 cm (z axis). The maximal rotations were 15 and 22° (x and y axes). The precision was within 0.5 mm away from targets in 26/30 tests and maximal in 16/30 tests. The mean translational and rotational stability at the tip of the probe were 1.2 (0.6-1.9) and 1 mm (0-2), respectively. The average dexterity times were 5.2 (4.4-6.5), 7.1 (5.6-10.8) and 12.3 s (7.8-21.7) for 5-, 2- and 1-mm holes, respectively. The RCM was stable (within 0.1 mm). A vitreous cutter coupled to the HSS moved into porcine eyes through a sclerotomy with a stable RCM. CONCLUSION: The HSS provides an RCM dedicated for intraocular robotic surgery with a high level of precision and dexterity. Although it can be further improved, the micro-macro robotic system is a feasible approach for ocular surgery.


Subjects
Microsurgery/methods; Robotics/instrumentation; Surgery, Computer-Assisted/instrumentation; Vitrectomy/instrumentation; Animals; Biomechanical Phenomena; Equipment Design; Swine