Results 1 - 20 of 73
1.
Int J Med Robot ; 19(2): e2476, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36302228

ABSTRACT

BACKGROUND: Neonate patients have a reduced thoracic cavity, making thoracoscopic procedures even more challenging than their adult counterparts. METHODS: We evaluated five control strategies for robot-assisted thoracoscopic surgical looping in simulations and experiments with a physical robotic system in a neonate surgical phantom. The strategies are composed of state-of-the-art constrained optimization and a novel looping force feedback term. RESULTS: All control strategies allowed users to successfully perform looping. A user study in simulation showed that the proposed strategy was superior in terms of physical demand (p < 0.05) and task duration (p < 0.05). The cumulative sum analysis of inexperienced users shows that the proposed looping force feedback can speed up learning. Results with surgeons did not show a significant difference among control strategies. CONCLUSIONS: Assistive strategies for looping show promise, and further work is needed to extend these benefits to other subtasks in robot-aided surgical suturing.
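The reported significance tests (physical demand and task duration, both p < 0.05) are the kind of paired nonparametric comparison that can be sketched as below; the per-user durations are made-up values for illustration, not data from the study, and scipy's Wilcoxon signed-rank test stands in for whatever test the authors used.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative per-user task durations (seconds) under two control strategies.
# These numbers are made up for the example; they are not the study's data.
baseline = np.array([181.0, 205.0, 164.0, 190.0, 220.0, 175.0, 198.0, 210.0])
proposed = np.array([150.0, 172.0, 160.0, 158.0, 181.0, 149.0, 170.0, 176.0])

# Paired, nonparametric comparison (each user performs both conditions).
stat, p_value = wilcoxon(baseline, proposed)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Task duration differs significantly between strategies.")
```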


Subjects
Robotic Surgical Procedures, Surgeons, Adult, Newborn, Humans, Robotic Surgical Procedures/methods, Computer Simulation, Sutures
2.
Microsyst Nanoeng ; 8: 74, 2022.
Article in English | MEDLINE | ID: mdl-35812804

ABSTRACT

To provide quantitative feedback on surgical progress to ophthalmologists practicing inner limiting membrane (ILM) peeling, we developed an artificial eye module comprising a quartz crystal resonator (QCR) force sensor and a strain body that serves as a uniform force transmitter beneath a retinal model. Although a sufficiently large initial force must be loaded onto the QCR force sensor assembly to achieve stable contact with the strain body, the highly sensitive and wide dynamic-range property of this sensor enables the eye module to detect the slight forceps contact force. A parallel-plate strain body is used to achieve a uniform force sensitivity over the 4-mm-diameter ILM peeling region. Combining these two components allowed for a measurable force range of 0.22 mN to 29.6 N with a sensitivity error within -11.3 to 4.2% over the ILM peeling area. Using this eye module, we measured the applied force during a simulation involving artificial ILM peeling by an untrained individual and compensated for the long-term drift of the obtained force data using a newly developed algorithm. The compensated force data clearly captured the characteristics of several types of motion sequences observed from video recordings of the eye bottom using an ophthalmological microscope. As a result, we succeeded in extracting feature values that can be potentially related to trainee skill level, such as the mean and standard deviation of the pushing and peeling forces, corresponding, in the case of an untrained operator, to 122.6 ± 95.2 and 20.4 ± 13.2 mN, respectively.
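The abstract names a newly developed drift-compensation algorithm but does not specify it; the sketch below only illustrates the general shape of such processing, removing a slowly varying baseline from a force trace and then summarizing detected force peaks. The window length, peak threshold, and synthetic signal are assumptions for illustration.

```python
import numpy as np
from scipy.signal import medfilt, find_peaks

def compensate_drift(force, fs, baseline_window_s=5.0):
    """Subtract a slowly varying baseline (long median filter) from a force trace."""
    win = int(baseline_window_s * fs)
    if win % 2 == 0:
        win += 1  # medfilt requires an odd kernel size
    baseline = medfilt(force, kernel_size=win)
    return force - baseline

def peak_statistics(force, min_height_mN=5.0):
    """Mean and standard deviation of detected force peaks (e.g., pushing events)."""
    peaks, _ = find_peaks(force, height=min_height_mN)
    return float(np.mean(force[peaks])), float(np.std(force[peaks]))

# Synthetic example: 60 s of force data at 100 Hz with linear drift and three events.
fs = 100.0
t = np.arange(0, 60, 1.0 / fs)
drift = 0.5 * t                                                   # slow drift, mN
events = 100.0 * np.exp(-((t[:, None] - np.array([10, 25, 40])) ** 2) / 0.5).sum(axis=1)
force = drift + events

compensated = compensate_drift(force, fs)
mean_peak, std_peak = peak_statistics(compensated)
print(f"pushing-force peaks: {mean_peak:.1f} +/- {std_peak:.1f} mN")
```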

3.
J Clin Med ; 11(14)2022 Jul 17.
Article in English | MEDLINE | ID: mdl-35887909

ABSTRACT

The value of kinematic data for skill assessment is being investigated. We describe the first virtual reality simulator developed for liver surgery. The simulator was coded in C++ using PhysX and FleX with a novel cutting algorithm and used a patient data-derived model and two instruments functioning as ultrasonic shears. The simulator was evaluated by nine expert surgeons and nine surgical novices. Each participant performed a simulated metastasectomy after training. Kinematic data were collected for the instrument position. Each participant completed a survey. The expert participants had a mean age of 47 years and 9/9 were certified in surgery. Novices had a mean age of 30 years and 0/9 were certified surgeons. The mean path length (novice 0.76 ± 0.20 m vs. expert 0.46 ± 0.16 m, p = 0.008), number of movements (138 ± 45 vs. 84 ± 32, p = 0.043) and time (174 ± 44 s vs. 102 ± 42 s, p = 0.004) were significantly different between the two participant groups. There were no significant differences in the number of instrument activations (107 ± 25 vs. 109 ± 53). Participants considered the simulator realistic (6.5/7) (face validity), appropriate for education (5/7) (content validity), with an effective interface (6/7), consistent motion (5/7) and realistic soft tissue behavior (5/7). This study showed that the simulator differentiates between experts and novices. Simulation may be an effective way to obtain kinematic data.
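Path length, number of movements, and time are standard kinematics-derived metrics; a minimal sketch of how they can be computed from sampled instrument tip positions follows. The sampling rate and the speed threshold used to count movements are illustrative assumptions, not the simulator's actual parameters.

```python
import numpy as np

def kinematic_metrics(positions, fs, speed_threshold=0.01):
    """positions: (N, 3) instrument tip positions in metres, sampled at fs Hz.

    Returns total path length (m), movement count, and task time (s).
    A 'movement' is counted each time the tip speed rises above the threshold.
    """
    positions = np.asarray(positions, dtype=float)
    steps = np.diff(positions, axis=0)                 # displacement per sample
    path_length = float(np.linalg.norm(steps, axis=1).sum())

    speed = np.linalg.norm(steps, axis=1) * fs         # m/s
    moving = speed > speed_threshold
    movements = int(np.count_nonzero(np.diff(moving.astype(int)) == 1))

    task_time = (len(positions) - 1) / fs
    return path_length, movements, task_time

# Example with a short synthetic trajectory sampled at 50 Hz.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(scale=0.001, size=(500, 3)), axis=0)
print(kinematic_metrics(traj, fs=50.0))
```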

4.
PLoS One ; 17(7): e0271171, 2022.
Article in English | MEDLINE | ID: mdl-35816482

ABSTRACT

Glaucoma, an increasingly common eye disease, can damage the optic nerve and lead to vision loss; its treatment aims to reduce intraocular pressure (IOP). In this research, we introduce a new concept of a surgery simulator for Minimally Invasive Glaucoma Surgery (MIGS). The concept comprises an anterior eye model and a fluidic circulatory system. The model, made of flexible material, includes a channel representing Schlemm's canal (SC) and a membrane representing the trabecular meshwork (TM) covering the SC. The system monitors IOP in the model with a pressure sensor. In one of the MIGS procedures, the TM is cleaved to reduce the IOP. Using the simulator, ophthalmologists can practice the procedure and measure the IOP. First, considering the characteristics of human eyes, we defined requirements and target performances for the simulator. Next, we designed and manufactured the prototype. Using the prototype, we measured the IOP change before and after cleaving the TM. Finally, we demonstrated its suitability by comparing the experimental results with the target performances. This simulator is also expected to be used for the evaluation and development of new MIGS instruments and ophthalmic surgery robots, in addition to the surgical training of ophthalmologists.


Subjects
Glaucoma, Visual Prostheses, Glaucoma/surgery, Humans, Intraocular Pressure, Microfluidics, Trabecular Meshwork/physiology
5.
Proc IEEE Inst Electr Electron Eng ; 110(7): 893-908, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36588782

ABSTRACT

Intraocular surgery, one of the most challenging disciplines of microsurgery, requires sensory and motor skills at the limits of human physiological capabilities combined with tremendously demanding requirements for accuracy and steadiness. Nowadays, robotics combined with advanced imaging has opened significant new directions in advancing the field of intraocular microsurgery. With safer and more efficient patient treatment as the final goal, and as in other medical applications, robotics has real potential to fundamentally change microsurgery by combining human strengths with computer- and sensor-based technology in an information-driven environment. Still in its early stages, robotic assistance for intraocular microsurgery has been accepted with caution in the operating room and successfully tested in a limited number of clinical trials. However, owing to its demonstrated capabilities, including hand-tremor reduction, haptic feedback, steadiness, enhanced dexterity, micrometer-scale accuracy, and others, microsurgical robotics has evolved as a very promising trend in advancing retinal surgery. This paper analyzes the advances in retinal robotic microsurgery, its current drawbacks and limitations, and possible new directions to expand retinal microsurgery to techniques currently beyond human boundaries or infeasible without robotics.

6.
Int J Comput Assist Radiol Surg ; 16(4): 589-595, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33723706

ABSTRACT

PURPOSE: The Johns Hopkins-Intuitive Gesture and Skill Assessment Working Set (JIGSAWS) dataset is used to develop robotic surgery skill assessment tools, but there has been no detailed analysis of this dataset. The aim of this study is to perform a learning curve analysis of the existing JIGSAWS dataset. METHODS: Five trials were performed in JIGSAWS by eight participants (four novices, two intermediates and two experts) for three exercises (suturing, knot-tying and needle passing). Global Rating Scores and time, path length and movements were analyzed quantitatively and qualitatively by graphical analysis. RESULTS: There are no significant differences in Global Rating Scale scores over time. Time in the suturing exercise and path length in needle passing had significant differences. Other kinematic parameters were not significantly different. Qualitative analysis shows a learning curve only for suturing. Cumulative sum analysis suggests completion of the learning curve for suturing by trial 4. CONCLUSIONS: The existing JIGSAWS dataset does not show a quantitative learning curve for Global Rating Scale scores, or most kinematic parameters which may be due in part to the limited size of the dataset. Qualitative analysis shows a learning curve for suturing. Cumulative sum analysis suggests completion of the suturing learning curve by trial 4. An expanded dataset is needed to facilitate subset analyses.
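Cumulative sum (CUSUM) analysis of the kind referred to above tracks, trial by trial, the running sum of deviations from a target performance level; a plateau or a change in slope is read as completion of the learning curve. A minimal sketch with made-up trial times and an assumed benchmark (not JIGSAWS data):

```python
import numpy as np

def cusum(values, target):
    """Cumulative sum of deviations from a target; a plateau suggests the
    learning curve has been completed."""
    return np.cumsum(np.asarray(values, dtype=float) - target)

# Illustrative completion times (s) for five suturing trials.
times = [115.0, 98.0, 85.0, 74.0, 73.0]
target = 75.0  # assumed proficiency benchmark
for trial, value in enumerate(cusum(times, target), start=1):
    print(f"trial {trial}: CUSUM = {value:+.1f}")
```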


Subjects
Clinical Competence, Gestures, Laparoscopy/education, Laparoscopy/methods, Learning Curve, Motion (Physics), Robotic Surgical Procedures/education, Robotic Surgical Procedures/methods, Suture Techniques, Algorithms, Biomechanical Phenomena, General Surgery/education, Humans, Sutures
7.
Int J Comput Assist Radiol Surg ; 15(12): 2017-2025, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33025366

ABSTRACT

PURPOSE: The JIGSAWS dataset is a fixed dataset of robot-assisted surgery kinematic data used to develop predictive models of skill. The purpose of this study is to analyze the relationships of self-defined skill level with global rating scale scores and kinematic data (time, path length and movements) from three exercises (suturing, knot-tying and needle passing) (right and left hands) in the JIGSAWS dataset. METHODS: Global rating scale scores are reported in the JIGSAWS dataset and kinematic data were calculated using ROVIMAS software. Self-defined skill levels are in the dataset (novice, intermediate, expert). Correlation coefficients (global rating scale-skill level and global rating scale-kinematic parameters) were calculated. Kinematic parameters were compared among skill levels. RESULTS: Global rating scale scores correlated with skill in the knot-tying exercise (r = 0.55, p = 0.0005). In the suturing exercise, time, path length (left) and movements (left) were significantly different (p < 0.05) for novices and experts. For knot-tying, time, path length (right and left) and movements (right) differed significantly for novices and experts. For needle passing, no kinematic parameter was significantly different comparing novices and experts. The only kinematic parameter that correlated with global rating scale scores is time in the knot-tying exercise. CONCLUSION: Global rating scale scores weakly correlate with skill level and kinematic parameters. The ability of kinematic parameters to differentiate among self-defined skill levels is inconsistent. Additional data are needed to enhance the dataset and facilitate subset analyses and future model development.


Subjects
Clinical Competence, Laparoscopy/education, Robotic Surgical Procedures, Simulation Training, Software, Biomechanical Phenomena, Gestures, Humans, Motion (Physics), Suture Techniques/education, Sutures
8.
Int J Med Educ ; 11: 97-106, 2020 May 18.
Article in English | MEDLINE | ID: mdl-32425176

ABSTRACT

OBJECTIVES: To evaluate the effect of simulator fidelity on procedural skill training through a review of existing studies. METHODS: MEDLINE, OVID and EMBASE databases were searched between January 1990 and January 2019. Search terms included "simulator fidelity and comparison" and "low fidelity" and "high fidelity" and "comparison" and "simulator". Author classification of low and high fidelity was used for non-laparoscopic procedures; laparoscopic simulators were classified using a proposed schema. All included studies used a randomized methodology with two or more groups and were written in English. Data from 17 eligible full papers were abstracted to a standard data sheet and critically appraised. RESULTS: Of the 17 studies, eight concerned laparoscopic and nine other skill training. Studies employed a variety of evaluation methodologies, including subjective and objective measures. The evaluation was conducted once in 13/17 studies and before-after in 4/17. Didactic-training-only or control groups were used in 5/17 studies, while 10/17 studies included two groups only. Skill acquisition with different simulator fidelity differed by level of training in 1/17 studies. Simulation training was followed by a clinical evaluation or a live-animal evaluation in 3/17 studies. Low-fidelity training was not inferior to training with a high-fidelity simulator in 15/17 studies. CONCLUSIONS: Procedural skill after training with low-fidelity simulators was not inferior to skill after training with high-fidelity simulators in 15/17 studies. Some data suggest that the effectiveness of different-fidelity simulators depends on the level of training of participants and requires further study.


Subjects
Clinical Competence, Medical Education, Laparoscopy/education, Simulation Training, Surgeons/education, Medical Education/methods, Medical Education/standards, Educational Measurement, Humans, Laparoscopy/methods, Laparoscopy/psychology, Reproducibility of Results, Simulation Training/methods, Simulation Training/standards
9.
Int J Comput Assist Radiol Surg ; 15(8): 1257-1265, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32445129

ABSTRACT

PURPOSE: The manual generation of training data for the semantic segmentation of medical images using deep neural networks is a time-consuming and error-prone task. In this paper, we investigate the effect of different levels of realism on the training of deep neural networks for semantic segmentation of robotic instruments. An interactive virtual-reality environment was developed to generate synthetic images for robot-aided endoscopic surgery. In contrast with earlier works, we use physically based rendering for increased realism. METHODS: Using a virtual reality simulator that replicates our robotic setup, three synthetic image databases with an increasing level of realism were generated: flat, basic, and realistic (using physically based rendering). Each of those databases was used to train 20 instances of a UNet-based semantic-segmentation deep-learning model. The networks trained with only synthetic images were evaluated on the segmentation of 160 endoscopic images of a phantom. The networks were compared using the Dwass-Steel-Critchlow-Fligner nonparametric test. RESULTS: Our results show that increasing levels of realism increased the mean intersection-over-union (mIoU) of the networks on endoscopic images of a phantom ([Formula: see text]). The median mIoU values were 0.235 for the flat dataset, 0.458 for the basic, and 0.729 for the realistic. All the networks trained with synthetic images outperformed naive classifiers. Moreover, in an ablation study, we show that the mIoU with physically based rendering is superior to texture mapping ([Formula: see text]) of the instrument (0.606), the background (0.685), and the background and instruments combined (0.672). CONCLUSIONS: Using physically based rendering to generate synthetic images is an effective approach to improve the training of neural networks for the semantic segmentation of surgical instruments in endoscopic images. Our results show that this strategy can be an essential step in the broad applicability of deep neural networks in semantic segmentation tasks and help bridge the domain gap in machine learning.
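The mean intersection-over-union (mIoU) values quoted above can be computed per class from predicted and ground-truth label masks; a minimal sketch follows, with a toy two-class example rather than the paper's data.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in the ground truth."""
    ious = []
    for c in range(num_classes):
        pred_c, target_c = pred == c, target == c
        if target_c.sum() == 0:
            continue  # skip classes absent from the ground truth
        inter = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Tiny example: 2-class (background / instrument) masks.
gt   = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1]])
pred = np.array([[0, 1, 1, 1],
                 [0, 0, 1, 1]])
print(f"mIoU = {mean_iou(pred, gt, num_classes=2):.3f}")
```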


Subjects
Machine Learning, Neural Networks (Computer), Robotic Surgical Procedures/education, Simulation Training, Factual Databases, Endoscopy, Humans, Computer-Assisted Image Processing/methods, Imaging Phantoms
10.
Appl Opt ; 59(4): 991-997, 2020 Feb 01.
Article in English | MEDLINE | ID: mdl-32225236

ABSTRACT

Two types of phase-shifting algorithms were developed for simultaneous measurement of the surface and thickness variation of an optical flat. During wavelength tuning, phase-shift nonlinearity can cause a spatially nonuniform error and a spatially uniform DC drift error. A 19-sample algorithm was developed that eliminates the effect of the spatially uniform error by expanding the 17-sample algorithm with characteristic polynomial theory. The 19-sample algorithm was then altered to measure the surface shape of the optical flat by rotation of the characteristic diagram. The surface shape and thickness variation were measured with these two algorithms and a wavelength-tuning Fizeau interferometer.
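The 17- and 19-sample coefficient sets are not given in the abstract, so they are not reproduced here; for orientation, the sketch below implements only the textbook four-step phase-shifting formula from which such longer-sample algorithms generalize.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Textbook 4-step phase-shifting formula for shifts of 0, pi/2, pi, 3*pi/2:
    I_k = A + B*cos(phase + delta_k)  ->  phase = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringe data with a known phase map.
x = np.linspace(0, 1, 256)
true_phase = 2 * np.pi * 3 * x            # three fringes across the field
frames = [1.0 + 0.5 * np.cos(true_phase + d)
          for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
recovered = four_step_phase(*frames)
print("max wrapped-phase error:",
      np.max(np.abs(np.angle(np.exp(1j * (recovered - true_phase))))))
```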

11.
Int J Comput Assist Radiol Surg ; 15(1): 41-47, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31422553

ABSTRACT

OBJECTIVE: Conventional surgical assistance and skill analysis for suturing mostly focus on the motions of the tools. As the quality of the suturing is determined by needle motions relative to the tissues, having knowledge of the needle motion would be useful for surgical assistance and skill analysis. As the first step toward demonstrating the usefulness of the knowledge of the needle motion, we developed a needle detection algorithm. METHODS: Owing to the small needle size, attaching sensors to it is difficult. Therefore, we developed a real-time video-based needle detection algorithm using a region-based convolutional neural network. RESULTS: Our method successfully detected the needle with an average precision of 89.2%. The needle was robustly detected even when the needle was heavily occluded by the tools and/or the blood vessels during microvascular anastomosis. However, there were some incorrect detections, including partial detection. CONCLUSION: To the best of our knowledge, this is the first time deep neural networks have been applied to real-time needle detection. In the future, we will develop a needle pose estimation algorithm using the predicted needle location toward computer-aided surgical assistance and surgical skill analysis.
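The detector itself is not reproduced here; the sketch below only shows how an average-precision figure such as the 89.2% above is typically computed from scored detections and ground-truth boxes at a fixed IoU threshold. All boxes, scores, and image identifiers are toy values.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, ground_truths, iou_thr=0.5):
    """detections: list of (image_id, score, box); ground_truths: dict image_id -> list of boxes."""
    detections = sorted(detections, key=lambda d: -d[1])
    matched = {img: [False] * len(boxes) for img, boxes in ground_truths.items()}
    n_gt = sum(len(b) for b in ground_truths.values())
    tp, fp, precisions, recalls = 0, 0, [], []
    for img, _, box in detections:
        ious = [iou(box, g) for g in ground_truths.get(img, [])]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thr and not matched[img][best]:
            matched[img][best] = True
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / n_gt)
    # AP as the area under the precision-recall curve (simple step integration).
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

# Toy example: one image, one ground-truth needle box, two detections.
gts = {"frame0": [[10, 10, 50, 30]]}
dets = [("frame0", 0.9, [12, 11, 49, 29]), ("frame0", 0.4, [60, 60, 80, 80])]
print(f"AP@0.5 = {average_precision(dets, gts):.2f}")
```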


Subjects
Algorithms, Needles, Neural Networks (Computer), Computer-Assisted Surgery/methods, Suture Techniques/instrumentation, Humans, Operative Time
12.
Int J Med Robot ; 16(2): e2053, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31677353

ABSTRACT

BACKGROUND: With the increasing presence of surgical robots in minimally invasive surgery, there is a growing need for a versatile surgical system for deep and narrow workspaces. METHODS: We developed a versatile system for constrained workspaces called the SmartArm. It has two industrial-type robotic arms with flexible tools attached to their distal tips, with a total of nine active degrees-of-freedom. The system has a control algorithm based on constrained optimization that allows the safe generation of task constraints and intuitive teleoperation. RESULTS: The SmartArm system was evaluated in a master-slave experiment in which a medically untrained user operated the robot to suture the dura mater membrane at the skull base of a realistic head phantom. Our results show that the user could accomplish the task proficiently, with speed and accuracy comparable to manual suturing by surgeons. CONCLUSIONS: We demonstrated the integration and validation of the SmartArm system.
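The SmartArm controller is described only as constrained optimization; as a generic illustration, the sketch below poses one differential inverse-kinematics step as a constrained least-squares problem with joint-velocity limits and an extra linear task constraint. The Jacobian, limits, and constraint matrices are toy values, and scipy's SLSQP solver stands in for whatever solver the system actually uses.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_ik_step(J, v_des, qdot_max, A=None, b=None, damping=1e-3):
    """One differential-IK step solved as a constrained least-squares problem:
    minimize ||J q_dot - v_des||^2 + damping*||q_dot||^2
    subject to |q_dot| <= qdot_max and optional task constraints A q_dot <= b."""
    n = J.shape[1]

    def cost(qdot):
        e = J @ qdot - v_des
        return float(e @ e + damping * qdot @ qdot)

    constraints = []
    if A is not None:
        constraints.append({"type": "ineq", "fun": lambda qd: b - A @ qd})
    bounds = [(-qm, qm) for qm in qdot_max]
    res = minimize(cost, np.zeros(n), method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.x

# Toy 3-joint planar example (illustrative Jacobian and limits, not the SmartArm's).
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.5]])
v_des = np.array([0.05, -0.02])          # desired tool-tip velocity (m/s)
qdot_max = np.array([0.5, 0.5, 0.5])     # joint-velocity limits (rad/s)
A = np.array([[0.0, 0.0, 1.0]])          # example task constraint on joint 3
b = np.array([0.1])
print(constrained_ik_step(J, v_des, qdot_max, A, b))
```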


Subjects
Microsurgery/instrumentation, Robotic Surgical Procedures/instrumentation, Algorithms, Biomechanical Phenomena, Equipment Design, Humans, Laparoscopy/methods, Microsurgery/methods, Minimally Invasive Surgical Procedures, Imaging Phantoms, Robotic Surgical Procedures/methods, Software, Surgeons
13.
Int J Comput Assist Radiol Surg ; 14(10): 1663-1671, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31177422

ABSTRACT

PURPOSE: Annotation of surgical activities is becoming increasingly important for many recent applications such as surgical workflow analysis, surgical situation awareness, and the design of the operating room of the future, especially for training machine learning methods in order to develop intelligent assistance. Currently, annotation is mostly performed by observers with a medical background and is incredibly costly and time-consuming, creating a major bottleneck for the above-mentioned technologies. In this paper, we propose a way to eliminate, or at least limit, human intervention in the annotation process. METHODS: Meaningful information about interaction between objects is inherently available in virtual reality environments. We propose a strategy to convert this information automatically into annotations in order to produce individual surgical process models as output. VALIDATION: We implemented our approach in a peg-transfer task simulator and compared it to manual annotations. To assess the impact of our contribution, we studied both intra- and inter-observer variability. RESULTS AND CONCLUSION: On average, manual annotation took more than 12 min per 1 min of video to achieve low-level physical activity annotation, whereas automatic annotation is achieved in less than a second for the same video period. We also demonstrated that manual annotation introduced mistakes as well as intra- and inter-observer variability, which our method suppresses thanks to its high precision and reproducibility.
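How the simulator's interaction data are converted into annotations is not detailed in the abstract; as a rough sketch of that general idea, the code below pairs time-stamped start/end contact events from a hypothetical simulator into activity intervals. The Event type and event names are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float        # simulation time (s)
    kind: str       # "grasp_start", "grasp_end", "touch_start", "touch_end", ...
    obj: str        # object involved, e.g. "peg_3"

def events_to_annotations(events):
    """Pair *_start / *_end events into (activity, object, t_start, t_end) records."""
    open_intervals, annotations = {}, []
    for e in sorted(events, key=lambda e: e.t):
        activity, _, phase = e.kind.rpartition("_")
        key = (activity, e.obj)
        if phase == "start":
            open_intervals[key] = e.t
        elif phase == "end" and key in open_intervals:
            annotations.append((activity, e.obj, open_intervals.pop(key), e.t))
    return annotations

# Hypothetical event stream from a peg-transfer trial.
stream = [Event(1.2, "grasp_start", "peg_3"), Event(3.8, "grasp_end", "peg_3"),
          Event(4.0, "touch_start", "board"), Event(4.6, "touch_end", "board")]
for a in events_to_annotations(stream):
    print(a)
```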


Subjects
Machine Learning, Anatomic Models, Computer-Assisted Surgery/methods, Humans, Operating Rooms, Reproducibility of Results, Virtual Reality
14.
Micromachines (Basel) ; 10(5)2019 Apr 30.
Article in English | MEDLINE | ID: mdl-31052324

ABSTRACT

Three-dimensional (3D) microfluidic channels, which simulate human tissues such as blood vessels, are useful in surgical simulator models for evaluating surgical devices and training novice surgeons. However, animal models and current artificial models do not sufficiently mimic the anatomical and mechanical properties of human tissues. Therefore, we established a novel method to fabricate an eye model for use as a surgical simulator. For the glaucoma surgery task, the eye model consists of a sclera with a clear cornea; a 3D microchannel with a width of 200-500 µm, representing Schlemm's canal (SC); and a thin membrane with a thickness of 40-132 µm, representing the trabecular meshwork (TM). The sclera model with a clear cornea and SC was fabricated by 3D molding. Blow molding was used to fabricate the TM to cover the inner surface of the sclera part. Soft materials with controllable mechanical behaviors were used to fabricate the sclera and TM parts to mimic the mechanical properties of human tissues. Additionally, to simulate surgery with constraints similar to those in a real operation, the eye model was installed on a skull platform. In summary, we propose an integration method for fabricating an eye model that has a 3D microchannel representing the SC and a membrane representing the TM, providing a glaucoma model for training novice surgeons.

15.
Int J Med Robot ; 15(1): e1953, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30117272

ABSTRACT

BACKGROUND: Integrating simulators with robotic surgical procedures could assist in designing and testing of novel robotic control algorithms and further enhance patient-specific pre-operative planning and training for robotic surgeries. METHODS: A virtual reality simulator, developed to perform the transsphenoidal resection of pituitary gland tumours, tested the usability of robotic interfaces and control algorithms. It used position-based dynamics to allow soft-tissue deformation and resection with haptic feedback; dynamic motion scaling control was also incorporated into the simulator. RESULTS: Neurosurgeons and residents performed the surgery under constant and dynamic motion scaling conditions (CMS vs DMS). DMS increased dexterity and reduced the risk of damage to healthy brain tissue. Post-experimental questionnaires indicated that the system was well-evaluated by experts. CONCLUSION: The simulator was intuitively and realistically operated. It increased the safety and accuracy of the procedure without affecting intervention time. Future research can investigate incorporating this simulation into a real micro-surgical robotic system.
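The dynamic motion scaling (DMS) law itself is not given in the abstract; the sketch below shows one common form of the idea, in which the master-to-slave scale factor is interpolated from the operator's hand speed so that slow, precise motions are scaled down more than fast, coarse ones. All gains and thresholds are illustrative assumptions, not the simulator's parameters.

```python
import numpy as np

def dynamic_scale(master_speed, s_min=0.1, s_max=0.5, v_low=0.005, v_high=0.05):
    """Smoothly interpolate the master-to-slave motion scale between s_min
    (fine work at low hand speed) and s_max (coarse motion at high speed)."""
    alpha = np.clip((master_speed - v_low) / (v_high - v_low), 0.0, 1.0)
    return s_min + (s_max - s_min) * alpha

def slave_increment(master_delta, dt):
    """Scale one master displacement sample into a slave displacement."""
    speed = np.linalg.norm(master_delta) / dt
    return dynamic_scale(speed) * master_delta

# Example: a slow (precise) and a fast (coarse) hand motion over one 10 ms step.
for delta in (np.array([0.0001, 0.0, 0.0]), np.array([0.002, 0.0, 0.0])):
    print(delta, "->", slave_increment(delta, dt=0.01))
```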


Subjects
Brain Neoplasms/diagnostic imaging, Brain Neoplasms/surgery, Computer Simulation, Robotic Surgical Procedures/methods, Virtual Reality, Algorithms, Brain/diagnostic imaging, Equipment Design, Humans, Motion (Physics), Movement, Neurosurgery, User-Computer Interface
16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 1723-1726, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440727

ABSTRACT

Vitreoretinal surgery is one of the most difficult surgical operations, even for experienced surgeons. Thus, a master-slave eye surgical robot has been developed to assist the surgeon in safely performing vitreoretinal surgeries; however, in master-slave control, the robotic positioning accuracy depends on the surgeon's coordination skills. This paper proposes a new method of autonomous robotic positioning using the shadow of the surgical instrument. First, the microscope image is segmented into three regions (a micropipette, its shadow, and the eye ground) using a Gaussian mixture model (GMM). The tips of the micropipette and its shadow are then extracted from the contour lines of the segmented regions. The micropipette is then autonomously moved down to the simulated eye ground until the distance between the tips of the micropipette and its shadow in the microscopic image reaches a predefined threshold. To handle possible occlusions, the tip of the shadow is estimated using a Kalman filter. Experiments to evaluate the robotic positioning accuracy in the vertical direction were performed. The results show that autonomous positioning using the Kalman filter enhanced the accuracy of robotic positioning.
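The Kalman filter used to estimate the shadow tip under occlusion is only named in the abstract; the sketch below is a generic constant-velocity filter on 2-D image coordinates that predicts through frames where the detection is missing. The time step and noise parameters are illustrative assumptions.

```python
import numpy as np

class ConstantVelocityKF:
    """Constant-velocity Kalman filter for a 2-D point (e.g., a shadow tip in pixels)."""

    def __init__(self, dt=1 / 30, q=1.0, r=4.0):
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                 # state: [x, y, vx, vy]
        self.H = np.eye(2, 4)                            # we observe position only
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)
        self.x = np.zeros(4)
        self.P = 100.0 * np.eye(4)

    def step(self, measurement=None):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update only when the tip was detected (skip on occlusion).
        if measurement is not None:
            z = np.asarray(measurement, dtype=float)
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

kf = ConstantVelocityKF()
for z in ([100, 200], [102, 201], None, None, [108, 204]):  # None = occluded frame
    print(kf.step(z))
```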


Subjects
Robotic Surgical Procedures, Vitreoretinal Surgery, Humans, Robotic Surgical Procedures/instrumentation, Robotic Surgical Procedures/methods, Vitreoretinal Surgery/instrumentation, Vitreoretinal Surgery/methods, Vitreoretinal Surgery/standards
17.
J Biomech ; 77: 146-154, 2018 08 22.
Article in English | MEDLINE | ID: mdl-30031649

ABSTRACT

Concurrent use of finite element (FE) and musculoskeletal (MS) modeling techniques makes it possible to consider the interactions between prosthetic mechanics and subject dynamics after total knee replacement (TKR) surgery. However, such concurrent modeling has not yet been demonstrated with favorable prediction accuracy and systematic experimental validation. In this study, we present a methodology to develop a subject-specific FE-MS model of a human right lower extremity including the interactions among the subject-specific MS model, the knee joint model with ligament bundles, and the deformable FE prosthesis model. To evaluate its accuracy, the FE-MS model was compared with a traditional hinge-constraint MS model and experimentally verified over a gait cycle. Both models achieved good temporal agreement between the predicted muscle forces and the electromyography results, though the magnitudes differ between models. A higher prediction accuracy, quantified by the root-mean-square error (RMSE) and the squared Pearson correlation coefficient (r2), was found for the FE-MS model (RMSE = 177.2 N, r2 = 0.90) compared with the MS model (RMSE = 224.1 N, r2 = 0.81) on the total tibiofemoral contact force. The contact mechanics, including the contact area, pressure, and stress, were simulated simultaneously, and the maximum contact pressure, 22.06 MPa, occurred on the medial side of the tibial insert without exceeding the yield strength of the ultra-high-molecular-weight polyethylene, 24.79 MPa. The approach outlines an accurate knee joint biomechanics analysis and provides an effective method for applying individualized prosthesis design and verification in TKR.
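The RMSE and squared Pearson correlation coefficient used to compare the two models against the measured contact force are standard metrics; a minimal sketch with made-up force samples (not the study's data) follows.

```python
import numpy as np

def rmse(predicted, measured):
    predicted, measured = np.asarray(predicted), np.asarray(measured)
    return float(np.sqrt(np.mean((predicted - measured) ** 2)))

def r_squared(predicted, measured):
    r = np.corrcoef(predicted, measured)[0, 1]
    return float(r ** 2)

# Illustrative tibiofemoral contact-force samples (N) over part of a gait cycle.
measured  = [400.0, 820.0, 1500.0, 2100.0, 1700.0, 900.0]
predicted = [430.0, 790.0, 1620.0, 2000.0, 1850.0, 880.0]
print(f"RMSE = {rmse(predicted, measured):.1f} N, r^2 = {r_squared(predicted, measured):.2f}")
```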


Subjects
Knee Arthroplasty, Finite Element Analysis, Mechanical Phenomena, Patient-Specific Computational Modeling, Aged 80 and Over, Biomechanical Phenomena, Gait, Humans, Male, Pressure, Prosthesis Design, Mechanical Stress
18.
J Laparoendosc Adv Surg Tech A ; 28(7): 906-911, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29893626

ABSTRACT

AIMS: Our aims were to develop a training system for camera assistants (CA), and evaluate participants' performance as CA. METHODS: A questionnaire on essential requirements to be a good CA was administered to experts in pediatric endoscopic surgery. An infant-sized box trainer with several markers and lines inside was developed. Participants performed marker capturing and line-tracing tasks using a 5-mm 30° scope. A postexperimental questionnaire on the developed system was administered. The task completion time was measured. RESULTS: The 5-point evaluation scale was used for each item in the questionnaire survey of experts. The abilities to maintain a horizontal line (mean score: 4.5) and to center the target in a specified rectangle on the monitor (4.5) as well as having a full understanding of the operative procedure (4.3) were ranked as highly important. Fifty-two participants, including 5 surgical residents, were enrolled in the evaluation experiment. The completion time of capturing the markers was significantly longer in the resident group than in the nonresident group (244 versus 124 seconds, P = .04), but that of tracing the lines was not significantly different between the groups. The postexperimental questionnaire showed that the participants felt that the line-tracing tasks (3.7) were more difficult than marker-capturing tasks (2.9). CONCLUSIONS: Being proficient in manipulating a camera and having adequate knowledge of operative procedures are essential requirements to be a good CA. The ability was different between the resident and nonresident groups even in a simple task such as marker capturing.


Subjects
Clinical Competence, Graduate Medical Education/methods, Internship and Residency, Laparoscopy/education, Surgical Specialties/education, Computer-Assisted Surgery/education, Humans, Infant, Computer-Assisted Surgery/instrumentation
19.
Opt Express ; 26(8): 10870-10878, 2018 Apr 16.
Article in English | MEDLINE | ID: mdl-29716017

ABSTRACT

Wavelength-tuning interferometry has been widely used for measuring the thickness variation of optical devices used in the semiconductor industry. However, in wavelength-tuning interferometry, the nonlinearity of the phase shift causes a spatially uniform error in the calculated phase distribution. In this study, the spatially uniform error is formulated using a Taylor series. A new 9-sample phase-shifting algorithm is proposed with which the spatially uniform phase error can be eliminated. The characteristics of the 9-sample algorithm are discussed using a Fourier representation and RMS error analysis. Finally, the optical-thickness variation of a transparent plate is measured using the proposed algorithm and a wavelength-tuning Fizeau interferometer, and the error is compared with that of the 7-sample algorithm.

20.
PLoS One ; 13(5): e0196131, 2018.
Article in English | MEDLINE | ID: mdl-29758028

ABSTRACT

The present study was performed to establish a novel ocular surgery simulator for training in peeling of the inner limiting membrane (ILM). This simulator included a next-generation artificial ILM with mechanical properties similar to the natural ILM, which could be peeled underwater in the same manner as in actual surgery. An artificial eye consisting of a fundus and eyeball parts was fabricated. The artificial eye was installed in the eye surgery simulator. The fundus part was mounted in the eyeball, which consisted of an artificial sclera, retina, and ILM. To measure the thickness of the fabricated ILM on the artificial retina, we measured the step height as the thickness of the artificial ILM. Two experienced ophthalmologists then assessed the fabricated ILM by sensory evaluation. The minimum thickness of the artificial ILM was 1.9 ± 0.3 µm (n = 3). We were able to perform the peeling task with the ILM in water. Based on the sensory evaluation, an ILM with the minimum thickness and 1000 degrees of polymerization was suitable for training. We installed the eye model on an ocular surgery simulator, which allowed for the performance of a sequence of operations similar to ILM peeling. In conclusion, we developed a novel ocular surgery simulator for ILM peeling. The artificial ILM was peeled underwater in the same manner as in an actual operation.


Subjects
Computer Simulation, Epiretinal Membrane/surgery, Fundus Oculi, Artificial Membranes, Ophthalmologic Surgical Procedures, Retinal Perforations/surgery, Water/chemistry, Humans