Results 1 - 20 of 73
1.
Int J Med Robot ; 19(2): e2476, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36302228

ABSTRACT

BACKGROUND: Neonate patients have a reduced thoracic cavity, making thoracoscopic procedures even more challenging than in adult patients. METHODS: We evaluated five control strategies for robot-assisted thoracoscopic surgical looping in simulations and in experiments with a physical robotic system in a neonate surgical phantom. The strategies combine state-of-the-art constrained optimization with a novel looping force feedback term. RESULTS: All control strategies allowed users to successfully perform looping. A user study in simulation showed that the proposed strategy was superior in terms of physical demand (p < 0.05) and task duration (p < 0.05). Cumulative sum analysis of inexperienced users shows that the proposed looping force feedback can speed up learning. Results with surgeons did not show a significant difference among control strategies. CONCLUSIONS: Assistive strategies for looping show promise, and further work is needed to extend these benefits to other subtasks in robot-aided surgical suturing.


Subject(s)
Robotic Surgical Procedures, Surgeons, Adult, Newborn Infant, Humans, Robotic Surgical Procedures/methods, Computer Simulation, Sutures
2.
J Clin Med ; 11(14)2022 Jul 17.
Article in English | MEDLINE | ID: mdl-35887909

ABSTRACT

The value of kinematic data for skill assessment is being investigated. We present the first virtual reality simulator developed for liver surgery. The simulator was coded in C++ using PhysX and FleX with a novel cutting algorithm, and it used a patient data-derived model and two instruments functioning as ultrasonic shears. The simulator was evaluated by nine expert surgeons and nine surgical novices. Each participant performed a simulated metastasectomy after training, kinematic data were collected for the instrument positions, and each participant completed a survey. The expert participants had a mean age of 47 years, and 9/9 were certified in surgery; novices had a mean age of 30 years, and 0/9 were certified surgeons. Mean path length (novice 0.76 ± 0.20 m vs. expert 0.46 ± 0.16 m, p = 0.008), movements (138 ± 45 vs. 84 ± 32, p = 0.043) and time (174 ± 44 s vs. 102 ± 42 s, p = 0.004) were significantly different between the two participant groups. There was no significant difference in the number of instrument activations (107 ± 25 vs. 109 ± 53). Participants considered the simulator realistic (6.5/7, face validity) and appropriate for education (5/7, content validity), with an effective interface (6/7), consistent motion (5/7) and realistic soft-tissue behavior (5/7). This study showed that the simulator differentiates between experts and novices. Simulation may be an effective way to obtain kinematic data.
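
The kinematic metrics reported above (path length, number of movements, task time) can be derived directly from sampled instrument positions. The sketch below is a minimal illustration of one such computation; the sampling interval, the velocity threshold used to count discrete movements, and the array layout are assumptions for the example, not details from the paper.

```python
import numpy as np

def kinematic_metrics(positions, dt=0.01, vel_threshold=0.02):
    """Compute simple kinematic metrics from an (N, 3) array of tool positions [m].

    dt is the sampling interval [s]; vel_threshold [m/s] is an assumed cutoff
    used to segment the trajectory into discrete movements.
    """
    positions = np.asarray(positions, dtype=float)
    steps = np.diff(positions, axis=0)                 # per-sample displacement
    path_length = np.linalg.norm(steps, axis=1).sum()  # total path length [m]
    speed = np.linalg.norm(steps, axis=1) / dt         # instantaneous speed [m/s]
    moving = speed > vel_threshold
    # A "movement" is counted each time the tool transitions from rest to motion.
    movements = int(np.count_nonzero(moving[1:] & ~moving[:-1]) + int(moving[0]))
    task_time = len(positions) * dt                    # total duration [s]
    return path_length, movements, task_time

# Example with synthetic data: a slow random drift of the instrument tip.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(scale=1e-4, size=(1000, 3)), axis=0)
print(kinematic_metrics(traj))
```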

3.
Microsyst Nanoeng ; 8: 74, 2022.
Article in English | MEDLINE | ID: mdl-35812804

ABSTRACT

To provide quantitative feedback on surgical progress to ophthalmologists practicing inner limiting membrane (ILM) peeling, we developed an artificial eye module comprising a quartz crystal resonator (QCR) force sensor and a strain body that serves as a uniform force transmitter beneath a retinal model. Although a sufficiently large initial force must be loaded onto the QCR force sensor assembly to achieve stable contact with the strain body, the sensor's high sensitivity and wide dynamic range enable the eye module to detect the slight contact force of the forceps. A parallel-plate strain body is used to achieve uniform force sensitivity over the 4-mm-diameter ILM peeling region. Combining these two components allowed a measurable force range of 0.22 mN to 29.6 N with a sensitivity error within -11.3 to 4.2% over the ILM peeling area. Using this eye module, we measured the applied force while an untrained individual performed simulated artificial ILM peeling, and we compensated for the long-term drift of the force data using a newly developed algorithm. The compensated force data clearly captured the characteristics of several types of motion sequences observed in video recordings of the eye bottom made with an ophthalmological microscope. As a result, we succeeded in extracting feature values that can potentially be related to trainee skill level, such as the mean and standard deviation of the pushing and peeling forces, which for the untrained operator were 122.6 ± 95.2 and 20.4 ± 13.2 mN, respectively.
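
The abstract mentions a newly developed drift-compensation algorithm but does not describe it. Purely as a generic illustration of drift compensation, the sketch below removes a linear drift estimated from assumed contact-free baseline segments at the start and end of a recording; the baseline window length and the linear drift model are assumptions.

```python
import numpy as np

def remove_linear_drift(force, baseline_samples=200):
    """Subtract a linear drift fitted to assumed zero-force segments at both ends.

    force: 1-D array of force samples [mN]; baseline_samples: number of samples
    at each end treated as contact-free baseline (an assumption for illustration).
    """
    force = np.asarray(force, dtype=float)
    n = len(force)
    idx = np.r_[0:baseline_samples, n - baseline_samples:n]
    coeffs = np.polyfit(idx, force[idx], deg=1)   # linear drift model
    drift = np.polyval(coeffs, np.arange(n))
    return force - drift

# Synthetic example: a 0.5 mN/s drift superimposed on a short force peak.
t = np.linspace(0, 60, 6000)
signal = np.where((t > 20) & (t < 25), 100.0, 0.0) + 0.5 * t
print(remove_linear_drift(signal)[:5])
```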

4.
PLoS One ; 17(7): e0271171, 2022.
Article in English | MEDLINE | ID: mdl-35816482

ABSTRACT

Glaucoma, an increasingly common eye disease, can damage the optic nerve and lead to vision loss; its treatment aims to reduce intraocular pressure (IOP). In this research, we introduce a new concept of surgery simulator for minimally invasive glaucoma surgery (MIGS). The concept comprises an anterior eye model and a fluidic circulatory system. The model, made of flexible material, includes a channel representing Schlemm's canal (SC) and a membrane representing the trabecular meshwork (TM) covering the SC. The system monitors the IOP in the model with a pressure sensor. In one of the MIGS procedures, the TM is cleaved to reduce the IOP. Using the simulator, ophthalmologists can practice the procedure and measure the IOP. First, considering the characteristics of human eyes, we defined requirements and target performances for the simulator. Next, we designed and manufactured a prototype. Using the prototype, we measured the IOP change before and after cleaving the TM. Finally, we demonstrated the simulator's suitability by comparing the experimental results with the target performances. Beyond surgical training for ophthalmologists, this simulator is also expected to be used for the evaluation and development of new MIGS instruments and ophthalmic surgery robots.


Subject(s)
Glaucoma, Visual Prostheses, Glaucoma/surgery, Humans, Intraocular Pressure, Microfluidics, Trabecular Meshwork/physiology
5.
Proc IEEE Inst Electr Electron Eng ; 110(7): 893-908, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36588782

ABSTRACT

Intraocular surgery, one of the most challenging disciplines of microsurgery, requires sensory and motor skills at the limits of human physiological capability, combined with extremely demanding requirements for accuracy and steadiness. Robotics combined with advanced imaging has opened significant new directions for advancing the field of intraocular microsurgery. With safer and more efficient patient treatment as the final goal, and as in other medical applications, robotics has real potential to fundamentally change microsurgery by combining human strengths with computer- and sensor-based technology in an information-driven environment. Still in its early stages, robotic assistance for intraocular microsurgery has been accepted with caution in the operating room and successfully tested in a limited number of clinical trials. However, owing to its demonstrated capabilities, including hand tremor reduction, haptic feedback, steadiness, enhanced dexterity, micrometer-scale accuracy, and others, microsurgery robotics has become a very promising trend in advancing retinal surgery. This paper analyzes the advances in robotic retinal microsurgery, its current drawbacks and limitations, and possible new directions for extending retinal microsurgery to techniques currently beyond human capabilities or infeasible without robotics.

6.
Int J Comput Assist Radiol Surg ; 16(4): 589-595, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33723706

ABSTRACT

PURPOSE: The Johns Hopkins-Intuitive Gesture and Skill Assessment Working Set (JIGSAWS) dataset is used to develop robotic surgery skill assessment tools, but there has been no detailed analysis of this dataset. The aim of this study is to perform a learning curve analysis of the existing JIGSAWS dataset. METHODS: Five trials were performed in JIGSAWS by eight participants (four novices, two intermediates and two experts) for three exercises (suturing, knot-tying and needle passing). Global Rating Scale scores and time, path length and movements were analyzed quantitatively and qualitatively by graphical analysis. RESULTS: There were no significant differences in Global Rating Scale scores over time. Time in the suturing exercise and path length in needle passing showed significant differences; other kinematic parameters did not. Qualitative analysis shows a learning curve only for suturing, and cumulative sum analysis suggests completion of the learning curve for suturing by trial 4. CONCLUSIONS: The existing JIGSAWS dataset does not show a quantitative learning curve for Global Rating Scale scores or for most kinematic parameters, which may be due in part to the limited size of the dataset. Qualitative analysis shows a learning curve for suturing, and cumulative sum analysis suggests completion of the suturing learning curve by trial 4. An expanded dataset is needed to facilitate subset analyses.
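
Cumulative sum (CUSUM) analysis of the kind used here accumulates the deviation of each trial's performance from a target value; a flattening or descending curve is read as completion of the learning curve. A minimal sketch follows, with an assumed target time chosen purely for illustration.

```python
import numpy as np

def cusum(values, target):
    """Cumulative sum of deviations from a target value.

    A persistently rising curve indicates performance worse than target;
    flattening or descent suggests the learning curve is being completed.
    """
    values = np.asarray(values, dtype=float)
    return np.cumsum(values - target)

# Example: suturing times (s) over five trials for one participant,
# compared against an assumed proficiency target of 120 s.
times = [190, 170, 150, 125, 118]
print(cusum(times, target=120.0))   # increments shrink as the trainee improves
```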


Subject(s)
Clinical Competence, Gestures, Laparoscopy/education, Laparoscopy/methods, Learning Curve, Motion, Robotic Surgical Procedures/education, Robotic Surgical Procedures/methods, Suture Techniques, Algorithms, Biomechanical Phenomena, General Surgery/education, Humans, Sutures
7.
Int J Comput Assist Radiol Surg ; 15(12): 2017-2025, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33025366

ABSTRACT

PURPOSE: The JIGSAWS dataset is a fixed dataset of robot-assisted surgery kinematic data used to develop predictive models of skill. The purpose of this study is to analyze the relationships of self-defined skill level with global rating scale scores and kinematic data (time, path length and movements, right and left hands) from three exercises (suturing, knot-tying and needle passing) in the JIGSAWS dataset. METHODS: Global rating scale scores are reported in the JIGSAWS dataset, and kinematic data were calculated using ROVIMAS software. Self-defined skill levels (novice, intermediate, expert) are given in the dataset. Correlation coefficients (global rating scale-skill level and global rating scale-kinematic parameters) were calculated, and kinematic parameters were compared among skill levels. RESULTS: Global rating scale scores correlated with skill in the knot-tying exercise (r = 0.55, p = 0.0005). In the suturing exercise, time, path length (left) and movements (left) were significantly different (p < 0.05) between novices and experts. For knot-tying, time, path length (right and left) and movements (right) differed significantly between novices and experts. For needle passing, no kinematic parameter differed significantly between novices and experts. The only kinematic parameter that correlated with global rating scale scores was time in the knot-tying exercise. CONCLUSION: Global rating scale scores correlate weakly with skill level and kinematic parameters. The ability of kinematic parameters to differentiate among self-defined skill levels is inconsistent. Additional data are needed to enhance the dataset and facilitate subset analyses and future model development.


Subject(s)
Clinical Competence, Laparoscopy/education, Robotic Surgical Procedures, Simulation Training, Software, Biomechanical Phenomena, Gestures, Humans, Motion, Suture Techniques/education, Sutures
8.
Int J Comput Assist Radiol Surg ; 15(8): 1257-1265, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32445129

ABSTRACT

PURPOSE: The manual generation of training data for the semantic segmentation of medical images using deep neural networks is a time-consuming and error-prone task. In this paper, we investigate the effect of different levels of realism on the training of deep neural networks for the semantic segmentation of robotic instruments. An interactive virtual-reality environment was developed to generate synthetic images for robot-aided endoscopic surgery; in contrast with earlier works, we use physically based rendering for increased realism. METHODS: Using a virtual-reality simulator that replicates our robotic setup, three synthetic image databases with an increasing level of realism were generated: flat, basic, and realistic (using physically based rendering). Each of these databases was used to train 20 instances of a UNet-based semantic-segmentation deep-learning model. The networks trained only with synthetic images were evaluated on the segmentation of 160 endoscopic images of a phantom. The networks were compared using the Dwass-Steel-Critchlow-Fligner nonparametric test. RESULTS: Our results show that increasing the level of realism increased the mean intersection-over-union (mIoU) of the networks on endoscopic images of a phantom ([Formula: see text]). The median mIoU values were 0.235 for the flat dataset, 0.458 for the basic dataset, and 0.729 for the realistic dataset. All networks trained with synthetic images outperformed naive classifiers. Moreover, in an ablation study, we show that physically based rendering yields a higher mIoU than texture mapping ([Formula: see text]) for the instrument (0.606), the background (0.685), and the background and instruments combined (0.672). CONCLUSIONS: Using physically based rendering to generate synthetic images is an effective approach to improve the training of neural networks for the semantic segmentation of surgical instruments in endoscopic images. Our results show that this strategy can be an essential step toward the broad applicability of deep neural networks in semantic segmentation tasks and can help bridge the domain gap in machine learning.
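
Mean intersection-over-union (mIoU), the metric used to compare the networks, can be computed per class from predicted and ground-truth label maps. The sketch below is a generic two-class (background/instrument) example and is not tied to the UNet variant or datasets used in the paper.

```python
import numpy as np

def mean_iou(pred, gt, num_classes=2):
    """Mean IoU over classes for integer label maps of equal shape."""
    pred = np.asarray(pred).ravel()
    gt = np.asarray(gt).ravel()
    ious = []
    for c in range(num_classes):
        inter = np.count_nonzero((pred == c) & (gt == c))
        union = np.count_nonzero((pred == c) | (gt == c))
        if union > 0:                       # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: 4x4 masks, class 1 = instrument, class 0 = background.
gt = np.zeros((4, 4), dtype=int); gt[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1
print(mean_iou(pred, gt))
```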


Subject(s)
Machine Learning, Neural Networks (Computer), Robotic Surgical Procedures/education, Simulation Training, Factual Databases, Endoscopy, Humans, Computer-Assisted Image Processing/methods, Imaging Phantoms
9.
Int J Med Educ ; 11: 97-106, 2020 May 18.
Article in English | MEDLINE | ID: mdl-32425176

ABSTRACT

OBJECTIVES: To evaluate the effect of simulator fidelity on procedural skill training through a review of existing studies. METHODS: The MEDLINE, OVID and EMBASE databases were searched between January 1990 and January 2019. Search terms included "simulator fidelity and comparison" and "low fidelity" and "high fidelity" and "comparison" and "simulator". The authors' classification of low and high fidelity was used for non-laparoscopic procedures; laparoscopic simulators were classified using a proposed schema. All included studies used a randomized methodology with two or more groups and were written in English. Data from 17 eligible full papers were abstracted to a standard data sheet and critically appraised. RESULTS: Of the 17 studies, eight addressed laparoscopic skill training and nine other skill training. The studies employed a range of evaluation methodologies, including subjective and objective measures. Evaluation was conducted once in 13/17 studies and before and after training in 4/17. Didactic training-only or control groups were used in 5/17 studies, while 10/17 studies included two groups only. The relationship between skill acquisition and simulator fidelity differed with the level of training in 1/17 studies. Simulation training was followed by a clinical or live-animal evaluation in 3/17 studies. Low-fidelity training was not inferior to training with a high-fidelity simulator in 15/17 studies. CONCLUSIONS: Procedural skill after training with low-fidelity simulators was not inferior to skill after training with high-fidelity simulators in 15/17 studies. Some data suggest that the effectiveness of simulators of different fidelity depends on participants' level of training and requires further study.


Subject(s)
Clinical Competence, Medical Education, Laparoscopy/education, Simulation Training, Surgeons/education, Medical Education/methods, Medical Education/standards, Educational Measurement, Humans, Laparoscopy/methods, Laparoscopy/psychology, Reproducibility of Results, Simulation Training/methods, Simulation Training/standards
10.
Appl Opt ; 59(4): 991-997, 2020 Feb 01.
Article in English | MEDLINE | ID: mdl-32225236

ABSTRACT

Two types of phase-shifting algorithms were developed for simultaneous measurement of the surface shape and thickness variation of an optical flat. During wavelength tuning, phase-shift nonlinearity can cause a spatially nonuniform error and a spatially uniform DC drift error. A 19-sample algorithm that eliminates the effect of the spatially uniform error was developed by extending the 17-sample algorithm using characteristic polynomial theory. The 19-sample algorithm was then modified, by rotation of the characteristic diagram, to measure the surface shape of the optical flat. The surface shape and thickness variation were measured with these two algorithms and a wavelength-tuning Fizeau interferometer.
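
For context only, the sketch below shows the general form of a phase-shifting calculation using the standard four-step algorithm; it is not the 17- or 19-sample algorithm developed in the paper, whose sampling amplitudes are not given in the abstract.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four frames shifted by 90 degrees each.

    For I_k = A + B*cos(phi + k*pi/2), k = 0..3, the phase is
    atan2(I4 - I2, I1 - I3).
    """
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: recover a known phase ramp from four shifted frames.
phi = np.linspace(-np.pi, np.pi, 5, endpoint=False)
frames = [1.0 + 0.5 * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(np.allclose(four_step_phase(*frames), phi))  # True
```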

11.
Int J Med Robot ; 16(2): e2053, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31677353

ABSTRACT

BACKGROUND: With the increasing presence of surgical robots in minimally invasive surgery, there is a growing need for a versatile surgical system for deep and narrow workspaces. METHODS: We developed a versatile system for constrained workspaces called the SmartArm. It has two industrial-type robotic arms with flexible tools attached to their distal tips, for a total of nine active degrees of freedom. The system has a control algorithm based on constrained optimization that allows the safe generation of task constraints and intuitive teleoperation. RESULTS: The SmartArm system was evaluated in a master-slave experiment in which a medically untrained user operated the robot to suture the dura mater membrane at the skull base of a realistic head phantom. Our results show that the user could accomplish the task proficiently, with speed and accuracy comparable to manual suturing by surgeons. CONCLUSIONS: We demonstrated the integration and validation of the SmartArm system.
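
The SmartArm controller is described as constrained-optimization based; the sketch below illustrates the general idea on a toy problem, finding joint velocities that track a commanded tool velocity subject to joint-velocity bounds. The two-joint Jacobian, the commanded velocity, and the limits are arbitrary illustrative numbers, not SmartArm parameters.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy constrained-optimization teleoperation step: find joint velocities qdot
# that best realize a commanded tool velocity v, subject to joint-velocity
# limits. All numbers below are invented for the example.
J = np.array([[0.30, 0.10],
              [0.05, 0.25]])          # task Jacobian (m/s per rad/s)
v_cmd = np.array([0.04, -0.02])       # commanded tool velocity (m/s)
qdot_max = 0.1                        # joint-velocity limit (rad/s)

res = lsq_linear(J, v_cmd, bounds=(-qdot_max, qdot_max))
print("joint velocities:", res.x)
print("tracking error:", J @ res.x - v_cmd)
```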


Subject(s)
Microsurgery/instrumentation, Robotic Surgical Procedures/instrumentation, Algorithms, Biomechanical Phenomena, Equipment Design, Humans, Laparoscopy/methods, Microsurgery/methods, Minimally Invasive Surgical Procedures, Imaging Phantoms, Robotic Surgical Procedures/methods, Software, Surgeons
12.
Int J Comput Assist Radiol Surg ; 15(1): 41-47, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31422553

ABSTRACT

OBJECTIVE: Conventional surgical assistance and skill analysis for suturing mostly focus on the motions of the tools. Because the quality of suturing is determined by the motion of the needle relative to the tissues, knowledge of the needle motion would be useful for surgical assistance and skill analysis. As a first step toward demonstrating the usefulness of such knowledge, we developed a needle detection algorithm. METHODS: Owing to the small size of the needle, attaching sensors to it is difficult. We therefore developed a real-time, video-based needle detection algorithm using a region-based convolutional neural network. RESULTS: Our method successfully detected the needle with an average precision of 89.2%. The needle was robustly detected even when it was heavily occluded by the tools and/or the blood vessels during microvascular anastomosis. However, there were some incorrect detections, including partial detections. CONCLUSION: To the best of our knowledge, this is the first application of deep neural networks to real-time needle detection. In the future, we will develop a needle pose estimation algorithm that uses the predicted needle location, working toward computer-aided surgical assistance and surgical skill analysis.
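
Detection metrics such as the reported average precision rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. The sketch below shows only the IoU test and a fixed-threshold precision over frames, not the region-based network itself or the full precision-recall integration; the (x1, y1, x2, y2) box format and the one-needle-per-frame assumption are illustrative.

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_at_iou(predictions, ground_truth, thr=0.5):
    """Fraction of frames where the predicted needle box overlaps the
    ground-truth box with IoU >= thr (one needle per frame assumed)."""
    hits = sum(box_iou(p, g) >= thr for p, g in zip(predictions, ground_truth))
    return hits / len(ground_truth)

# Toy example with two frames.
preds = [(10, 10, 50, 30), (100, 40, 150, 60)]
gts   = [(12, 12, 52, 32), (300, 40, 350, 60)]
print(precision_at_iou(preds, gts))   # 0.5: only the first frame matches
```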


Subject(s)
Algorithms, Needles, Neural Networks (Computer), Computer-Assisted Surgery/methods, Suture Techniques/instrumentation, Humans, Operative Time
13.
Int J Comput Assist Radiol Surg ; 14(10): 1663-1671, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31177422

ABSTRACT

PURPOSE: Annotation of surgical activities is becoming increasingly important for many recent applications, such as surgical workflow analysis, surgical situation awareness, and the design of the operating room of the future, especially for training machine learning methods to develop intelligent assistance. Currently, annotation is mostly performed by observers with a medical background and is extremely costly and time-consuming, creating a major bottleneck for the above-mentioned technologies. In this paper, we propose a way to eliminate, or at least limit, human intervention in the annotation process. METHODS: Meaningful information about the interactions between objects is inherently available in virtual reality environments. We propose a strategy to automatically convert this information into annotations, producing individual surgical process models as output. VALIDATION: We implemented our approach with a peg-transfer task simulator and compared it to manual annotations. To assess the impact of our contribution, we studied both intra- and inter-observer variability. RESULTS AND CONCLUSION: On average, manual annotation of low-level physical activities took more than 12 min per 1 min of video, whereas automatic annotation was achieved in less than a second for the same video period. We also demonstrated that manual annotation introduced mistakes as well as intra- and inter-observer variability, which our method suppresses owing to its high precision and reproducibility.
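
The core idea, turning object-interaction information from the virtual environment into annotations, can be illustrated with a short sketch. The event names, the grasp/release pairing rule, and the log format below are assumptions for the example, not the simulator's actual schema.

```python
def events_to_annotations(events):
    """Turn a time-ordered list of (time_s, event, object) tuples into
    activity intervals. The 'grasp'/'release' event names and the pairing
    rule are illustrative assumptions."""
    open_intervals, annotations = {}, []
    for t, event, obj in events:
        if event == "grasp":
            open_intervals[obj] = t
        elif event == "release" and obj in open_intervals:
            annotations.append({"activity": f"transfer {obj}",
                                "start": open_intervals.pop(obj), "end": t})
    return annotations

# Invented interaction log from a peg-transfer trial.
log = [(1.2, "grasp", "peg3"), (4.8, "release", "peg3"),
       (6.0, "grasp", "peg1"), (9.5, "release", "peg1")]
for a in events_to_annotations(log):
    print(a)
```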


Subject(s)
Machine Learning, Anatomic Models, Computer-Assisted Surgery/methods, Humans, Operating Rooms, Reproducibility of Results, Virtual Reality
14.
Micromachines (Basel) ; 10(5)2019 Apr 30.
Article in English | MEDLINE | ID: mdl-31052324

ABSTRACT

Three-dimensional (3D) microfluidic channels, which simulate human tissues such as blood vessels, are useful in surgical simulator models for evaluating surgical devices and training novice surgeons. However, animal models and current artificial models do not sufficiently mimic the anatomical and mechanical properties of human tissues. We therefore established a novel method to fabricate an eye model for use as a surgical simulator. For the glaucoma surgery task, the eye model consists of a sclera with a clear cornea; a 3D microchannel with a width of 200-500 µm, representing Schlemm's canal (SC); and a thin membrane with a thickness of 40-132 µm, representing the trabecular meshwork (TM). The sclera part with a clear cornea and the SC was fabricated by 3D molding, and blow molding was used to fabricate the TM covering the inner surface of the sclera part. Soft materials with controllable mechanical behavior were used to fabricate the sclera and TM parts to mimic the mechanical properties of human tissues. Additionally, to simulate surgery under constraints similar to those of a real operation, the eye model was installed on a skull platform. In this paper, we thus propose an integrated method for fabricating an eye model with a 3D microchannel representing the SC and a membrane representing the TM, to provide a glaucoma model for training novice surgeons.

15.
Int J Med Robot ; 15(1): e1953, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30117272

ABSTRACT

BACKGROUND: Integrating simulators with robotic surgical procedures could assist in the design and testing of novel robotic control algorithms and further enhance patient-specific preoperative planning and training for robotic surgery. METHODS: A virtual reality simulator of the transsphenoidal resection of pituitary gland tumours was developed to test the usability of robotic interfaces and control algorithms. It used position-based dynamics to allow soft-tissue deformation and resection with haptic feedback, and dynamic motion scaling control was also incorporated into the simulator. RESULTS: Neurosurgeons and residents performed the surgery under constant and dynamic motion scaling conditions (CMS vs. DMS). DMS increased dexterity and reduced the risk of damage to healthy brain tissue. Post-experimental questionnaires indicated that the system was rated favorably by experts. CONCLUSION: The simulator was intuitive and realistic to operate. It increased the safety and accuracy of the procedure without affecting intervention time. Future research could investigate incorporating this simulation into a real micro-surgical robotic system.
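
Dynamic motion scaling adapts the master-to-slave scaling factor during the task. One common realization, sketched below, reduces the scale as the operator slows down so that slow hand motion maps to fine instrument motion; the mapping and the constants are assumptions, not the controller from the paper.

```python
import numpy as np

def dynamic_scale(master_speed, s_min=0.1, s_max=0.5, v_ref=0.05):
    """Velocity-dependent motion scale: slow master motion -> fine slave motion.

    master_speed [m/s]; s_min/s_max are the slave/master scale bounds and
    v_ref the speed at which the scale saturates (all illustrative values).
    """
    return s_min + (s_max - s_min) * np.clip(master_speed / v_ref, 0.0, 1.0)

def slave_increment(master_delta, dt=0.001):
    """Scale one master displacement sample into a slave displacement."""
    speed = np.linalg.norm(master_delta) / dt
    return dynamic_scale(speed) * np.asarray(master_delta)

print(slave_increment([1e-5, 0.0, 0.0]))   # slow motion -> small scale
print(slave_increment([1e-4, 0.0, 0.0]))   # faster motion -> larger scale
```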


Subject(s)
Brain Neoplasms/diagnostic imaging, Brain Neoplasms/surgery, Computer Simulation, Robotic Surgical Procedures/methods, Virtual Reality, Algorithms, Brain/diagnostic imaging, Equipment Design, Humans, Motion, Movement, Neurosurgery, User-Computer Interface
16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 1723-1726, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440727

ABSTRACT

Vitreoretinal surgery is one of the most difficult surgical operations, even for experienced surgeons. A master-slave eye surgical robot has therefore been developed to assist the surgeon in safely performing vitreoretinal surgery; however, under master-slave control, the robotic positioning accuracy depends on the surgeon's coordination skills. This paper proposes a new method of autonomous robotic positioning using the shadow of the surgical instrument. First, the microscope image is segmented into three regions (the micropipette, its shadow, and the eye ground) using a Gaussian mixture model (GMM). The tips of the micropipette and its shadow are then extracted from the contour lines of the segmented regions. The micropipette is then autonomously moved down toward the simulated eye ground until the distance between the tips of the micropipette and its shadow in the microscope image reaches a predefined threshold. To handle possible occlusions, the tip of the shadow is estimated using a Kalman filter. Experiments were performed to evaluate the robotic positioning accuracy in the vertical direction. The results show that autonomous positioning using the Kalman filter enhanced the accuracy of robotic positioning.
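
Occlusion handling with a Kalman filter, as mentioned above, can be illustrated with a constant-velocity filter tracking a 2-D image point such as the shadow tip. The state model, noise covariances, and frame rate below are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for a 2-D image point (e.g. the
# instrument-shadow tip). State = [x, y, vx, vy] in pixels and pixels/frame-time.
dt = 1.0 / 30.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 1e-2 * np.eye(4)     # process noise (assumed)
R = 4.0 * np.eye(2)      # measurement noise (assumed, pixels^2)

x = np.zeros(4)          # initial state
P = 100.0 * np.eye(4)    # initial uncertainty

def kf_step(x, P, z=None):
    """One predict/update cycle; pass z=None when the tip is occluded."""
    x, P = F @ x, F @ P @ F.T + Q                       # predict
    if z is not None:                                   # update if measured
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [(320, 240), (322, 242), None, (326, 246)]:    # None = occluded frame
    x, P = kf_step(x, P, z)
    print(np.round(x[:2], 1))
```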


Subject(s)
Robotic Surgical Procedures, Vitreoretinal Surgery, Humans, Robotic Surgical Procedures/instrumentation, Robotic Surgical Procedures/methods, Vitreoretinal Surgery/instrumentation, Vitreoretinal Surgery/methods, Vitreoretinal Surgery/standards
17.
J Biomech ; 77: 146-154, 2018 Aug 22.
Article in English | MEDLINE | ID: mdl-30031649

ABSTRACT

Concurrent use of finite element (FE) and musculoskeletal (MS) modeling techniques makes it possible to consider the interactions between prosthetic mechanics and subject dynamics after total knee replacement (TKR) surgery. However, such concurrent modeling has not yet been demonstrated with favorable prediction accuracy and systematic experimental validation. In this study, we present a methodology to develop a subject-specific FE-MS model of a human right lower extremity that includes the interactions among the subject-specific MS model, the knee joint model with ligament bundles, and the deformable FE prosthesis model. To evaluate its accuracy, the FE-MS model was compared with a traditional hinge-constraint MS model and experimentally verified over a gait cycle. Both models achieved good temporal agreement between the predicted muscle forces and the electromyography results, although the magnitudes differed between models. Higher prediction accuracy for the total tibiofemoral contact force, quantified by the root-mean-square error (RMSE) and the squared Pearson correlation coefficient (r2), was found for the FE-MS model (RMSE = 177.2 N, r2 = 0.90) than for the MS model (RMSE = 224.1 N, r2 = 0.81). The contact mechanics, including the contact area, pressure, and stress, were simulated simultaneously, and the maximum contact pressure of 22.06 MPa occurred on the medial side of the tibial insert without exceeding the yield strength of the ultra-high-molecular-weight polyethylene, 24.79 MPa. The approach outlines an accurate knee joint biomechanics analysis and provides an effective method for individualized prosthesis design and verification in TKR.
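
The two agreement metrics reported here, RMSE and the squared Pearson correlation coefficient, are typically computed from paired predicted and measured force curves as sketched below; the synthetic gait-cycle data are invented for the example.

```python
import numpy as np

def agreement_metrics(predicted, measured):
    """Root-mean-square error and squared Pearson correlation (r^2)."""
    predicted = np.asarray(predicted, float)
    measured = np.asarray(measured, float)
    rmse = float(np.sqrt(np.mean((predicted - measured) ** 2)))
    r = np.corrcoef(predicted, measured)[0, 1]
    return rmse, float(r ** 2)

# Synthetic stand-in for a tibiofemoral contact-force curve over a gait cycle.
t = np.linspace(0, 1, 101)
measured = 1500 * np.sin(np.pi * t) ** 2
predicted = measured + np.random.default_rng(1).normal(0, 150, t.size)
print(agreement_metrics(predicted, measured))
```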


Subject(s)
Knee Replacement Arthroplasty, Finite Element Analysis, Mechanical Phenomena, Patient-Specific Modeling, Aged (80 and over), Biomechanical Phenomena, Gait, Humans, Male, Pressure, Prosthesis Design, Mechanical Stress
18.
J Laparoendosc Adv Surg Tech A ; 28(7): 906-911, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29893626

ABSTRACT

AIMS: Our aims were to develop a training system for camera assistants (CAs) and to evaluate participants' performance as CAs. METHODS: A questionnaire on the essential requirements for a good CA was administered to experts in pediatric endoscopic surgery. An infant-sized box trainer with several markers and lines inside was developed, and participants performed marker-capturing and line-tracing tasks using a 5-mm 30° scope. A post-experimental questionnaire on the developed system was administered, and task completion time was measured. RESULTS: A 5-point rating scale was used for each item in the expert questionnaire. The abilities to maintain a horizontal line (mean score: 4.5) and to center the target in a specified rectangle on the monitor (4.5), as well as a full understanding of the operative procedure (4.3), were ranked as highly important. Fifty-two participants, including 5 surgical residents, were enrolled in the evaluation experiment. The completion time for capturing the markers was significantly longer in the resident group than in the nonresident group (244 versus 124 seconds, P = .04), but the time for tracing the lines did not differ significantly between the groups. The post-experimental questionnaire showed that participants felt the line-tracing tasks (3.7) were more difficult than the marker-capturing tasks (2.9). CONCLUSIONS: Being proficient in manipulating a camera and having adequate knowledge of operative procedures are essential requirements for a good CA. Performance differed between the resident and nonresident groups even in a task as simple as marker capturing.
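
Group comparisons of completion times such as the one reported (244 versus 124 seconds, P = .04) are commonly made with a nonparametric test; a sketch using a Mann-Whitney U test on invented sample data is shown below. The abstract does not state which test the authors used.

```python
from scipy.stats import mannwhitneyu

# Illustrative comparison of task-completion times (seconds) between two
# groups; the numbers below are invented for the example, not study data.
resident_times = [250, 230, 270, 240, 210]
nonresident_times = [130, 110, 150, 120, 140, 125, 115]

stat, p = mannwhitneyu(resident_times, nonresident_times, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
```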


Subject(s)
Clinical Competence, Graduate Medical Education/methods, Internship and Residency, Laparoscopy/education, Surgical Specialties/education, Computer-Assisted Surgery/education, Humans, Infant, Computer-Assisted Surgery/instrumentation
19.
Opt Express ; 26(8): 10870-10878, 2018 Apr 16.
Article in English | MEDLINE | ID: mdl-29716017

ABSTRACT

Wavelength-tuning interferometry has been widely used for measuring the thickness variation of optical devices used in the semiconductor industry. However, in wavelength-tuning interferometry, the nonlinearity of the phase shift causes a spatially uniform error in the calculated phase distribution. In this study, the spatially uniform error is formulated using a Taylor series. A new 9-sample phase-shifting algorithm is proposed, with which the spatially uniform phase error can be eliminated. The characteristics of the 9-sample algorithm are discussed using Fourier representation and RMS error analysis. Finally, the optical-thickness variation of a transparent plate is measured using the proposed algorithm and a wavelength-tuning Fizeau interferometer, and the error is compared with that of a 7-sample algorithm.

20.
Int J Comput Assist Radiol Surg ; 13(9): 1419-1428, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29752636

ABSTRACT

PURPOSE: Surgical processes are generally studied only by identifying differences between populations, such as participants or levels of expertise, but similarities within a population are also important for understanding the process. We therefore propose to study both aspects. METHODS: In this article, we show how similarities in process workflow within a population can be identified as sequential surgical signatures. For this purpose, we propose a pattern mining approach to identify these signatures. VALIDATION: We validated our method with a data set composed of seventeen micro-surgical suturing tasks performed by four participants with two levels of expertise. RESULTS: We identified sequential surgical signatures specific to each participant, as well as signatures shared between participants with and without the same level of expertise. These signatures perfectly identify the level of expertise of the participant who performed a new micro-surgical suturing task. Identifying which participant performed the task is harder: the method determines this correctly in only 64% of cases. CONCLUSION: We introduce, for the first time, the concept of the sequential surgical signature. This concept has the potential to further the understanding of surgical procedures and to provide useful knowledge for defining future computer-assisted surgery (CAS) systems.
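
As a toy stand-in for the signature-mining idea (not the pattern-mining algorithm used in the paper), the sketch below counts contiguous action subsequences that recur across several symbolic activity sequences; the activity labels and support threshold are invented.

```python
from collections import Counter

def frequent_subsequences(sequences, length=3, min_support=2):
    """Contiguous action subsequences of a given length occurring in at least
    `min_support` sequences -- a toy stand-in for surgical signature mining."""
    counts = Counter()
    for seq in sequences:
        seen = {tuple(seq[i:i + length]) for i in range(len(seq) - length + 1)}
        counts.update(seen)                 # count each pattern once per sequence
    return {p: c for p, c in counts.items() if c >= min_support}

# Symbolic activity streams from three (invented) suturing trials.
trials = [
    ["grasp", "insert", "pull", "loop", "tighten"],
    ["grasp", "insert", "pull", "reposition", "loop", "tighten"],
    ["insert", "pull", "loop", "tighten"],
]
print(frequent_subsequences(trials))
```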


Subject(s)
Automated Pattern Recognition, Computer-Assisted Surgery, Suture Techniques, Workflow, Humans