Results 1 - 20 of 40
1.
Article in English | MEDLINE | ID: mdl-38748053

ABSTRACT

PURPOSE: In this paper, we present a novel approach to the automatic evaluation of open surgery skills using depth cameras. This work is intended to show that depth cameras achieve similar results to RGB cameras, which are the common method in the automatic evaluation of open surgery skills. Moreover, depth cameras offer advantages such as robustness to lighting variations and camera positioning, simplified data compression, and enhanced privacy, making them a promising alternative to RGB cameras. METHODS: Expert and novice surgeons completed tasks on two open suturing simulators. We focused on hand and tool detection and action segmentation in suturing procedures. YOLOv8 was used for tool detection in RGB and depth videos. Furthermore, UVAST and MSTCN++ were used for action segmentation. Our study includes the collection and annotation of a dataset recorded with an Azure Kinect. RESULTS: We demonstrated that using depth cameras for object detection and action segmentation achieves results comparable to RGB cameras. Furthermore, we analyzed 3D hand path length, revealing significant differences between expert and novice surgeons and emphasizing the potential of depth cameras in capturing surgical skills. We also investigated the influence of camera angles on measurement accuracy, highlighting the advantages of 3D cameras in providing a more accurate representation of hand movements. CONCLUSION: Our research contributes to advancing the field of surgical skill assessment by leveraging depth cameras for more reliable and privacy-preserving evaluations. The findings suggest that depth cameras can be valuable in assessing surgical skills and provide a foundation for future research in this area.
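The 3D hand path length metric used in this study can be illustrated with a minimal sketch (the function name and toy trajectory below are ours, not from the paper):

```python
import math

def path_length_3d(points):
    """Total distance traveled along a sequence of (x, y, z) positions,
    e.g. per-frame hand positions tracked by a depth camera."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# Toy trajectory: two straight segments of lengths 5 and 12.
traj = [(0, 0, 0), (3, 4, 0), (3, 4, 12)]
print(path_length_3d(traj))  # 17.0
```

Shorter path lengths typically indicate more economical hand motion; comparing this value across cohorts is one way expert/novice differences of the kind reported above can be quantified.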

2.
Acad Med ; 99(4S Suppl 1): S89-S94, 2024 04 01.
Article in English | MEDLINE | ID: mdl-38207081

ABSTRACT

PURPOSE: Successful implementation of precision education systems requires widespread adoption and seamless integration of new technologies with unique data streams that facilitate real-time performance feedback. This paper explores the use of sensor technology to quantify hands-on clinical skills, with the goal of shortening the learning curve through objective and actionable feedback. METHOD: A sensor-enabled clinical breast examination (CBE) simulator was used to capture force and video data from practicing clinicians (N = 152). Force-by-time markers from the sensor data and a machine learning algorithm were used to parse physicians' CBE performance into periods of search and palpation; these segments were then used to investigate distinguishing characteristics of successful versus unsuccessful attempts to identify masses in CBEs. RESULTS: Mastery performance from successful physicians showed stable levels of speed and force across the entire CBE and a 15% increase in force in palpation mode compared with search mode. Unsuccessful physicians failed to search with sufficient force to detect deep masses (F[5,146] = 4.24, P = .001). While similar proportions of male and female physicians reached the highest performance level, males used more force, as indicated by higher palpation-to-search force ratios (t[63] = 2.52, P = .014). CONCLUSIONS: Sensor technology can serve as a useful pathway to assess hands-on clinical skills and provide data-driven feedback. When using a sensor-enabled simulator, the authors found specific haptic approaches that were associated with successful CBE outcomes. Given this study's findings, continued exploration of sensor technology in support of precision education for hands-on clinical skills is warranted.
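The search/palpation parsing was done with a machine learning algorithm; as a simplified, hypothetical stand-in, a force threshold can split the signal into the two modes and yield the palpation-to-search force ratio discussed above (threshold and data are invented for illustration):

```python
def split_modes(forces, threshold):
    """Label each force sample as 'palpation' (>= threshold) or 'search'."""
    return ['palpation' if f >= threshold else 'search' for f in forces]

def mode_force_ratio(forces, labels):
    """Mean palpation force divided by mean search force."""
    palp = [f for f, m in zip(forces, labels) if m == 'palpation']
    srch = [f for f, m in zip(forces, labels) if m == 'search']
    return (sum(palp) / len(palp)) / (sum(srch) / len(srch))

forces = [2.0, 2.2, 5.0, 5.5, 2.1, 6.0]   # newtons, toy data
labels = split_modes(forces, threshold=4.0)
print(mode_force_ratio(forces, labels))    # ~2.62
```

A ratio above 1 means the clinician pressed harder while palpating a suspected mass than while searching, which is the pattern the study associates with successful exams.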


Subjects
Palpation, Physicians, Humans, Male, Female, Mass Screening, Hand
3.
Int J Comput Assist Radiol Surg ; 19(1): 83-86, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37278834

ABSTRACT

PURPOSE: This work uses deep learning algorithms to provide automated feedback on the suture with intracorporeal knot exercise in the Fundamentals of Laparoscopic Surgery simulator. Different metrics were designed to provide informative feedback to the user on how to complete the task more efficiently. Automating the feedback will allow students to practice at any time without the supervision of experts. METHODS: Five residents and five senior surgeons participated in the study. Object detection, image classification, and semantic segmentation deep learning algorithms were used to collect statistics on the practitioner's performance. Three task-specific metrics were defined. The metrics refer to the way the practitioner holds the needle before inserting it into the Penrose drain, and the amount of movement of the Penrose drain during the needle's insertion. RESULTS: Good agreement between the human labeling and the different algorithms' performance and metric values was achieved. The difference between the scores of the senior surgeons and the surgical residents was statistically significant for one of the metrics. CONCLUSION: We developed a system that provides performance metrics for the intracorporeal suture exercise. These metrics can help surgical residents practice independently and receive informative feedback on how they insert the needle into the Penrose drain.


Subjects
Laparoscopy, Suture Techniques, Humans, Suture Techniques/education, Clinical Competence, Laparoscopy/methods, Algorithms, Sutures
4.
J Biomed Inform ; 144: 104446, 2023 08.
Article in English | MEDLINE | ID: mdl-37467836

ABSTRACT

OBJECTIVE: This study aims to explore speech as an alternative modality for human activity recognition (HAR) in medical settings. While current HAR technologies rely on video and sensory modalities, they are often unsuitable for the medical environment due to interference from medical personnel, privacy concerns, and environmental limitations. Therefore, we propose an end-to-end, fully automatic objective checklist validation framework that utilizes medical personnel's uttered speech to recognize and document the executed actions in a checklist format. METHODS: Our framework records, processes, and analyzes medical personnel's speech to extract valuable information about performed actions. This information is then used to automatically fill the corresponding rubrics in the checklist. RESULTS: Our approach to activity recognition outperformed the online expert examiner, achieving an F1 score of 0.869 on verbal tasks and an ICC score of 0.822 with an offline examiner. Furthermore, the framework successfully identified communication failures and medical errors made by physicians and nurses. CONCLUSION: Implementing a speech-based framework in medical settings, such as the emergency room and operating room, holds promise for improving care delivery and enabling the development of automated assistive technologies in various medical domains. By leveraging speech as a modality for HAR, we can overcome the limitations of existing technologies and enhance workflow efficiency and patient safety.


Subjects
Physicians, Speech, Humans, Communication, Checklist, Patient Safety
5.
Surg Endosc ; 37(8): 6476-6482, 2023 08.
Article in English | MEDLINE | ID: mdl-37253868

ABSTRACT

BACKGROUND: The Fundamentals of Laparoscopic Surgery (FLS) box trainer is a well-accepted method for training and evaluating laparoscopic skills. It requires an observer to measure and evaluate the trainee's performance. Measuring performance in the Peg Transfer task includes time and a penalty for dropping pegs. This study aimed to assess whether computer vision (CV) may be used to automatically measure performance in the FLS box trainer. METHODS: Four groups of metrics were defined and measured automatically using CV. Validity was assessed by dividing participants into 3 groups of experience levels. Twenty-seven participants were recorded performing the Peg Transfer task 2-4 times, amounting to 72 videos. Frames were sampled from the videos and labeled to create an image dataset. Using these images, we trained a deep neural network (YOLOv4) to detect the different objects in the video. We developed an evaluation system that tracks the transfer of the triangles and produces a feedback report with the metrics as the main criteria. The metric groups were Time, Grasper Movement Speed, Path Efficiency, and Grasper Coordination. Performance was compared based on each participant's last video (3 participants were excluded due to technical issues). RESULTS: The ANOVA tests show that for all metrics except one, the variance in performance can be explained by the experience level of participants. Senior surgeons and residents significantly outperform students and interns on almost every metric. Senior surgeons usually outperform residents, but the gap is not always significant. CONCLUSION: The statistical analysis shows that the metrics can differentiate between experts and novices performing the task in several respects. Thus, they may provide a more detailed performance analysis than is currently used. Moreover, these metrics are calculated automatically and rely solely on the video camera of the FLS trainer. As a result, they allow independent training and assessment.
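Of the metric groups listed, Path Efficiency is straightforward to sketch from tracked grasper positions (this is a common definition of the metric, not necessarily the authors' exact formula):

```python
import math

def path_efficiency(points):
    """Straight-line distance between endpoints divided by actual path length.
    1.0 means a perfectly direct movement; lower values mean wasted motion."""
    path = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return direct / path if path else 1.0

# A grasper-tip detour: moving out and back adds path without progress.
track = [(0, 0), (4, 3), (8, 0)]
print(path_efficiency(track))  # 8 / (5 + 5) = 0.8
```

Applied per transfer, such a score lets the automated report flag indirect, inefficient movements without any human observer.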


Subjects
Laparoscopy, User-Computer Interface, Humans, Computer Simulation, Clinical Competence, Computers, Laparoscopy/methods, Task Performance and Analysis
6.
Int J Comput Assist Radiol Surg ; 18(7): 1279-1285, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37253925

ABSTRACT

PURPOSE: This research aims to facilitate the use of state-of-the-art computer vision algorithms for the automated training of surgeons and the analysis of surgical footage. By estimating 2D hand poses, we model the movement of the practitioner's hands, and their interaction with surgical instruments, to study their potential benefit for surgical training. METHODS: We leverage pre-trained models on a publicly available hands dataset to create our own in-house dataset of 100 open surgery simulation videos with 2D hand poses. We also assess the ability of pose estimations to segment surgical videos into gestures and tool-usage segments and compare them to kinematic sensors and I3D features. Furthermore, we introduce 6 novel surgical dexterity proxies stemming from domain experts' training advice, all of which our framework can automatically detect given raw video footage. RESULTS: State-of-the-art gesture segmentation accuracy of 88.35% on the open surgery simulation dataset is achieved with the fusion of 2D poses and I3D features from multiple angles. The introduced surgical skill proxies presented significant differences for novices compared to experts and produced actionable feedback for improvement. CONCLUSION: This research demonstrates the benefit of pose estimations for open surgery by analyzing their effectiveness in gesture segmentation and skill assessment. Gesture segmentation using pose estimations achieved comparable results to physical sensors while being remote and markerless. Surgical dexterity proxies that rely on pose estimation proved they can be used to work toward automated training feedback. We hope our findings encourage additional collaboration on novel skill proxies to make surgical training more efficient.


Subjects
Algorithms, Hand, Humans, Feedback, Hand/surgery, Computer Simulation, Movement, Gestures
7.
J Surg Res ; 283: 500-506, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36436286

ABSTRACT

INTRODUCTION: Video-based review of surgical procedures has proven useful in training by enabling efficiency in the qualitative assessment of surgical skill and intraoperative decision-making. Current video segmentation protocols focus largely on procedural steps. Although some operations are more complex than others, many of the steps in any given procedure involve an intricate choreography of basic maneuvers such as suturing, knot tying, and cutting. The use of these maneuvers at certain procedural steps can convey information that aids in the assessment of the complexity of the procedure, surgical preference, and skill. Our study aims to develop and evaluate an algorithm to identify these maneuvers. METHODS: A standard deep learning architecture was used to differentiate between suture throws, knot ties, and suture cutting on a data set comprising videos from practicing clinicians (N = 52) who participated in a simulated enterotomy repair. The perceived added value over traditional artificial intelligence segmentation was explored by qualitatively examining the utility of identifying maneuvers in a subset of steps for an open colon resection. RESULTS: An accuracy of 84% was reached in differentiating maneuvers. The precision in detecting the basic maneuvers was 87.9%, 60%, and 90.9% for suture throws, knot ties, and suture cutting, respectively. The qualitative concept mapping confirmed realistic scenarios that could benefit from basic maneuver identification. CONCLUSIONS: Basic maneuvers can indicate error management activity or safety measures and allow for the assessment of skill. Our deep learning algorithm identified basic maneuvers with reasonable accuracy. Such models can aid in artificial intelligence-assisted video review by providing additional information that can complement traditional video segmentation protocols.
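Per-maneuver precision values like those reported (87.9%, 60%, 90.9%) can be computed from any classifier's outputs with a small helper (the labels and predictions below are toy data, not the study's):

```python
from collections import Counter

def per_class_precision(y_true, y_pred):
    """Precision for each predicted class: TP / (TP + FP)."""
    tp, predicted = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        predicted[p] += 1
        if t == p:
            tp[p] += 1
    return {c: tp[c] / predicted[c] for c in predicted}

true = ['throw', 'throw', 'tie', 'cut', 'tie']
pred = ['throw', 'tie',   'tie', 'cut', 'tie']
print(per_class_precision(true, pred))  # throw 1.0, tie 2/3, cut 1.0
```

Low precision on one maneuver (as with knot ties here) points a reviewer at the class most likely to be mislabeled in automated video review.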


Subjects
Artificial Intelligence, Clinical Competence, Algorithms, Neurosurgical Procedures, Colon, Suture Techniques/education
8.
Int J Comput Assist Radiol Surg ; 17(8): 1497-1505, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35759176

ABSTRACT

PURPOSE: The goal of this work is to use multi-camera video to classify open surgery tools and identify which tool is held in each hand. Multi-camera systems help prevent occlusions in open surgery video data. Furthermore, combining multiple views, such as a top-view camera covering the full operative field and a close-up camera focusing on hand motion and anatomy, may provide a more comprehensive view of the surgical workflow. However, multi-camera data fusion poses a new challenge: a tool may be visible in one camera and not the other. Thus, we defined the global ground truth as the tools being used regardless of their visibility. Therefore, tools that are out of the image should be remembered for extensive periods of time, while the system responds quickly to changes visible in the video. METHODS: Participants (n = 48) performed a simulated open bowel repair. A top-view and a close-up camera were used. YOLOv5 was used for tool and hand detection. A high-frequency LSTM with a 1-second window at 30 frames per second (fps) and a low-frequency LSTM with a 40-second window at 3 fps were used for spatial, temporal, and multi-camera integration. RESULTS: The accuracy and F1 of the six systems were: top-view (0.88/0.88), close-up (0.81/0.83), both cameras (0.9/0.9), high-fps LSTM (0.92/0.93), low-fps LSTM (0.9/0.91), and our final architecture, the multi-camera classifier (0.93/0.94). CONCLUSION: Since each camera in a multi-camera system may have only a partial view of the procedure, we defined a 'global ground truth.' Defining this at the data labeling phase emphasized the requirement at the learning phase, eliminating the need for heuristic decisions. By combining a high-fps and a low-fps system from the multiple camera array, we improved classification of the global ground truth.


Assuntos
Mãos , Mãos/cirurgia , Humanos , Movimento (Física)
9.
Int J Comput Assist Radiol Surg ; 17(6): 965-979, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35419721

ABSTRACT

PURPOSE: The use of motion sensors is emerging as a means of measuring surgical performance. Motion sensors are typically used for calculating performance metrics and assessing skill. The aim of this study was to identify surgical gestures and tools used during an open surgery suturing simulation based on motion sensor data. METHODS: Twenty-five participants performed a suturing task on a variable tissue simulator. Electromagnetic motion sensors were used to measure their performance. The current study compares GRU and LSTM networks, which are known to perform well on other kinematic datasets, as well as MS-TCN++, which was developed for video data and was adapted in this work for motion sensor data. Finally, we extended all architectures for multi-tasking. RESULTS: In the gesture recognition task, MS-TCN++ has the highest performance, with accuracy of [Formula: see text], F1-Macro of [Formula: see text], edit distance of [Formula: see text], and F1@10 of [Formula: see text]. In the tool usage recognition task for the right hand, MS-TCN++ performs best on most metrics, with an accuracy score of [Formula: see text], F1-Macro of [Formula: see text], F1@10 of [Formula: see text], and F1@25 of [Formula: see text]. The multi-task GRU performs best on all metrics in the left-hand case, with an accuracy of [Formula: see text], edit distance of [Formula: see text], F1-Macro of [Formula: see text], F1@10 of [Formula: see text], and F1@25 of [Formula: see text]. CONCLUSION: In this study, using motion sensor data, we automatically identified the surgical gestures and the tools used during an open surgery suturing simulation. Our methods may be used for computing more detailed performance metrics and assisting in automatic workflow analysis. MS-TCN++ performed better in gesture recognition as well as right-hand tool recognition, while the multi-task GRU provided better results in the left-hand case. It should be noted that the multi-task GRU network is significantly smaller and achieved competitive results on the remaining tasks as well.
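The segmental edit distance used to score gesture recognition is commonly the Levenshtein distance between the ground-truth and predicted segment sequences, normalized into a score; a sketch under that common definition (the toy label sequences are ours):

```python
def segments(frame_labels):
    """Collapse a frame-wise label sequence into its ordered run of segments."""
    segs = []
    for lab in frame_labels:
        if not segs or segs[-1] != lab:
            segs.append(lab)
    return segs

def edit_score(true_frames, pred_frames):
    """Segmental edit score: Levenshtein distance on segment sequences,
    normalized by the longer sequence and expressed as a percentage."""
    a, b = segments(true_frames), segments(pred_frames)
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return 100 * (1 - d[-1][-1] / max(len(a), len(b)))

true = ['G1'] * 5 + ['G2'] * 5 + ['G1'] * 5   # segments: G1 G2 G1
pred = ['G1'] * 4 + ['G2'] * 6 + ['G3'] * 5   # segments: G1 G2 G3
print(edit_score(true, pred))                  # one substitution: ~66.7
```

Because it ignores exact frame boundaries, this score penalizes over-segmentation and mislabeled gestures rather than small timing offsets.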


Assuntos
Gestos , Suturas , Fenômenos Biomecânicos , Humanos , Movimento (Física)
10.
Int J Comput Assist Radiol Surg ; 17(3): 437-448, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35103921

ABSTRACT

PURPOSE: The goal of this study was to develop a new, reliable open surgery suturing simulation system for training medical students in situations where resources are limited or in the home setting. Specifically, we developed an algorithm for localizing tools and hands and identifying the interactions between them based on simple webcam video data, calculating motion metrics for the assessment of surgical skill. METHODS: Twenty-five participants performed multiple suturing tasks using our simulator. The YOLO network was modified into a multi-task network for tool localization and tool-hand interaction detection. This was accomplished by splitting the YOLO detection heads so that they supported both tasks with minimal addition to computational run-time. Furthermore, based on the outcome of the system, motion metrics were calculated. These metrics included traditional metrics, such as time and path length, as well as new metrics assessing the technique participants use for holding the tools. RESULTS: The dual-task network's performance was similar to that of two separate networks, while its computational load was only slightly higher than that of a single network. In addition, the motion metrics showed significant differences between experts and novices. CONCLUSION: While video capture is an essential part of minimally invasive surgery, it is not an integral component of open surgery. Thus, new algorithms focusing on the unique challenges presented by open surgery videos are required. In this study, a dual-task network was developed to solve both a localization task and a hand-tool interaction task. The dual network may easily be expanded to a multi-task network, which may be useful for images with multiple layers and for evaluating the interaction between these different layers.


Assuntos
Competência Clínica , Laparoscopia , Humanos , Laparoscopia/métodos , Técnicas de Sutura , Suturas , Análise e Desempenho de Tarefas
11.
Surgery ; 167(4): 699-703, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31685234

ABSTRACT

BACKGROUND: Vessel ligation with a knot is one of the most fundamental tasks surgeons must master. We developed a simulator designed to enable novices to acquire and refine gentle knot-tying capabilities. METHODS: A bench-top knot-tying simulator with computer-acquired assessment was tested on expert surgeons and surgery residents at an academic medical center during the years 2016 to 2018. Each participant tied a total of 8 knots in different settings (superficial versus deep) and with different techniques (1-handed versus 2-handed). The simulator measured vertical forces and task completion time. RESULTS: Fifteen experienced surgeons and 30 surgery residents were recruited. The expert group exerted considerably less total force during placement of the knots than the novice residents (3.8 ± 2.0 vs 9.2 ± 6.1 N, respectively; P = .0005), and the peak force exerted upward was less in the expert group (1.31 ± 0.6 vs 1.75 ± 0.84 N; P = .02). The experts also completed the task in less time (10.9 ± 3.4 vs 18.3 ± 7.2 seconds; P = .001). CONCLUSION: The simulator can offer residency programs a low-cost, bench-top platform to train and objectively assess the knot-tying capabilities of surgery residents.
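Force measures like those the simulator reports can be summarized from sampled vertical force data; a minimal sketch (the sampling rate and readings below are invented, not the study's):

```python
def force_summary(samples, dt):
    """Peak force (N) and force-time integral (N·s) from uniformly
    sampled force readings taken every dt seconds."""
    peak = max(samples)
    impulse = sum(samples) * dt   # simple rectangle-rule integral
    return peak, impulse

samples = [0.0, 0.5, 1.2, 0.9, 0.3]   # newtons, sampled at 100 Hz (toy data)
peak, impulse = force_summary(samples, dt=0.01)
print(peak, impulse)   # peak 1.2 N, impulse ≈ 0.029 N·s
```

Lower peak and total force over a knot, as in the expert group above, indicate gentler tissue handling.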


Assuntos
Cirurgia Geral/educação , Internato e Residência , Ligadura/educação , Treinamento por Simulação , Cirurgiões , Competência Clínica , Humanos , Fatores de Tempo
12.
Am J Surg ; 220(1): 100-104, 2020 07.
Article in English | MEDLINE | ID: mdl-31806168

ABSTRACT

BACKGROUND: Technological advances have led to the development of state-of-the-art simulators for training surgeons; few train basic surgical skills, such as vessel ligation. METHODS: A novel low-cost bench-top simulator with auditory and visual feedback that measures forces exerted during knot tying was tested on 14 surgical residents. Pre- and post-training values for total force exerted during knot tying, maximum pulling and pushing forces, and completion time were compared. RESULTS: Mean time to reach proficiency during training was 11:26 min, with a mean of 15 consecutive knots. Mean total applied force for each knot was 35% lower post-training than pre-training (7.5 vs. 11.54 N, respectively, p = 0.039). Mean upward peak force was significantly lower after training than before (1.29 vs. 2.12 N, respectively, p = 0.004). CONCLUSIONS: Simulator training with visual and auditory force feedback improves the knot-tying skills of novice surgeons.


Assuntos
Internato e Residência , Conhecimento Psicológico de Resultados , Ligadura/educação , Treinamento por Simulação , Técnicas de Sutura/educação , Adulto , Competência Clínica , Feminino , Humanos , Masculino
13.
IEEE Trans Biomed Eng ; 65(7): 1585-1594, 2018 07.
Article in English | MEDLINE | ID: mdl-28489529

ABSTRACT

OBJECTIVE: The human haptic system uses a set of reproducible and subconscious hand maneuvers to identify objects. Similar subconscious maneuvers are used during medical palpation for screening and diagnosis. The goal of this work was to develop a mathematical model that can be used to describe medical palpation techniques. METHODS: Palpation data were measured using a two-dimensional array of force sensors. A novel algorithm for estimating the hand position from force data was developed. The hand position data were then modeled using multivariate autoregressive models. Analysis of these models provided palpation direction and frequency as well as palpation type. The models were tested and validated using three different data sets: simulated data, a simplified experiment in which participants followed a known pattern, and breast simulator palpation data. RESULTS: Simulated data showed that the minimal error in estimating palpation direction and frequency is achieved when the sampling frequency is five to ten times the palpation frequency. The classification accuracy was [Formula: see text] for the simplified experiment and [Formula: see text] for the breast simulator data. CONCLUSION: Proper palpation is one of the vital components of many hands-on clinical examinations. In this study, an algorithm for characterizing medical palpation was developed. The algorithm measured palpation frequency and direction for the first time and provided classification of palpation type. SIGNIFICANCE: These newly developed models can be used for quantifying and assessing clinical technique and, consequently, lead to improved performance in palpation-based exams. Furthermore, they provide a general tool for the study of human haptics.
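While the paper estimates palpation frequency from multivariate autoregressive models, the same quantity can be approximated from a position trace with a much simpler zero-crossing count (a stand-in method for illustration, not the authors' algorithm):

```python
import math

def dominant_frequency(signal, fs):
    """Estimate the dominant oscillation frequency (Hz) by counting
    zero crossings of the mean-removed signal; fs is the sampling rate."""
    mean = sum(signal) / len(signal)
    x = [s - mean for s in signal]
    crossings = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    duration = (len(signal) - 1) / fs
    return crossings / (2 * duration)   # two crossings per cycle

fs = 50  # Hz
trace = [math.cos(2 * math.pi * 2.0 * n / fs) for n in range(200)]
print(dominant_frequency(trace, fs))    # ≈ 2 Hz
```

The paper's sampling-rate finding fits this picture: with fewer than a handful of samples per palpation cycle, crossings are missed and the frequency estimate degrades.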


Assuntos
Modelos Biológicos , Palpação , Processamento de Sinais Assistido por Computador , Tato/fisiologia , Adulto , Algoritmos , Mama/fisiologia , Educação Médica Continuada , Feminino , Mãos/fisiologia , Humanos , Masculino , Modelos Estatísticos , Pressão
14.
Am J Surg ; 215(6): 995-999, 2018 06.
Article in English | MEDLINE | ID: mdl-29229379

ABSTRACT

BACKGROUND: This study explores the long-term effectiveness of a newly developed clinical skills curriculum. METHODS: Students (N = 40) were exposed to a newly developed, simulation-based clinical breast exam (CBE) curriculum. The same students returned one year later to perform the CBE and were compared to a convenience sample of medical students (N = 15) attending a national conference. All students were given a clinical vignette and performed the CBE. CBE techniques were video recorded. Chi-squared tests were used to assess differences in CBE technique. RESULTS: Students exposed to a structured curriculum performed physical examination techniques more consistent with national guidelines than the random national student sample. Structured-curriculum students were more organized and more likely to use two hands, use a linear search pattern, and include the nipple-areolar complex during the CBE compared to the national sample (p < 0.01). CONCLUSIONS: Students exposed to a structured skills curriculum more consistently performed the CBE according to national guidelines. The variability in technique in the national sample of students calls for major improvements in the adoption and implementation of structured skills curricula.


Assuntos
Doenças Mamárias/diagnóstico , Competência Clínica , Currículo , Educação de Graduação em Medicina/métodos , Guias como Assunto , Exame Físico/métodos , Estudantes de Medicina , Avaliação Educacional , Feminino , Humanos , Masculino
15.
J Surg Res ; 220: 385-390, 2017 12.
Article in English | MEDLINE | ID: mdl-29180207

ABSTRACT

BACKGROUND: The aim of this study was to assess the performance measurement validity of our newly developed robotic surgery task trainer. We hypothesized that residents would exhibit wide variations in their intercohort performance, as well as a measurable difference compared to surgeons in fellowship training. MATERIALS AND METHODS: Our laboratory synthesized a model of a pelvic tumor that simulates unexpected bleeding. Surgical residents and fellows of varying specialties completed a demographic survey and were allowed 20 minutes to resect the tumor using the da Vinci robot and achieve hemostasis. At a standardized event in the simulation, venous bleeding began, and participants attempted hemostasis using suture ligation. A motion tracking system, using electromagnetic sensors, recorded participants' hand movements. A postparticipation Likert scale survey evaluated participants' assessment of the model's realism and usefulness. RESULTS: Three of the seven residents (postgraduate years 2-5) and the fellow successfully resected the tumor in the allotted time. Residents showed high variability in performance and blood loss (125-700 mL), both within their cohort and compared to the fellow (150 mL). All participants rated the model as having high realism and utility for trainees. CONCLUSIONS: The results support that our bleeding pelvic tumor simulator can discriminate resident performance in robotic surgery. The combination of motion, decision-making, and blood loss metrics offers a multilevel performance assessment, analyzing both technical and decision-making abilities.


Assuntos
Cirurgia Geral/educação , Treinamento com Simulação de Alta Fidelidade , Desempenho Acadêmico , Feminino , Hemorragia/cirurgia , Humanos , Masculino , Robótica
16.
Ann Surg ; 266(6): 1069-1074, 2017 12.
Article in English | MEDLINE | ID: mdl-27655241

ABSTRACT

OBJECTIVE: To develop new performance evaluation standards for the clinical breast examination (CBE). SUMMARY BACKGROUND DATA: There are several technical aspects of a proper CBE. Our recent work discovered a significant, linear relationship between palpation force and CBE accuracy. This article investigates the relationship between other technical aspects of the CBE and accuracy. METHODS: This performance assessment study involved data collection from physicians (n = 553) attending 3 different clinical meetings between 2013 and 2014: the American Society of Breast Surgeons, the American Academy of Family Physicians, and the American College of Obstetricians and Gynecologists. Four previously validated, sensor-enabled breast models were used for clinical skills assessment. Models A and B had solitary, superficial, 2 cm and 1 cm soft masses, respectively. Models C and D had solitary, deep, 2 cm hard and moderately firm masses, respectively. Finger movements (search technique) from 1137 CBE video recordings were independently classified by 2 observers. Final classifications were compared with CBE accuracy. RESULTS: Accuracy rates were model A = 99.6%, model B = 89.7%, model C = 75%, and model D = 60%. Final classification categories for search technique included rubbing movement, vertical movement, piano fingers, and other. Interrater reliability was k = 0.79. Rubbing movement was about 4 times more likely to yield an accurate assessment (odds ratio 3.81, P < 0.001) than vertical movement and piano fingers. Piano fingers had the highest failure rate (36.5%). Regression analysis of search pattern, search technique, palpation force, examination time, and 6 demographic variables revealed that search technique independently and significantly affected CBE accuracy (P < 0.001). CONCLUSIONS: Our results support the measurement and classification of CBE techniques and provide the foundation for a new paradigm in teaching and assessing hands-on clinical skills. The newly described piano fingers palpation technique was noted to have an unusually high failure rate. Medical educators should be aware of the potential differences in effectiveness of various CBE techniques.
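The reported odds ratio for search technique comes from a 2x2 accuracy table; as a sketch (the counts below are hypothetical, not the study's data):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
       a = technique-1 accurate, b = technique-1 inaccurate,
       c = technique-2 accurate, d = technique-2 inaccurate."""
    return (a / b) / (c / d)

# Hypothetical counts: rubbing movement vs. other techniques.
print(odds_ratio(80, 20, 50, 50))  # (80/20) / (50/50) = 4.0
```

An odds ratio near 4, as reported for rubbing movement, means the odds of an accurate exam are roughly quadrupled for that technique relative to the comparison group.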


Assuntos
Neoplasias da Mama/diagnóstico , Competência Clínica , Palpação/métodos , Feminino , Dedos/fisiologia , Ginecologia , Humanos , Masculino , Movimento , Obstetrícia , Palpação/classificação , Palpação/normas , Médicos de Família , Cirurgiões
17.
J Surg Res ; 205(1): 192-7, 2016 09.
Article in English | MEDLINE | ID: mdl-27621018

ABSTRACT

BACKGROUND: The study aim was to identify residents' coordination between dominant and nondominant hands while grasping for sutures in a simulated laparoscopic ventral hernia repair procedure. We hypothesized that residents would rely on their dominant and nondominant hands unequally while grasping for suture. METHODS: Surgical residents had 15 min to complete the mesh securing and mesh tacking steps of a laparoscopic ventral hernia repair procedure. Procedure videos were coded for manual coordination events during the active suture grasping phase. Manual coordination events were defined as: active motion of the dominant, nondominant, or both hands; and bimanual or unimanual manipulation. A chi-square test was used to discriminate between coordination choices. RESULTS: Thirty-six residents (postgraduate years 1-5) participated in the study. Residents changed manual coordination types during active suture grasping 500 times, ranging between 5 and 24 events per resident (M = 13.9 events, standard deviation [SD] = 4.4). Bimanual coordination was used most (40%) and required the most time on average (M = 20.6 s, SD = 27.2), while unimanual nondominant coordination was used least (2.2%; M = 7.9 s, SD = 6.9). Residents relied on their dominant and nondominant hands unequally (P < 0.001). During 24% of events, residents depended on their nondominant hand (n = 120), which was predominantly used to operate the suture passer device. CONCLUSIONS: Residents appeared to actively coordinate both dominant and nondominant hands almost half of the time to complete suture grasping. Bimanual tasks took longer than other tasks on average, suggesting these tasks were characteristically longer or that switching hands required a greater degree of coordination. Future work is necessary to understand how task completion time and overall performance are affected by residents' hand utilization and switching between dominant and nondominant hands in surgical tasks.
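The chi-square test used to compare coordination choices is based on the Pearson statistic; a minimal goodness-of-fit sketch with invented counts (the null hypothesis here is that all choices are equally likely):

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic: sum((O - E)^2 / E)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Toy counts of coordination choices, e.g. bimanual, unimanual dominant,
# both hands, unimanual nondominant; null = equal use of each type.
observed = [200, 150, 100, 50]
expected = [sum(observed) / len(observed)] * len(observed)
print(chi_square_stat(observed, expected))  # 100.0
```

The statistic is then compared against the chi-square distribution with (number of categories - 1) degrees of freedom to obtain a P value, as in the study's P < 0.001 result.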


Subjects
Functional Laterality, General Surgery/standards, Hand/physiology, Psychomotor Performance, Female, Humans, Internship and Residency, Male
18.
Stud Health Technol Inform ; 220: 175-8, 2016.
Article in English | MEDLINE | ID: mdl-27046574

ABSTRACT

Hemorrhage is the leading cause of potentially survivable combat death when patients are unable to reach a treatment facility in time. New tourniquet devices have been developed to control hemorrhage in the field; however, there is a lack of training systems to properly teach and assess tourniquet device application. We developed an objective feedback system applicable to various full-body manikins. We tested the system with expert users, collected feedback for improvement, and verified the system's usefulness in instructing and assessing correct tourniquet device use.


Subjects
Computer-Assisted Instruction/instrumentation, Emergency Medicine/education, Hemorrhage/prevention & control, High Fidelity Simulation Training/methods, Manikins, Tourniquets, Computer-Assisted Instruction/methods, Educational Measurement/methods, Emergency Medicine/instrumentation, Groin/injuries, Humans, Military Medicine/education, Military Medicine/instrumentation, Military Personnel, Computer-Assisted Therapy/instrumentation, Computer-Assisted Therapy/methods, User-Computer Interface
19.
Stud Health Technol Inform ; 220: 193-8, 2016.
Article in English | MEDLINE | ID: mdl-27046577

ABSTRACT

Sensor-enabled simulators may help in training and assessing clinical skill, but there are limitations on where current sensors can be placed without interfering with the clinical examination. In this study, novel fabric force sensors were developed and tested. These sensors are soft, flexible, and undetectable when placed in different locations in the simulator. Five sensors were added to our current sensor-enabled breast simulator. Eight participants performed a clinical breast examination on the simulator and documented their findings. There was a significant relationship between our current sensors and the new fabric sensors for both clinical breast examination time (r(6) = 0.99, p < 0.001) and average force (r(6) = 0.92, p < 0.005). In addition, the sensors were not noticed by the participants. These new sensors provide new methods to measure and assess clinical skill and performance.


Subjects
Breast Neoplasms/diagnosis, High Fidelity Simulation Training/methods, Palpation/instrumentation, Touch, Pressure Transducers, Clinical Competence, Clothing, Equipment Design, Equipment Failure Analysis, Female, Humans, Male, Middle Aged, Palpation/methods, Reproducibility of Results, Sensitivity and Specificity, Mechanical Stress
20.
Stud Health Technol Inform ; 220: 199-204, 2016.
Article in English | MEDLINE | ID: mdl-27046578

ABSTRACT

In this study, new metrics were developed for assessing the performance of surgical knots. By adding sensors to a knot-tying simulator, we were able to measure the forces used while performing this basic and essential skill. Data were collected for both superficial and deep tying of square knots using one-handed and two-handed techniques. Participants used significantly more force when tying a deep knot than a superficial knot (3.79 N and 1.6 N, respectively). Different patterns for upward and downward forces were identified, showing that although upward forces were applied most of the time (72%), the downward forces were just as large. These data can be crucial for improving the safety of knot tying. Combining these metrics with known metrics based on knot tensiometry and motion data may help provide feedback and objective assessment of knot-tying skills.
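The force-direction analysis above can be sketched as a summary over a sampled force trace. The trace and sign convention here are assumptions for illustration (positive = upward pull, in newtons); the study reports upward force about 72% of the time, with downward forces of comparable magnitude.

```python
# Sketch: summarize a vertical force trace from a sensed knot-tying
# simulator into direction-of-pull metrics. The synthetic trace below
# is illustrative, not study data.
def force_direction_summary(trace):
    """Return the fraction of samples pulling upward and the mean
    force magnitude (N) in each direction."""
    up = [f for f in trace if f > 0]
    down = [-f for f in trace if f < 0]
    return {
        "upward_fraction": len(up) / len(trace),
        "mean_upward_N": sum(up) / len(up),
        "mean_downward_N": sum(down) / len(down),
    }

synthetic_trace = [1.8, 2.1, -1.9, 1.6, 2.4, -2.2, 1.7, 1.5, -2.0, 2.2]
summary = force_direction_summary(synthetic_trace)
print(summary)
```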


Subjects
Clinical Competence, Ligation/instrumentation, Manometry/instrumentation, Micro-Electrical-Mechanical Systems/instrumentation, Suture Techniques/classification, Transducers, Female, Humans, Ligation/classification, Male, Pressure, Mechanical Stress, Sutures, Task Performance and Analysis, Tensile Strength