1.
Article in English | MEDLINE | ID: mdl-39008232

ABSTRACT

PURPOSE: Video-based intra-abdominal instrument tracking for laparoscopic surgery is a common research area. However, tracking is only possible for instruments that are actually visible in the laparoscopic image. Extra-abdominal cameras that detect trocars and classify their occupancy state can provide additional information about instrument location, namely whether an instrument is still inside the abdomen. This can enhance laparoscopic workflow understanding and enrich existing intra-abdominal solutions. METHODS: A data set of four laparoscopic surgeries recorded with two time-synchronized extra-abdominal 2D cameras was generated. The preprocessed and annotated data were used to train a deep learning-based architecture consisting of a trocar detector, a centroid tracker and a temporal model that provides the occupancy state of all trocars during the surgery. RESULTS: The trocar detection model achieves an F1 score of 95.06 ± 0.88%. The prediction of the occupancy state yields an F1 score of 89.29 ± 5.29%, providing a first step towards enhanced surgical workflow understanding. CONCLUSION: The current method shows promising results for extra-abdominal tracking of trocars and their occupancy state. Future work includes enlarging the data set and incorporating intra-abdominal imaging to enable accurate assignment of instruments to trocars.
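The centroid-tracker stage of such a pipeline can be sketched in a few lines of Python. The greedy nearest-centroid matching and the 50-pixel gating distance below are illustrative assumptions, not details taken from the paper.

```python
import math

def match_detections(tracks, detections, max_dist=50.0):
    """Greedily assign each detected trocar centroid to the nearest
    existing track; detections left unmatched start new tracks."""
    assignments = {}
    unused = list(range(len(detections)))
    for tid, (tx, ty) in tracks.items():
        best, best_d = None, max_dist
        for i in unused:
            d = math.hypot(detections[i][0] - tx, detections[i][1] - ty)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            assignments[tid] = best
            unused.remove(best)
    # unmatched detections become new tracks
    next_id = max(tracks, default=-1) + 1
    for i in unused:
        assignments[next_id] = i
        next_id += 1
    return assignments
```

For example, with tracks at (10, 10) and (100, 100) and detections at (12, 11) and (200, 200), the first detection continues track 0 while the distant second detection opens a new track.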

2.
Stud Health Technol Inform ; 315: 463-467, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39049302

ABSTRACT

Integrating smartphone technology with the patient call-bell system offers an opportunity to enhance patient safety by supporting nurses' ability to communicate and prioritize care delivery directly. However, there are challenges in balancing alarm support against alarm fatigue, including distracting nurses from patient care or desensitizing them to other alarms and calls [1]. Our hospitals have quantitative and anecdotal reports of very high volumes of wireless alerts on nurses' smartphones. Nurses have complained that the phones generate too much noise to absorb or prioritize in a timely manner. A preliminary alarm inventory revealed the Bed Exit wireless alert as a leading contributor to signal volume across many units and hospitals. The lack of standard policies and workflow improvement processes has increased nuisance alarms, making these health information technologies less useful and less safe. Using system data, workflow observations, and nursing interviews, Singh and Sittig's HIT Safety framework [2] was applied to identify and prioritize sociotechnical factors and interventions that affect the end-to-end Bed Exit alarm workflow. This study reviews the application of sociotechnical models and frameworks to reduce wireless calls without introducing risk or impacting patient care.


Subjects
Clinical Alarms, Humans, Patient Safety, Smartphone, Workflow, Hospital Communication Systems
3.
Article in English | MEDLINE | ID: mdl-38862745

ABSTRACT

PURPOSE: Even though workflow analysis in the operating room has come a long way, current systems are still limited to research. In the quest for a robust, universal setup, hardly any attention has been paid to the audio modality, despite its numerous advantages: low cost, independence from location and line of sight, and little required processing power. METHODS: We present an approach for audio-based event detection that relies solely on two microphones capturing the sound in the operating room. To this end, a new data set was created, with over 63 h of audio recorded and annotated at the University Hospital rechts der Isar. Sound files were labeled, preprocessed, augmented, and subsequently converted to log-mel spectrograms that served as visual input for event classification using pretrained convolutional neural networks. RESULTS: Comparing multiple architectures, we were able to show that even lightweight models, such as MobileNet, can provide promising results. Data augmentation further improved the classification of 11 defined classes, including, inter alia, different types of coagulation, operating table movements, and an idle class. With the newly created audio data set, an overall accuracy of 90%, a precision of 91% and an F1-score of 91% were achieved, demonstrating the feasibility of audio-based event recognition in the operating room. CONCLUSION: With this first proof of concept, we demonstrated that audio events can serve as a meaningful source of information beyond spoken language and can easily be integrated into future workflow recognition pipelines using computationally inexpensive architectures.
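As a rough illustration of the preprocessing described here, the sketch below converts a mono signal into a log-mel spectrogram using only NumPy. The frame size, hop size and number of mel bands are arbitrary assumptions; production code would typically use a library such as librosa instead.

```python
import numpy as np

def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=20):
    """Frame the signal, take magnitude FFTs, pool them through
    triangular mel filters and return the log energies."""
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    # power spectrogram from overlapping Hann-windowed frames
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # triangular mel filterbank, equally spaced on the mel scale
    mel_pts = mel_to_hz(np.linspace(0.0, hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return np.log(power @ fb.T + 1e-10)
```

The resulting (frames × mel-bands) matrix is exactly the kind of image-like input that a pretrained CNN such as MobileNet can classify.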

4.
BMC Health Serv Res ; 23(1): 1313, 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38017443

ABSTRACT

BACKGROUND: Due to growing economic pressure, there is increasing interest in the optimization of operational processes within surgical operating rooms (ORs). Surgical departments frequently deal with limited resources, complex processes with unexpected events, and constantly changing conditions. To use available resources efficiently, existing workflows and processes have to be analyzed and optimized continuously. Structural and procedural changes made without prior data-driven analysis may impair the performance of the OR team and the overall efficiency of the department. The aim of this study is to develop an adaptable software toolset for surgical workflow analysis and perioperative process optimization in arthroscopic surgery. METHODS: The perioperative processes of arthroscopic interventions were recorded and subsequently analyzed. A total of 53 arthroscopic operations were recorded at a maximum-care university hospital (UH) and 66 arthroscopic operations at a specialized outpatient clinic (OC). The recordings include regular perioperative processes (e.g., patient positioning, skin incision, application of the wound dressing) and disruptive influences on these processes (e.g., telephone calls, missing or defective instruments). For this purpose, a software tool was developed ('s.w.an Suite Arthroscopic toolset'). Based on the data obtained, the processes of the maximum-care provider and the specialized outpatient clinic were analyzed in terms of performance measures (e.g., Closure-to-Incision time), efficiency (e.g., activity duration, OR resource utilization) and intra-process disturbances, and then compared with one another. RESULTS: Despite many similar processes, the results revealed considerable differences in performance indices. The OC required significantly less time than the UH for the surgical preoperative phase (UH: 30:47 min, OC: 26:01 min), the postoperative phase (UH: 15:04 min, OC: 9:56 min) and changeover time (UH: 32:33 min, OC: 6:02 min). Together, these phases make up the Closure-to-Incision time, which was longer at the UH (UH: 80:01 min, OC: 41:12 min). CONCLUSION: Perioperative process organization, team collaboration, and the avoidance of disruptive factors had a considerable influence on the progress of the surgeries. Furthermore, differences in staffing and spatial capacity were identified. Based on the acquired process data (such as the duration of different surgical steps or the number of interfering events) and the comparison of different arthroscopic departments, approaches for perioperative process optimization that decrease the duration of work steps and reduce disruptive influences were identified.


Subjects
Arthroscopy, Operating Rooms, Humans, Workflow, University Hospitals
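The kind of process-time comparison reported in this study boils down to differences between recorded event timestamps. A minimal sketch, with made-up event names rather than the s.w.an Suite's actual data model, might look like:

```python
from datetime import datetime, timedelta

def phase_durations(events):
    """Duration of each perioperative phase from an ordered list of
    (timestamp, event_name) records; each phase runs from its own
    event to the next one."""
    durations = {}
    for (t0, name), (t1, _) in zip(events, events[1:]):
        durations[name] = t1 - t0
    return durations
```

Aggregating such per-case durations across the UH and OC cohorts yields exactly the phase-time comparisons (e.g., changeover time, Closure-to-Incision time) quoted above.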
5.
J Appl Clin Med Phys ; 24(7): e13961, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36920871

ABSTRACT

PURPOSE: Online adaptive radiation therapy (oART) follows a different treatment paradigm than conventional radiotherapy, and because of this, the resources, implementation, and workflows needed are unique. The purpose of this report is to outline our institution's experience establishing, organizing, and implementing an oART program using the Ethos therapy system. METHODS: We describe the resources used, the operational models adopted, the program creation timeline, and our institutional experience with the implementation and operation of an oART program. Additionally, we provide a detailed summary of our first year of clinical experience, in which we delivered over 1000 daily adaptive fractions. For all treatments, the following stages of online adaptation were analyzed: primary patient set-up, initial kV-CBCT acquisition, contour review and editing of influencer structures, target review and edits, plan evaluation and selection, Mobius3D second check and adaptive QA, second kV-CBCT for positional verification, treatment delivery, and the patient leaving the room. RESULTS: We retrospectively analyzed data from 97 patients treated between August 2021 and August 2022. A total of 1677 individual fractions were treated and analyzed; 632 (38%) were non-adaptive and 1045 (62%) were adaptive. Seventy-four of the 97 patients (76%) were treated with standard fractionation and 23 (24%) received stereotactic treatments. For the adaptive treatments, the generated adaptive plan was selected in 92% of sessions. On average (± SD), adaptive sessions took 34.52 ± 11.42 min from start to finish. The adaptive process itself (from the start of contour generation to the verification CBCT), performed by the physicist (and physician on select days), took 19.84 ± 8.21 min. CONCLUSION: We present our institution's experience commissioning an oART program using the Ethos therapy system. It took us 12 months from project inception to the treatment of our first patient and 12 months to treat 1000 adaptive fractions. Retrospective analysis of delivered fractions showed that the average overall treatment time was approximately 35 min and the average time for the adaptive component was approximately 20 min.


Subjects
Computer-Assisted Radiotherapy Planning, Spiral Cone-Beam Computed Tomography, Humans, Retrospective Studies, Radiation Dose Fractionation, Radiotherapy Dosage
6.
Med Image Anal ; 86: 102770, 2023 05.
Article in English | MEDLINE | ID: mdl-36889206

ABSTRACT

PURPOSE: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve the training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open single-center video dataset. In this work we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS: To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers, with a total operation time of 22 h, was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge's sub-challenge on surgical workflow and skill analysis, in which 12 research teams trained and submitted machine learning algorithms for recognition of phase, action, instrument and/or skill. RESULTS: F1-scores were achieved for phase recognition between 23.9% and 67.7% (n = 9 teams) and for instrument presence detection between 38.5% and 63.8% (n = 8 teams), but for action recognition only between 21.8% and 23.3% (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION: Surgical workflow and skill analysis are promising technologies to support the surgical team, but as our comparison of machine learning algorithms shows, there is still room for improvement. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets to enable the development of artificial intelligence and cognitive robotics in surgery.


Subjects
Artificial Intelligence, Benchmarking, Humans, Workflow, Algorithms, Machine Learning
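Framewise phase-recognition F1, the headline metric of this benchmark, is straightforward to compute. The sketch below uses an unweighted macro average over classes, which is an assumption and not necessarily the challenge's exact evaluation protocol.

```python
def macro_f1(true_labels, pred_labels, classes):
    """Per-class F1 from framewise labels, macro-averaged."""
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(true_labels, pred_labels))
        fp = sum(t != c and p == c for t, p in zip(true_labels, pred_labels))
        fn = sum(t == c and p != c for t, p in zip(true_labels, pred_labels))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)
```

For example, with true phases [0, 0, 1, 1] and predictions [0, 1, 1, 1], the per-class F1 values are 2/3 and 4/5, giving a macro F1 of 11/15 ≈ 0.733.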
7.
Empir Softw Eng ; 28(1): 7, 2023.
Article in English | MEDLINE | ID: mdl-36420321

ABSTRACT

Despite the ubiquity of data science, we are far from rigorously understanding how coding in data science is performed. Even though the scientific literature has hinted at the iterative and explorative nature of data science coding, we need further empirical evidence to understand this practice and its workflows in detail. Such understanding is critical to recognise the needs of data scientists and, for instance, inform tooling support. To obtain a deeper understanding of the iterative and explorative nature of data science coding, we analysed 470 Jupyter notebooks publicly available in GitHub repositories. We focused on the extent to which data scientists transition between different types of data science activities, or steps (such as data preprocessing and modelling), as well as the frequency and co-occurrence of such transitions. For our analysis, we developed a dataset with the help of five data science experts, who manually annotated the data science steps for each code cell within the aforementioned 470 notebooks. Using a first-order Markov chain model, we extracted the transitions and analysed the transition probabilities between the different steps. In addition to providing deeper insights into the implementation practices of data science coding, our results provide evidence that the steps in a data science workflow are indeed iterative and reveal specific patterns. We also evaluated the use of the annotated dataset to train machine-learning classifiers to predict the data science step(s) of a given code cell. We investigated the representativeness of the classification by comparing the workflow analysis applied to (a) the predicted data set and (b) the data set labelled by experts, finding an F1-score of about 71% for the 10-class data science step prediction problem.
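The first-order Markov chain analysis described here reduces to counting and normalising step-to-step transitions. A minimal sketch, with invented step labels, is:

```python
from collections import defaultdict

def transition_probabilities(sequences):
    """Estimate first-order Markov transition probabilities from
    annotated step sequences (one list of step labels per notebook)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()}
```

Given the sequences ["load", "clean", "model"] and ["load", "model"], the estimate is P(clean | load) = P(model | load) = 0.5 and P(model | clean) = 1.0.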

8.
Intell Based Med ; 8: 100107, 2023.
Article in English | MEDLINE | ID: mdl-38523618

ABSTRACT

Operation notes are a crucial component of patient care. However, writing them manually is prone to human error, particularly in high-pressure clinical environments. Automatic generation of operation notes from video recordings can alleviate some of the administrative burden, improve accuracy, and provide additional information. To achieve this for endoscopic pituitary surgery, 27 steps were identified via expert consensus. Then, for the 97 videos recorded for this study, a timestamp for each step was annotated by an expert surgeon. To automatically determine whether a step is present in a video, a three-stage architecture was created. First, for each step, a convolutional neural network performs binary image classification on each frame of a video. Second, for each step, the binary frame classifications are passed to a discriminator for binary video classification. Third, for each video, the binary video classifications are passed to an accumulator for multi-label step classification. The architecture was trained on 77 videos and tested on 20 videos, achieving a weighted F1 score of 0.80. The classifications were input into a clinically based predefined template and further enriched with additional video analytics. This work demonstrates that the automatic generation of operative notes from surgical videos is feasible and can assist surgeons with documentation.
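The second and third stages of such an architecture can be approximated by simple thresholding. In the paper these are learned components (a discriminator and an accumulator), so the fixed thresholds and step names below are purely illustrative.

```python
def steps_present(frame_probs, frame_thresh=0.5, video_thresh=0.2):
    """Per step, binarise framewise probabilities, then call the step
    'present' in the video if enough frames fire."""
    present = []
    for step, probs in frame_probs.items():
        fired = sum(p >= frame_thresh for p in probs)
        if fired / len(probs) >= video_thresh:
            present.append(step)
    return present
```

The resulting multi-label step list is what gets slotted into the predefined operation-note template.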

9.
J Imaging ; 8(10)2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36286350

ABSTRACT

Robotic assistance is applied in orthopedic interventions for pedicle screw placement (PSP). While current robots do not act autonomously, they are expected to gain greater autonomy under surgeon supervision in the mid-term. Augmented reality (AR) is a promising means of supporting this supervision and enabling human-robot interaction (HRI). To outline a futuristic scenario for robotic PSP, the current workflow was analyzed through a literature review and expert discussion. Based on this, a hypothetical workflow for the intervention was developed, including an analysis of the necessary information exchange between human and robot. A video see-through AR prototype was designed and implemented. A robotic arm with an orthopedic drill mock-up simulated the robotic assistance. The AR prototype included a user interface to enable HRI. The interface provides data to facilitate understanding of the robot's "intentions", e.g., patient-specific CT images, the current workflow phase, or the next planned robot motion. Two-dimensional and three-dimensional visualizations illustrated patient-specific medical data and the drilling process. The findings of this work contribute a valuable approach to addressing future clinical needs and highlight the importance of AR support for HRI.

10.
Med Image Anal ; 82: 102611, 2022 11.
Article in English | MEDLINE | ID: mdl-36162336

ABSTRACT

Surgical workflow anticipation is an essential task for computer-assisted intervention (CAI) systems. It aims at predicting future surgical phases and instrument occurrences, providing support for intra-operative decision-support systems. Recent studies have advanced the anticipation task by transforming it into a remaining-time prediction problem, but without factoring the surgical instruments' behaviors and their interactions with surrounding anatomy into the network design. In this paper, we propose an Instrument Interaction Aware Anticipation Network (IIA-Net) to overcome this deficiency while retaining the merits of two-stage models through the use of a spatial feature extractor and a temporal model. Spatially, the feature extractor utilizes tool-tip movement to extract instrument-instrument interactions, which helps the model concentrate on the surgeon's actions, and introduces a segmentation map to capture rich features of the instruments' surroundings. Temporally, the temporal model applies a causal dilated multi-stage temporal convolutional network to capture long-term dependencies in long, untrimmed surgical videos with a large receptive field. Our IIA-Net supports online inference with reliable predictions even under severe noise and artifacts in the recorded videos and presence signals. Extensive experiments on the Cholec80 dataset demonstrate that the performance of our proposed method exceeds the state-of-the-art method by a large margin (1.03 vs. 1.12 for MAEw, 1.40 vs. 1.75 for MAEin and 2.14 vs. 2.68 for MAEe). For reproduction purposes, all original code is publicly available at https://github.com/Flaick/Surgical-Workflow-Anticipation.


Subjects
Artifacts, Neural Networks (Computer), Humans, Workflow, Surgical Instruments, Computer-Assisted Image Processing/methods
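The remaining-time formulation mentioned above labels every frame with the horizon-clipped time until the event of interest occurs. A minimal sketch of target construction, with the fps and horizon values as assumptions rather than the paper's settings:

```python
def remaining_time_targets(occurrence_frame, n_frames, fps=1.0, horizon=5.0):
    """Label each frame with the time (in seconds) until the phase or
    instrument occurs, clipped to the anticipation horizon."""
    targets = []
    for f in range(n_frames):
        t = max(occurrence_frame - f, 0) / fps
        targets.append(min(t, horizon))
    return targets
```

A regression model trained against such targets directly predicts "how soon" an instrument or phase will appear, which is what the MAE metrics quoted above evaluate.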
11.
Front Surg ; 9: 756522, 2022.
Article in English | MEDLINE | ID: mdl-35586509

ABSTRACT

Objective: Surgical efficiency and variability are critical contributors to optimal outcomes, patient experience, care team experience, and the total cost to treat per disease episode. Opportunities remain to develop scalable, objective methods to quantify the surgical behaviors that maximize efficiency and reduce variability. Such objective measures can then be used to provide surgeons with timely, user-specific feedback to monitor performance and facilitate training and learning. In this study, we used objective task-level analysis to identify dominant contributors to surgical efficiency and variability across the procedural steps of robotic-assisted sleeve gastrectomy (RSG) over a five-year period for a single surgeon. These results enable actionable insights that can both complement those from population-level analyses and be tailored to an individual surgeon's practice and experience. Methods: Intraoperative video recordings of 77 RSG procedures performed by a single surgeon from 2015 to 2019 were reviewed and segmented into surgical tasks. Surgeon-initiated events when controlling the robotic-assisted surgical system were used to compute objective metrics. A series of multi-stage regression analyses was used to determine: whether any specific tasks or patient body mass index (BMI) statistically impacted procedure duration; which objective metrics impacted critical task efficiency; and which task(s) statistically contributed to procedure variability. Results: Stomach dissection was found to be the most significant contributor to procedure duration (β = 0.344, p < 0.001; R = 0.81, p < 0.001), followed by surgical inactivity and stomach stapling. Patient BMI was not found to be statistically significantly correlated with procedure duration (R = -0.01, p = 0.90). Energy activation rate, a robotic system event-based metric, was identified as a dominant feature in predicting stomach dissection duration and differentiating earlier and later case groups. A reduction in procedure variability was observed between the earlier (2015-2016) and later (2017-2019) groups (IQR = 14.20 min vs. 6.79 min). Stomach dissection contributed most to procedure variability (β = 0.74, p < 0.001). Conclusions: A surgical task-based objective analysis was used to identify major contributors to surgical efficiency and variability. We believe this data-driven method will enable clinical teams to quantify surgeon-specific performance and identify actionable opportunities focused on the dominant surgical tasks impacting overall procedure efficiency and consistency.

12.
Int J Comput Assist Radiol Surg ; 17(5): 849-856, 2022 May.
Article in English | MEDLINE | ID: mdl-35353299

ABSTRACT

PURPOSE: We tackle the problem of online surgical phase recognition in laparoscopic procedures, which is key to developing context-aware support systems. We propose a novel approach that takes the temporal context of surgical videos into account through precise modeling of temporal neighborhoods. METHODS: We propose a two-stage model to perform phase recognition. A CNN model is used as a feature extractor to project RGB frames into a high-dimensional feature space. We introduce a novel paradigm for surgical phase recognition that utilizes graph neural networks to incorporate temporal information. Unlike recurrent neural networks and temporal convolution networks, our graph-based approach offers a more generic and flexible way of modeling temporal relationships. Each frame is a node in the graph, and the edges define temporal connections among the nodes. The flexible configuration of the temporal neighborhood comes at the price of losing temporal order. To mitigate this, our approach takes temporal order into account by encoding frame positions, which is important for reliably predicting surgical phases. RESULTS: Experiments are carried out on the public Cholec80 dataset, which contains 80 annotated videos. The experimental results highlight the superior performance of the proposed approach compared to state-of-the-art models on this dataset. CONCLUSION: A novel formulation of video-based surgical phase recognition is presented. The results indicate that temporal information can be incorporated using graph-based models and that positional encoding is important for efficiently utilizing temporal information. Graph networks open possibilities for using evidence theory for uncertainty analysis in surgical phase recognition.


Subjects
Laparoscopy, Neural Networks (Computer), Humans, Laparoscopy/methods, Workflow
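The two ingredients of such an approach, temporal-neighborhood edges between frame-nodes and a positional encoding of frame order, can be sketched as follows. The neighborhood size k and the standard sinusoidal encoding are assumptions; the paper's exact graph construction may differ.

```python
import numpy as np

def temporal_graph(n_frames, k=2):
    """Directed edges connecting each frame-node to every neighbour
    within k steps, in both temporal directions."""
    return [(i, j) for i in range(n_frames)
            for j in range(max(0, i - k), min(n_frames, i + k + 1)) if i != j]

def positional_encoding(n_frames, dim):
    """Sinusoidal frame-position features, added so the graph model can
    recover the temporal order lost in the neighbourhood structure."""
    pos = np.arange(n_frames)[:, None]
    i = np.arange(dim)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))
```

Concatenating the positional encoding to each node's CNN features gives the graph network both "who is near whom" (edges) and "who comes first" (positions).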
13.
JMIR Hum Factors ; 9(1): e28783, 2022 Jan 04.
Article in English | MEDLINE | ID: mdl-34643530

ABSTRACT

BACKGROUND: The hospitalist workday is cognitively demanding and dominated by activities away from patients' bedsides. Although mobile technologies are offered as solutions, clinicians report lower expectations of mobile technology after actual use. OBJECTIVE: The purpose of this study is to better understand opportunities for integrating mobile technology and apps into hospitalists' workflows. We aim to identify difficult tasks and contextual factors that introduce inefficiencies and characterize hospitalists' perspectives on mobile technology and apps. METHODS: We conducted a workflow analysis based on semistructured interviews. At a Midwestern US medical center, we recruited physicians and nurse practitioners from hospitalist and inpatient teaching teams and internal medicine residents. Interviews focused on tasks perceived as frequent, redundant, and difficult. Additionally, participants were asked to describe opportunities for mobile technology interventions. We analyzed contributing factors, impacted workflows, and mobile app ideas. RESULTS: Over 3 months, we interviewed 12 hospitalists. Participants collectively identified chart reviews, orders, and documentation as the most frequent, redundant, and difficult tasks. Based on those tasks, the intake, discharge, and rounding workflows were characterized as difficult and inefficient. The difficulty was associated with a lack of access to electronic health records at the bedside. Contributing factors for inefficiencies were poor usability and inconsistent availability of health information technology combined with organizational policies. Participants thought mobile apps designed to improve team communications would be most beneficial. Based on our analysis, mobile apps focused on data entry and presentation supporting specific tasks should also be prioritized. 
CONCLUSIONS: Based on our results, there are prioritized opportunities for mobile technology to decrease difficulty and increase the efficiency of hospitalists' workflows. Mobile technology and task-specific mobile apps with enhanced usability could decrease overreliance on hospitalists' memory and fragmentation of clinical tasks across locations. This study informs the design and implementation processes of future health information technologies to improve continuity in hospital-based medicine.

14.
Stud Health Technol Inform ; 284: 531-533, 2021 Dec 15.
Article in English | MEDLINE | ID: mdl-34920587

ABSTRACT

The objective of this study was to clarify gaze information patterns of nurses gathering patient information using electronic health records. We recorded the electronic health record screen on which nurses' gazes were presented using an eye tracker and analyzed the recorded images. The analysis revealed two types of gaze information patterns of nurses engaged in patient information gathering. However, no regularity was observed in the gaze information patterns of the nurses viewing the electronic health record sections after selecting a patient.


Subjects
Electronic Health Records, Humans
15.
Diagnostics (Basel) ; 11(11)2021 Nov 16.
Article in English | MEDLINE | ID: mdl-34829475

ABSTRACT

During brain tumor resection surgery, it is essential to determine the tumor borders as the extent of resection is important for post-operative patient survival. The current process of removing a tissue sample for frozen section analysis has several shortcomings that might be overcome by confocal laser endomicroscopy (CLE). CLE is a promising new technology enabling the digital in vivo visualization of tissue structures in near real-time. Research on the socio-organizational impact of introducing this new methodology to routine care in neurosurgery and neuropathology is scarce. We analyzed a potential clinical workflow employing CLE by comparing it to the current process. Additionally, a small expert survey was conducted to collect data on the opinion of clinical staff working with CLE. While CLE can contribute to a workload reduction for neuropathologists and enable a shorter process and a more efficient use of resources, the effort for neurosurgeons and surgery assistants might increase. Experts agree that CLE offers huge potential for better diagnosis and therapy but also see challenges, especially due to the current state of experimental use, including a risk for misinterpretations and the need for special training. Future studies will show whether CLE can become part of routine care.

16.
Int J Comput Assist Radiol Surg ; 16(7): 1111-1119, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34013464

ABSTRACT

PURPOSE: Automatic segmentation and classification of surgical activity is crucial for providing advanced support in computer-assisted interventions and autonomous functionality in robot-assisted surgery. Prior work has focused on recognizing either coarse activities, such as phases, or fine-grained activities, such as gestures. This work aims at jointly recognizing two complementary levels of granularity directly from videos, namely phases and steps. METHODS: We introduce two correlated surgical activities, phases and steps, for the laparoscopic gastric bypass procedure. We propose a multi-task multi-stage temporal convolutional network (MTMS-TCN) along with a multi-task convolutional neural network (CNN) training setup to jointly predict the phases and steps and benefit from their complementarity to better evaluate the execution of the procedure. We evaluate the proposed method on a large video dataset consisting of 40 surgical procedures (Bypass40). RESULTS: We present experimental results from several baseline models for both phase and step recognition on the Bypass40 dataset. The proposed MTMS-TCN method outperforms single-task methods in both phase and step recognition by 1-2% in accuracy, precision and recall. Furthermore, for step recognition, MTMS-TCN outperforms LSTM-based models by 3-6% on all metrics. CONCLUSION: In this work, we present a multi-task multi-stage temporal convolutional network for surgical activity recognition, which shows improved results compared to single-task models on a gastric bypass dataset with multi-level annotations. The proposed method shows that joint modeling of phases and steps is beneficial for improving the overall recognition of each type of activity.


Subjects
Gastric Bypass/methods, Laparoscopy/methods, Neural Networks (Computer), Robotic Surgical Procedures/methods, Humans
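The building block of such temporal convolutional networks is the causal dilated convolution, sketched below for a single channel with a fixed kernel. Real TCN stages stack many such layers with learned weights and increasing dilation; this toy version only shows the causality and dilation mechanics.

```python
def causal_dilated_conv(x, weights, dilation):
    """1-D causal dilated convolution: the output at time t only sees
    inputs at t, t-d, t-2d, ..., so online predictions never peek into
    the future of the video."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for j, w in enumerate(weights):  # weights[0] taps the current frame
            idx = t - j * dilation
            if idx >= 0:
                acc += w * x[idx]
        out.append(acc)
    return out
```

With input [1, 2, 3, 4], kernel [1, 1] and dilation 2, the output at t = 2 is x[2] + x[0] = 4: each step mixes the present with a strictly past frame, never a future one.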
17.
Clin Neurol Neurosurg ; 205: 106628, 2021 Apr 02.
Article in English | MEDLINE | ID: mdl-33895619

ABSTRACT

INTRODUCTION: Intraoperative digital subtraction angiography (ioDSA) allows early treatment evaluation after neurovascular procedures. However, the value and efficiency of this procedure remain controversial. We evaluated the additional value of a hybrid operating room equipped with an Artis Zeego robotic C-arm with regard to cost, efficiency and workflow. Furthermore, we performed a risk-benefit analysis comparing ioDSA with indocyanine green (ICG) angiography. METHODS: Over 3 consecutive years, we included all neurovascular patients treated in the hybrid operating theater in a risk-benefit analysis. After micro-Doppler and ICG angiography had been used to achieve the best operative result, every patient received an additional ioDSA to look for remnants or unfavorable clip placement that might lead to a change of operating strategy or result. Furthermore, a workflow analysis reviewing operating steps, staff positioning, costs, technical errors and complications was conducted on randomly selected cases. RESULTS: 54 patients were enrolled in the risk-benefit analysis and 22 in the workflow analysis. The average duration of a cerebrovascular operation was 4 h 58 min; of this, 2 min 35 s were accounted for by ICG angiography and 46 min 4 s by ioDSA. Adverse events occurred during one ioDSA. In the risk-benefit analysis, ioDSA detected residual perfusion in 2 of 43 aneurysm surgery cases (4.7%) that could not be visualized by ICG angiography. In arteriovenous malformation (AVM) surgery, one of the 11 examined patients (7.7%) showed a remnant on ioDSA, resulting in additional resection. The average cost of an ioDSA at Ulm University can be estimated at €1928.00. CONCLUSION: According to our results, ioDSA-associated complications are rare. Relevant ioDSA findings can potentially avoid additional interventions; however, given the high costs and lower availability, the main advantage may lie in the treatment of selected patients with complex neurovascular pathologies, since ICG angiography is equally safe but associated with lower costs and better availability.
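As a quick sanity check on the reported durations, the imaging steps' share of total operating time can be computed directly. This is an illustrative arithmetic sketch using only the averages stated in the abstract, not code or data from the study itself:

```python
def to_seconds(h=0, m=0, s=0):
    """Convert an h/m/s duration to seconds."""
    return h * 3600 + m * 60 + s

total = to_seconds(h=4, m=58)   # average cerebrovascular operation
icg = to_seconds(m=2, s=35)     # ICG angiography
iodsa = to_seconds(m=46, s=4)   # intraoperative DSA

icg_share = 100 * icg / total
iodsa_share = 100 * iodsa / total
print(f"ICG: {icg_share:.1f}% of OR time, ioDSA: {iodsa_share:.1f}%")
# → ICG: 0.9% of OR time, ioDSA: 15.5%
```

The roughly 18-fold difference in time cost between the two modalities is consistent with the authors' conclusion that ICG angiography is the more available option for routine use.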

18.
Sensors (Basel) ; 21(4)2021 Feb 13.
Artigo em Inglês | MEDLINE | ID: mdl-33668544

ABSTRACT

Surgeons' procedural skills and intraoperative decision making are key elements of clinical practice. However, the objective assessment of these skills remains a challenge to this day. Surgical workflow analysis (SWA) is emerging as a powerful tool to address this issue in surgical educational environments in real time. Typically, SWA uses video signals to automatically identify the surgical phase. We hypothesize that analyzing surgeons' speech with natural language processing (NLP) can provide deeper insight into surgical decision-making processes. As a preliminary step, this study proposes to use audio signals recorded in the educational operating room (OR) to classify the phases of a laparoscopic cholecystectomy (LC). To do this, we first created a database of transcriptions of audio recorded in surgical educational environments, each labeled with its corresponding phase. Second, we compared the performance of four feature extraction techniques and four machine learning models to find the most appropriate model for phase recognition. The best-performing model was a support vector machine (SVM) coupled to a hidden Markov model (HMM), trained with features obtained with Word2Vec (82.95% average accuracy). Analysis of this model's confusion matrix shows that some phrases are misclassified due to similarity in the words used. The study of the model's temporal component suggests that further attention should be paid to accurately detecting surgeons' normal conversation. This study shows that speech-based classification of LC phases can be achieved effectively. This lays the foundation for the use of audio signals for SWA and for creating an LC framework for surgical training, especially for the training and assessment of procedural and decision-making skills (e.g., to assess residents' procedural knowledge and their ability to react to adverse situations).
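The temporal half of the SVM+HMM pipeline described above can be sketched as Viterbi decoding over the classifier's per-utterance phase probabilities. This is a minimal illustration, not the authors' code: the phase names, transition matrix, and SVM posteriors below are all invented for the example.

```python
import numpy as np

# Hypothetical phase labels for a laparoscopic cholecystectomy.
PHASES = ["preparation", "dissection", "clipping", "extraction"]

def viterbi_smooth(emission_probs, transition, initial):
    """Most likely phase sequence given per-utterance classifier
    posteriors (T x K) and an HMM transition matrix (K x K)."""
    T, K = emission_probs.shape
    log_delta = np.log(initial) + np.log(emission_probs[0])
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(transition)  # scores[i, j]
        backptr[t] = scores.argmax(axis=0)                # best previous state per j
        log_delta = scores.max(axis=0) + np.log(emission_probs[t])
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t][path[-1]]))
    return [PHASES[k] for k in reversed(path)]

# Sticky transitions: a surgical phase tends to persist between utterances.
A = np.where(np.eye(4, dtype=bool), 0.85, 0.05)
pi = np.array([0.7, 0.1, 0.1, 0.1])

# Made-up SVM posteriors for 5 utterances; the noisy spike at t=2
# (briefly favouring "clipping") is smoothed away by the HMM.
probs = np.array([
    [0.9, 0.05, 0.03, 0.02],
    [0.7, 0.2,  0.05, 0.05],
    [0.2, 0.3,  0.45, 0.05],
    [0.1, 0.8,  0.05, 0.05],
    [0.1, 0.7,  0.15, 0.05],
])
print(viterbi_smooth(probs, A, pi))
# → ['preparation', 'preparation', 'dissection', 'dissection', 'dissection']
```

Without the HMM, a frame-wise argmax would label utterance 2 as "clipping"; the sticky transition prior is what enforces the temporal coherence the abstract attributes to the coupled model.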


Subjects
Cholecystectomy, Laparoscopic , Clinical Competence , General Surgery , Pattern Recognition, Automated , General Surgery/standards , Humans , Operating Rooms , Speech
19.
J Med Syst ; 44(12): 206, 2020 Nov 10.
Article in English | MEDLINE | ID: mdl-33174093

ABSTRACT

Adolescents are disproportionately affected by sexually transmitted infections (STIs). Failure to diagnose and treat STIs in a timely manner may result in serious sequelae. Adolescents frequently access the emergency department (ED) for care. Although ED-based STI screening is acceptable to both patients and clinicians, understanding how best to implement STI screening processes into the ED clinical workflow without compromising patient safety or efficiency is critical. The objective of this study was to conduct direct observations documenting current workflow processes and tasks during patient visits at six Pediatric Emergency Care Applied Research Network (PECARN) EDs, to inform site-specific integration of electronically enhanced STI screening processes. Workflow observations were captured via TaskTracker, a time-and-motion electronic data collection application that lets researchers categorize general work processes and record multitasking by timestamping when tasks begin and end. Workflow was captured during 118 patient visits across six PECARN EDs. The average time to initial assessment by the most senior provider was 76 min (range 59-106 min, SD = 43 min). Care teams were consistent across sites and included attending physicians, advanced practice providers, nurses, registration clerks, technicians, and students. A timeline belt comparison was performed. Across most sites, the most promising point to implement an STI screening tool was in the patient examination room, following the initial patient assessment by the nurse.
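The time-and-motion records described above boil down to timestamped task intervals from which durations and multitasking overlap can be derived. TaskTracker's actual data model is not published; the record layout, category names, and timings below are a hypothetical sketch of that idea:

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """One timestamped task from a time-and-motion observation.
    Times are seconds since the start of the session."""
    category: str
    start: float
    end: float

    @property
    def duration(self) -> float:
        return self.end - self.start

def overlap_seconds(a: TaskRecord, b: TaskRecord) -> float:
    """Seconds during which two tasks ran concurrently (multitasking)."""
    return max(0.0, min(a.end, b.end) - max(a.start, b.start))

# Hypothetical log for a single ED visit.
log = [
    TaskRecord("registration", 0, 240),
    TaskRecord("triage", 180, 600),              # overlaps registration by 60 s
    TaskRecord("provider_assessment", 4560, 5400),
]

totals = {}
for task in log:
    totals[task.category] = totals.get(task.category, 0.0) + task.duration

print(totals)
print(overlap_seconds(log[0], log[1]))  # → 60.0
```

Aggregating such intervals per category across visits is what yields summary figures like the 76-minute average time to initial provider assessment reported in the study.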


Subjects
Emergency Service, Hospital , Sexually Transmitted Diseases , Adolescent , Child , Humans , Mass Screening , Sexually Transmitted Diseases/diagnosis , Workflow
20.
Visc Med ; 36(6): 450-455, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33447600

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has recently achieved considerable success in various domains, including medical applications. Although current advances are expected to impact surgery, AI has so far not been able to leverage its full potential there due to several field-specific challenges. SUMMARY: This review summarizes the data-driven methods and technologies needed as prerequisites for different AI-based assistance functions in the operating room. Potential effects of AI usage in surgery are highlighted, concluding with the ongoing challenges to enabling AI for surgery. KEY MESSAGES: AI-assisted surgery will enable data-driven decision-making via decision support systems and cognitive robotic assistance. The use of AI for workflow analysis will help provide appropriate assistance in the right context. The requirements for such assistance must be defined by surgeons in close cooperation with computer scientists and engineers. Once the existing challenges have been solved, AI assistance has the potential to improve patient care by supporting the surgeon without replacing him or her.
