Results 1 - 20 of 101
1.
Sci Rep ; 14(1): 10136, 2024 05 02.
Article in English | MEDLINE | ID: mdl-38698049

ABSTRACT

Exocrine and endocrine pancreas are interconnected anatomically and functionally, with vasculature facilitating bidirectional communication. Our understanding of this network remains limited, largely because two-dimensional histology has rarely been combined with three-dimensional imaging. In this study, a multiscale 3D-imaging process was used to analyze a porcine pancreas. Clinical computed tomography, digital volume tomography, micro-computed tomography and synchrotron-based propagation-based imaging were applied consecutively. Fields of view correlated inversely with attainable resolution, ranging from the whole-organism level down to capillary structures with a voxel edge length of 2.0 µm. Segmented vascular networks from the 3D-imaging data were correlated with tissue sections stained by immunohistochemistry and revealed highly vascularized regions to be intra-islet capillaries of islets of Langerhans. The generated 3D datasets allowed for three-dimensional qualitative and quantitative analysis of organ and vessel structure. Beyond this study, the method shows potential for application across a wide range of patho-morphological analyses and may provide microstructural blueprints for biotissue engineering.


Subject(s)
Imaging, Three-Dimensional , Multimodal Imaging , Pancreas , Animals , Imaging, Three-Dimensional/methods , Pancreas/diagnostic imaging , Pancreas/blood supply , Swine , Multimodal Imaging/methods , X-Ray Microtomography/methods , Islets of Langerhans/diagnostic imaging , Islets of Langerhans/blood supply , Tomography, X-Ray Computed/methods
2.
Surg Endosc ; 38(5): 2483-2496, 2024 May.
Article in English | MEDLINE | ID: mdl-38456945

ABSTRACT

OBJECTIVE: Evaluation of the benefits of a virtual reality (VR) environment with a head-mounted display (HMD) for decision-making in liver surgery. BACKGROUND: Training in liver surgery involves appraising radiologic images and considering the patient's clinical information. Accurate assessment of 2D tomography images is complex, requires considerable experience, and the images are often divorced from the clinical information. We present a comprehensive and interactive tool for visualizing operation planning data in a VR environment using a head-mounted display and compare it to 3D visualization and 2D tomography. METHODS: Ninety medical students were randomized into three groups (1:1:1 ratio). All participants analyzed three liver surgery patient cases of increasing difficulty. The cases were analyzed using 2D tomography data (group "2D"), a 3D visualization on a 2D display (group "3D") or within a VR environment (group "VR"). The VR environment was displayed using the "Oculus Rift™" HMD. Participants answered 11 questions on anatomy, tumor involvement and surgical decision-making and 18 evaluative questions (Likert scale). RESULTS: The sum of correct answers was significantly higher in the 3D (7.1 ± 1.4, p < 0.001) and VR (7.1 ± 1.4, p < 0.001) groups than in the 2D group (5.4 ± 1.4), with no difference between 3D and VR (p = 0.987). Answering times in the 3D (6:44 ± 02:22 min, p < 0.001) and VR (6:24 ± 02:43 min, p < 0.001) groups were significantly shorter than in the 2D group (09:13 ± 03:10 min), again with no difference between 3D and VR (p = 0.419). In the questionnaire, the VR environment was rated most useful for identifying anatomic anomalies, risk and target structures, and for transferring anatomical and pathological information to the intraoperative situation. CONCLUSIONS: A VR environment with 3D visualization using an HMD is useful as a surgical training tool for accurately and quickly determining liver anatomy and tumor involvement in surgery.


Subject(s)
Imaging, Three-Dimensional , Tomography, X-Ray Computed , Virtual Reality , Humans , Tomography, X-Ray Computed/methods , Female , Male , Hepatectomy/methods , Hepatectomy/education , Adult , Young Adult , Clinical Decision-Making , User-Computer Interface , Liver Neoplasms/surgery , Liver Neoplasms/diagnostic imaging
3.
ArXiv ; 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-36945687

ABSTRACT

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.
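
As a purely illustrative sketch (not taken from the paper) of the kind of pitfall the work catalogues: the Dice Similarity Coefficient is undefined when the reference mask is empty and penalizes a single stray pixel as harshly as a grossly wrong prediction, which is one reason metric choice has to match the research problem.

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Naive Dice Similarity Coefficient for binary masks."""
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return float("nan")  # undefined when both masks are empty -- a classic pitfall
    return 2.0 * np.logical_and(pred, ref).sum() / denom

ref = np.zeros((10, 10), dtype=bool)          # structure absent in the reference
pred_empty = np.zeros((10, 10), dtype=bool)
pred_one_px = pred_empty.copy()
pred_one_px[0, 0] = True

print(dice(pred_empty, ref))    # nan: metric breaks down on empty structures
print(dice(pred_one_px, ref))   # 0.0: one false-positive pixel scores like a total failure
```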

4.
Surg Endosc ; 38(3): 1379-1389, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38148403

ABSTRACT

BACKGROUND: Image guidance promises to make complex situations in liver interventions safer. Clinical success is limited by intraoperative organ motion due to ventilation and surgical manipulation. The aim was to assess the influence of different ventilatory and operative states on liver motion in an experimental model. METHODS: Liver motion due to ventilation (expiration, middle, and full inspiration) and operative state (native, laparotomy, and pneumoperitoneum) was assessed in a live porcine model (n = 10). Computed tomography (CT) scans were acquired for each pig in every possible combination of these factors. After image segmentation, liver motion was measured as the vectors between predefined landmarks along the hepatic vein tree across CT scans. RESULTS: Liver position changed significantly with ventilation. Peripheral regions of the liver showed significantly greater motion (maximal Euclidean motion 17.9 ± 2.7 mm) than central regions (maximal Euclidean motion 12.6 ± 2.1 mm, p < 0.001) across all operative states. The total average motion measured 11.6 ± 0.7 mm (p < 0.001). Between the operative states, the position of the liver changed the most from the native state to pneumoperitoneum (14.6 ± 0.9 mm, p < 0.001). From the native state to laparotomy, the displacement averaged 9.8 ± 1.2 mm (p < 0.001). With pneumoperitoneum, breath-dependent liver motion was significantly reduced compared to the other operative states: liver motion due to ventilation was 7.7 ± 0.6 mm during pneumoperitoneum, 13.9 ± 1.1 mm with laparotomy, and 13.5 ± 1.4 mm in the native state (p < 0.001 in all cases). CONCLUSIONS: Ventilation and application of pneumoperitoneum caused significant changes in liver position. Liver motion was reduced but clearly measurable during pneumoperitoneum. Intraoperative guidance/navigation systems should therefore account for ventilation and intraoperative changes of liver position and peripheral deformation.
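
In principle, the reported landmark motion is the Euclidean distance between corresponding landmark coordinates in two co-registered CT volumes. The snippet below is a minimal sketch of that calculation with made-up coordinates in millimetres; it is not the authors' processing pipeline.

```python
import numpy as np

# Hypothetical landmark coordinates (x, y, z) in mm along the hepatic vein tree,
# extracted from two CT scans of the same animal in different states.
landmarks_expiration = np.array([[102.3,  88.1, 210.4],
                                 [118.7,  95.6, 198.2],
                                 [131.0, 104.2, 187.9]])
landmarks_inspiration = np.array([[104.1,  96.7, 221.0],
                                  [120.2, 103.9, 209.5],
                                  [132.4, 111.5, 199.1]])

# Per-landmark motion vectors and their Euclidean magnitudes
motion_vectors = landmarks_inspiration - landmarks_expiration
euclidean_motion = np.linalg.norm(motion_vectors, axis=1)

print("per-landmark motion [mm]:", np.round(euclidean_motion, 1))
print("mean motion [mm]:", round(float(euclidean_motion.mean()), 1))
print("maximal motion [mm]:", round(float(euclidean_motion.max()), 1))
```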


Subject(s)
Organ Motion , Pneumoperitoneum , Swine , Animals , Pneumoperitoneum/diagnostic imaging , Pneumoperitoneum/etiology , Laparotomy , Liver/diagnostic imaging , Liver/surgery , Respiration
5.
Sci Data ; 10(1): 414, 2023 06 24.
Article in English | MEDLINE | ID: mdl-37355750

ABSTRACT

Hyperspectral imaging (HSI) is a relatively new medical imaging modality that exploits an area of diagnostic potential formerly untouched. Although exploratory translational and clinical studies exist, no surgical HSI datasets are openly accessible to the general scientific community. To address this bottleneck, this publication releases HeiPorSPECTRAL ( https://www.heiporspectral.org ; https://doi.org/10.5281/zenodo.7737674 ), the first annotated high-quality standardized surgical HSI dataset. It comprises 5,758 spectral images acquired with the TIVITA® Tissue camera and annotated with 20 physiological porcine organs, with 8 pigs per organ drawn from a total of 11 pigs. Each HSI image features a resolution of 480 × 640 pixels acquired over the 500-1000 nm wavelength range. The acquisition protocol was designed so that the variability of organ spectra can be assessed as a function of several parameters, including the camera angle and the individual animal. A comprehensive technical validation confirmed both the quality of the raw data and the annotations. We envision reuse of this dataset both for research questions within its scope and as baseline data for future research questions beyond it. Measurement(s): spectral reflectance. Technology Type(s): hyperspectral imaging. Sample Characteristic - Organism: Sus scrofa.
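
The abstract does not specify the on-disk file format, so the loading step is omitted here; assuming one acquisition has already been read into a NumPy array of shape (480, 640, bands), a spectral fingerprint of an annotated region might be extracted roughly as follows (all array contents below are placeholders).

```python
import numpy as np

n_bands = 100                                        # assumed number of spectral bands
cube = np.random.rand(480, 640, n_bands)             # placeholder reflectance cube
wavelengths = np.linspace(500, 1000, n_bands)        # 500-1000 nm range

mask = np.zeros((480, 640), dtype=bool)              # placeholder organ annotation
mask[200:280, 300:400] = True

# Median reflectance spectrum over the annotated organ region
organ_spectrum = np.median(cube[mask], axis=0)
peak_band = wavelengths[int(np.argmax(organ_spectrum))]
print(f"spectrum with {organ_spectrum.size} bands, peak reflectance near {peak_band:.0f} nm")
```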


Subject(s)
Hyperspectral Imaging , Swine , Swine/anatomy & histology , Animals
6.
Med Image Anal ; 86: 102770, 2023 05.
Article in English | MEDLINE | ID: mdl-36889206

ABSTRACT

PURPOSE: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve training of surgeons via data-driven feedback. In surgical workflow analysis, an average precision of up to 91% has been reported for phase recognition on an open single-center video dataset. In this work we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS: To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment. RESULTS: F1-scores for phase recognition ranged between 23.9% and 67.7% (n = 9 teams), for instrument presence detection between 38.5% and 63.8% (n = 8 teams), but for action recognition only between 21.8% and 23.3% (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION: Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.
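
For orientation, frame-wise phase recognition is commonly scored with an F1-score per phase, averaged over phases; the sketch below uses synthetic labels rather than challenge data, and the challenge's exact evaluation protocol may differ.

```python
import numpy as np
from sklearn.metrics import f1_score

# Synthetic frame-wise labels for a 7-phase cholecystectomy (one label per video frame)
rng = np.random.default_rng(0)
ground_truth = rng.integers(0, 7, size=1000)
noise = rng.integers(0, 7, size=1000)
prediction = np.where(rng.random(1000) < 0.7, ground_truth, noise)

# Macro-averaged F1 over the seven phases
print("macro F1:", round(f1_score(ground_truth, prediction, average="macro"), 3))
```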


Subject(s)
Artificial Intelligence , Benchmarking , Humans , Workflow , Algorithms , Machine Learning
7.
HPB (Oxford) ; 25(6): 625-635, 2023 06.
Article in English | MEDLINE | ID: mdl-36828741

ABSTRACT

BACKGROUND: Anastomotic suturing is the Achilles heel of pancreatic surgery. Especially in laparoscopic and robotically assisted surgery, the pancreatic anastomosis should first be trained outside the operating room; realistic training models are therefore needed. METHODS: Models of the pancreas, small bowel, stomach, bile duct, and a realistic training torso were developed for training of anastomoses in pancreatic surgery. Pancreas models with soft and hard textures and with small and large ducts were incrementally developed and evaluated. Experienced pancreatic surgeons (n = 44) evaluated haptic realism, rigidity, fragility of tissues, and realism of suturing and knot tying. RESULTS: In the iterative development process, the pancreas models showed high haptic realism and the highest realism in suturing (4.6 ± 0.7 and 4.9 ± 0.5 on a 1-5 Likert scale, soft pancreas). The small bowel model showed the highest haptic realism (4.8 ± 0.4), optimal wall thickness (0.1 ± 0.4 on a -2 to +2 Likert scale) and optimal suturing behavior (0.1 ± 0.4). The bile duct models showed optimal wall thickness (0.3 ± 0.8 and 0.4 ± 0.8 on a -2 to +2 Likert scale) and optimal tissue fragility (0 ± 0.9 and 0.3 ± 0.7). CONCLUSION: The biotissue training models showed high haptic realism and realistic suturing behavior. They are suitable for realistic training of anastomoses in pancreatic surgery, which may improve patient outcomes.


Subject(s)
Digestive System Surgical Procedures , Laparoscopy , Humans , Suture Techniques , Laparoscopy/education , Anastomosis, Surgical , Pancreas/surgery , Clinical Competence
8.
Int J Surg ; 109(12): 3883-3895, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38258996

ABSTRACT

BACKGROUND: Small bowel malperfusion (SBM) can cause high morbidity and severe surgical consequences. However, there is no standardized objective measuring tool for the quantification of SBM. Indocyanine green (ICG) imaging can be used for visualization, but lacks standardization and objectivity. Hyperspectral imaging (HSI), as a newly emerging technology in medicine, might present advantages over conventional ICG fluorescence or in combination with it. METHODS: HSI baseline data from physiological small bowel, avascular small bowel and small bowel after intravenous application of ICG were recorded in a total of 54 in vivo pig models. Visualizations of avascular small bowel after mesotomy were compared between HSI only (1), ICG-augmented HSI (IA-HSI) (2), clinical evaluation through the eyes of the surgeon (3) and conventional ICG imaging (4). The primary research focus was the localization of resection borders as suggested by each of the four methods. Distances between these borders were measured, and histological samples were obtained from the regions in between in order to quantify necrotic changes in every region 6 h after mesotomy. RESULTS: StO2 images (1) were capable of visualizing areas of physiological perfusion and areas of clearly impaired perfusion. However, the exact borders where physiological perfusion started to decrease could not be clearly identified. Instead, IA-HSI (2) suggested a sharp resection line where StO2 values started to decrease. Clinical evaluation (3) suggested a resection line 23 mm (±7 mm), and conventional ICG imaging (4) even a resection line 53 mm (±13 mm), closer to the malperfused region. Histopathological evaluation of the region that was sufficiently perfused only according to conventional ICG (R3) already revealed a significant increase in pre-necrotic changes in 27% (±9%) of the surface area. Conventional ICG therefore seems less sensitive than IA-HSI with regard to the detection of insufficient tissue perfusion. CONCLUSIONS: In this experimental animal study, IA-HSI (2) was superior for the visualization of segmental SBM compared to HSI alone (1), clinical evaluation (3) or conventional ICG imaging (4) regarding histopathological safety. ICG application caused visual artifacts in the StO2 values of the HSI camera, as values significantly increased. This is caused by the optical properties of systemic ICG and does not reflect a true increase in oxygenation levels. However, this empirical finding can be used to visualize segmental SBM utilizing ICG as a contrast agent in an approach for IA-HSI. Clinical applicability and relevance will have to be explored in clinical trials. LEVEL OF EVIDENCE: Not applicable. Translational animal science. Original article.


Subject(s)
Hyperspectral Imaging , Indocyanine Green , Animals , Swine , Perfusion , Intestines , Contrast Media
9.
Int J Mol Sci ; 23(15)2022 Aug 02.
Article in English | MEDLINE | ID: mdl-35955720

ABSTRACT

Among advanced therapy medicinal products, tissue-engineered products have the potential to address the current critical shortage of donor organs and provide future alternative options in organ replacement therapy. The clinically available tissue-engineered products comprise bradytrophic tissue such as skin, cornea, and cartilage. A sufficient macro- and microvascular network to support the viability and function of effector cells has been identified as one of the main challenges in developing bioartificial parenchymal tissue. Three-dimensional bioprinting is an emerging technology that might overcome this challenge by precise spatial bioink deposition for the generation of a predefined architecture. Bioinks are printing substrates that may contain cells, matrix compounds, and signaling molecules within support materials such as hydrogels. Bioinks can provide cues to promote vascularization, including proangiogenic signaling molecules and cocultured cells. Both of these strategies are reported to enhance vascularization. We review pre-, intra-, and postprinting strategies such as bioink composition, bioprinting platforms, and material deposition strategies for building vascularized tissue. In addition, bioconvergence approaches such as computer simulation and artificial intelligence can support current experimental designs. Imaging-derived vascular trees can serve as blueprints. While acknowledging that a lack of structured evidence inhibits further meta-analysis, this review discusses an end-to-end process for the fabrication of vascularized, parenchymal tissue.


Subject(s)
Bioprinting , Artificial Intelligence , Bioprinting/methods , Computer Simulation , Printing, Three-Dimensional , Tissue Engineering/methods , Tissue Scaffolds/chemistry
11.
Sci Rep ; 12(1): 11028, 2022 06 30.
Article in English | MEDLINE | ID: mdl-35773276

ABSTRACT

Visual discrimination of tissue during surgery can be challenging, since different tissues appear similar to the human eye. Hyperspectral imaging (HSI) removes this limitation by associating each pixel with high-dimensional spectral information. While previous work has shown its general potential to discriminate tissue, clinical translation has been limited due to the method's current lack of robustness and generalizability. Specifically, the scientific community is lacking a comprehensive spectral tissue atlas, and it is unknown whether variability in spectral reflectance is primarily explained by tissue type rather than the recorded individual or specific acquisition conditions. The contribution of this work is threefold: (1) Based on an annotated medical HSI data set (9059 images from 46 pigs), we present a tissue atlas featuring spectral fingerprints of 20 different porcine organs and tissue types. (2) Using the principle of mixed model analysis, we show that the greatest source of variability in the HSI images is the organ under observation. (3) We show that HSI-based fully automatic tissue differentiation of 20 organ classes with deep neural networks is possible with high accuracy (> 95%). We conclude from our study that automatic tissue discrimination based on HSI data is feasible and could thus aid in intraoperative decision-making and pave the way for context-aware computer-assisted surgery systems and autonomous robotics.
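
A spectral fingerprint in this sense can be thought of as the median spectrum of an organ over all of its annotated pixels; the sketch below builds such fingerprints from placeholder data and assigns a query spectrum to the nearest one, which is a far simpler baseline than the deep networks used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_pixels, n_organs = 100, 5000, 20
spectra = rng.random((n_pixels, n_bands))            # placeholder reflectance spectra
organ_labels = rng.integers(0, n_organs, n_pixels)   # placeholder organ annotation per pixel

# "Spectral fingerprint" of each organ: its median spectrum
fingerprints = {organ: np.median(spectra[organ_labels == organ], axis=0)
                for organ in range(n_organs)}

# Nearest-fingerprint assignment of a new spectrum
query = spectra[0]
best = min(fingerprints, key=lambda organ: np.linalg.norm(query - fingerprints[organ]))
print("assigned organ class:", best)
```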


Subject(s)
Hyperspectral Imaging , Machine Learning , Animals , Neural Networks, Computer , Swine
12.
Stud Health Technol Inform ; 290: 345-349, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35673032

ABSTRACT

Operating rooms are a major cost factor in a hospital's budget; there is therefore a need for process optimization in the operating room (OR). However, the collection of key figures for process optimization is often done manually by medical staff, which can be erroneous, inaccurate, time-consuming, and incomplete. Automated, data-driven approaches are intended to address these problems and help to obtain the most precise picture possible of what is happening within the OR. At Heidelberg University Hospital (UKHD), a distributed AI-based streaming analytics architecture was set up and integrated into the Medical Data Integration Center (MeDIC). This architecture can process, store, and visualize heterogeneous data from different sources. Data from medical devices and the video streams of the wall-mounted cameras of four integrated operating rooms are ingested into the system. Key figures computed from these data in real time, including OR state and utilization numbers, are visualized in a dashboard for monitoring and decision support. Because of high data-protection hurdles, the proposed system, and especially the video analytics, was trained and tested with actors standing in for OR staff and did not run during real procedures. Studies to evaluate and test the system during live surgeries are planned.
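
One of the simplest key figures of this kind is OR utilization, i.e. the fraction of core working time during which the room is occupied. The sketch below computes it from a hypothetical list of state-change events; the event schema is an assumption, not the MeDIC data model.

```python
from datetime import datetime, timedelta

# Hypothetical state-change events for one OR on one day (assumed schema)
events = [
    ("occupied", datetime(2022, 3, 1, 7, 45)),
    ("free",     datetime(2022, 3, 1, 10, 5)),
    ("occupied", datetime(2022, 3, 1, 10, 40)),
    ("free",     datetime(2022, 3, 1, 14, 20)),
]
core_time_start = datetime(2022, 3, 1, 7, 30)
core_time_end = datetime(2022, 3, 1, 15, 30)

occupied = timedelta()
for (state, start), (_, end) in zip(events, events[1:]):
    if state == "occupied":
        occupied += end - start

utilization = occupied / (core_time_end - core_time_start)
print(f"OR utilization: {utilization:.0%}")   # 75% for these example events
```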


Subject(s)
Operating Rooms , Hospitals, University , Humans
13.
Med Image Anal ; 80: 102488, 2022 08.
Article in English | MEDLINE | ID: mdl-35667327

ABSTRACT

Semantic image segmentation is an important prerequisite for context-awareness and autonomous robotics in surgery. The state of the art has focused on conventional RGB video data acquired during minimally invasive surgery, but full-scene semantic segmentation based on spectral imaging data and obtained during open surgery has received almost no attention to date. To address this gap in the literature, we are investigating the following research questions based on hyperspectral imaging (HSI) data of pigs acquired in an open surgery setting: (1) What is an adequate representation of HSI data for neural network-based fully automated organ segmentation, especially with respect to the spatial granularity of the data (pixels vs. superpixels vs. patches vs. full images)? (2) Is there a benefit of using HSI data compared to other modalities, namely RGB data and processed HSI data (e.g. tissue parameters like oxygenation), when performing semantic organ segmentation? According to a comprehensive validation study based on 506 HSI images from 20 pigs, annotated with a total of 19 classes, deep learning-based segmentation performance increases - consistently across modalities - with the spatial context of the input data. Unprocessed HSI data offers an advantage over RGB data or processed data from the camera provider, with the advantage increasing with decreasing size of the input to the neural network. Maximum performance (HSI applied to whole images) yielded a mean DSC of 0.90 (standard deviation (SD) 0.04), which is in the range of the inter-rater variability (DSC of 0.89, SD 0.07). We conclude that HSI could become a powerful image modality for fully-automatic surgical scene understanding with many advantages over traditional imaging, including the ability to recover additional functional tissue information. Our code and pre-trained models are available at https://github.com/IMSY-DKFZ/htc.
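
The reported mean DSC aggregates class-wise Dice scores over the 19 annotated classes. A minimal sketch of that aggregation on synthetic label maps is given below; it is not the authors' released code, which is linked above.

```python
import numpy as np

def class_dice(pred: np.ndarray, ref: np.ndarray, cls: int) -> float:
    """Dice score of one class in a multi-class label map."""
    p, r = pred == cls, ref == cls
    denom = p.sum() + r.sum()
    return 2.0 * np.logical_and(p, r).sum() / denom if denom else float("nan")

rng = np.random.default_rng(2)
n_classes = 19
ref = rng.integers(0, n_classes, size=(480, 640))             # synthetic reference label map
noise = rng.integers(0, n_classes, size=(480, 640))
pred = np.where(rng.random((480, 640)) < 0.9, ref, noise)     # synthetic prediction

dsc_per_class = [class_dice(pred, ref, c) for c in range(n_classes)]
print("mean DSC:", round(float(np.nanmean(dsc_per_class)), 3))
```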


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Animals , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Semantics , Swine
14.
Int J Comput Assist Radiol Surg ; 17(8): 1477-1486, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35624404

ABSTRACT

PURPOSE: As human failure has been shown to be a primary cause of post-operative death, surgical training is of the utmost socioeconomic importance. In this context, the concept of surgical telestration has been introduced to enable experienced surgeons to efficiently and effectively mentor trainees in an intuitive way. While previous approaches to telestration have concentrated on overlaying drawings on surgical videos, we explore the augmented reality (AR) visualization of surgical hands to imitate direct interaction with the situs. METHODS: We present a real-time hand tracking pipeline specifically designed for the application of surgical telestration. It comprises three modules, dedicated to (1) coarse localization of the expert's hand, (2) subsequent segmentation of the hand for AR visualization in the field of view of the trainee and (3) regression of keypoints making up the hand's skeleton. The resulting semantic representation also enables structured reporting of the motions performed as part of the teaching. RESULTS: According to a comprehensive validation based on a large data set comprising more than 14,000 annotated images with varying application-relevant conditions, our algorithm enables real-time hand tracking and is sufficiently accurate for the task of surgical telestration. In a retrospective validation study, a mean detection accuracy of 98%, a mean keypoint regression accuracy of 10.0 px and a mean Dice Similarity Coefficient of 0.95 were achieved. In a prospective validation study, the pipeline showed uncompromised performance when the sensor, operator or gesture varied. CONCLUSION: Due to its high accuracy and fast inference time, our neural network-based approach to hand tracking is well suited for an AR approach to surgical telestration. Future work should be directed to evaluating the clinical value of the approach.
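
The keypoint regression accuracy of 10.0 px corresponds to a mean Euclidean error between predicted and annotated keypoint positions; the sketch below illustrates that metric with made-up coordinates and is not part of the tracking pipeline itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n_keypoints = 21                                           # assumed hand-skeleton size
annotated = rng.uniform(0, 1080, size=(n_keypoints, 2))    # placeholder annotations [px]
predicted = annotated + rng.normal(0, 8, size=(n_keypoints, 2))  # placeholder predictions

errors = np.linalg.norm(predicted - annotated, axis=1)
print("mean keypoint regression error [px]:", round(float(errors.mean()), 1))
```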


Subject(s)
Algorithms , Augmented Reality , Hand/surgery , Humans , Neural Networks, Computer , Retrospective Studies
15.
J Tissue Eng ; 13: 20417314221091033, 2022.
Article in English | MEDLINE | ID: mdl-35462988

ABSTRACT

Three-dimensional bioprinting of an endocrine pancreas is a promising future curative treatment for patients with insulin secretion deficiency. In this study, we present an end-to-end concept from the molecular to the macroscopic level. Building blocks for a hybrid scaffold device made of hydrogel and functionalized polycaprolactone were manufactured by 3D-(bio)printing. Pseudoislet formation from INS-1 cells after bioprinting resulted in a viable and proliferative experimental model. Transcriptomics showed an upregulation of proliferative and ß-cell-specific signaling cascades, downregulation of apoptotic pathways, and overexpression of extracellular matrix proteins and VEGF, induced by pseudoislet formation and 3D culture. Co-culture with endothelial cells created a natural cellular niche with enhanced insulin secretion after glucose stimulation. Survival and function of pseudoislets after explantation and extensive scaffold vascularization of both hydrogel and heparinized polycaprolactone were demonstrated in vivo. Computer simulations of oxygen, glucose and insulin flows were used to evaluate scaffold architectures and Langerhans islets at a future perivascular transplantation site.

16.
Med Image Anal ; 76: 102306, 2022 02.
Article in English | MEDLINE | ID: mdl-34879287

ABSTRACT

Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.


Subject(s)
Data Science , Machine Learning , Humans
17.
Surg Endosc ; 36(1): 126-134, 2022 01.
Article in English | MEDLINE | ID: mdl-33475848

ABSTRACT

BACKGROUND: Virtual reality (VR) with head-mounted displays (HMD) may improve medical training and patient care by improving display and integration of different types of information. The aim of this study was to evaluate among different healthcare professions the potential of an interactive and immersive VR environment for liver surgery that integrates all relevant patient data from different sources needed for planning and training of procedures. METHODS: 3D-models of the liver, other abdominal organs, vessels, and tumors of a sample patient with multiple hepatic masses were created. 3D-models, clinical patient data, and other imaging data were visualized in a dedicated VR environment with an HMD (IMHOTEP). Users could interact with the data using head movements and a computer mouse. Structures of interest could be selected and viewed individually or grouped. IMHOTEP was evaluated in the context of preoperative planning and training of liver surgery and for its potential for broader surgical application. A standardized questionnaire was voluntarily answered by four groups (students, nurses, resident and attending surgeons). RESULTS: In the evaluation by 158 participants (57 medical students, 35 resident surgeons, 13 attending surgeons and 53 nurses), 89.9% found the VR system agreeable to work with. Participants generally agreed that complex cases in particular could be assessed better (94.3%) and faster (84.8%) with VR than with traditional 2D display methods. The highest potential was seen in student training (87.3%), resident training (84.6%), and clinical routine use (80.3%). Least potential was seen in nursing training (54.8%). CONCLUSIONS: The present study demonstrates that using VR with HMD to integrate all available patient data for the preoperative planning of hepatic resections is a viable concept. VR with HMD promises great potential to improve medical training and operation planning and thereby to achieve improvement in patient care.


Subject(s)
Surgeons , Virtual Reality , Humans , Liver , User-Computer Interface
18.
Obes Surg ; 31(11): 4692-4700, 2021 11.
Article in English | MEDLINE | ID: mdl-34331186

ABSTRACT

PURPOSE: Accurate laparoscopic bowel length measurement (LBLM), which is used primarily in metabolic surgery, remains a challenge. This study aims to compare three conventional methods for LBLM, namely visual judgment (VJ), instrument markings (IM), and premeasured tape (PT), with a novel computer-assisted 3D measurement system (BMS). MATERIALS AND METHODS: LBLM methods were compared using a 3D laparoscope on bowel phantoms regarding accuracy (relative error in percent, %), time in seconds (s), and number of bowel grasps. Seventy centimeters of bowel were measured seven times. As a control, the first, third, fifth, and seventh measurements were performed with VJ. The interventions IM, PT, and BMS were performed in randomized order as the second, fourth, and sixth measurements. RESULTS: In total, 63 people participated. BMS showed better accuracy (2.1±3.7%) compared to VJ (8.7±13.7%, p=0.001), PT (4.3±6.8%, p=0.002), and IM (11±15.3%, p<0.001). Participants performed LBLM in a similar amount of time with BMS (175.7±59.7s) and PT (166.5±63.6s, p=0.35), whereas VJ (64.0±24.0s, p<0.001) and IM (144.9±55.4s, p=0.002) were faster. The number of bowel grasps, as a surrogate for the risk of bowel lesions, was similar for BMS (15.8±3.0) and PT (15.9±4.6, p=0.861), whereas VJ required fewer (14.1±3.4, p=0.004) and IM more (22.2±6.9, p<0.001) than BMS. CONCLUSIONS: PT had higher accuracy than VJ and IM, and required fewer bowel grasps than IM. BMS shows great potential for more reliable LBLM. Until BMS is available in clinical routine, PT should be preferred for LBLM.
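
Accuracy here is the relative measurement error as a percentage of the 70 cm reference length; a one-line sketch of that calculation with an example measurement:

```python
reference_cm = 70.0
measured_cm = 73.0   # example value, not study data

relative_error_percent = abs(measured_cm - reference_cm) / reference_cm * 100
print(f"relative error: {relative_error_percent:.1f} %")   # 4.3 %
```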


Subject(s)
Laparoscopy , Obesity, Morbid , Computers , Humans , Intestines , Obesity, Morbid/surgery
19.
Surgery ; 170(5): 1517-1524, 2021 11.
Article in English | MEDLINE | ID: mdl-34187695

ABSTRACT

BACKGROUND: Pancreatic surgery is associated with considerable morbidity and, consequently, offers a large and complex field for research. To prioritize relevant future scientific projects, it is of utmost importance to identify existing evidence and uncover research gaps. Thus, the aim of this project was to create a systematic and living Evidence Map of Pancreatic Surgery. METHODS: PubMed, the Cochrane Central Register of Controlled Trials, and Web of Science were systematically searched for all randomized controlled trials and systematic reviews on pancreatic surgery. Outcomes from every existing randomized controlled trial were extracted, and trial quality was assessed. Systematic reviews were used to identify topics for which randomized controlled trials are lacking. Randomized controlled trials and systematic reviews on identical subjects were grouped according to research topics. A web-based evidence map modeled after a mind map was created to visualize existing evidence. Meta-analyses of specific outcomes of pancreatic surgery were performed for all research topics with more than 3 randomized controlled trials. For partial pancreatoduodenectomy and distal pancreatectomy, pooled benchmarks for outcomes were calculated with a 99% confidence interval. The evidence map undergoes regular updates. RESULTS: Out of 30,860 articles reviewed, 328 randomized controlled trials on 35,600 patients and 332 systematic reviews were included and grouped into 76 research topics. Most randomized controlled trials were from Europe (46%) and most systematic reviews were from Asia (51%). A living meta-analysis of 21 out of 76 research topics (28%) was performed and included in the web-based evidence map. Evidence gaps were identified in 11 out of 76 research topics (14%). The benchmark for mortality was 2% (99% confidence interval: 1%-2%) for partial pancreatoduodenectomy and <1% (99% confidence interval: 0%-1%) for distal pancreatectomy. The benchmark for overall complications was 53% (99% confidence interval: 46%-61%) for partial pancreatoduodenectomy and 59% (99% confidence interval: 44%-80%) for distal pancreatectomy. CONCLUSION: The International Study Group of Pancreatic Surgery Evidence Map of Pancreatic Surgery, which is freely accessible via www.evidencemap.surgery and as a mobile phone app, provides a regularly updated overview of the available literature displayed in an intuitive fashion. Clinical decision making and evidence-based patient information are supported by the primary data provided, as well as by living meta-analyses. Researchers can use the systematic literature search and processed data for their own projects, and funding bodies can base their research priorities on evidence gaps that the map uncovers.
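
For orientation only, a 99% confidence interval for a simple event proportion can be approximated with the normal distribution as sketched below; the living meta-analysis pools trial-level data and uses a more elaborate model, and the numbers here are purely illustrative.

```python
from math import sqrt

def proportion_ci_99(events: int, n: int) -> tuple[float, float]:
    """Normal-approximation 99% confidence interval for a proportion (z = 2.576)."""
    p = events / n
    half_width = 2.576 * sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

low, high = proportion_ci_99(events=40, n=2000)            # illustrative counts only
print(f"mortality: {40 / 2000:.1%} (99% CI {low:.1%}-{high:.1%})")
```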


Subject(s)
Digestive System Surgical Procedures , Pancreas/surgery , Evidence-Based Medicine , Humans
20.
NPJ Digit Med ; 4(1): 69, 2021 Apr 12.
Article in English | MEDLINE | ID: mdl-33846548

ABSTRACT

The COVID-19 pandemic has worldwide individual and socioeconomic consequences. Chest computed tomography has been found to support diagnostics and disease monitoring. A standardized approach to generate, collect, analyze, and share clinical and imaging information in the highest quality possible is urgently needed. We developed systematic, computer-assisted and context-guided electronic data capture on the FDA-approved mint Lesion™ software platform to enable cloud-based data collection and real-time analysis. The acquisition and annotation include radiological findings and radiomics performed directly on primary imaging data together with information from the patient history and clinical data. As proof of concept, anonymized data of 283 patients with either suspected or confirmed SARS-CoV-2 infection from eight European medical centers were aggregated in data analysis dashboards. Aggregated data were compared to key findings of landmark research literature. This concept has been chosen for use in the national COVID-19 response of the radiological departments of all university hospitals in Germany.
