Results 1 - 20 of 105
1.
Neural Netw ; 178: 106469, 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38925030

ABSTRACT

Robot-assisted surgery is developing rapidly in the medical field, and the integration of augmented reality shows potential to improve surgeons' operative performance by providing additional visual information. In this paper, we propose a markerless augmented reality framework that enhances safety by helping to avoid intra-operative bleeding, a high risk caused by collisions between surgical instruments and delicate blood vessels (arteries or veins). Advanced stereo reconstruction and segmentation networks are compared to find the best combination for reconstructing the intra-operative blood vessel in 3D space for registration with the pre-operative model, and minimum-distance detection between the instruments and the blood vessel is implemented. A robot-assisted lymphadenectomy is emulated on the da Vinci Research Kit in a dry lab, and ten human subjects perform the operation to explore the usability of the proposed framework. The results show that the augmented reality framework helps users avoid dangerous collisions between the instruments and the delicate blood vessel without introducing extra workload. It provides a flexible framework that integrates augmented reality into a medical robotic platform to enhance safety during surgery.
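The minimum-distance detection described in this abstract can be illustrated with a brute-force sketch; the function names, toy vessel points, and the 5 mm safety threshold are hypothetical and not taken from the paper:

```python
import math

def min_distance(tip, vessel_points):
    """Brute-force minimum Euclidean distance (mm) between an
    instrument tip and a reconstructed vessel point cloud."""
    return min(math.dist(tip, p) for p in vessel_points)

def collision_warning(tip, vessel_points, threshold_mm=5.0):
    """Flag a potential collision when the tip comes within the
    safety margin of the vessel."""
    return min_distance(tip, vessel_points) < threshold_mm

# Toy vessel: three points along a straight segment (mm coordinates).
vessel = [(0.0, 0.0, 0.0), (0.0, 10.0, 0.0), (0.0, 20.0, 0.0)]
print(collision_warning((3.0, 10.0, 0.0), vessel))  # tip 3 mm away -> True
print(collision_warning((8.0, 10.0, 0.0), vessel))  # tip 8 mm away -> False
```

A real system would use a spatial index (e.g., a k-d tree) over the dense reconstructed point cloud rather than a linear scan.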

2.
Comput Methods Programs Biomed ; 244: 107937, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38006707

ABSTRACT

BACKGROUND AND OBJECTIVE: The safety of robotic surgery can be enhanced through augmented vision or artificial constraints on the robot's motion, and intra-operative depth estimation is the cornerstone of these applications because it provides precise position information about surgical scenes in 3D space. High-quality depth estimation of endoscopic scenes remains a challenging problem, and the development of deep learning offers new possibilities for addressing it. METHODS: In this paper, a deep learning-based approach is proposed to recover 3D information of intra-operative scenes. To this aim, a fully 3D encoder-decoder network integrating spatio-temporal layers is designed; it adopts hierarchical prediction and progressive learning to enhance prediction accuracy and shorten training time. RESULTS: Our network achieves a depth estimation accuracy of MAE 2.55±1.51 mm and RMSE 5.23±1.40 mm on 8 surgical videos with a resolution of 1280×1024, outperforming six other state-of-the-art methods trained on the same data. CONCLUSIONS: Our network achieves promising depth estimation performance in intra-operative scenes using stereo images, allowing integration into robot-assisted surgery to enhance safety.
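The MAE and RMSE figures reported above are standard depth-map error metrics; a generic sketch over toy values (not the paper's evaluation code) is:

```python
import math

def depth_mae(pred, gt):
    """Mean absolute error between predicted and ground-truth depths."""
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(gt)

def depth_rmse(pred, gt):
    """Root mean squared error; penalizes large depth outliers more."""
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(gt))

pred = [10.0, 20.0, 30.0]  # predicted depths (mm), flattened map
gt   = [12.0, 20.0, 27.0]  # ground-truth depths (mm)
print(depth_mae(pred, gt))   # (2 + 0 + 3) / 3 = 1.666...
print(depth_rmse(pred, gt))
```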


Subject(s)
Robotic Surgical Procedures , Motion
3.
PLoS One ; 18(8): e0289777, 2023.
Article in English | MEDLINE | ID: mdl-37561691

ABSTRACT

The microgravity exposure that astronauts undergo during space missions lasting up to 6 months induces biochemical and physiological changes potentially impacting their health. As a countermeasure, astronauts perform an in-flight training program consisting of different resistive exercises. To train optimally and safely, astronauts need guidance from on-ground specialists via a real-time audio/video system that, however, is subject to a communication delay that increases in proportion to the distance between sender and receiver. The aim of this work was to develop and validate a wearable IMU-based biofeedback system to monitor astronauts' in-flight training, displaying real-time feedback on exercise execution. Such a system also has potential spin-offs in personalized home/remote training for fitness and rehabilitation. Twenty-nine subjects were recruited according to physical shape and performance criteria to collect kinematics data under ethical committee approval. Tests were conducted to (i) compare the signals acquired with our system to those obtained with current state-of-the-art inertial sensors and (ii) assess the exercise classification performance. The magnitude-squared coherence between the signals collected with the two different systems shows good agreement between the data. Multiple classification algorithms were tested, and the best accuracy was obtained using a Multi-Layer Perceptron (MLP). The MLP was also able to identify mixed errors during exercise execution, a scenario that is quite common during training. The resulting system represents a novel low-cost training monitoring tool for space applications, with potential use on Earth for individuals working out at home or remotely thanks to its ease of use and portability.
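Agreement between two sensing systems via magnitude-squared coherence, as used above, can be sketched with SciPy on synthetic signals; the sampling rate, signal frequency, and noise levels below are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                          # assumed IMU sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)

# Two sensors observing the same 1.5 Hz movement with independent noise.
x = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.standard_normal(t.size)
y = x + 0.1 * rng.standard_normal(t.size)

# Magnitude-squared coherence: ~1 where the signals share power.
f, cxy = coherence(x, y, fs=fs, nperseg=256)
print(cxy[np.argmin(np.abs(f - 1.5))])  # high coherence at the movement frequency
```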


Subject(s)
Space Flight , Telemedicine , Weightlessness , Humans , Astronauts , Exercise Therapy
4.
Comput Biol Med ; 163: 107121, 2023 09.
Article in English | MEDLINE | ID: mdl-37311383

ABSTRACT

3D reconstruction of intra-operative scenes provides precise position information, which is the foundation of various safety-related applications in robot-assisted surgery, such as augmented reality. Herein, a framework integrated into a known surgical system is proposed to enhance the safety of robotic surgery. In this paper, we present a scene reconstruction framework to restore the 3D information of the surgical site in real time. In particular, a lightweight encoder-decoder network is designed to perform disparity estimation, the key component of the scene reconstruction framework. The stereo endoscope of the da Vinci Research Kit (dVRK) is adopted to explore the feasibility of the proposed approach, and the framework's weak dependence on specific hardware makes migration to other Robot Operating System (ROS)-based robot platforms possible. The framework is evaluated in three different scenarios: a public dataset (3018 pairs of endoscopic images), the scene from the dVRK endoscope in our lab, and a self-made clinical dataset captured in an oncology hospital. Experimental results show that the proposed framework can reconstruct 3D surgical scenes in real time (25 FPS) with high accuracy (2.69 ± 1.48 mm in MAE, 5.47 ± 1.34 mm in RMSE and 0.41 ± 0.23 in SRE, respectively). This demonstrates that our framework can reconstruct intra-operative scenes reliably in terms of both accuracy and speed, and the validation on clinical data also shows its potential in surgery. This work advances the state of the art in 3D intra-operative scene reconstruction on medical robot platforms. The clinical dataset has been released to promote the development of scene reconstruction in the medical imaging community.
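Disparity estimation feeds 3D reconstruction through the standard pinhole stereo relation Z = f·b/d; a minimal sketch with hypothetical calibration values (not the dVRK endoscope's actual parameters):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Pinhole stereo relation: depth Z = f * b / d.
    Larger disparity means the point is closer to the cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Hypothetical calibration: focal length 1000 px, baseline 5 mm.
print(disparity_to_depth(25.0, 1000.0, 5.0))  # 200.0 mm
```

Applying this per pixel to a dense disparity map, together with the intrinsic matrix, yields the 3D point cloud used for scene reconstruction.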


Subject(s)
Robotics , Surgery, Computer-Assisted , Surgery, Computer-Assisted/methods , Reproducibility of Results , Imaging, Three-Dimensional/methods , Minimally Invasive Surgical Procedures
5.
Int J Comput Assist Radiol Surg ; 18(10): 1849-1856, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37083973

ABSTRACT

PURPOSE: Primary central nervous system lymphoma (PCNSL) is a rare, aggressive form of extranodal non-Hodgkin lymphoma. Predicting overall survival (OS) in advance is of utmost importance, as it has the potential to aid clinical decision-making. Though radiomics-based machine learning (ML) has demonstrated promising performance in PCNSL, it demands large amounts of manual feature extraction from magnetic resonance images beforehand; deep learning (DL) overcomes this limitation. METHODS: In this paper, we tailored a 3D ResNet to predict the OS of patients with PCNSL. To overcome the limitation of data sparsity, we introduced data augmentation and transfer learning, and we evaluated the results using stratified k-fold cross-validation. To explain the results of our model, gradient-weighted class activation mapping was applied. RESULTS: We obtained the best performance (with standard error) on post-contrast T1-weighted (T1Gd) images: area under curve [Formula: see text], accuracy [Formula: see text], precision [Formula: see text], recall [Formula: see text] and F1-score [Formula: see text], when compared with ML-based models on clinical data and radiomics data, respectively, further confirming the stability of our model. We also observed that PCNSL is a whole-brain disease and that, in cases where the OS is less than 1 year, it is more difficult to distinguish the tumor boundary from the normal part of the brain, which is consistent with the clinical outcome. CONCLUSIONS: All these findings indicate that T1Gd can improve prognosis prediction for patients with PCNSL. To the best of our knowledge, this is the first use of DL to explain model patterns in OS classification of patients with PCNSL. Future work will involve collecting more data from patients with PCNSL, or additional retrospective studies on different patient populations with rare diseases, to further promote the clinical role of our model.
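Stratified k-fold cross-validation, used above to cope with data sparsity, keeps class proportions roughly equal across folds; a minimal pure-Python sketch (a round-robin simplification, not the authors' pipeline):

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Assign each sample index to one of k folds so that every fold
    keeps roughly the overall class proportions (round-robin per class)."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for j, idx in enumerate(indices):
            folds[j % k].append(idx)
    return folds

# Imbalanced toy cohort: 6 long-survival vs. 3 short-survival labels.
labels = [0] * 6 + [1] * 3
for fold in stratified_kfold(labels, 3):
    print(sorted(fold))  # each fold holds two 0-labels and one 1-label
```

In practice one would shuffle within each class first; libraries such as scikit-learn provide this as `StratifiedKFold`.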


Subject(s)
Brain Neoplasms , Central Nervous System Neoplasms , Deep Learning , Lymphoma , Humans , Retrospective Studies , Lymphoma/diagnostic imaging , Central Nervous System , Central Nervous System Neoplasms/diagnostic imaging , Central Nervous System Neoplasms/therapy
6.
Med Image Anal ; 85: 102751, 2023 04.
Article in English | MEDLINE | ID: mdl-36716700

ABSTRACT

Automatic surgical instrument segmentation of endoscopic images is a crucial building block of many computer-assistance applications for minimally invasive surgery. So far, state-of-the-art approaches rely completely on the availability of a ground-truth supervision signal, obtained via manual annotation and thus expensive to collect at large scale. In this paper, we present FUN-SIS, a Fully-UNsupervised approach for binary Surgical Instrument Segmentation. FUN-SIS trains a per-frame segmentation model on completely unlabelled endoscopic videos, relying solely on implicit motion information and instrument shape-priors. We define shape-priors as realistic segmentation masks of the instruments, not necessarily coming from the same dataset/domain as the videos. The shape-priors can be collected in various convenient ways, such as recycling existing annotations from other datasets. We leverage them as part of a novel generative-adversarial approach, allowing us to perform unsupervised instrument segmentation of optical-flow images during training. We then use the obtained instrument masks as pseudo-labels to train a per-frame segmentation model; to this aim, we develop a learning-from-noisy-labels architecture designed to extract a clean supervision signal from these pseudo-labels, leveraging their peculiar noise properties. We validate the proposed contributions on three surgical datasets, including the MICCAI 2017 EndoVis Robotic Instrument Segmentation Challenge dataset. The obtained fully-unsupervised results are almost on par with those of fully-supervised state-of-the-art approaches. This suggests the tremendous potential of the proposed method to leverage the great amount of unlabelled data produced in the context of minimally invasive surgery.


Subject(s)
Image Processing, Computer-Assisted , Robotics , Humans , Image Processing, Computer-Assisted/methods , Endoscopy , Surgical Instruments
7.
Int J Comput Assist Radiol Surg ; 17(12): 2315-2323, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35802223

ABSTRACT

PURPOSE: Advanced developments in the medical field have gradually increased the public demand for surgical skill evaluation. However, this assessment traditionally depends on the direct observation of experienced surgeons, which is time-consuming and variable. The introduction of robot-assisted surgery provides a new possibility for this evaluation paradigm. This paper aims at evaluating surgeon performance automatically with novel evaluation metrics based on different surgical data. METHODS: Urologists ([Formula: see text]) from a hospital were asked to perform a simplified neobladder reconstruction on an ex vivo setup twice, with different camera modalities ([Formula: see text]) in randomized order. They were divided into novices and experts ([Formula: see text], respectively) according to their experience in robot-assisted surgery. Different performance metrics ([Formula: see text]) are proposed to achieve surgical skill evaluation, considering both the instruments and the endoscope. Nonparametric tests are adopted to check for significant differences in surgeons' performance. RESULTS: When grouping according to the four stages of neobladder reconstruction, statistically significant differences can be appreciated in phase 1 ([Formula: see text]) and phase 2 ([Formula: see text]) with normalized time-related metrics and camera movement-related metrics, respectively. On the other hand, grouping by experience shows that both metrics are able to highlight statistically significant differences between novice and expert performances in the control protocol. It also shows that the camera-related performance of experts differs significantly ([Formula: see text]) between manual and automatic handling of the endoscope. CONCLUSION: Surgical skill evaluation using the approach in this paper can effectively distinguish the performance of surgeons with different levels of experience. Preliminary results demonstrate that different surgical data can be fully exploited to improve the reliability of surgical evaluation, and the approach shows versatility and potential for the quantitative assessment of various surgical operations.
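Nonparametric testing of the kind used above, comparing metric distributions between novices and experts, can be sketched with SciPy's Mann-Whitney U test; the task-completion times below are invented for illustration, not measured data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical task-completion times (s): experts tend to finish faster.
novices = [310, 295, 330, 340, 305, 320]
experts = [180, 200, 175, 190, 185, 195]

# Two-sided Mann-Whitney U: no normality assumption, suits small samples.
stat, p = mannwhitneyu(novices, experts, alternative="two-sided")
print(p < 0.05)  # clearly separated groups -> significant difference
```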


Subject(s)
Robotic Surgical Procedures , Robotics , Surgeons , Humans , Reproducibility of Results , Clinical Competence , Robotic Surgical Procedures/methods
8.
Front Robot AI ; 8: 707704, 2021.
Article in English | MEDLINE | ID: mdl-34901168

ABSTRACT

Robots for minimally invasive surgery introduce many advantages, but still require the surgeon to alternately control the surgical instruments and the endoscope. This work aims at providing autonomous navigation of the endoscope during a surgical procedure. The autonomous endoscope motion was based on kinematic tracking of the surgical instruments and integrated with the da Vinci Research Kit. A preclinical usability study was conducted with 10 urologists. They carried out an ex vivo orthotopic neobladder reconstruction twice, using both traditional and autonomous endoscope control. The usability of the system was tested by asking participants to fill in standard system usability scales. Moreover, the effectiveness of the method was assessed by analyzing the total procedure time and the time spent with the instruments out of the field of view. The average system usability score exceeded the threshold usually identified as the limit for good usability (average score = 73.25 > 68). The average total procedure time with autonomous endoscope navigation was comparable with classic control (p = 0.85 > 0.05), yet it significantly reduced the time out of the field of view (p = 0.022 < 0.05). Based on our findings, autonomous endoscope navigation improves the usability of the surgical system, and it has the potential to be an additional, customizable tool for the surgeon, who can always take control of the endoscope or leave it to move autonomously.

9.
J Alzheimers Dis ; 80(3): 1025-1038, 2021.
Article in English | MEDLINE | ID: mdl-33646164

ABSTRACT

BACKGROUND: Virtual reality (VR) has recently emerged as a promising means for the administration of cognitive training to seniors at risk of dementia. Immersive VR could result in increased engagement and performance; however, its acceptance among older adults with cognitive deficits still has to be assessed. OBJECTIVE: To assess the acceptance and usability of an immersive VR environment requiring real walking and active participant interaction. METHODS: 58 seniors with mild cognitive impairment (MCI, n = 24) or subjective cognitive decline (SCD, n = 31) performed a shopping task in a virtual supermarket displayed through a head-mounted display. Subjective and objective outcomes were evaluated. RESULTS: Immersive VR was well accepted by all but one participant (TAM3 positive subscales > 5.33), irrespective of the extent of cognitive decline. Participants enjoyed the experience (spatial presence 3.51±0.50, engagement 3.85±0.68, naturalness 3.85±0.82) and reported negligible side effects (SSQ: 3.74; q1-q3: 0-16.83). The environment was considered extremely realistic, so much so that it could induce potentially harmful behaviors: one participant fell while trying to lean on a virtual shelf. Older participants needed more time to complete trials. Participants with MCI committed more errors in the selection of grocery items and experienced less "perceived control" over the environment. CONCLUSION: Immersive VR was acceptable and enjoyable for older adults in both groups. Cognitive deficits could induce risky behaviors and cause issues in interactions with virtual items. Further studies are needed to confirm the acceptance of immersive VR in individuals at risk of dementia and to extend the results to people with more severe symptoms.


Subject(s)
Cognitive Dysfunction/rehabilitation , Neurological Rehabilitation/methods , Patient Acceptance of Health Care , Virtual Reality , Aged , Female , Humans , Male , Middle Aged
10.
Neurorehabil Neural Repair ; 35(4): 334-345, 2021 04.
Article in English | MEDLINE | ID: mdl-33655789

ABSTRACT

BACKGROUND: Robotic systems combined with Functional Electrical Stimulation (FES) have shown promising results for upper-limb motor recovery after stroke, but adequately sized randomized controlled trials (RCTs) are still missing. OBJECTIVE: To evaluate whether arm training supported by RETRAINER, a passive exoskeleton integrated with electromyography-triggered functional electrical stimulation, is superior to advanced conventional therapy (ACT) of equal intensity in the recovery of arm function, dexterity, strength, activities of daily living, and quality of life after stroke. METHODS: A single-blind RCT recruiting 72 patients was conducted. Patients, randomly allocated to 2 groups, were trained for 9 weeks, 3 times per week: the experimental group performed task-oriented exercises assisted by RETRAINER for 30 minutes plus ACT (60 minutes), whereas the control group performed only ACT (90 minutes). Patients were assessed before, soon after, and 1 month after the end of the intervention. Outcome measures were as follows: Action Research Arm Test (ARAT), Motricity Index, Motor Activity Log, Box and Blocks Test (BBT), Stroke Specific Quality of Life Scale (SSQoL), and Medical Research Council scale. RESULTS: All outcomes but SSQoL significantly improved over time in both groups (P < .001); a significant interaction effect in favor of the experimental group was found for ARAT and BBT. ARAT showed a between-group change of 11.5 points (P = .010) at the end of the intervention, which increased to 13.6 points 1 month later. Patients considered RETRAINER moderately usable (System Usability Score of 61.5 ± 22.8). CONCLUSIONS: The hybrid robotic system, which allows personalized, intensive, and task-oriented training with enriched sensory feedback, was superior to ACT in improving arm function and dexterity after stroke.


Subject(s)
Electric Stimulation Therapy , Electromyography , Exercise Therapy , Exoskeleton Device , Recovery of Function , Stroke Rehabilitation , Stroke/therapy , Upper Extremity , Activities of Daily Living , Adult , Aged , Aged, 80 and over , Combined Modality Therapy , Electric Stimulation Therapy/instrumentation , Electric Stimulation Therapy/methods , Exercise Therapy/instrumentation , Exercise Therapy/methods , Female , Humans , Male , Middle Aged , Outcome Assessment, Health Care , Quality of Life , Recovery of Function/physiology , Robotics , Single-Blind Method , Stroke/physiopathology , Stroke Rehabilitation/instrumentation , Stroke Rehabilitation/methods , Upper Extremity/physiopathology
11.
Int J Med Robot ; 17(1): 1-11, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33113264

ABSTRACT

PURPOSE: Both safety and accuracy are of vital importance in surgical procedures. An efficient way to keep the surgical robot away from singularities, a key safety concern, is to maximize its manipulability during robot-assisted surgery. The goal of this work was to validate a dynamic neural network optimization method for manipulability optimization control of a 7-degree-of-freedom (DoF) robot in a surgical operation. METHODS: Three different paths, a circle, a sinusoid and a spiral, were chosen to simulate typical surgical tasks. The dynamic neural network-based manipulability optimization control was implemented on a 7-DoF robot manipulator. During the surgical procedures, the manipulability of the robot manipulator and the accuracy of the surgical operation were recorded for performance validation. RESULTS: By comparison, the dynamic neural network-based manipulability optimization control achieved optimized manipulability but with a loss of trajectory tracking accuracy (the global error was 1 mm, compared to the 0.5 mm error of the non-optimized method). CONCLUSIONS: The method validated in this work achieved optimized manipulability at the cost of tracking accuracy. Future work should aim to improve the accuracy of the surgical operation.
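Manipulability is commonly quantified with Yoshikawa's measure w = sqrt(det(J J^T)), which drops to zero at singularities; a sketch on a toy planar 2-link arm (the paper's robot has 7 DoF, and its exact objective may differ):

```python
import numpy as np

def manipulability(J):
    """Yoshikawa's manipulability measure w = sqrt(det(J J^T));
    it approaches zero near singular configurations."""
    d = np.linalg.det(J @ J.T)
    return float(np.sqrt(max(d, 0.0)))  # guard against tiny negative round-off

def planar_2link_jacobian(q1, q2, l1=1.0, l2=1.0):
    """Jacobian of a planar 2-link arm, a toy stand-in for a surgical arm."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

print(manipulability(planar_2link_jacobian(0.3, np.pi / 2)))  # ~1.0, well-conditioned
print(manipulability(planar_2link_jacobian(0.3, 0.0)))        # ~0.0, arm fully stretched
```

For this arm w = |l1 l2 sin(q2)|, so the optimizer would steer the elbow away from full extension.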


Subject(s)
Robotic Surgical Procedures , Humans , Neural Networks, Computer
12.
Neural Netw ; 131: 291-299, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32841835

ABSTRACT

In this paper, an improved recurrent neural network (RNN) scheme is proposed to perform trajectory control of redundant robot manipulators under remote center of motion (RCM) constraints. Firstly, learning by demonstration is implemented to model surgical operation skills in Cartesian space. Then, considering the kinematic constraints associated with the optimization control of redundant manipulators, we propose a novel RNN-based approach to facilitate accurate task tracking based on a general quadratic performance index, which simultaneously manages the constraints on the RCM, joint angles, and joint velocities. The results of the theoretical analysis confirm that the RCM constraint is successfully established and that, accordingly, the end-effector tracking errors asymptotically converge to zero. Finally, demonstration experiments are conducted in a laboratory setup using a KUKA LWR4+ to validate the effectiveness of the proposed control strategy.


Subject(s)
Neural Networks, Computer , Robotic Surgical Procedures/methods , Biomechanical Phenomena , Motion
13.
Assist Technol ; 32(6): 294-305, 2020 11 01.
Article in English | MEDLINE | ID: mdl-30615571

ABSTRACT

Sense of presence (SoP) has recently emerged as one of the key elements promoting the effectiveness of virtual reality-based training programs. In the context of wheelchair simulators (WSs), the effectiveness of the simulation has been sought using different perception and interaction devices, providing end-users with different levels of SoP. We performed a scoping review, searching scientific and grey literature databases, with the aim of assessing the extent of published research dealing with SoP and the effectiveness of WSs. Sixty-two articles, describing 29 WSs, were included in the review. In spite of promising results, the high heterogeneity of the employed technological solutions, of the training programs, and of their outcomes precluded drawing definitive conclusions about the optimal solution for enhancing SoP and thus WSs' effectiveness. Future research should focus on controlled trials in order to help researchers assess the most suitable technologies and methodologies for the application of WSs in clinical practice.


Subject(s)
Computer Simulation , Simulation Training , Virtual Reality , Wheelchairs , Feedback , Humans , Touch , User-Computer Interface
14.
Sensors (Basel) ; 19(17)2019 Aug 29.
Article in English | MEDLINE | ID: mdl-31470521

ABSTRACT

Playing a significant role in healthcare and sports applications, human activity recognition (HAR) techniques are capable of monitoring humans' daily behavior. This has spurred the demand for intelligent sensors and has given rise to the explosive growth of wearable and mobile devices, which provide unprecedented availability of human activity data (big data). Powerful algorithms are required to analyze these heterogeneous and high-dimensional streaming data efficiently. This paper proposes a novel fast and robust deep convolutional neural network structure (FR-DCNN) for HAR using a smartphone. It enhances the effectiveness and extends the information content of the raw data collected from inertial measurement unit (IMU) sensors by integrating a series of signal processing algorithms and a signal selection module, and it enables fast construction of the DCNN classifier through a data compression module. Experimental results on a sampled dataset of 12 complex activities show that the proposed FR-DCNN model offers the best combination of fast computation and high-accuracy recognition. The FR-DCNN model needs only 0.0029 s to predict an activity online, with 95.27% accuracy. Meanwhile, it takes only 88 s (on average) to establish the DCNN classifier on the compressed dataset, with a modest loss of accuracy (94.18%).


Subject(s)
Neural Networks, Computer , Smartphone , Algorithms , Data Compression , Human Activities , Humans
15.
Sensors (Basel) ; 19(17)2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31438529

ABSTRACT

In robot control with physical interaction, such as robot-assisted surgery and bilateral teleoperation, the availability of reliable interaction force information has proven capable of increasing control precision and of helping deal with complex surrounding environments. Usually, force sensors are mounted between the end effector of the robot manipulator and the tool to measure the interaction forces at the tooltip. In this case, the force acquired from the force sensor includes not only the interaction force but also the gravitational force of the tool, so tool dynamic identification is required for accurate dynamic simulation and model-based control. Although model-based techniques have already been widely used in the control of traditional robotic arms, their accuracy is limited by the lack of specific dynamic models. This work proposes a model-free technique for dynamic identification using multi-layer neural networks (MNN). It utilizes two types of MNN architectures, based on feed-forward networks (FF-MNN) and cascade-forward networks (CF-MNN), to model the tool dynamics. Compared with a model-based technique, i.e., curve fitting (CF), the accuracy of tool identification is improved. After identification and calibration, a further demonstration of bilateral teleoperation is presented using a serial robot (LWR4+, KUKA, Germany) and a haptic manipulator (SIGMA 7, Force Dimension, Switzerland). Results demonstrate the promising performance of the model-free tool identification technique using MNN, improving on the results provided by model-based methods.
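The problem motivating tool identification can be sketched with a simplified static model: the sensor reading mixes the contact force with the tool's weight rotated into the sensor frame. This is a textbook gravity-compensation sketch, not the paper's MNN approach, and all names are illustrative:

```python
import numpy as np

def interaction_force(f_measured, R_world_sensor, tool_mass_kg, g=9.81):
    """Subtract the tool's weight, expressed in the sensor frame,
    from the raw force reading to recover the pure contact force."""
    gravity_world = np.array([0.0, 0.0, -tool_mass_kg * g])
    return f_measured - R_world_sensor.T @ gravity_world

# Sensor frame aligned with the world frame, 0.5 kg tool, no contact:
f_raw = np.array([0.0, 0.0, -0.5 * 9.81])  # reading is pure tool weight
print(interaction_force(f_raw, np.eye(3), 0.5))  # ~[0, 0, 0]
```

The MNN approach in the abstract learns this mapping (including dynamic terms) from data instead of assuming a known mass and center of gravity.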

16.
IEEE Trans Biomed Eng ; 66(12): 3290-3300, 2019 12.
Article in English | MEDLINE | ID: mdl-31180833

ABSTRACT

OBJECTIVE: To develop and evaluate a hybrid robotic system for arm recovery after stroke, combining ElectroMyoGraphic (EMG)-triggered functional electrical stimulation (FES) with a passive exoskeleton for upper-limb suspension. METHODS: The system was used in a structured exercise program resembling activities of daily living. Exercise execution was continuously controlled using angle sensor data and radio-frequency identification technology. The training program consisted of 27 sessions lasting 30 min each. Seven post-acute stroke patients were recruited from two clinical sites. The efficacy of the system was evaluated in terms of the Action Research Arm Test, Motricity Index, Motor Activity Log, and Box and Blocks Test. Furthermore, kinematics-based and EMG-based outcome measures were derived directly from data collected during training sessions. RESULTS: All patients showed an improvement of motor function at the end of the training program. After training, the exercises were in most cases executed faster, more smoothly, and with an increased range of motion. Subjects were able to trigger FES, but in some cases they did not maintain the voluntary effort during task execution. All subjects but one considered the system usable. CONCLUSION: The preliminary results showed that the system can be used in a clinical environment with positive effects on arm functional recovery. However, only the final results of the currently ongoing clinical trial will unveil the system's full potential. SIGNIFICANCE: The presented hybrid robotic system is highly customizable, allows monitoring of daily performance, requires little supervision by the therapist, and has the potential to enhance arm recovery after stroke.


Subject(s)
Electric Stimulation Therapy , Exoskeleton Device , Stroke Rehabilitation , Upper Extremity/physiopathology , Adolescent , Adult , Aged , Aged, 80 and over , Electric Stimulation Therapy/instrumentation , Electric Stimulation Therapy/methods , Electromyography , Equipment Design , Female , Humans , Male , Middle Aged , Stroke/physiopathology , Stroke Rehabilitation/instrumentation , Stroke Rehabilitation/methods , Task Performance and Analysis , Young Adult
17.
Int J Comput Assist Radiol Surg ; 14(3): 493-499, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30613910

ABSTRACT

PURPOSE: Glioblastoma multiforme treatment is a challenging task in clinical oncology. Convection-enhanced delivery (CED) is showing encouraging but still suboptimal results due to drug leakage. Numerical models can predict drug distribution within the brain, but require retrieving brain physical properties, such as the axon diameter distribution (ADD), through analysis of the axon architecture. The goal of this work was to provide an automatic, accurate and fast method for axon segmentation in electron microscopy images, based on a fully convolutional neural network (FCNN), so as to allow automatic ADD computation. METHODS: The segmentation was performed using a residual FCNN inspired by U-Net and ResNet. The FCNN was trained using mini-batch gradient descent and the Adam optimizer, with the Dice coefficient as loss function. RESULTS: The proposed segmentation method achieved results comparable with existing methods for axon segmentation in terms of Information Theoretic Scoring ([Formula: see text]), with faster training (5 h on the deployed GPU) and without requiring heavy post-processing (testing time was 0.2 s with non-optimized code). The ADDs computed from the segmented and ground-truth images were statistically equivalent. CONCLUSIONS: The algorithm proposed in this work allows fast and accurate axon segmentation and ADD computation, showing promising performance for brain microstructure analysis toward CED optimization.
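The Dice coefficient used above as the loss function measures the overlap between predicted and ground-truth masks; a generic sketch (not the paper's implementation, which operates on network probabilities during training):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 1 means perfect overlap."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def dice_loss(pred, target):
    """Training loss: minimized (≈0) when the masks fully overlap."""
    return 1.0 - dice_coefficient(pred, target)

mask = [[1, 1], [0, 0]]
print(dice_loss(mask, mask))              # ~0.0 for identical masks
print(dice_loss(mask, [[0, 0], [1, 1]]))  # ~1.0 for disjoint masks
```

Unlike pixel-wise cross-entropy, the Dice loss is insensitive to class imbalance, which suits thin structures such as axon boundaries.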


Subject(s)
Axons/physiology , Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Glioblastoma/diagnostic imaging , Image Processing, Computer-Assisted/methods , Algorithms , Computers , Convection , Deep Learning , Humans , Models, Theoretical , Neural Networks, Computer , Reproducibility of Results
18.
Front Robot AI ; 6: 55, 2019.
Article in English | MEDLINE | ID: mdl-33501070

ABSTRACT

The integration of intra-operative sensors into surgical robots is an active research topic, since it can significantly facilitate complex surgical procedures by enhancing surgical awareness with real-time tissue information. However, currently available intra-operative sensing technologies are mainly based on image processing and force feedback, which normally require heavy computation or complicated hardware modifications of existing surgical tools. This paper presents the design and integration of electrical bio-impedance sensing into a commercial surgical robot tool, leading to a novel smart instrument that allows tissues to be identified by simply touching them. In addition, an advanced user interface is designed to provide guidance during use of the system and to allow augmented-reality visualization of the tissue identification results. The proposed system requires only minor hardware modifications to an existing surgical tool, yet adds the capability to provide a wealth of data about the tissue being manipulated. This has great potential to allow the surgeon (or an autonomous robotic system) to better understand the surgical environment. To evaluate the system, a series of ex vivo experiments was conducted. The experimental results demonstrate that the proposed sensing system can successfully identify different tissue types with 100% classification accuracy. In addition, the user interface was shown to effectively and intuitively guide the user in measuring the electrical impedance of the target tissue, presenting the identification results as augmented-reality markers for simple and immediate recognition.

19.
Int J Comput Assist Radiol Surg ; 14(4): 685-696, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30443889

ABSTRACT

PURPOSE: Surgical workflow recognition and context-aware systems could allow better decision making and surgical planning by providing focused information, which may eventually enhance surgical outcomes. While current developments in computer-assisted surgical systems are mostly focused on recognizing surgical phases, they lack recognition of the surgical workflow sequence and other contextual elements, e.g., "Instruments." Our study proposes a hybrid approach, i.e., using deep learning and knowledge representation, to facilitate recognition of the surgical workflow. METHODS: We implemented the "Deep-Onto" network, an ensemble of deep learning models and knowledge management tools, namely an ontology and production rules. As a prototypical scenario, we chose robot-assisted partial nephrectomy (RAPN). We annotated RAPN videos with surgical entities such as "Step." We performed different experiments, including inter-subject variability, to recognize surgical steps. The corresponding subsequent steps along with other surgical contexts, i.e., "Actions," "Phase" and "Instruments," were also recognized. RESULTS: The system was able to recognize 10 RAPN steps with a prevalence-weighted macro-average (PWMA) recall of 0.83, PWMA precision of 0.74, PWMA F1 score of 0.76, and an accuracy of 74.29% on 9 videos of RAPN. CONCLUSION: We found that the combined use of deep learning and knowledge representation techniques is a promising approach for the multi-level recognition of RAPN surgical workflow.


Subject(s)
Deep Learning , Kidney Neoplasms/surgery , Nephrectomy/methods , Robotic Surgical Procedures/methods , Workflow , Humans
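The prevalence-weighted macro-average (PWMA) metrics reported above weight each class's per-class score by that class's share of the ground-truth samples. A minimal sketch of PWMA recall from per-class counts; the step labels and counts below are hypothetical, not taken from the paper.

```python
def pwma_recall(per_class):
    """Prevalence-weighted macro-average recall: each class's recall
    weighted by its fraction of the total ground-truth samples.

    per_class maps a class label to (true_positives, ground_truth_count).
    """
    total = sum(n for _, n in per_class.values())
    return sum((n / total) * (tp / n) for tp, n in per_class.values())

# Toy counts for three hypothetical RAPN steps.
counts = {
    "dissection": (40, 50),   # recall 0.80, prevalence 0.50
    "clamping":   (18, 20),   # recall 0.90, prevalence 0.20
    "suturing":   (21, 30),   # recall 0.70, prevalence 0.30
}
print(pwma_recall(counts))  # 0.50*0.80 + 0.20*0.90 + 0.30*0.70 ≈ 0.79
```

This corresponds to the "weighted" averaging mode of common classification-metric libraries; note that for recall the weighted average reduces to total true positives over total samples.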
20.
Work ; 61(4): 603-625, 2018.
Article in English | MEDLINE | ID: mdl-30507601

ABSTRACT

BACKGROUND: Return to work represents an important milestone for workers who were injured in a workplace accident, especially if the injury results in needing a wheelchair for locomotion. OBJECTIVE: The aim of the study was to design a framework for training novice wheelchair users in regaining autonomy in activities of daily living and in the workplace, and for providing medical personnel with objective data on users' health and work-related capabilities. METHODS: The framework design was accomplished following the "Usability Engineering Life Cycle" model. Following this model, three sequential steps, defined as "Know your User", "Competitive Analysis" and "Participatory Design", were carried out to devise the described framework. RESULTS: The needs of the end-users of the framework were identified during the first phase; the Competitive Analysis phase addressed standard care solutions, Virtual Reality-based wheelchair simulators, the current methodologies for the assessment of the health condition of people with disability, and the use of semantic technologies in human resources. The Participatory Design phase led to the definition of an integrated user-centred framework supporting the return to work of wheelchair users. CONCLUSION: The result of this work is the design of an innovative training process based on virtual reality scenarios and supported by semantic web technologies. In the near future, the design process will proceed in collaboration with the Italian National Institute for Insurance against Accidents at Work (INAIL). The whole framework will then be implemented to support the current vocational rehabilitation process within INAIL premises.


Subject(s)
Disabled Persons/rehabilitation , Rehabilitation, Vocational/methods , Return to Work , Wheelchairs , Activities of Daily Living , Equipment Design , Humans , Semantic Web , Virtual Reality , Workplace