ABSTRACT
Femur fractures are a significant worldwide public health concern, affecting patients and their families because of their high frequency, morbidity, and mortality. Computer-aided diagnosis (CAD) technologies have shown promising results in the efficiency and accuracy of fracture classification, particularly with the growing use of Deep Learning (DL) approaches. Nevertheless, the need to collect enough input data to train these algorithms and the challenge of interpreting their findings further increase the complexity of the task. By improving on the results of the most recent deep learning-based classification of femur fractures according to the Arbeitsgemeinschaft für Osteosynthesefragen and Orthopaedic Trauma Association (AO/OTA) system, this study intends to support physicians in making correct and timely decisions about patient care. A state-of-the-art architecture, YOLOv8, was fine-tuned while paying close attention to the interpretability of the model. Furthermore, data augmentation techniques were applied during preprocessing, enlarging the dataset through image-processing transformations. The fine-tuned YOLOv8 model achieved remarkable results, with 0.9 accuracy, 0.85 precision, 0.85 recall, and 0.85 F1-score, each computed by averaging the per-class values. This study shows the proposed architecture's effectiveness in enhancing the AO/OTA classification of femur fractures, assisting physicians in making prompt and accurate diagnoses.
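As a hedged illustration only (the abstract includes no code), fine-tuning a YOLOv8 classification model with the Ultralytics Python API could look like the sketch below; the dataset path, image size, and augmentation hyperparameters are assumptions, not the study's actual settings.

```python
from ultralytics import YOLO

# Load a YOLOv8 classification model pretrained on ImageNet.
model = YOLO("yolov8n-cls.pt")

# Fine-tune on an AO/OTA-labelled radiograph dataset (hypothetical path);
# Ultralytics expects one subfolder per class under train/ and val/.
model.train(
    data="ao_ota_femur_dataset",  # assumed dataset root
    epochs=100,                   # assumed training length
    imgsz=640,                    # assumed input resolution
    degrees=10.0,                 # rotation augmentation
    fliplr=0.5,                   # horizontal-flip augmentation
)

# Evaluate on the validation split and report top-1 accuracy.
metrics = model.val()
print(metrics.top1)
```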
ABSTRACT
This study presents a multi-task end-to-end deep learning model for real-time blood accumulation detection and surgical tool semantic segmentation in laparoscopic surgery videos. Intraoperative bleeding is one of the most problematic aspects of laparoscopic surgery: it is challenging to control and limits the visibility of the surgical site, so prompt treatment is required to avoid undesirable outcomes. The system exploits a shared backbone, based on the encoder of the U-Net architecture, and two separate branches that classify the blood accumulation event and output the segmentation map, respectively. Our main contribution is an efficient multi-task approach that achieved satisfactory results when tested on surgical videos, despite being trained with only RGB images and no additional information. The proposed multi-task convolutional neural network did not employ any pre- or postprocessing steps. It achieved a Dice Score of 81.89% on the semantic segmentation task and an accuracy of 90.63% on the event detection task. The results demonstrate that the two tasks were properly combined, since the features extracted by the shared backbone proved beneficial for both tool segmentation and event detection. Indeed, active bleeding usually happens when one of the instruments closes on or interacts with anatomical tissue, and it decreases when the aspirator begins to remove the accumulated blood. Although several aspects of the presented methodology could be improved, this work represents a preliminary step toward an end-to-end multi-task deep learning model for real-time video understanding.
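A minimal sketch of this shared-encoder, two-branch design is given below, written with the Keras API; it is not the paper's exact network, and the input resolution, filter counts, and layer arrangement are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """Two 3x3 convolutions, as in a U-Net encoder stage."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

inputs = layers.Input(shape=(256, 256, 3))  # RGB frame (assumed size)

# Shared U-Net-style encoder (assumed depth and filter counts).
e1 = conv_block(inputs, 32)
p1 = layers.MaxPooling2D()(e1)
e2 = conv_block(p1, 64)
p2 = layers.MaxPooling2D()(e2)
bottleneck = conv_block(p2, 128)

# Branch 1: binary event classification (blood accumulation yes/no).
c = layers.GlobalAveragePooling2D()(bottleneck)
event = layers.Dense(1, activation="sigmoid", name="event")(c)

# Branch 2: decoder with skip connections for tool segmentation.
d2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(bottleneck)
d2 = conv_block(layers.Concatenate()([d2, e2]), 64)
d1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(d2)
d1 = conv_block(layers.Concatenate()([d1, e1]), 32)
mask = layers.Conv2D(1, 1, activation="sigmoid", name="mask")(d1)

model = Model(inputs, [event, mask])
model.compile(
    optimizer="adam",
    loss={"event": "binary_crossentropy", "mask": "binary_crossentropy"},
)
```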
ABSTRACT
BACKGROUND: Addressing intraoperative bleeding remains a significant challenge in robotic surgery. This research proposes a solution based on convolutional neural networks (CNNs): a system capable of forecasting instances of intraoperative bleeding during robot-assisted radical prostatectomy (RARP) and promptly notifying the surgeon of bleeding risks. METHODS: To achieve this, a multi-task learning (MTL) CNN was introduced, leveraging a modified version of the U-Net architecture. The aim was to categorize video input as either "absence of blood accumulation" (0) or "presence of blood accumulation" (1). To facilitate interaction with the neural networks, the Bleeding Artificial Intelligence-based Detector (BLAIR) software was created using the Python Keras API and built on the PyQt framework. A subsequent clinical assessment of BLAIR's efficacy was performed, comparing its bleeding identification performance against that of a urologist; various perioperative variables were also gathered. For optimal MTL-CNN training, a multi-task loss function was adopted to enhance the accuracy of event detection by taking advantage of the semantic segmentation of surgical tools. Additionally, Multiple Correspondence Analysis (MCA) was employed to assess software performance. RESULTS: The MTL-CNN demonstrated an event recognition accuracy of 90.63%. When evaluating BLAIR's predictive ability and its capacity to pre-warn surgeons of potential bleeding incidents, the density plot highlighted a striking similarity between BLAIR and human assessments; in fact, BLAIR exhibited a faster response. Notably, the MCA revealed no discernible difference between software and human performance in accurately identifying instances of bleeding. CONCLUSION: The BLAIR software achieved over 90% accuracy in predicting bleeding events during RARP, underscoring the potential of AI to assist surgeons during interventions and exemplifying the positive impact AI applications can have on surgical procedures.
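The abstract does not specify the form of the multi-task loss; as a hedged sketch, a weighted combination of a binary cross-entropy term for event detection and a soft Dice term for tool segmentation could look as follows, with the choice of terms and weights being assumptions.

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    """Soft Dice loss for the segmentation branch."""
    inter = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def multi_task_loss(event_true, event_pred, mask_true, mask_pred,
                    w_event=1.0, w_mask=1.0):
    """Weighted sum of the two task losses; weights are assumptions."""
    bce = tf.keras.losses.BinaryCrossentropy()
    return (w_event * bce(event_true, event_pred)       # event detection
            + w_mask * dice_loss(mask_true, mask_pred))  # tool segmentation
```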
ABSTRACT
Despite the great potential of Virtual Reality (VR) to arouse emotions, no affective databases are available for VR as they are for pictures, videos, and sounds. In this paper, we describe the validation of ten affective interactive Virtual Environments (VEs) designed to be used in Virtual Reality and related to five emotions. The testing phase used two different experimental setups to deliver the overall experience. Because of the ongoing COVID-19 pandemic, neither setup included immersive VR technology, but the VEs were designed to run on stereoscopic visual displays. We collected measures of the participants' emotional experience based on six discrete emotional categories plus neutrality, and we included an assessment of the sense of presence for the different experiences. The results showed that the scenarios can be differentiated according to the emotion they arouse. Finally, the comparison between the two experimental setups demonstrated the high reliability of the experience and the strong adaptability of the scenarios to different contexts of use.
Subject(s)
Arousal/physiology , COVID-19/psychology , Databases, Factual/statistics & numerical data , Emotions/physiology , SARS-CoV-2/isolation & purification , Virtual Reality , Adult , COVID-19/epidemiology , COVID-19/virology , Emotions/classification , Empathy , Female , Humans , Male , Pandemics/prevention & control , Photic Stimulation/methods , Reproducibility of Results , SARS-CoV-2/physiology , Young Adult
ABSTRACT
BACKGROUND AND AIM OF THE WORK: Implant dislocation in total hip arthroplasty (THA) is a common concern among orthopedic surgeons and represents the most frequent complication after primary implant. Several causes can be responsible for dislocation, including malpositioning of the components. Conventional imaging techniques frequently fail to detect the mechanical source of dislocation, mainly because they cannot reproduce a dynamic evaluation of the components. The purpose of this study was to develop a diagnostic tool capable of virtually assessing whether the range of movement (ROM) of a THA is free from anterior and/or superior mechanical impingement, ultimately giving the surgeon the possibility to weigh the mechanical contribution to a THA dislocation. METHODS: A group of patients who underwent THA revision for acute dislocation was compared with a group of non-dislocating THA patients. CT scans and a virtual model of each patient were obtained. A software tool called "Prosthesis Impingement Simulator (PIS)" was developed to simulate the ROM of the prosthetic hip, and the ROM free of mechanical impingement was compared between the two groups. RESULTS: The PIS test detected dislocations with a sensitivity of 71.4% and a specificity of 85.7%. Fisher's exact test showed a p-value of 0.02; the chi-square test found a p-value of 0.009. CONCLUSION: The PIS seems to be an effective tool for determining hip prosthetic impingement, as the main aid of the software is the exclusion of mechanical causes in the event of a dislocation.
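For illustration, sensitivity, specificity, and the two reported statistical tests can all be derived from a 2x2 contingency table with SciPy; the raw counts below are hypothetical (chosen only so the percentages match those reported), since the abstract does not give them, and the resulting p-values are not claimed to reproduce the study's.

```python
from scipy.stats import fisher_exact, chi2_contingency

# Hypothetical 2x2 table: rows = PIS test positive/negative,
# columns = dislocating / non-dislocating THA (counts are assumptions).
tp, fn = 10, 4   # dislocating hips flagged / missed by the PIS
fp, tn = 2, 12   # non-dislocating hips flagged / cleared

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")

table = [[tp, fn], [fp, tn]]
_, p_fisher = fisher_exact(table)
chi2, p_chi2, _, _ = chi2_contingency(table)
print(f"Fisher exact p = {p_fisher:.3f}, chi-square p = {p_chi2:.3f}")
```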
Subject(s)
Arthroplasty, Replacement, Hip , Hip Prosthesis , Joint Dislocations , Software , Arthroplasty, Replacement, Hip/adverse effects , Hip Joint/surgery , Hip Prosthesis/adverse effects , Humans , Prosthesis Design , Reoperation
ABSTRACT
Computer graphics is, in many cases, about visualizing what you cannot see. However, virtual reality (VR), from its beginnings, has aimed at stimulating all human senses, not just the visual channel. This set of multisensory stimuli allows users to feel present and able to interact with the virtual environment. In this way, VR aims to deliver experiences that are comparable to real-life ones in their level of detail, stimulation, intensity, and impact. Hence, VR is not only a means to see, but also to feel differently. With the spread of VR technologies, there is growing interest in using VR to evoke emotions, both positive and negative. This article discusses the current possibilities and the authors' field experience in trying to elicit emotions through VR. It explores how different design aspects and features can be used, describing their contributions and benefits in the development of affective VR experiences. This work aims to raise awareness of the need to consider and explore the full design space that VR technology provides in comparison to traditional media. Additionally, it outlines possible directions for affective VR applications, illustrating how they could impact our emotions and improve our lives, and providing guidelines for their development.