Results 1 - 20 of 24
1.
J Mol Graph Model ; 126: 108670, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37984193

ABSTRACT

Protein-protein interaction occurs on surface patches with some degree of complementary geometric and chemical features. Building on this understanding, this study characterizes the spike protein of the SARS-CoV-2 virus at the morphological and geometrical levels in its Alpha, Delta, and Omicron variants. In particular, the affinity between different SARS-CoV-2 spike proteins and the ACE2 receptor present on the membrane of human respiratory system cells is investigated. To achieve an adequate degree of geometrical accuracy, the 3D depth maps of the proteins under examination are filtered with an ad hoc convolutional filter whose kernel is implemented as a sphere of varying radius, simulating a ball rolling on the surface (similar to the 'rolling ball' filter). This ball models a hypothetical molecule that could interface with the protein and is inspired by the geometric approach to macromolecule-ligand interactions proposed by Kuntz et al. in 1982. The aim is to mitigate imperfections and obtain a smoother surface that can be studied from a geometrical perspective for binding purposes. A set of geometric descriptors, borrowed from the 3D face analysis context, is then mapped point-by-point onto the protein depth maps. Following a feature extraction phase inspired by Histogram of Oriented Gradients and Local Binary Patterns, the final histogram features are used as input to a Support Vector Machine classifier that automatically classifies the proteins according to their surface affinity; a similarity in shape is observed between ACE2 and the spike protein of the SARS-CoV-2 Omicron variant. Finally, Root Mean Square Error analysis is used to quantify the geometrical affinity between the ACE2 receptor and the respective Receptor Binding Domains of the three SARS-CoV-2 variants, culminating in a geometrical explanation for the higher contagiousness of Omicron relative to the other variants under study.
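
As an illustration of the probe-sphere smoothing step described above, here is a minimal sketch using scikit-image's rolling-ball filter on a depth map stored as a 2D NumPy array; the radius and the input are illustrative assumptions, not the paper's parameters.

```python
# Sketch of rolling-ball smoothing on a protein depth map (2D NumPy array).
# The ball's radius models the probe molecule's size; values are assumptions.
import numpy as np
from skimage import restoration

def smooth_depth_map(depth_map: np.ndarray, radius: float = 5.0) -> np.ndarray:
    # rolling_ball returns the envelope traced by a ball rolled beneath the
    # surface; that envelope is a smoothed version of the map in which
    # features smaller than the probe sphere are suppressed.
    return restoration.rolling_ball(depth_map, radius=radius)

depth = np.random.rand(128, 128)  # placeholder depth map
smoothed = smooth_depth_map(depth, radius=5.0)
```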


Subject(s)
COVID-19 , Humans , SARS-CoV-2 , Angiotensin-Converting Enzyme 2 , Spike Glycoprotein, Coronavirus , Protein Binding , Mutation
2.
J Clin Med ; 12(23)2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38068407

ABSTRACT

BACKGROUND: Addressing intraoperative bleeding remains a significant challenge in robotic surgery. This research proposes a solution based on convolutional neural networks (CNNs). The objective is to establish a system capable of forecasting instances of intraoperative bleeding during robot-assisted radical prostatectomy (RARP) and of promptly notifying the surgeon of bleeding risks. METHODS: To achieve this, a multi-task learning (MTL) CNN was introduced, leveraging a modified version of the U-Net architecture. The aim was to categorize video input as either "absence of blood accumulation" (0) or "presence of blood accumulation" (1). To facilitate seamless interaction with the neural networks, the Bleeding Artificial Intelligence-based Detector (BLAIR) software was created using the Python Keras API and built upon the PyQt framework. A subsequent clinical assessment of BLAIR's efficacy was performed, comparing its bleeding identification performance against that of a urologist. Various perioperative variables were also gathered. For optimal MTL-CNN training parameterization, a multi-task loss function was adopted to enhance the accuracy of event detection by taking advantage of the semantic segmentation of surgical tools. Additionally, the Multiple Correspondence Analysis (MCA) approach was employed to assess software performance. RESULTS: The MTL-CNN achieved an event recognition accuracy of 90.63%. When evaluating BLAIR's predictive ability and its capacity to pre-warn surgeons of potential bleeding incidents, the density plot highlighted a close similarity between BLAIR and human assessments; in fact, BLAIR exhibited a faster response. Notably, the MCA analysis revealed no discernible distinction between software and human performance in accurately identifying instances of bleeding. CONCLUSION: The BLAIR software achieved over 90% accuracy in predicting bleeding events during RARP. This accomplishment underscores the potential of AI to assist surgeons during interventions and exemplifies the positive impact AI applications can have on surgical procedures.
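
A minimal Keras sketch of the multi-task idea described above: a shared U-Net-style encoder feeding a frame-level bleeding classifier and a tool-segmentation decoder, trained with a weighted multi-task loss. Layer sizes and loss weights are assumptions, not BLAIR's actual configuration.

```python
# Shared encoder with two heads: bleeding classification + tool segmentation.
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(256, 256, 3))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = layers.MaxPooling2D()(x)
feat = layers.Conv2D(64, 3, padding="same", activation="relu")(x)

# Head 1: frame-level "blood accumulation" classifier (0/1).
cls = layers.GlobalAveragePooling2D()(feat)
cls = layers.Dense(1, activation="sigmoid", name="bleeding")(cls)

# Head 2: decoder producing a per-pixel tool segmentation map.
seg = layers.UpSampling2D()(feat)
seg = layers.Conv2D(1, 1, activation="sigmoid", name="tools")(seg)

model = Model(inp, [cls, seg])
model.compile(
    optimizer="adam",
    loss={"bleeding": "binary_crossentropy", "tools": "binary_crossentropy"},
    loss_weights={"bleeding": 1.0, "tools": 0.5},  # assumed weighting
)
```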

3.
Clin Oral Investig ; 27(9): 5049-5062, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37369817

ABSTRACT

OBJECTIVES: The aim of this study was to analyse changes in facial soft tissue thickness (FSTT) after corrective surgeries for dental malocclusion. The correlation between patients' body mass index (BMI) and sex and their FSTT before undergoing surgery was also analysed. MATERIALS AND METHODS: Cone beam computed tomography scans of seventeen patients who underwent Le Fort I osteotomy in combination with bilateral sagittal split osteotomy were collected. Hard and soft tissue landmarks were selected based on the interventions. FSTT were computed, and pre- and post-operative measurements were compared. The relationship between FSTT, sex, and BMI was investigated. RESULTS: In the comparison between pre- and post-operative measurements, no significant difference emerged (p > .05). Pearson's correlation coefficient computed between BMI and pre-operative FSTT showed a correlation in normal-weight patients; the region-specific analysis highlighted a stronger correlation for specific landmarks. Higher median values emerged for women than for men; the subset-based analysis showed that women presented higher values in the malar region, while men presented higher values in the nasal region. CONCLUSIONS: The considered surgeries did not affect the patients' FSTT; differences related to BMI and sex were found. A collection of FSTT mean values was provided for twenty landmarks, pre- and post-operative, for female and male subjects. CLINICAL RELEVANCE: This exploratory analysis gave insights into the behaviour of FSTT after maxillofacial surgeries that can be applied in the development of predictive methodologies for soft tissue displacement and in studying modifications in patients' facial appearance.
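
A small sketch of the statistical analysis described above, assuming per-patient BMI and per-landmark FSTT arrays; the values are placeholders, and the paired test shown (Wilcoxon) is a hedged choice since the abstract does not name the test used.

```python
# Correlating BMI with pre-operative FSTT at one landmark (illustrative data).
import numpy as np
from scipy import stats

bmi = np.array([21.3, 24.8, 19.9, 23.1, 26.4])            # one value per patient
fstt_landmark = np.array([11.2, 13.5, 10.1, 12.0, 14.2])  # mm at one landmark

r, p = stats.pearsonr(bmi, fstt_landmark)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Paired pre/post comparison at the same landmark (test choice assumed).
pre = np.array([11.2, 13.5, 10.1, 12.0, 14.2])
post = np.array([11.0, 13.7, 10.3, 11.8, 14.1])
w, p_paired = stats.wilcoxon(pre, post)
```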


Subject(s)
Anatomic Landmarks , Malocclusion , Humans , Male , Female , Face/diagnostic imaging , Face/anatomy & histology , Cone-Beam Computed Tomography , Osteotomy, Le Fort/methods , Cephalometry/methods
4.
J Pers Med ; 13(3)2023 Feb 25.
Article in English | MEDLINE | ID: mdl-36983595

ABSTRACT

The current study presents a multi-task end-to-end deep learning model for real-time blood accumulation detection and tool semantic segmentation from laparoscopic surgery video. Intraoperative bleeding is one of the most problematic aspects of laparoscopic surgery: it is challenging to control and limits the visibility of the surgical site, so prompt treatment is required to avoid undesirable outcomes. The system exploits a shared backbone based on the encoder of the U-Net architecture and two separate branches to classify the blood accumulation event and output the segmentation map, respectively. Our main contribution is an efficient multi-task approach that achieved satisfactory results in tests on surgical videos, although trained with only RGB images and no additional information. The proposed multi-task convolutional neural network did not employ any pre- or post-processing step. It achieved a Dice score of 81.89% for the semantic segmentation task and an accuracy of 90.63% for the event detection task. The results demonstrate that the concurrent tasks were properly combined, since the features extracted by the common backbone proved beneficial for both tool segmentation and event detection. Indeed, active bleeding usually happens when one of the instruments closes or interacts with anatomical tissue, and it decreases when the aspirator begins to remove the accumulated blood. Although different aspects of the presented methodology could be improved, this work represents a preliminary attempt toward an end-to-end multi-task deep learning model for real-time video understanding.
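
For reference, a minimal implementation of the Dice score used to evaluate the segmentation head, in its standard binary-mask formulation (the 0.5 threshold is an assumption).

```python
# Dice = 2|A∩B| / (|A|+|B|) on boolean masks.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Example: a predicted probability map thresholded at 0.5.
prob = np.random.rand(256, 256)
mask = np.zeros((256, 256), dtype=bool)
mask[64:192, 64:192] = True
print(f"Dice = {dice_score(prob > 0.5, mask):.4f}")
```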

5.
World J Urol ; 40(9): 2221-2229, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35790535

ABSTRACT

PURPOSE: To evaluate the effect of 3D models on the positive surgical margin (PSM) rate (PSMr) in patients who underwent robot-assisted radical prostatectomy (RARP), compared to a no-3D control group. Secondarily, we evaluated the postoperative functional and oncological outcomes. METHODS: Prospective study enrolling patients with localized prostate cancer (PCa) undergoing RARP with mp-MRI-based 3D model reconstruction, displayed in a cognitive or augmented-reality fashion, at our Centre from 01/2016 to 01/2020. A no-3D control group was extracted from the last two years of our institutional RARP database. The PSMr of the two groups was compared and multivariable linear regression (MLR) models were applied. Finally, the Kaplan-Meier estimator was used to estimate biochemical recurrence at 12 months after the intervention. RESULTS: 160 patients were enrolled in the 3D Group, while 640 were selected for the Control Group. A more conservative nerve-sparing (NS) approach was registered in the 3D Group (full NS 20.6% vs 12.7%; intermediate NS 38.1% vs 38.0%; standard NS 41.2% vs 49.2%; p = 0.02). 3D Group patients had lower PSM rates (25% vs 35.1%, p = 0.01). In the MLR models, the availability of 3D technology (p = 0.005) and the absence of extracapsular extension (ECE, p = 0.004) at mp-MRI were independent predictors of a lower PSMr. Moreover, the 3D model represented a significant protective factor against PSM in patients with ECE or pT3 disease. CONCLUSION: The availability of 3D models during the intervention allows the surgeon to modulate the NS approach, limiting the occurrence of PSM, especially in patients with ECE at mp-MRI or pT3 PCa.
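
A hedged sketch of the Kaplan-Meier step using the lifelines library on illustrative follow-up data; the numbers are placeholders, not the study's.

```python
# Kaplan-Meier estimate of biochemical recurrence-free survival at 12 months.
import numpy as np
from lifelines import KaplanMeierFitter

months = np.array([12, 5, 12, 8, 12, 12, 3, 12])   # follow-up time per patient
recurred = np.array([0, 1, 0, 1, 0, 0, 1, 0])      # 1 = biochemical recurrence

kmf = KaplanMeierFitter()
kmf.fit(durations=months, event_observed=recurred, label="3D Group")
print(kmf.predict(12))  # recurrence-free probability at 12 months
```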


Subject(s)
Prostatic Neoplasms , Robotic Surgical Procedures , Robotics , Humans , Male , Margins of Excision , Prospective Studies , Prostatectomy , Prostatic Neoplasms/surgery
6.
Injury ; 53(7): 2625-2634, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35469638

ABSTRACT

INTRODUCTION: In recent years, the scientific community has focused on developing Computer-Aided Diagnosis (CAD) tools that could improve clinicians' diagnosis of bone fractures, primarily based on Convolutional Neural Networks (CNNs). However, the accuracy in discerning fracture subtypes was far from optimal. The aims of the study were 1) to evaluate a new CAD system based on Vision Transformers (ViT), a very recent and powerful deep learning technique, and 2) to assess whether clinicians' diagnostic accuracy could be improved using this system. MATERIALS AND METHODS: 4207 manually annotated images were used, distributed across the fracture types of the AO/OTA classification. The ViT architecture was used and compared with a classic CNN and a multistage architecture composed of successive CNNs. To demonstrate the reliability of this approach, (1) attention maps were used to visualize the most relevant areas of the images, (2) the performance of a generic CNN and of the ViT was compared through unsupervised learning techniques, and (3) 11 clinicians were asked to evaluate and classify 150 proximal femur fracture images with and without the help of the ViT, and the results were compared for potential improvement. RESULTS: The ViT correctly predicted 83% of the test images. Precision, recall and F1-score were 0.77 (CI 0.64-0.90), 0.76 (CI 0.62-0.91) and 0.77 (CI 0.64-0.89), respectively. The clinicians' diagnostic improvement was 29% (accuracy 97%; p = 0.003) when supported by the ViT's predictions, outperforming the algorithm alone. CONCLUSIONS: This paper showed the potential of Vision Transformers in bone fracture classification. For the first time, good results were obtained in sub-fracture classification, outperforming the state of the art. Accordingly, the assisted diagnosis yielded the best results, proving the effectiveness of collaborative work between neural networks and clinicians.
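
As a sketch of the kind of ViT classifier evaluated above, here is a minimal fine-tuning step built with the timm library; the backbone name, class count, and hyperparameters are assumptions, not the paper's configuration.

```python
# One training step of a Vision Transformer fracture-type classifier.
import timm
import torch
from torch import nn

# pretrained=True would start from ImageNet weights; left False to stay offline.
model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=7)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)   # a batch of radiograph crops (placeholder)
labels = torch.randint(0, 7, (8,))     # AO/OTA-style class indices (placeholder)

logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```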


Subject(s)
Femoral Fractures , Neural Networks, Computer , Diagnosis, Computer-Assisted/methods , Femoral Fractures/diagnostic imaging , Femoral Fractures/surgery , Femur , Humans , Reproducibility of Results
7.
Int J Med Robot ; 18(3): e2387, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35246913

ABSTRACT

INTRODUCTION: The current study presents a deep learning framework to determine, in real time, the position and rotation of a target organ from an endoscopic video. These inferred data are used to overlay the 3D model of the patient's organ on its real counterpart, and the resulting augmented video flow is streamed back to the surgeon as support during laparoscopic robot-assisted procedures. METHODS: The framework first exploits semantic segmentation; thereafter, two techniques, one based on Convolutional Neural Networks and one on motion analysis, are used to infer the rotation. RESULTS: The segmentation shows high accuracy, with a mean IoU score greater than 80% in all tests. Different performance levels are obtained for rotation, depending on the surgical procedure. DISCUSSION: Even if the presented methodology has varying degrees of precision depending on the testing scenario, this work sets the first step for the adoption of deep learning and augmented reality to generalise the automatic registration process.
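
For reference, the standard Intersection over Union (IoU) metric used to assess the segmentation step, as a minimal sketch on binary masks.

```python
# IoU = |A∩B| / |A∪B| on boolean masks.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```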


Subject(s)
Deep Learning , Laparoscopy , Robotic Surgical Procedures , Robotics , Humans , Image Processing, Computer-Assisted/methods , Laparoscopy/methods , Neural Networks, Computer
8.
Urology ; 164: e312-e316, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35063460

ABSTRACT

Augmented reality robot-assisted partial nephrectomy (AR-RAPN) is limited by the need for constant manual overlapping of the hyper-accuracy 3D (HA3D) virtual models onto the real anatomy. We present our preliminary experience with automatic 3D virtual model overlapping during AR-RAPN. To reach fully automated HA3D model overlapping, we pursued computer vision strategies based on the identification of landmarks to link the virtual model. Due to the limited field of view of RAPN, we used the whole kidney as a marker. Moreover, to overcome the color similarity between the kidney and its neighboring structures, we super-enhanced the organ using NIRF Firefly fluorescence imaging technology. A specifically developed software named "IGNITE" (Indocyanine GreeN automatIc augmenTed rEality) allowed the automatic anchorage of the HA3D model to the real organ, leveraging the enhanced view offered by NIRF technology. Ten automatic AR-RAPNs were performed. For every patient an HA3D model was produced and visualized as an AR image inside the robotic console. During all the surgical procedures, the automatic ICG-guided AR technology successfully anchored the virtual model to the real organ without hand assistance (mean anchorage time: 7 seconds), even when moving the camera throughout the operative field, zooming, and translating the organ. In 7 patients with totally endophytic or posterior lesions, the renal masses were correctly identified with the automatic AR technology, and a successful enucleoresection was performed. No intraoperative or postoperative Clavien >2 complications or positive surgical margins were recorded. Our pilot study provides the first demonstration of the application of computer vision technology to AR procedures, with software that automatically maintains visual concordance between 3D models and in vivo anatomy. Its current limitations, related to kidney deformation during surgery altering the automatic anchorage, will be overcome by implementing organ recognition with deep learning algorithms.
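
A hedged sketch of the computer-vision idea described above: thresholding the fluorescence-green region of a frame and taking the largest blob as the kidney marker. The HSV bounds and the synthetic frame are illustrative assumptions, not IGNITE's implementation.

```python
# Locate an ICG/NIRF-enhanced (bright green) kidney-like region with OpenCV.
import cv2
import numpy as np

# Synthetic stand-in for an endoscopic frame: a green blob on a dark background.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.ellipse(frame, (320, 240), (120, 70), 0, 0, 360, (40, 220, 40), -1)

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# ICG fluorescence renders the kidney bright green; bounds are assumptions.
mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((7, 7), np.uint8))

# Largest connected blob taken as the kidney marker anchoring the 3D model.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
kidney = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(kidney)  # anchor region for the overlay
```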


Subject(s)
Augmented Reality , Robotic Surgical Procedures , Robotics , Surgery, Computer-Assisted , Computers , Humans , Imaging, Three-Dimensional/methods , Indocyanine Green , Nephrectomy/methods , Pilot Projects , Robotic Surgical Procedures/methods , Surgery, Computer-Assisted/methods
9.
Sci Rep ; 11(1): 24108, 2021 Dec 16.
Article in English | MEDLINE | ID: mdl-34916547

ABSTRACT

Despite the great potential of Virtual Reality (VR) to arouse emotions, no VR affective databases are available, as there are for pictures, videos, and sounds. In this paper, we describe the validation of ten affective interactive Virtual Environments (VEs) designed to be used in Virtual Reality and related to five emotions. The testing phase employed two different experimental setups to deliver the overall experience. Because of the ongoing COVID-19 pandemic, the setups did not include any immersive VR technology, but the VEs were designed to run on stereoscopic visual displays. We collected measures related to the participants' emotional experience based on six discrete emotional categories plus neutrality, and we included an assessment of the sense of presence related to the different experiences. The results showed that the scenarios can be differentiated according to the emotion aroused. Finally, the comparison between the two experimental setups demonstrated high reliability of the experience and strong adaptability of the scenarios to different contexts of use.


Subject(s)
Arousal/physiology , COVID-19/psychology , Databases, Factual/statistics & numerical data , Emotions/physiology , SARS-CoV-2/isolation & purification , Virtual Reality , Adult , COVID-19/epidemiology , COVID-19/virology , Emotions/classification , Empathy , Female , Humans , Male , Pandemics/prevention & control , Photic Stimulation/methods , Reproducibility of Results , SARS-CoV-2/physiology , Young Adult
10.
IEEE Comput Graph Appl ; 41(6): 171-178, 2021.
Article in English | MEDLINE | ID: mdl-34890316

ABSTRACT

Computer graphics is, in many cases, about visualizing what you cannot see. However, virtual reality (VR), from its beginnings, has aimed at stimulating all human senses, not just the visual channel. Moreover, this set of multisensory stimuli allows users to feel present and able to interact with the virtual environment. In this way, VR aims to deliver experiences that are comparable to real-life ones in their level of detail and stimulation, intensity, and impact. Hence, VR is not only a means to see, but also to feel, differently. With the spread of VR technologies, there is growing interest in using VR to evoke emotions, both positive and negative. This article discusses the current possibilities and the authors' field experience in trying to elicit emotions through VR. It explores how different design aspects and features can be used, describing their contributions and benefits in the development of affective VR experiences. This work aims at raising awareness of the need to consider and explore the full design space that VR technology provides in comparison to traditional media. Additionally, it outlines possible directions for affective VR applications, illustrating how they could affect our emotions and improve our lives, and providing guidelines for their development.


Subject(s)
Virtual Reality , Computer Graphics , Emotions , Humans , Sensation
11.
Acta Biomed ; 92(5): e2021295, 2021 Nov 3.
Article in English | MEDLINE | ID: mdl-34738593

ABSTRACT

Background and aim of the work: Implant dislocation in total hip arthroplasty (THA) is a common concern among orthopedic surgeons and represents the most frequent complication after primary implant. Several causes can be responsible for the dislocation, including malpositioning of the components. Conventional imaging techniques frequently fail to detect the mechanical source of dislocation, mainly because they cannot reproduce a dynamic evaluation of the components. The purpose of this study was to elaborate a diagnostic tool capable of virtually assessing whether the range of movement (ROM) of a THA is free from anterior and/or superior mechanical impingement. The ultimate aim is to give the surgeon the possibility to weigh the mechanical contribution in a THA dislocation. Methods: A group of patients who underwent THA revision for acute dislocation was compared to a group of non-dislocating THAs. CT scans and a virtual model of each patient were obtained. A software package called "Prosthesis Impingement Simulator (PIS)" was developed for simulating the ROM of the prosthetic hip, and the ROM free of mechanical impingement was compared between the two groups. Results: The PIS test detected the dislocations with a sensitivity of 71.4% and a specificity of 85.7%. Fisher's exact test showed a p-value of 0.02; the chi-square test found a p-value of 0.009. Conclusion: The PIS seems to be an effective tool for determining hip prosthetic impingement, as the main aid of the software is the exclusion of mechanical causes in the event of a dislocation.
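
A small sketch of the reported 2x2 association tests with SciPy, on a hypothetical contingency table of PIS predictions versus actual dislocations; the counts are illustrative, chosen so that sensitivity = 10/14 ≈ 71.4% and specificity = 24/28 ≈ 85.7%, and are not the study's data.

```python
# Fisher's exact and chi-square tests on a hypothetical 2x2 table.
import numpy as np
from scipy import stats

#                 dislocated  not dislocated
table = np.array([[10,        4],    # PIS flagged impingement
                  [ 4,       24]])   # PIS did not flag

odds_ratio, p_fisher = stats.fisher_exact(table)
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
print(f"Fisher p = {p_fisher:.3f}, chi-square p = {p_chi2:.3f}")
```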


Subject(s)
Arthroplasty, Replacement, Hip , Hip Prosthesis , Joint Dislocations , Software , Arthroplasty, Replacement, Hip/adverse effects , Hip Joint/surgery , Hip Prosthesis/adverse effects , Humans , Prosthesis Design , Reoperation
12.
Int J Comput Assist Radiol Surg ; 16(9): 1435-1445, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34165672

ABSTRACT

PURPOSE: The current study aimed to propose a Deep Learning (DL) and Augmented Reality (AR) based solution for in vivo robot-assisted radical prostatectomy (RARP), improving on the precision of a published work from our group. We implemented a two-step automatic system to align a 3D virtual ad hoc model of a patient's organ with its 2D endoscopic image, to assist surgeons during the procedure. METHODS: This approach uses a Convolutional Neural Network (CNN) based structure for semantic segmentation and a subsequent elaboration of the obtained output, which produces the parameters needed for anchoring the 3D model. We used a dataset obtained from 5 endoscopic videos (A, B, C, D, E), selected and tagged by our team's specialists. We then evaluated the best-performing pairing of segmentation architecture and backbone network, and tested the overlay performance. RESULTS: U-Net stood out as the most effective segmentation architecture. ResNet and MobileNet obtained similar Intersection over Union (IoU) results, but MobileNet was able to process almost twice as many operations per second. This segmentation technique outperformed our former work, obtaining an average IoU for the catheter of 0.894 (σ = 0.076) compared to 0.339 (σ = 0.195). These modifications also led to an improvement in the 3D overlay performance, in particular in the Euclidean distance between the predicted and actual model anchor points, from 12.569 (σ = 4.456) to 4.160 (σ = 1.448), and in the geodesic distance between the predicted and actual model rotations, from 0.266 (σ = 0.131) to 0.169 (σ = 0.073). CONCLUSION: This work is a further step toward the adoption of DL and AR in the surgery domain. In future work, we will overcome the limits of this approach and further improve every step of the surgical procedure.
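
For reference, a minimal implementation of the geodesic distance between two rotations, the standard angular metric on SO(3) used for the rotation-error figures above (a rotation-matrix representation is assumed here).

```python
# Geodesic distance = angle of the relative rotation between two matrices.
import numpy as np
from scipy.spatial.transform import Rotation

def geodesic_distance(R_pred: np.ndarray, R_true: np.ndarray) -> float:
    R_rel = R_pred.T @ R_true
    # trace(R) = 1 + 2cos(theta); clip guards against numerical drift.
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.arccos(cos_theta))

R1 = Rotation.from_euler("xyz", [10, 20, 30], degrees=True).as_matrix()
R2 = Rotation.from_euler("xyz", [12, 18, 33], degrees=True).as_matrix()
print(f"geodesic distance = {geodesic_distance(R1, R2):.3f} rad")
```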


Subject(s)
Augmented Reality , Deep Learning , Humans , Image Processing, Computer-Assisted , Male , Neural Networks, Computer , Semantics
13.
J Pers Med ; 11(3)2021 Mar 13.
Article in English | MEDLINE | ID: mdl-33805736

ABSTRACT

Patients with severe facial deformities present serious dysfunctions along with an unsatisfactory aesthetic facial appearance. Several methods have been proposed to plan interventions around the patient's specific needs, but none of them seems to achieve a sufficient level of accuracy in predicting the resulting facial appearance. In this context, deep knowledge of what occurs in the face after bony movements in specific surgeries would make it possible to develop more reliable systems. This study proposes a novel 3D approach for the evaluation of soft tissue zygomatic modifications after zygomatic osteotomy; geometrical descriptors usually involved in face analysis tasks, such as face recognition and facial expression recognition, are here applied to the soft tissue malar region to detect changes in surface shape. As ground truth for zygomatic changes, a zygomatic openness angular measure is adopted. The results show a high sensitivity of the geometrical descriptors in detecting shape modifications of the facial surface, outperforming the results obtained from the angular evaluation.
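
A minimal sketch of the kind of point-wise geometrical descriptors used in 3D face analysis, computing mean and Gaussian curvature of a depth-map patch via finite differences; the synthetic patch is an assumption, and the paper's exact descriptor set is not restated here.

```python
# Mean (H) and Gaussian (K) curvature of a Monge patch z(x, y).
import numpy as np

def curvatures(z: np.ndarray):
    zy, zx = np.gradient(z)          # first derivatives (rows = y, cols = x)
    zxy, zxx = np.gradient(zx)       # second derivatives
    zyy, _ = np.gradient(zy)
    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy
         + (1 + zx**2) * zyy) / (2 * denom**1.5)
    return H, K

# Example on a smooth synthetic bump standing in for a malar-region patch:
y, x = np.mgrid[-1:1:64j, -1:1:64j]
H, K = curvatures(np.exp(-(x**2 + y**2) * 4))
```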

14.
J Craniomaxillofac Surg ; 49(3): 223-230, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33509673

ABSTRACT

BACKGROUND: The aim of this prospective study is to objectively assess 3D soft tissue and bone changes of the malar region produced by malar valgization osteotomy performed in concomitant association with orthognathic surgery. MATERIALS AND METHODS: From January 2015 to January 2018, 10 patients who underwent single-stage bilateral malar valgization osteotomy in conjunction with maxillo-mandibular orthognathic procedures for aesthetic and functional correction were evaluated. Clinical and surgical reports were collected and patient satisfaction was evaluated with a VAS score. For each patient, maxillofacial CT scans were collected 1 month preoperatively (T0) and 6 months after the operation (T1). DICOM data were imported into and elaborated in MatLab, which creates a 3D soft tissue model of the face. 3D bone changes were assessed by importing DICOM data into the iPlan (BrainLAB 3.0) software, and the superimposition process was achieved using autofusion. Descriptive statistical analyses were obtained for soft tissue and bone changes. RESULTS: Regarding the bone assessment, the comparison by superimposition between T0 and T1 showed an increase of the distance between the bilateral malar prominences (Pr - Pl) and a slight forward movement (87.65 ± 1.55 to 97.60 ± 5.91; p-value 0.007). All of the patients had improvement of the α angle, ranging from 36.30 ± 1.70 to 38.45 ± 0.55, p-value 0.04 (αr), and 36.75 ± 1.58 to 38.45 ± 0.35, p-value 0.04 (αl). The distance S increased from 78.05 ± 2.48 to 84.2 ± 1.20, p-value 0.04 (Sr), and from 78.65 ± 2.16 to 82.60 ± 0.90 (Sl), p-value 0.03. Regarding the soft tissue, the comparison by superimposition between T0 and T1 showed an antero-lateral movement (p-value 0.008 NVL; p-value 0.001 NVR) of the malar projection together with an increase in width measurements (p-value 0.05 VL; p-value 0.01 VR). Angular measurements confirmed the pattern of the bony changes (p-value 0.034 αL; p-value 0.05 αR). CONCLUSION: Malar valgization osteotomy in conjunction with orthognathic surgery is effective in improving zygomatic projection, contributing to a balanced facial correction in midface hypoplasia. 3D geometry-based volume and surface analysis demonstrates an increase in the transversal and forward directions. The osteotomy can be safely performed in conjunction with orthognathic procedures.


Subject(s)
Esthetics, Dental , Facial Bones , Humans , Osteotomy , Prospective Studies , Zygoma/diagnostic imaging , Zygoma/surgery
15.
Eur J Radiol ; 133: 109373, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33126175

ABSTRACT

PURPOSE: Suspected fractures are among the most common reasons for patients to visit emergency departments, and they can often be difficult to detect and analyze on film scans. Therefore, we aimed to design a Deep Learning-based tool able to help doctors diagnose bone fractures, following the hierarchical classification proposed by the Arbeitsgemeinschaft für Osteosynthesefragen (AO) Foundation and the Orthopaedic Trauma Association (OTA). METHODS: 2453 manually annotated images of the proximal femur were used for classification into different fracture types (1133 unbroken femur, 570 type A, 750 type B). Secondly, the type A fractures were further classified into types A1, A2 and A3. Two approaches were implemented: the first is a fine-tuned InceptionV3 convolutional neural network (CNN), used as a baseline for our own proposed approach; the second is a multistage architecture composed of successive CNNs in cascade, perfectly suited to the hierarchical structure of the AO/OTA classification. Gradient-weighted Class Activation Maps (Grad-CAM) were used to visualize the most relevant areas of the images for classification. The averaged ability of the CNN was measured with accuracy, area under the receiver operating characteristic curve (AUC), recall, precision and F1-score. The averaged ability of the orthopedists with and without the help of the CNN was measured with accuracy and Cohen's Kappa coefficient. RESULTS: We obtained an averaged accuracy of 0.86 (CI 0.84-0.88) for the three-class classification and 0.81 (CI 0.79-0.82) for the five-class classification. The specialists' average accuracy improved by 14% when using the CAD (Computer-Assisted Diagnosis) system compared to without it. CONCLUSION: We showed the potential of using a CAD system based on CNNs for improving diagnostic accuracy and for helping students with a lower level of expertise. We started our work with proximal femur fractures and we aim to extend it to all bone segments in the future, in order to implement a tool that could be used in everyday hospital routine.
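
A hedged Grad-CAM sketch for a Keras CNN of the kind described above; the layer name ("mixed10", InceptionV3's last convolutional block) and the untrained weights are illustrative assumptions.

```python
# Grad-CAM: weight the last conv feature maps by pooled class-score gradients.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_idx):
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))         # global-average pooled
    cam = tf.nn.relu(tf.einsum("bhwc,bc->bhw", conv_out, weights))[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalized heatmap

model = tf.keras.applications.InceptionV3(weights=None, classes=3)
image = np.random.rand(299, 299, 3).astype("float32")    # placeholder radiograph
heatmap = grad_cam(model, image, "mixed10", class_idx=0)
```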


Subject(s)
Deep Learning , Femur/diagnostic imaging , Humans , Neural Networks, Computer , Radiography , X-Rays
16.
Proteins ; : e25993, 2020 Aug 11.
Article in English | MEDLINE | ID: mdl-32779779

ABSTRACT

This article reports the results of research aimed at translating biometric 3D face recognition concepts and algorithms into the field of protein biophysics, in order to precisely and rapidly classify morphological features of protein surfaces. Both human faces and protein surfaces are free-form surfaces, and some descriptors used in differential geometry can describe them by applying the principles of feature extraction developed for computer vision and pattern recognition. The first part of this study focused on building the protein dataset using a simulation tool and performing feature extraction using novel geometrical descriptors. The second part tested the method on two examples: the first involved a classification of tubulin isotypes, and the second compared tubulin with the FtsZ protein, its bacterial analog. An additional test involved several unrelated proteins. Different classification methodologies were used: a classic approach with a support vector machine (SVM) classifier and unsupervised learning with a k-means approach. The best result was obtained with the SVM and the radial basis function kernel. The results are significant and competitive with state-of-the-art protein classification methods. This leads to a new methodological direction in protein structure analysis.
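
A minimal scikit-learn sketch of the best-performing setup reported above, an SVM with RBF kernel on descriptor-histogram features; the feature vectors and labels are random placeholders for the geometric-descriptor histograms.

```python
# SVM with RBF kernel on protein-surface descriptor histograms.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((60, 128))    # 60 surfaces x 128-bin descriptor histograms
y = rng.integers(0, 2, 60)   # e.g., tubulin vs. FtsZ labels (placeholders)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())
```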

17.
Int J Med Robot ; 16(5): 1-12, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32510857

ABSTRACT

PURPOSE: The current study aimed to systematically review the literature addressing the use of deep learning (DL) methods in intraoperative surgery applications, focusing on data collection, the objectives of these tools and, more technically, the DL-based paradigms utilized. METHODS: A literature search of the standard databases was performed: using specific keywords, we identified a total of 996 papers. Among them, we selected 52 for full analysis, focusing on articles published after January 2015. RESULTS: The preliminary results of the implementation of DL in the clinical setting are encouraging. Almost all surgical subfields have seen the advent of artificial intelligence (AI) applications, and in the majority of cases the results outperformed previous techniques. From these results, a conceptualization of an intelligent operating room (IOR) is also presented. CONCLUSION: This evaluation outlined how AI and, in particular, DL are revolutionizing the surgery field, with numerous applications such as context detection and room management. This process is evolving year by year toward the realization of an IOR, equipped with technologies perfectly suited to drastically improving the surgical workflow.


Subject(s)
Artificial Intelligence , Deep Learning , Humans , Operating Rooms
18.
Comput Methods Programs Biomed ; 191: 105505, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32387863

ABSTRACT

BACKGROUND AND OBJECTIVE: We present an original approach to the development of augmented reality (AR) real-time solutions for robotic surgery navigation. The surgeon operating the robotic system through a console and a visor experiences reduced awareness of the operative scene. In order to improve the surgeon's spatial perception during robot-assisted minimally invasive procedures, we provide them with a robust automatic software system that positions, rotates and scales, in real time, the 3D virtual model of a patient's organ, aligned over its image captured by the endoscope. METHODS: We observed that the surgeon may benefit differently from the 3D augmentation during each stage of the surgical procedure; moreover, each stage may present different visual elements that provide specific challenges and opportunities to exploit when implementing organ detection strategies. Hence we integrate different solutions, each dedicated to a specific stage of the surgical procedure, into a single software system. RESULTS: We present a formal model that generalizes our approach, describing a system composed of integrated solutions for AR in robot-assisted surgery. Following the proposed framework, an application has been developed which is currently used during in vivo surgery, for extensive testing, by the Urology unit of the San Luigi Hospital in Orbassano (Turin), Italy. CONCLUSIONS: The main contribution of this paper is in presenting a modular approach to the tracking problem during in vivo robotic surgery, whose efficacy from a medical point of view has been assessed in the cited works. Segmenting the whole procedure into a set of stages allows the best tracking strategy to be associated with each stage, and lets implemented software mechanisms be reused in stages with similar features.
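
A minimal sketch of the modular stage-to-strategy idea described above, dispatching each surgical stage to its own tracking strategy; stage names and strategy bodies are illustrative assumptions, not the paper's implementation.

```python
# Map each surgical stage to a dedicated tracking strategy.
from typing import Callable, Dict

import numpy as np

Frame = np.ndarray
Pose = dict  # e.g., {"position": ..., "rotation": ..., "scale": ...}

def track_by_segmentation(frame: Frame) -> Pose:
    # Placeholder: a real implementation would run the CNN-based segmentation.
    return {"position": (0, 0, 0), "rotation": (0, 0, 0), "scale": 1.0}

def track_by_motion(frame: Frame) -> Pose:
    # Placeholder: a real implementation would use motion analysis.
    return {"position": (0, 0, 0), "rotation": (0, 0, 0), "scale": 1.0}

STRATEGIES: Dict[str, Callable[[Frame], Pose]] = {
    "dissection": track_by_segmentation,  # hypothetical stage names
    "resection": track_by_motion,
}

def update_overlay(stage: str, frame: Frame) -> Pose:
    return STRATEGIES[stage](frame)
```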


Subject(s)
Image Enhancement/methods , Imaging, Three-Dimensional , Robotic Surgical Procedures , Humans , Minimally Invasive Surgical Procedures , Robotic Surgical Procedures/methods , Software
19.
Minerva Urol Nefrol ; 72(1): 49-57, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31833725

ABSTRACT

INTRODUCTION: As we enter the era of "big data," an increasing amount of complex health-care data will become available. These data are often redundant, "noisy," and characterized by wide variability. In order to offer a precise and transversal view of a clinical scenario, artificial intelligence (AI) with machine learning (ML) algorithms and artificial neural networks (ANNs) has been adopted, with a promising wide diffusion in the near future. The present work aims to provide a comprehensive and critical overview of the current and potential applications of AI and ANNs in urology. EVIDENCE ACQUISITION: A non-systematic review of the literature was performed by screening Medline, PubMed, the Cochrane Database, and Embase to detect pertinent studies regarding the application of AI and ANNs in urology. EVIDENCE SYNTHESIS: The main application of AI in urology is in the field of genitourinary cancers. For prostate cancer, AI was applied to the prediction of prostate biopsy results. For bladder cancer, the prediction of recurrence-free probability and diagnostic evaluation were analysed with ML algorithms. For kidney and testis cancer, anecdotal experiences were reported for staging and prediction of disease recurrence. More recently, AI has been applied to non-oncological conditions such as urinary stones and functional urology. CONCLUSIONS: AI technologies are playing a growing role in health care, but up to now their "real-life" implementation remains limited. However, in the near future, the AI-driven era could change clinical practice in urology, improving overall patient outcomes.


Subject(s)
Artificial Intelligence , Neural Networks, Computer , Urology/methods , Big Data , Female , Humans , Male
20.
J Biomech ; 93: 86-93, 2019 Aug 27.
Article in English | MEDLINE | ID: mdl-31327523

ABSTRACT

Nowadays, facial mimicry studies have acquired great importance in the clinical domain, and 3D motion capture systems are becoming valid tools for analysing facial muscle movements, thanks to the remarkable developments achieved since the 1990s. However, the face analysis domain suffers from the lack of a validated motion capture protocol, due to the complexity of the human face. Indeed, a framework for defining the optimal marker-set layout does not yet exist and, to date, researchers still use their traditional facial point sets with manually allocated markers. Therefore, this study proposes an automatic approach to compute a minimal optimized marker layout for facial motion capture, able to simplify marker allocation without decreasing the significance level. Specifically, the algorithm identifies the optimal facial marker layouts by selecting the subsets of linear distances among markers that allow the acted facial movements to be recognized automatically, with the highest performance, through a k-nearest neighbours classification technique; the marker layouts are then extracted from these subsets. Various validation and testing phases demonstrated the accuracy, robustness and usefulness of the approach.
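
A hedged sketch of the layout-selection idea described above: score candidate marker subsets by how well their pairwise linear distances classify acted movements with k-NN, and keep the best subset. The data, subset size, and exhaustive search are illustrative assumptions; the paper's actual search procedure is not restated.

```python
# Select the marker subset whose inter-marker distances best classify movements.
import itertools

import numpy as np
from scipy.spatial.distance import pdist
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_trials, n_markers = 120, 10
frames = rng.random((n_trials, n_markers, 3))   # 3D marker positions (placeholder)
labels = rng.integers(0, 4, n_trials)           # 4 acted movements (placeholder)

def score_subset(markers: tuple) -> float:
    # Features = all pairwise linear distances among the chosen markers.
    X = np.array([pdist(f[list(markers)]) for f in frames])
    knn = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(knn, X, labels, cv=5).mean()

best = max(itertools.combinations(range(n_markers), 4), key=score_subset)
print("best 4-marker layout:", best)
```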


Subject(s)
Biomimetics , Face/physiology , Motion , Movement , Optical Phenomena , Algorithms , Humans