Results 1 - 13 of 13
1.
Med Eng Phys ; 119: 104025, 2023 09.
Article in English | MEDLINE | ID: mdl-37634903

ABSTRACT

Deep inferior epigastric artery perforator (DIEAP) flap reconstruction surgeries can potentially benefit from augmented reality (AR) in the context of surgery planning and outcome improvement. Although three-dimensional (3D) models help visualize and map the perforators, the anchorage of the models to the patient's body during surgery does not consider eventual skin deformation between the moment of computed tomography angiography (CTA) data acquisition and the position of the patient in surgery. In this work, we compared 3D registrations of the deformation from supine with arms down (CTA position) to supine with arms at 90° (surgical position), estimating the patient's skin deformation. We processed the data sets of 20 volunteers with a 3D rigid registration tool and performed a descriptive statistical analysis and statistical inference. With a root mean square of 2.45 mm and a standard deviation of 2.89 mm, results include 30% of cases with deformation above 3 mm and 15% above 4 mm. The pose-transformation deformation indicates that 3D surface data from the CTA scan position differs from data acquired in loco at the surgical table. Such results indicate that research should be conducted on constructing accurate 3D models from CTA data to display on the patient, while considering projection errors when using AR technology.
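The reported statistics (root mean square and standard deviation of the residual skin deformation left after a best rigid alignment) can be sketched in a few lines. The function below assumes corresponded point clouds and uses the Kabsch algorithm; the names and NumPy pipeline are illustrative, not the authors' actual registration tool.

```python
import numpy as np

def deformation_stats(source, target):
    """Rigidly align `source` to `target` (Kabsch algorithm), then report
    the residual per-point deformation left after the best rigid fit.
    Assumes the two (n, 3) point clouds are in one-to-one correspondence."""
    sc, tc = source.mean(axis=0), target.mean(axis=0)
    S, T = source - sc, target - tc            # center both clouds
    U, _, Vt = np.linalg.svd(S.T @ T)          # SVD of the cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # optimal rotation
    aligned = S @ R.T + tc
    residual = np.linalg.norm(aligned - target, axis=1)  # per-point, in mm
    return np.sqrt(np.mean(residual**2)), residual.std(), residual
```

With real CTA-derived skin surfaces, the `residual` vector is what figures such as "30% of cases above 3 mm" would be computed from.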


Subject(s)
Angiography, Skin, Humans, Tomography, X-Ray Computed
2.
Sci Rep ; 13(1): 2823, 2023 02 17.
Article in English | MEDLINE | ID: mdl-36801901

ABSTRACT

To test and evaluate the second installment of DENTIFY, a virtual reality haptic simulator for Operative Dentistry (OD), on preclinical dental students, focusing on user performance and self-assessment. Twenty voluntary, unpaid preclinical dental students with different background experience were enrolled in this study. After completing an informed consent form and a demographic questionnaire, and being introduced to the prototype (in the first testing session), three testing sessions followed (S1, S2, S3). Each session involved the following steps: (I) free experimentation; (II) task execution; S3 also included (III) completion of questionnaires associated with the experiment (a total of 8 Self-Assessment Questions (SAQ)); and (IV) a guided interview. As expected, drill time decreased steadily for all tasks with increasing prototype use, as verified by RM ANOVA. Regarding performance metrics (compared by Student's t-test and ANOVA) recorded at S3, overall higher performance was verified for participants who were female, non-gamers, had no previous VR experience, and had over two semesters of previous experience working on phantom models. The correlation between the participants' performance (drill time) for the four tasks and user self-assessment, verified by Spearman's rho analysis, allowed us to conclude that higher performance was observed in students who responded that DENTIFY improved their self-perception of the manual force applied. Regarding the questionnaires, Spearman's rho analysis showed a positive correlation between the improvement students felt DENTIFY brought to conventional teaching and their interest in learning OD, their desire for more simulator hours, and the perceived improvement in manual dexterity. All participating students adhered well to the DENTIFY experimentation. DENTIFY allows for student self-assessment and contributes to improving student performance.
Simulators with VR and haptic pens for teaching in OD should be designed as a consistent and gradual teaching strategy, allowing a multiplicity of simulated scenarios, bimanual manipulation, and the possibility of real-time feedback to allow for the student's immediate self-assessment. Additionally, they should create per-student performance reports to ensure self-perception/criticism of their evolution over longer periods of learning time.
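Several of the findings above rest on Spearman's rho, which is simply the Pearson correlation computed on ranks. A minimal, self-contained sketch follows; the drill-time and self-assessment numbers are hypothetical, and the study's actual analysis was presumably run in a statistics package.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks,
    with average ranks assigned to tied values."""
    def ranks(v):
        v = np.asarray(v, dtype=float)
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(1, len(v) + 1)
        for val in np.unique(v):          # average ranks over ties
            r[v == val] = r[v == val].mean()
        return r
    rx, ry = ranks(x), ranks(y)
    rx, ry = rx - rx.mean(), ry - ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# hypothetical: drill times (s) vs. self-assessment scores (1-5)
drill_time = [95, 80, 72, 60, 55]
sa_score = [2, 3, 3, 4, 5]
print(round(spearman_rho(drill_time, sa_score), 3))  # → -0.975
```

The negative rho here reads as "shorter drill times track higher self-assessment scores", the same direction of association the abstract reports.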


Asunto(s)
Estudiantes de Odontología , Realidad Virtual , Humanos , Femenino , Retroalimentación , Simulación por Computador , Operatoria Dental/educación , Autoevaluación (Psicología) , Tecnología Háptica , Interfaz Usuario-Computador , Competencia Clínica
3.
PeerJ Comput Sci ; 8: e1052, 2022.
Article in English | MEDLINE | ID: mdl-36091986

ABSTRACT

Deep learning (DL) models are very useful for human activity recognition (HAR); among other advantages, these methods present better accuracy for HAR when compared to traditional ones. DL learns from unlabeled data and extracts features from raw data, as in the case of time-series acceleration. Sliding windows are a feature extraction technique: when used for preprocessing time-series data, they provide an improvement in accuracy, latency, and cost of processing. The time and cost of preprocessing can be beneficial especially if the window size is small, but how small can this window be while keeping good accuracy? The objective of this research was to analyze the performance of four DL models - a simple deep neural network (DNN), a convolutional neural network (CNN), a long short-term memory network (LSTM), and a hybrid model (CNN-LSTM) - when varying the sliding window size using fixed overlapping windows, to identify an optimal window size for HAR. We compared the effects of two acceleration sources: wearable inertial measurement unit (IMU) sensors and motion capture (MOCAP) systems. Short sliding windows of 5, 10, 15, 20, and 25 frames were compared to long ones of 50, 75, 100, and 200 frames. The models were fed raw acceleration data acquired in experimental conditions for three activities: walking, sit-to-stand, and squatting. Results show that the optimal window is 20-25 frames (0.20-0.25 s) for both sources, providing an accuracy of 99.07% and an F1-score of 87.08% with the CNN-LSTM using the wearable sensor data, and an accuracy of 98.8% and an F1-score of 82.80% using MOCAP data; similarly accurate results were obtained with the LSTM model. There is almost no difference in accuracy for larger windows (100, 200 frames). However, smaller windows present a decrease in the F1-score. Regarding inference time, data with a sliding window of 20 frames can be preprocessed around 4x (LSTM) and 2x (CNN-LSTM) faster than data using 100 frames.
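The fixed-overlap sliding-window segmentation described above is compact to implement. The sketch below assumes a `(n_samples, n_channels)` acceleration array; the function name, the 50% overlap, and the 100 Hz sampling rate are illustrative, not restated from the paper.

```python
import numpy as np

def sliding_windows(signal, size, overlap):
    """Segment a (n_samples, n_channels) time series into fixed-length,
    fixed-overlap windows, the shape typically fed to HAR models.
    `overlap` is a fraction in [0, 1); step = size * (1 - overlap)."""
    step = max(1, int(size * (1 - overlap)))
    starts = range(0, len(signal) - size + 1, step)
    return np.stack([signal[s:s + size] for s in starts])

# e.g. 3-axis acceleration, 20-frame (0.20 s at 100 Hz) windows, 50% overlap
acc = np.random.randn(200, 3)
print(sliding_windows(acc, size=20, overlap=0.5).shape)  # → (19, 20, 3)
```

Shrinking `size` reduces both the per-window compute and the latency before a first prediction, which is the accuracy-versus-cost trade-off the study quantifies.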

4.
Comput Methods Programs Biomed ; 221: 106831, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35544961

ABSTRACT

BACKGROUND: Dental preclinical training has traditionally been centered on verbal instructions and subsequent execution on phantom heads and plastic training models. However, these present limitations. Virtual Reality (VR) and haptic simulators have been proposed with promising results and advantages and have shown usefulness in the preclinical training environment. We designed DENTIFY, a multimodal immersive simulator to assist Operative Dentistry learning, which exposes the user to different virtual clinical scenarios while operating a haptic pen to simulate dental drilling. OBJECTIVE: The main objective is to assess DENTIFY's usability, acceptance, and educational usefulness with dentists, in order to make the proper changes and, subsequently, to test DENTIFY with undergraduate preclinical dental students. METHODS: DENTIFY combines an immersive head-mounted VR display, a haptic pen in which the pen itself has been replaced by a 3D-printed model of a dental turbine, and a controller with buttons to adjust and select the simulation scenario, along with 3D sounds of real dental drilling. The user's dominant hand operates the virtual turbine in the VR-created scenario, while the non-dominant hand activates the simulator and selects cases. The simulation sessions occurred in a controlled virtual environment. We evaluated DENTIFY's usability and acceptance over the course of 13 training sessions with dental professionals, after the users performed a drilling task in virtual dental tissues. RESULTS: The conducted user acceptance study indicates that DENTIFY shows potential for enhancing learning in operative dentistry, as it promotes self-evaluation and multimodal immersion in the dental drilling experience. CONCLUSIONS: DENTIFY presented significant usability and acceptance among trained dentists. This tool showed teaching and learning (hence, pedagogical) potential in operative dentistry.


Asunto(s)
Operatoria Dental , Realidad Virtual , Simulación por Computador , Operatoria Dental/educación , Educación en Odontología/métodos , Tecnología Háptica , Humanos , Interfaz Usuario-Computador
5.
Comput Methods Biomech Biomed Engin ; 25(13): 1459-1470, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34919009

ABSTRACT

This work presents Motion Envelopes (ME), a simple method to estimate the missing longitudinal rotations of minimal stick figures, based on the spatio-temporal surface traced by the line segments that connect contiguous pairs of joints. We validated ME by analyzing the gait patterns of 6 healthy subjects, comprising a total of 18 gait cycles. A strong correlation between experimental and estimated data was obtained for the lower limbs and upper arms, indicating that ME can predict their longitudinal orientation in normal gait; hence, ME can be used to complement the kinematic information of stick figures whenever it is incomplete.


Asunto(s)
Bastones , Caminata , Fenómenos Biomecánicos , Marcha , Humanos , Movimiento (Física)
6.
Breast ; 56: 14-17, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33548617

ABSTRACT

INTRODUCTION: Innovations in 3D spatial technology and augmented reality imaging, driven by digital high-tech industrial science, have accelerated experimental advances in breast cancer imaging and the development of medical procedures aimed at reducing invasiveness. PRESENTATION OF CASE: A 57-year-old post-menopausal woman presented with screen-detected left-sided breast cancer. After undergoing all staging and pre-operative studies, the patient was proposed for breast-conserving surgery with tumor localization. During surgery, an experimental digital, non-invasive intra-operative localization method with augmented reality was compared with the standard pre-operative localization with carbon tattooing (institutional protocol). The breast surgeon, wearing an augmented reality headset (HoloLens), was able to visualize the tumor location projected inside the patient's left breast in the usual supine position. DISCUSSION: This work describes, to our knowledge, the first experimental test of a digital, non-invasive method for intra-operative breast cancer localization using augmented reality to guide breast-conserving surgery. In this case, a successful overlap between the standard pre-operative marks with carbon tattooing and the augmented reality visualization of the tumor inside the patient's breast was obtained. CONCLUSION: Breast-conserving surgery guided by augmented reality can pave the way for a digital, non-invasive method for intra-operative tumor localization.


Asunto(s)
Realidad Aumentada , Neoplasias de la Mama/cirugía , Imagenología Tridimensional , Mamoplastia , Cirugía Asistida por Computador/métodos , Neoplasias de la Mama/diagnóstico por imagen , Femenino , Humanos , Imagen por Resonancia Magnética , Persona de Mediana Edad
7.
Int J Med Inform ; 146: 104342, 2021 02.
Article in English | MEDLINE | ID: mdl-33310434

ABSTRACT

BACKGROUND: Tools for the training and education of dental students can improve their ability to perform technical procedures such as dental implant placement. A shortage of training can negatively affect dental implantologists' performance during intraoperative procedures, resulting in a lack of surgical precision and, consequently, inadequate implant placement, which may lead to unsuccessful implant-supported restorations or other complications. OBJECTIVE: We designed and developed IMMPLANT, a virtual reality educational tool to assist implant placement learning, which allows users to freely manipulate 3D dental models (e.g., a simulated patient's mandible and implant) with their dominant hand while operating a touchscreen device to assist 3D manipulation. METHODS: The proposed virtual reality tool combines an immersive head-mounted display, a small hand-tracking device, and a smartphone, all connected to a laptop. The operator's dominant hand is tracked to quickly and coarsely manipulate either the 3D dental model or the virtual implant, while the non-dominant hand holds a smartphone converted into a controller to assist button activation and provide greater input precision for 3D implant positioning and inclination. We evaluated IMMPLANT's usability and acceptance during training sessions with 16 dental professionals. RESULTS: The conducted user acceptance study revealed that IMMPLANT constitutes a versatile, portable, and complementary tool to assist implant placement learning, as it promotes immersive visualization and spatial manipulation of 3D dental anatomy. CONCLUSIONS: IMMPLANT is a promising virtual reality tool to assist student learning and 3D dental visualization for implant placement education. IMMPLANT may also be easily incorporated into training programs for dental students.


Asunto(s)
Implantes Dentales , Realidad Virtual , Humanos , Aprendizaje
8.
J Biomed Inform ; 107: 103463, 2020 07.
Article in English | MEDLINE | ID: mdl-32562897

ABSTRACT

One of the most promising applications of Optical See-Through Augmented Reality is minimally invasive laparoscopic surgery, which currently suffers from problems such as surgeon discomfort and fatigue caused by looking at a display positioned outside the surgeon's visual field, made worse by the length of the procedure. This fatigue is especially felt in the surgeon's neck, which is strained from adopting unnatural postures in order to visualise the laparoscopic video feed. Throughout this paper, we present work in Augmented Reality, as well as developments in surgery and Augmented Reality applied to both surgery in general and laparoscopy in particular, to address these issues. We applied user and task analysis methods to learn about practices performed in the operating room by observing surgeons in their working environment, in order to understand, in detail, how they perform their tasks and achieve their intended goals. Drawing on observations and analysis of video recordings of laparoscopic surgeries, we identified relevant constraints and design requirements. Besides proposals to approach the ergonomic issues, we present the design and implementation of a multimodal interface to enhance the laparoscopic procedure. Our method makes the procedure more comfortable for surgeons by allowing them to keep the laparoscopic video in their viewing area regardless of neck posture. Our interface also makes it possible to access patient imaging data without interrupting the operation, and to communicate with team members through a pointing reticle. We evaluated how surgeons perceived the implemented prototype, in terms of usefulness and usability, via a think-aloud protocol used to conduct qualitative evaluation sessions, which we describe in detail in this paper.
In addition to assessing the advantages of the prototype compared to traditional laparoscopic settings, we also administered a System Usability Scale questionnaire to measure its usability, and a NASA Task Load Index questionnaire to rate perceived workload and assess the prototype's effectiveness. Our results show that surgeons consider that our prototype can improve surgeon-to-surgeon communication using head pose as a means of pointing. Surgeons also believe that our approach can afford a more comfortable posture throughout the surgery and enhance hand-eye coordination, as physicians no longer need to twist their necks to look at screens placed outside the field of operation.


Asunto(s)
Realidad Aumentada , Laparoscopía , Ergonomía , Humanos , Postura , Grabación en Video
9.
Comput Med Imaging Graph ; 82: 101731, 2020 06.
Article in English | MEDLINE | ID: mdl-32361555

ABSTRACT

Conventional needle insertion training relies on medical dummies that simulate surface anatomy and internal structures such as veins or arteries. These dummies offer an interesting space to augment with useful information to assist training practices, namely internal anatomical structures (subclavian artery and vein, internal jugular vein, and carotid artery) along with the target point and the desired inclination, position, and orientation of the needle. However, limited research has been conducted on Optical See-Through Augmented Reality (OST-AR) interfaces for training needle insertion, especially for central venous catheterization (CVC). In this work we introduce PIÑATA, an interactive tool to explore the benefits of OST-AR in CVC training using a dummy of the upper torso and neck, and to explore whether PIÑATA complements conventional training practices. Our design contribution also describes the observation and co-design sessions used to collect user requirements, usability aspects, and user preferences. This was followed by a comparative study with 18 participants - attending specialists and medical residents - who performed needle insertion tasks for CVC with PIÑATA and the conventional training system. Performance was objectively measured by task completion time and the number of needle insertion errors. A correlation was found between the task completion times in the two training methods, suggesting the concurrent validity of our OST-AR tool. An inherent difference in task completion time (p = 0.040) and in the number of errors (p = 0.036) between novices and experts proved the construct validity of the new tool. The qualitative answers of the participants also suggest its face and content validity, a high acceptability rate, and a medium perceived workload.
Finally, semi-structured interviews with these 18 participants revealed that 14 of them considered that PIÑATA can complement the conventional training system, especially due to the visibility of the vessels inside the simulator. Thirteen agreed that OST-AR adoption in these scenarios is likely, particularly during early stages of training. Integration with ultrasound information was highlighted as necessary future work. In sum, the overall results show that the proposed OST-AR tool can complement conventional CVC training.


Asunto(s)
Realidad Aumentada , Cateterismo Venoso Central/métodos , Competencia Clínica , Educación Médica/métodos , Agujas , Humanos , Entrevistas como Asunto , Análisis y Desempeño de Tareas
10.
J Biomed Inform ; 100: 103316, 2019 12.
Article in English | MEDLINE | ID: mdl-31669287

ABSTRACT

Feet input can support mid-air hand gestures for touchless medical image manipulation to prevent unintended activations, especially in sterile contexts. However, foot interaction has yet to be investigated in dental settings. In this paper, we conducted a mixed-methods research study with medical dentistry professionals. To this end, we developed a touchless medical image system usable in either sitting or standing configurations. Clinicians could use both hands as 3D cursors and a minimalist single-foot gesture vocabulary to activate manipulations. First, we performed a qualitative evaluation with 18 medical dentists to assess the utility and usability of our system. Second, we used quantitative methods to compare pedal foot-supported hand interaction and hands-only conditions with 22 medical dentists. We expand on previous work by characterizing a range of potential limitations of foot-supported touchless 3D interaction in the dental domain. Our findings suggest that clinicians are open to using their feet for simple, fast, and easy access to image data during surgical procedures, such as dental implant placement. Furthermore, 3D hand cursors, supported by foot gestures for activation events, were considered useful and easy to employ for medical image manipulation. Even though most clinicians preferred hands-only manipulation for pragmatic purposes, foot-supported interaction was found to provide more precise control and, most importantly, to decrease the number of unintended activations during manipulation. Finally, we provide design considerations for future work exploring foot-supported touchless interfaces for sterile settings in Dental Medicine, regarding interaction design, foot input devices, the learning process, and camera occlusions.


Asunto(s)
Odontología , Pie , Radiografía Dental , Interfaz Usuario-Computador , Gráficos por Computador , Humanos , Imagenología Tridimensional
11.
Med Eng Phys ; 59: 50-55, 2018 09.
Article in English | MEDLINE | ID: mdl-30064940

ABSTRACT

Understanding the morphological features that characterize the normal hip joint is critical and necessary for a more comprehensive definition of pathological presentations, such as femoroacetabular impingement and hip dysplasia. Based on anatomical observations that the articular surfaces of synovial joints are better represented by ovoidal shapes than by spheres, the aim of this study was to computationally test this morphological classification for the femoral head and acetabular cavity of asymptomatic, dysplastic, and impinged hips by comparing spherical, ellipsoidal, and ovoidal shapes. An image-based surface fitting framework was used to assess the goodness-of-fit of spherical, ellipsoidal, and tapered ellipsoidal (i.e., egg-like) shapes. The framework involved image segmentation with active contour methods, mesh smoothing and decimation, and surface fitting to point clouds performed with genetic algorithms. Image data of the hip region were obtained from computed tomography and magnetic resonance imaging scans. Shape analyses were performed on image data from 20 asymptomatic, 20 dysplastic, and 20 impinged (cam, pincer, and mixed) hips of patients with ages ranging between 18 and 45 years (28 men and 32 women). Tapered ellipsoids presented the lowest fitting errors (i.e., were more oval), followed by ellipsoids, with spheres having the worst goodness-of-fit. Ovoidal geometries are also more representative of cam, pincer, and mixed impinged hips when compared to spherical or ellipsoidal shapes. The statistical analysis of the surface fitting errors reveals that ovoidal shapes better represent both articular surfaces of the hip joint, showing a greater approximation to the overall features of asymptomatic, dysplastic, and impinged cases.


Asunto(s)
Enfermedades Asintomáticas , Luxación de la Cadera/patología , Articulación de la Cadera/patología , Adulto , Femenino , Luxación de la Cadera/diagnóstico por imagen , Articulación de la Cadera/diagnóstico por imagen , Humanos , Imagenología Tridimensional , Imagen por Resonancia Magnética , Masculino , Propiedades de Superficie , Tomografía Computarizada por Rayos X , Adulto Joven
12.
J Biomed Inform ; 72: 140-149, 2017 08.
Article in English | MEDLINE | ID: mdl-28720438

ABSTRACT

Analyzing medical volume datasets requires interactive visualization so that users can extract anatomo-physiological information in real time. Conventional volume rendering systems rely on 2D input devices, such as mice and keyboards, which are known to hamper 3D analysis, as users often struggle to obtain the desired orientation, which is only achieved after several attempts. In this paper, we address which 3D analysis tools are better performed with 3D hand cursors operating on a touchless interface compared to 2D input devices running on a conventional WIMP interface. The main goals of this paper are to explore the capabilities of (simple) hand gestures to facilitate sterile manipulation of 3D medical data on a touchless interface, without resorting to wearables, and to evaluate the surgical feasibility of the proposed interface with senior surgeons (N=5) and interns (N=2). To this end, we developed a touchless interface controlled via hand gestures and body postures to rapidly rotate and position medical volume images in three dimensions, where each hand acts as an interactive 3D cursor. User studies were conducted with laypeople, while informal evaluation sessions were carried out with senior surgeons, radiologists, and professional biomedical engineers. Results demonstrate its usability: the proposed touchless interface improves spatial awareness and affords more fluent interaction with the 3D volume than traditional 2D input devices, as it requires fewer attempts to achieve the desired orientation by avoiding the composition of several cumulative rotations, which is typically necessary in WIMP interfaces. However, tasks requiring precision, such as clipping-plane visualization and tagging, are best performed with mouse-based systems due to noise, incorrect gesture detection, and problems in skeleton tracking that need to be addressed before tests in real medical environments can be performed.


Asunto(s)
Gestos , Imagenología Tridimensional , Interfaz Usuario-Computador , Bases de Datos Factuales , Estadística como Asunto
13.
J Biomech Eng ; 137(11): 114504, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26399629

ABSTRACT

In this work, MacConaill's classification - that the articular surface of the femoral head is better represented by ovoidal shapes than by purely spherical shapes - is computationally tested. To test this classification, a surface fitting framework was developed to fit spheres, ellipsoids, superellipsoids, ovoids, and superovoids to computed tomography (CT) data of the proximal femoral epiphysis. The framework includes several image processing and computational geometry techniques, such as active contour segmentation and mesh smoothing, and implicit surface fitting is performed with genetic algorithms. Comparing the surface fitting error statistics indicates that (super)ovoids fit femoral articular surfaces better than spherical or (super)ellipsoidal shapes.
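Of the candidate shapes, only the sphere admits a closed-form least-squares fit; the (super)ellipsoid and (super)ovoid cases have no such solution, which is why frameworks like this one resort to genetic algorithms. The spherical baseline, and the kind of radial fitting-error statistic being compared across shapes, can be sketched as follows (the function name and NumPy formulation are illustrative, not the paper's implementation):

```python
import numpy as np

def fit_sphere(points):
    """Closed-form linear least-squares sphere fit to an (n, 3) point cloud.
    Expands |p - c|^2 = r^2 into the linear system 2 c.p + k = |p|^2,
    with k = r^2 - |c|^2, and solves for center c and radius r."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2 * P, np.ones((len(P), 1))])
    b = (P**2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = x[:3], x[3]
    radius = np.sqrt(k + center @ center)
    # RMS radial error: the goodness-of-fit statistic compared across shapes
    rms = np.sqrt(np.mean((np.linalg.norm(P - center, axis=1) - radius)**2))
    return center, radius, rms
```

A femoral head surface segmented from CT would leave a non-negligible `rms` under this spherical model; lower errors under the ovoidal models are what the comparison above reports.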


Asunto(s)
Cabeza Femoral/anatomía & histología , Adulto , Algoritmos , Femenino , Cabeza Femoral/diagnóstico por imagen , Humanos , Imagenología Tridimensional , Masculino , Modelos Anatómicos , Propiedades de Superficie , Tomografía Computarizada por Rayos X , Adulto Joven