Results 1 - 15 of 15
1.
Sensors (Basel) ; 24(4)2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38400393

ABSTRACT

Human activity recognition (HAR) in wearable and ubiquitous computing typically involves translating sensor readings into feature representations, either derived through dedicated pre-processing procedures or integrated into end-to-end learning approaches. Independent of their origin, the feature representations used by the vast majority of contemporary HAR methods and applications are continuous in nature. That has not always been the case. In the early days of HAR, discretization approaches were explored, primarily motivated by the desire to minimize the computational requirements of HAR, but also with a view toward applications beyond mere activity classification, such as activity discovery, fingerprinting, or large-scale search. Those traditional discretization approaches, however, suffer from a substantial loss of precision and resolution in the resulting data representations, with detrimental effects on downstream analysis tasks. Times have changed, and in this paper we propose a return to discretized representations. We adopt and apply recent advancements in vector quantization (VQ) to wearables applications, which enables us to directly learn a mapping between short spans of sensor data and a codebook of vectors, where the codebook index constitutes the discrete representation. The resulting recognition performance is at least on par with that of contemporary, continuous representations, and often surpasses it. This work therefore presents a proof of concept demonstrating how effective discrete representations can be derived, enabling applications beyond mere activity classification and opening the field up to advanced tools for the analysis of symbolic sequences, as known, for example, from domains such as natural language processing. Based on an extensive experimental evaluation on a suite of wearable-based benchmark HAR tasks, we demonstrate the potential of our learned discretization scheme and discuss how discretized sensor data analysis can lead to substantial changes in HAR.
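The core mapping described above, from a short window of sensor data to the index of its nearest code vector, can be sketched as follows. Note this is a minimal illustration: the codebook here is random rather than learned, and the window and codebook sizes are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: 64 code vectors for 32-sample sensor windows.
# In the paper's setting this would be learned via vector quantization.
codebook = rng.normal(size=(64, 32))

def discretize(window, codebook):
    """Map a window of sensor readings to the index of its nearest code vector."""
    distances = np.linalg.norm(codebook - window, axis=1)
    return int(np.argmin(distances))

# A stream of sensor windows becomes a sequence of discrete symbols,
# amenable to symbolic-sequence tools such as those from NLP.
windows = rng.normal(size=(10, 32))
symbols = [discretize(w, codebook) for w in windows]
print(symbols)
```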


Subject(s)
Human Activities, Wearable Electronic Devices, Humans, Machine Learning, Natural Language Processing
2.
IEEE Trans Vis Comput Graph ; 16(1): 70-80, 2010.
Article in English | MEDLINE | ID: mdl-19910662

ABSTRACT

We present an algorithm for creating realistic animations of characters swimming through fluids. Our approach combines dynamic simulation with data-driven kinematic motions (motion capture data) to produce realistic animation in a fluid. The interaction of the articulated body with the fluid is handled by incorporating joint constraints into rigid-body animation and by extending a solid/fluid coupling method to handle articulated chains. Our solver takes as input the current state of the simulation and calculates the angular and linear accelerations of the connected bodies needed to match a particular motion sequence for the articulated body. These accelerations are used to estimate the forces and torques that are then applied to each joint. Based on this approach, we demonstrate simulated swimming results for a variety of strokes, including crawl, backstroke, breaststroke, and butterfly. The ability to have articulated bodies interact with fluids also allows us to generate simulations of simple water creatures driven by basic controllers.


Subject(s)
Computer Graphics, Three-Dimensional Imaging/methods, Joints/physiology, Biological Models, Rheology/methods, Swimming/physiology, Computer Simulation, Humans
3.
Neurosurg Clin N Am ; 30(3): 383-389, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31078239

ABSTRACT

Multiple registries are currently collecting patient-specific data on lumbar spondylolisthesis, including outcomes data. The collection of imaging diagnostics data, along with comparative outcomes data following decompression versus decompression-and-fusion treatments for degenerative spondylolisthesis, represents an enormous opportunity for modern machine-learning analytics research.


Subject(s)
Artificial Intelligence, Surgical Decompression, Spinal Stenosis/surgery, Spondylolisthesis/surgery, Surgical Decompression/instrumentation, Surgical Decompression/methods, Humans, Lumbar Vertebrae/surgery, Treatment Outcome
4.
J Neurosurg Spine ; 30(6): 729-735, 2019 Jun 01.
Article in English | MEDLINE | ID: mdl-31153155

ABSTRACT

OBJECTIVE: There is a wide variety of comparative treatment options in neurosurgery that do not lend themselves to traditional randomized controlled trials. The objective of this article was to examine how clinical registries might be used to generate new evidence to support a particular treatment option when comparable options exist, using lumbar spondylolisthesis as an example. METHODS: The authors reviewed the literature examining the comparative effectiveness of decompression alone versus decompression with fusion for lumbar stenosis with degenerative spondylolisthesis. Modern data acquisition for the creation of registries was also reviewed, with an eye toward how artificial intelligence for the treatment of lumbar spondylolisthesis might be explored. RESULTS: Current randomized controlled trials differ on the importance of adding fusion when performing decompression for lumbar spondylolisthesis. Standardized approaches to extracting data from the electronic medical record, as well as the ability to capture radiographic imaging and incorporate patient-reported outcomes (PROs), will ultimately lead to modern, structured, data-filled registries that lay the foundation for machine learning. CONCLUSIONS: There is a growing realization that patient experience, satisfaction, and outcomes are essential to improving the overall quality of spine care. Practical, validated PRO tools are needed in the quest to optimize outcomes within spine care. Registries will be designed to contain robust clinical data from which predictive analytics can be generated to develop and guide data-driven personalized spine care.


Subject(s)
Spondylolisthesis/therapy, Artificial Intelligence, Humans, Lumbar Vertebrae, Registries, Spondylolisthesis/epidemiology
5.
Int J Comput Assist Radiol Surg ; 14(12): 2155-2163, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31267333

ABSTRACT

PURPOSE: Surgical task-based metrics (rather than entire-procedure metrics) can be used to improve surgeon training and, ultimately, patient care through focused training interventions. Machine learning models that automatically recognize individual tasks or activities are needed to overcome the otherwise manual effort of video review. Traditionally, these models have been evaluated using frame-level accuracy. Here, we propose evaluating surgical activity recognition models by their effect on task-based efficiency metrics. In this way, we can determine when models have achieved adequate performance for providing surgeon feedback via metrics from individual tasks. METHODS: We propose a new CNN-LSTM model, RP-Net-V2, to recognize the 12 steps of robot-assisted radical prostatectomy (RARP). We evaluated our model both with conventional measures (e.g., the Jaccard index and task boundary accuracy) and in novel ways, such as the accuracy of efficiency metrics computed from instrument movements and system events. RESULTS: Our proposed model achieves a Jaccard index of 0.85, thereby outperforming previous models on RARP. Additionally, we show that metrics computed from tasks automatically identified by RP-Net-V2 correlate well with metrics from tasks labeled by clinical experts. CONCLUSION: We demonstrate that metrics-based evaluation of surgical activity recognition models is a viable approach for determining when models can be used to quantify surgical efficiencies. We believe this approach and our results illustrate the potential for fully automated, postoperative efficiency reports.
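For reference, the Jaccard index used to score recognized task segments is the intersection over union of predicted and annotated frames. A minimal frame-level sketch, with variable names that are illustrative rather than taken from the paper's code:

```python
def jaccard_index(pred, true):
    """Frame-level Jaccard index (intersection over union) for one task label.

    pred, true: per-frame 0/1 masks marking where the task is predicted /
    annotated.
    """
    intersection = sum(1 for p, t in zip(pred, true) if p and t)
    union = sum(1 for p, t in zip(pred, true) if p or t)
    return intersection / union if union else 1.0

# Toy example: prediction overlaps the annotation on 4 of 6 active frames.
true = [0, 1, 1, 1, 1, 1, 0, 0]
pred = [0, 0, 1, 1, 1, 1, 1, 0]
print(jaccard_index(pred, true))  # 4/6 ≈ 0.667
```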


Subject(s)
Clinical Competence, Machine Learning, Anatomic Models, Prostatectomy/education, Robotic Surgical Procedures/methods, Benchmarking, Humans, Male, Surgeons/education
6.
Int J Comput Assist Radiol Surg ; 13(5): 731-739, 2018 May.
Article in English | MEDLINE | ID: mdl-29549553

ABSTRACT

PURPOSE: Manual feedback in basic robot-assisted minimally invasive surgery (RMIS) training can consume a significant amount of time from expert surgeons' schedules and is prone to subjectivity. In this paper, we explore the use of different holistic features for automated skill assessment using only robot kinematic data, and we propose a weighted feature fusion technique for improving score prediction performance. Moreover, we propose a method for generating 'task highlights', which give surgeons more directed feedback regarding which segments had the greatest effect on the final skill score. METHODS: We perform our experiments on the publicly available JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and evaluate four types of holistic features computed from robot kinematic data: sequential motion texture (SMT), discrete Fourier transform (DFT), discrete cosine transform (DCT), and approximate entropy (ApEn). The features are then used for skill classification and exact skill score prediction. In addition to using these features individually, we also evaluate performance using our proposed weighted combination technique. The task highlights are produced using DCT features. RESULTS: Our results demonstrate that these holistic features outperform all previous Hidden Markov Model (HMM)-based state-of-the-art methods for skill classification on the JIGSAWS dataset. Our proposed feature fusion strategy also significantly improves performance for skill score prediction, achieving an average Spearman correlation coefficient of up to 0.61. Moreover, we provide an analysis of how the proposed task highlights relate to different surgical gestures within a task. CONCLUSIONS: Holistic features capturing global information from robot kinematic data can successfully be used to evaluate surgeon skill in basic surgical tasks on the da Vinci robot. The presented framework could enable real-time score feedback in RMIS training and help surgical trainees train in a more focused manner.
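Approximate entropy, one of the holistic features evaluated above, quantifies how regular and predictable a time series is (lower values mean more regular motion). A minimal, unoptimized sketch, with the usual embedding dimension m and tolerance r set to illustrative defaults rather than the paper's settings:

```python
import math

def approx_entropy(series, m=2, r=0.2):
    """Approximate entropy (ApEn) of a 1-D time series."""
    def phi(dim):
        n = len(series) - dim + 1
        templates = [series[i:i + dim] for i in range(n)]
        log_ratios = []
        for t1 in templates:
            # Count templates within Chebyshev distance r (self-match included).
            matches = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            log_ratios.append(math.log(matches / n))
        return sum(log_ratios) / n
    return phi(m) - phi(m + 1)

# A perfectly periodic signal is highly regular, so its ApEn is near zero.
periodic = [0.0, 1.0] * 50
print(approx_entropy(periodic))
```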


Subject(s)
Automation, Clinical Competence, Minimally Invasive Surgical Procedures/education, Robotic Surgical Procedures/education, Biomechanical Phenomena, Feedback, Fourier Analysis, Gestures, Humans, Motion, Principal Component Analysis, Suture Techniques, Task Performance and Analysis
7.
Int J Comput Assist Radiol Surg ; 13(3): 443-455, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29380122

ABSTRACT

PURPOSE: The basic surgical skills of suturing and knot tying are an essential part of medical training. An automated system for surgical skill assessment could save experts time and improve training efficiency. There have been recent attempts at automated surgical skill assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). METHODS: We conduct a large study of basic surgical skill assessment on a dataset containing video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features, approximate entropy and cross-approximate entropy, which quantify the predictability and regularity of fluctuations in time series data. The proposed features are compared to the existing sequential motion texture, discrete cosine transform, and discrete Fourier transform methods for surgical skill assessment. RESULTS: We report the average performance of the different features across all applicable OSATS-like criteria for the suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods on video data, achieving average classification accuracies of 95.1% and 92.2% for suturing and knot tying, respectively. On accelerometer data, our method performs better for suturing, achieving 86.8% average accuracy. We also show that fusing video and acceleration features can improve overall skill assessment performance. CONCLUSION: Automated surgical skill assessment can be achieved with high accuracy using the proposed entropy features. Such a system could significantly improve the efficiency of surgical training in medical schools and teaching hospitals.


Subject(s)
Accelerometry/methods, Clinical Competence, Medical Education/methods, Medical Schools, Suture Techniques/education, Video Recording, Humans
8.
DigitalBiomarkers 17 (2017) ; 2017: 21-26, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29505038

ABSTRACT

Motivated by health applications, eating detection with off-the-shelf devices has been an active area of research. A common approach has been to recognize and model individual intake gestures with wrist-mounted inertial sensors. Despite promising results, this approach is limiting, as it requires the sensing device to be worn on the hand performing the intake gesture, which cannot be guaranteed in practice. Through a study with 14 participants comparing eating detection performance when gestural data are recorded with a wrist-mounted device on (1) both hands, (2) only the dominant hand, and (3) only the non-dominant hand, we provide evidence that a larger set of arm and hand movement patterns beyond food intake gestures is predictive of eating activities when L1 or L2 normalization is applied to the data. Our results are supported by the theory of asymmetric bimanual action and contribute to the field of automated dietary monitoring. In particular, they shed light on a new direction for eating activity recognition with consumer wearables in realistic settings.
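The L1/L2 normalization mentioned in the study rescales each feature vector to unit norm, discarding overall signal magnitude so that movement patterns rather than movement intensity drive the classifier. A minimal sketch (function and variable names are illustrative, not from the study's code):

```python
import numpy as np

def normalize(features, norm="l2"):
    """Scale each row (feature vector) to unit L1 or L2 norm."""
    features = np.asarray(features, dtype=float)
    if norm == "l1":
        scale = np.abs(features).sum(axis=1, keepdims=True)
    else:
        scale = np.sqrt((features ** 2).sum(axis=1, keepdims=True))
    # Avoid division by zero for all-zero vectors.
    return features / np.where(scale == 0, 1, scale)

x = [[3.0, 4.0]]
print(normalize(x))        # unit L2 norm: [[0.6, 0.8]]
print(normalize(x, "l1"))  # unit L1 norm: [[3/7, 4/7]]
```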

9.
Int J Comput Assist Radiol Surg ; 11(9): 1623-36, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27567917

ABSTRACT

PURPOSE: Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty, who must observe each surgical trainee in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All of these approaches, however, are still extremely time consuming and subject to human bias. In this paper, we present an automated system for surgical skill assessment that analyzes video data of surgical activities. METHOD: We compare different techniques for video-based surgical skill evaluation: techniques that capture motion information at a coarser granularity using symbols or words, that extract motion dynamics using textural patterns in a frame kernel matrix, and that analyze fine-grained motion information using frequency analysis. RESULTS: We were able to classify surgeons into different skill levels with high accuracy. Our results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective at capturing the skill-relevant information in surgical videos. CONCLUSION: Our evaluations show that frequency features perform better than motion texture features, which in turn perform better than symbol- or word-based features. Put succinctly, skill classification accuracy is positively correlated with motion granularity, as demonstrated by our results on two challenging video datasets.
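The frequency analysis the authors find most effective can be illustrated by taking the magnitudes of a motion signal's DFT coefficients as features. The single-channel input and the number of retained coefficients below are simplifying assumptions for the sketch:

```python
import numpy as np

def frequency_features(motion_signal, k=8):
    """Return magnitudes of the first k DFT coefficients of a motion signal."""
    spectrum = np.fft.rfft(motion_signal)
    return np.abs(spectrum[:k])

# Intuition: smooth motion concentrates energy in low-frequency bins,
# while jerky motion spreads energy across the spectrum.
t = np.linspace(0, 1, 256, endpoint=False)
smooth = np.sin(2 * np.pi * 2 * t)                   # 2 Hz component only
jerky = smooth + 0.5 * np.sin(2 * np.pi * 40 * t)    # added 40 Hz jitter
print(frequency_features(smooth, k=4))
print(frequency_features(jerky, k=4))
```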


Subject(s)
Clinical Competence, Graduate Medical Education/methods, Educational Measurement/methods, General Surgery/education, Medical Schools, Video Recording, Automation, Humans
10.
Proc ACM Int Conf Ubiquitous Comput ; 2015: 1029-1040, 2015 Sep.
Article in English | MEDLINE | ID: mdl-29520397

ABSTRACT

Recognizing when eating activities take place is one of the key challenges in automated food intake monitoring. Despite progress over the years, most proposed approaches have been largely impractical for everyday use, requiring multiple on-body sensors or specialized devices such as neck collars for swallow detection. In this paper, we describe the implementation and evaluation of an approach for inferring eating moments from 3-axis accelerometry collected with a popular off-the-shelf smartwatch. Trained with data collected in a semi-controlled laboratory setting with 20 subjects, our system recognized eating moments in two free-living studies (7 participants, 1 day; 1 participant, 31 days), with F-scores of 76.1% (66.7% precision, 88.8% recall) and 71.3% (65.2% precision, 78.6% recall). This work represents a contribution toward a practical, automated system for everyday food intake monitoring, with applicability in areas ranging from health research to food journaling.
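The reported F-scores follow from the reported precision/recall pairs via the harmonic mean; tiny discrepancies (76.2 vs. the reported 76.1) arise because the published precision and recall are themselves rounded:

```python
def f_score(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f_score(0.667, 0.888) * 100, 1))  # 76.2 (paper reports 76.1)
print(round(f_score(0.652, 0.786) * 100, 1))  # 71.3 (matches the paper)
```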

11.
IUI ; 2015: 427-431, 2015.
Article in English | MEDLINE | ID: mdl-25859566

ABSTRACT

Dietary self-monitoring has been shown to be an effective method for weight loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work, we describe results from an in-the-wild feasibility study in which eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device during one day, for an average of 5 hours, while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities, in contrast to systems for automated dietary assessment based on specialized sensors.

12.
Proc Int Symp Wearable Comput ; 2015: 75-82, 2015 Aug.
Article in English | MEDLINE | ID: mdl-29553145

ABSTRACT

We present a method that analyzes images taken from a passive egocentric wearable camera, along with contextual information such as time of day and day of week, to learn and predict the everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6-month period, covering 19 activity classes, and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a convolutional neural network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person's activity across the 19 activity classes. We also demonstrate promising results for two additional users by fine-tuning the classifier with one day of training data.
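A late fusion ensemble of the kind described combines per-class scores from separate streams only after each stream has produced its own prediction. The weighted-average rule and the weight below are illustrative assumptions for the sketch; the paper's ensemble may combine its streams differently:

```python
import numpy as np

def late_fusion(image_probs, context_probs, w=0.7):
    """Fuse per-class probabilities from an image model (e.g., a CNN) with
    probabilities derived from context (time of day, day of week) by a
    weighted average, then return the predicted class index."""
    fused = w * np.asarray(image_probs) + (1 - w) * np.asarray(context_probs)
    return int(np.argmax(fused))

# Context resolves a case the image model is unsure about.
image_probs = [0.40, 0.40, 0.20]    # CNN output over 3 activity classes
context_probs = [0.10, 0.70, 0.20]  # e.g., class 1 ("meal") likely at midday
print(late_fusion(image_probs, context_probs))  # 1
```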

13.
IEEE Trans Pattern Anal Mach Intell ; 33(1): 30-42, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21088317

ABSTRACT

This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.


Subject(s)
Artificial Intelligence, Computer-Assisted Image Processing/methods, Algorithms, Computer Simulation, Humans, Motion, Automated Pattern Recognition/methods, Video Recording/methods
14.
Ergon Des ; 15(3): 17-23, 2007 Jul.
Article in English | MEDLINE | ID: mdl-22545001

ABSTRACT

Technology in the home environment has the potential to support older adults in a variety of ways. We took an interdisciplinary approach (human factors/ergonomics and computer science) to develop a technology "coach" that could support older adults in learning to use a medical device. Our system used computer vision to track the use of a blood glucose meter and gave users feedback when they made an error. This research could support the development of an in-home personal assistant that coaches individuals through a variety of tasks necessary for independent living.
