1.
Article in English | MEDLINE | ID: mdl-38427541

ABSTRACT

With the rise of short-form video platforms and the increasing availability of data, we see the potential for people to share short-form videos embedded with data in situ (e.g., daily steps when running) to increase the credibility and expressiveness of their stories. However, creating and sharing such videos in situ is challenging since it involves multiple steps and skills (e.g., data visualization creation and video editing), especially for amateurs. Through a formative study (N=10) using three design probes, we collected motivations and design requirements. We then built VisTellAR, a mobile AR authoring tool that helps amateur video creators embed data visualizations in short-form videos in situ. A two-day user study shows that participants (N=12) successfully created a variety of videos with in-situ data visualizations and found the tool easy to use and learn. AR pre-stage authoring helped participants set up data visualizations in the real environment and supported richer designs in camera movement and in interaction with gestures and physical objects for storytelling.

2.
Article in English | MEDLINE | ID: mdl-38079370

ABSTRACT

A Virtual Reality Laboratory (VR Lab) experiment refers to an experiment session conducted in a virtual environment through Virtual Reality (VR) that aims to deliver procedural knowledge to students similar to that in a physical lab environment. While VR Labs are becoming more popular among educational institutions as a learning tool for students, existing designs are mostly considered from the student's perspective. Instructors receive only limited information on how students are performing and cannot provide useful feedback to aid students' learning or evaluate their performance. This motivated us to create VisTA-LIVE: a Visualization Tool for Assessment of Laboratories In Virtual Environments. In this paper, we present in detail the design thinking approach that was applied to create VisTA-LIVE. The tool is deployed in an Extended Reality (XR) environment, and we report the evaluation results with domain experts and discuss issues related to monitoring and assessing a live VR lab session, which lays out potential directions for future work. We also describe how the resulting design of the tool could serve as a reference for other education developers who wish to build similar applications.

3.
IEEE Trans Vis Comput Graph ; 29(8): 3685-3697, 2023 Aug.
Article in English | MEDLINE | ID: mdl-35446768

ABSTRACT

Appropriate gestures can enhance message delivery and audience engagement in both daily communication and public presentations. In this article, we contribute a visual analytics approach that assists professional public speaking coaches in improving their practice of gesture training through analyzing presentation videos. Manually checking and exploring gesture usage in presentation videos is often tedious and time-consuming, and efficient methods for gesture exploration are lacking; the task is challenging due to the intrinsically temporal evolution of gestures and their complex correlation to speech content. We therefore propose GestureLens, a visual analytics system to facilitate gesture-based and content-based exploration of gesture usage in presentation videos. Specifically, the exploration view enables users to obtain a quick overview of the spatial and temporal distributions of gestures. The dynamic hand movements are first aggregated through a heatmap in the gesture space to uncover spatial patterns, and then decomposed into two mutually perpendicular timelines to reveal temporal patterns. The relation view allows users to explicitly explore the correlation between speech content and gestures through linked analysis and intuitive glyph designs. The video view and dynamic view show the context and the overall dynamic movement of the selected gestures, respectively. Two usage scenarios and expert interviews with professional presentation coaches demonstrate the effectiveness and usefulness of GestureLens in facilitating gesture exploration and analysis of presentation videos.
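The abstract does not describe the heatmap aggregation in detail; a minimal sketch of one plausible form, binning per-frame hand positions (assumed to be normalized to [0, 1]) into a fixed grid over the gesture space, might look like this. The function name and grid size are illustrative assumptions, not the paper's implementation:

```python
def gesture_heatmap(positions, bins=4):
    """Aggregate (x, y) hand positions, each normalized to [0, 1],
    into a bins x bins occupancy grid - one simple form of a
    gesture-space heatmap."""
    grid = [[0] * bins for _ in range(bins)]
    for x, y in positions:
        # Clamp so positions exactly at 1.0 fall into the last cell.
        i = min(int(y * bins), bins - 1)
        j = min(int(x * bins), bins - 1)
        grid[i][j] += 1
    return grid

# Toy hand-position track: the speaker's hand mostly dwells near the center.
track = [(0.5, 0.5), (0.52, 0.48), (0.51, 0.5), (0.9, 0.1)]
heat = gesture_heatmap(track, bins=4)
```

A real system would accumulate such grids per video segment and render them as a color-mapped overlay, but the counting step is the core of the spatial-pattern view.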


Subject(s)
Computer Graphics , Gestures , Speech , Hand , Movement
4.
IEEE Trans Vis Comput Graph ; 27(7): 3168-3181, 2021 07.
Article in English | MEDLINE | ID: mdl-31902765

ABSTRACT

Analyzing students' emotions from classroom videos can help both teachers and parents quickly gauge students' engagement in class. The availability of high-definition cameras creates opportunities to record class scenes. However, watching videos is time-consuming, and it is challenging to gain a quick overview of the emotion distribution and to find abnormal emotions. In this article, we propose EmotionCues, a visual analytics system that integrates emotion recognition algorithms with visualizations to analyze classroom videos from the perspectives of emotion summary and detailed analysis. It consists of three coordinated views: a summary view depicting the overall emotions and their dynamic evolution, a character view presenting the detailed emotion status of an individual, and a video view enhancing the video analysis with further details. Considering the possible inaccuracy of emotion recognition, we also explore several factors affecting the emotion analysis, such as face size and occlusion, which provide hints for inferring possible inaccuracies and their causes. Two use cases and interviews with end users and domain experts show that the proposed system is useful and effective for analyzing emotions in classroom videos.
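The abstract notes that face size is one hint of recognition inaccuracy. As a hypothetical sketch (the function, weighting scheme, and threshold below are assumptions for illustration, not EmotionCues' method), per-face emotion labels could be aggregated into class-level proportions while down-weighting small faces whose predictions are likely less reliable:

```python
def weighted_emotion_summary(detections, min_face=24):
    """Aggregate per-face emotion labels into class-level proportions,
    down-weighting faces smaller than `min_face` pixels, whose
    recognition results are likely less reliable.
    `detections` holds (emotion_label, face_size_px) pairs."""
    totals = {}
    weight_sum = 0.0
    for label, size in detections:
        w = min(size / min_face, 1.0)  # small faces get weight < 1
        totals[label] = totals.get(label, 0.0) + w
        weight_sum += w
    return {k: v / weight_sum for k, v in totals.items()}

# One frame: two clearly visible faces and one small, distant face.
frames = [("happy", 48), ("happy", 12), ("neutral", 48)]
summary = weighted_emotion_summary(frames)
```

Surfacing the weights alongside the summary is what lets users judge how much to trust the aggregated emotions.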


Subject(s)
Emotions/classification , Facial Expression , Image Processing, Computer-Assisted/methods , Schools , Video Recording/methods , Algorithms , Child , Humans , Students
5.
JMIR Res Protoc ; 9(6): e17756, 2020 Jun 12.
Article in English | MEDLINE | ID: mdl-32530436

ABSTRACT

BACKGROUND: Children have high levels of curiosity and eagerness to explore. This makes them more vulnerable to danger and hazards, and they thus have a higher risk of injury. Safety education, such as teaching safety rules and tips, is vital to prevent children from injuries. Although game-based approaches have the potential to capture children's attention and sustain their interest in learning, whether these new instructional approaches are more effective than traditional approaches in delivering safety messages to children remains uncertain. OBJECTIVE: The aim of this study is to test the effectiveness of a game-based intervention in promoting safety knowledge and behaviors among Hong Kong school children in Grades 4-6. It will also examine the potential effect of the game-based intervention on these children's functioning and psychosocial difficulties. METHODS: This study comprises the development of a city-based role-playing game, Safe City, in which players act as safety inspectors to prevent dangerous situations and promote safety behavior in a virtual city environment. Usability and acceptability tests will be conducted with children in Grades 4-6, who will trial the gameplay on a mobile phone; adjustments will be made based on their feedback. A 4-week randomized controlled trial with children studying in Grades 4-6 in Hong Kong elementary schools will be conducted to assess the effectiveness of the Safe City game-based intervention. In this trial, 504 children will play Safe City, and 504 children will receive traditional instructional materials (electronic and printed safety information). The evaluation will use both child self-report and parent proxy-report data. 
Specifically, child safety knowledge and behaviors will be assessed by a questionnaire with items on knowledge and behaviors, respectively, for home safety, road safety, and sport-related safety; child functioning will be assessed by the PedsQL Generic Core Scales; and psychosocial difficulties will be assessed by the Strengths and Difficulties Questionnaire. These questionnaires will be administered at 3 time points: before the intervention and at 1 and 3 months after it. Game usage statistics will also be reviewed. RESULTS: This project was funded in September 2019. The design and development of the Safe City game are currently under way. Recruitment and data collection will begin in September 2020 and continue up to March 1, 2021. Full analysis will be conducted after the end of the data collection period. CONCLUSIONS: If the Safe City game is found to be an effective tool to deliver safety education, it could be used to promote safety in children in the community and upgraded to incorporate more health-related topics to support education and empowerment for the larger public. TRIAL REGISTRATION: ClinicalTrials.gov NCT04096196; https://clinicaltrials.gov/ct2/show/NCT04096196. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/17756.

6.
IEEE Trans Vis Comput Graph ; 26(1): 579-589, 2020 01.
Article in English | MEDLINE | ID: mdl-31425087

ABSTRACT

Production planning in the manufacturing industry is crucial for fully utilizing factory resources (e.g., machines, raw materials, and workers) and reducing costs. With the advent of Industry 4.0, large volumes of data recording the status of factory resources have been collected and incorporated into production planning, which brings an unprecedented opportunity to understand, evaluate, and adjust complex production plans through a data-driven approach. However, developing a systematic analytics approach for production planning is challenging due to the large volume of production data, the complex dependencies between products, and unexpected changes in the market and the plant. Previous studies provide only summarized results and fail to show the details needed for comparative analysis of production plans; they also do not support rapid plan adjustment when unanticipated incidents occur. In this paper, we propose PlanningVis, a visual analytics system to support the exploration and comparison of production plans at three levels of detail: a plan overview presenting the overall difference between plans, a product view visualizing various properties of individual products, and a production detail view displaying product dependencies and the daily production details in related factories. By integrating an automatic planning algorithm with interactive visual exploration, PlanningVis can facilitate the efficient optimization of daily production planning as well as support a quick response to unanticipated incidents in manufacturing. Two case studies with real-world data and carefully designed interviews with domain experts demonstrate the effectiveness and usability of PlanningVis.
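The abstract mentions a plan overview presenting the overall difference between plans. As an illustrative sketch only (the function name and plan representation are assumptions, not PlanningVis' data model), the simplest such summary is a per-product delta between two plans:

```python
def plan_diff(plan_a, plan_b):
    """Per-product difference between two production plans, each a
    mapping of product -> planned units. Positive values mean plan_b
    schedules more units; this is the kind of summary a plan overview
    might show before drilling into product-level details."""
    products = set(plan_a) | set(plan_b)
    return {p: plan_b.get(p, 0) - plan_a.get(p, 0) for p in products}

# A baseline plan and a revision after an unanticipated incident.
base = {"widget": 100, "gear": 50}
revised = {"widget": 80, "gear": 50, "bolt": 30}
delta = plan_diff(base, revised)
```

A real system would extend this with product dependencies and per-factory daily breakdowns, but the pairwise diff is the starting point for comparison.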

7.
IEEE Trans Vis Comput Graph ; 26(3): 1622-1636, 2020 03.
Article in English | MEDLINE | ID: mdl-30281461

ABSTRACT

Research on massive open online course (MOOC) data analytics has grown rapidly with the development of MOOCs. MOOC data contain not only learner profiles and learning outcomes, but also sequential information about when and which type of learning activity each learner performs, such as reviewing a lecture video before undertaking an assignment. Learning sequence analytics can help reveal the correlations between learning sequences and performance, which in turn characterize different learner groups. However, few works have explored sequences of learning activities, which have mostly been treated as aggregated events. We introduce ViSeq, a visual analytics system that recovers this sequential information, visualizes the learning sequences of different learner groups, and helps explain the reasons behind learning behaviors. The system lets users explore learning sequences at multiple levels of granularity. ViSeq incorporates four linked views: the projection view to identify learner groups, the pattern view to exhibit overall sequential patterns within a selected group, the sequence view to illustrate the transitions between consecutive events, and the individual view with an augmented sequence chain to compare selected personal learning sequences. Case studies and expert interviews were conducted to evaluate the system.
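The raw data behind a view of transitions between consecutive events can be sketched simply; the function and event labels below are hypothetical illustrations, not ViSeq's implementation. Counting adjacent pairs in each learner's event log yields the transition frequencies such a view would display:

```python
from collections import Counter, defaultdict

def transition_counts(sequence):
    """Count transitions between consecutive learning events in one
    learner's log, e.g. how often a video view is followed by an
    assignment attempt."""
    counts = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return counts

# One learner's event log, in chronological order.
log = ["video", "assignment", "video", "assignment", "forum"]
t = transition_counts(log)
```

Normalizing each row of these counts gives transition probabilities, which can then be aggregated across a learner group to surface its dominant sequential patterns.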

8.
IEEE Trans Image Process ; 12(3): 341-55, 2003.
Article in English | MEDLINE | ID: mdl-18237913

ABSTRACT

This paper presents new approaches to characterizing and segmenting the content of video. These approaches are based on the pattern analysis of spatio-temporal slices. While traditional approaches to motion sequence analysis tend to formulate computational methodologies on two or three adjacent frames, spatio-temporal slices provide rich visual patterns over a larger temporal scale. We first describe a motion computation method based on a structure tensor formulation. This method encodes the visual patterns of spatio-temporal slices in a tensor histogram that, on one hand, characterizes how motion changes over time and, on the other, describes the motion trajectories of different moving objects. By analyzing the tensor histogram of an image sequence, we can temporally segment the sequence into several motion-coherent subunits and spatially segment it into various motion layers. The temporal segmentation of image sequences facilitates the motion annotation and content representation of a video, while the spatial decomposition leads to an effective way of reconstructing background panoramic images and computing foreground objects.
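The structure tensor of a spatio-temporal slice can be sketched in a few lines; this is a generic textbook formulation under assumed conventions (rows = time, columns = space, central-difference gradients), not the paper's exact method. The 2x2 tensor accumulates products of spatial and temporal gradients, and the orientation of its dominant structure in the (x, t) plane encodes local motion:

```python
import math

def structure_tensor(slice2d):
    """Accumulate the 2x2 structure tensor [[Jxx, Jxt], [Jxt, Jtt]]
    of a spatio-temporal slice (rows = time, cols = space) from
    central-difference gradients over interior pixels."""
    jxx = jxt = jtt = 0.0
    T, X = len(slice2d), len(slice2d[0])
    for t in range(1, T - 1):
        for x in range(1, X - 1):
            ix = (slice2d[t][x + 1] - slice2d[t][x - 1]) / 2.0  # spatial gradient
            it = (slice2d[t + 1][x] - slice2d[t - 1][x]) / 2.0  # temporal gradient
            jxx += ix * ix
            jxt += ix * it
            jtt += it * it
    return jxx, jxt, jtt

def dominant_angle(jxx, jxt, jtt):
    # Orientation of the dominant local structure in the (x, t) plane;
    # for a uniformly translating pattern this angle encodes the speed.
    return 0.5 * math.atan2(2.0 * jxt, jxx - jtt)

# A ramp pattern drifting one pixel per frame: I(t, x) = x - t.
slice2d = [[x - t for x in range(8)] for t in range(8)]
jxx, jxt, jtt = structure_tensor(slice2d)
```

For this drifting ramp the gradients are constant (Ix = 1, It = -1), so the tensor is degenerate along one orientation; binning such per-window orientations over time is what produces the tensor histogram the abstract describes.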
