1.
Psychometrika ; 88(2): 487-526, 2023 06.
Article in English | MEDLINE | ID: mdl-36877429

ABSTRACT

Multidimensional item response theory (MIRT) is a statistical test theory that estimates multiple latent skills of learners from their responses in a test. Both compensatory and non-compensatory models have been proposed for MIRT: the former assumes that each skill can complement other skills, whereas the latter assumes it cannot. The non-compensatory assumption is convincing in many tests that measure multiple skills; therefore, applying non-compensatory models to such data is crucial for unbiased and accurate estimation. Unlike in a single test, latent skills change over time in daily learning. To monitor the growth of skills, dynamical extensions of MIRT models have been investigated; however, most of them assume compensatory models, and no model has yet been proposed that can track continuous latent skill states under the non-compensatory assumption. To enable accurate skill tracing under the non-compensatory assumption, we propose a dynamical extension of non-compensatory MIRT models that combines a linear dynamical system with a non-compensatory model. This combination yields a complicated posterior over skills, which we approximate with a Gaussian distribution by minimizing the Kullback-Leibler divergence between the approximate posterior and the true posterior. The learning algorithm for the model parameters is derived through Monte Carlo expectation-maximization. Simulation studies verify that the proposed method reproduces latent skills accurately, whereas the dynamical compensatory model suffers from significant underestimation errors. Furthermore, experiments on a real data set demonstrate that our dynamical non-compensatory model infers practical skill traces and clarifies differences in skill tracing between non-compensatory and compensatory models.
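
To make the model structure above concrete, here is a minimal sketch (Python/NumPy, not the authors' implementation) of its two building blocks: a non-compensatory (conjunctive) item response function and a linear dynamical system for skill growth. The parameter names (a, b, F, Q) and their values are illustrative assumptions.

import numpy as np

def noncompensatory_prob(theta, a, b):
    # P(correct) = product of per-dimension logistic terms: every skill must be
    # sufficiently high, so a deficit in one dimension cannot be compensated.
    # theta : (K,) latent skills; a, b : (K,) discrimination and difficulty.
    return np.prod(1.0 / (1.0 + np.exp(-a * (theta - b))))

def skill_transition(theta_prev, F, Q, rng):
    # One step of the linear dynamical system: theta_t = F theta_{t-1} + w_t, w_t ~ N(0, Q).
    return F @ theta_prev + rng.multivariate_normal(np.zeros(len(theta_prev)), Q)

rng = np.random.default_rng(0)
K = 2                                       # two latent skills (assumed)
theta = np.zeros(K)                         # initial skills
F, Q = 0.95 * np.eye(K), 0.05 * np.eye(K)   # illustrative dynamics parameters
a, b = np.array([1.2, 0.8]), np.array([0.0, 0.5])

for t in range(5):                          # simulate skill growth and response probabilities
    theta = skill_transition(theta, F, Q, rng)
    print(t, noncompensatory_prob(theta, a, b))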


Subject(s)
Algorithms, Software, Psychometrics, Computer Simulation, Monte Carlo Method
2.
Front Robot AI ; 9: 903450, 2022.
Article in English | MEDLINE | ID: mdl-36246490

ABSTRACT

In this study, we propose HcVGH, a method that learns spatio-temporal categories by segmenting first-person-view (FPV) videos captured by mobile robots. Humans perceive continuous high-dimensional information by dividing and categorizing it into significant segments, and this unsupervised segmentation capability is considered important for mobile robots to learn spatial knowledge. HcVGH combines a convolutional variational autoencoder (cVAE) with our previously proposed HVGH, yielding a hierarchical Dirichlet process-variational autoencoder-Gaussian process-hidden semi-Markov model that comprises deep generative and statistical models. In the experiment, FPV videos of an agent in a simulated maze environment were used. FPV videos contain spatial information, and spatial knowledge can be learned by segmenting them. Using this FPV-video dataset, the segmentation performance of the proposed model was compared with that of previous models: HVGH and the hierarchical recurrent state space model. The average segmentation F-measure achieved by HcVGH was 0.77, outperforming the baseline methods. Furthermore, the experimental results showed that parameters representing the movability of the maze environment can be learned.
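
As a rough illustration of the cVAE front end described above, the following sketch (assuming PyTorch) compresses each FPV frame into a low-dimensional latent vector; the HVGH back end would then segment the resulting latent sequence. The layer sizes and the 64x64 RGB frame shape are assumptions, not the paper's settings.

import torch
import torch.nn as nn

class ConvVAEEncoder(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
        )
        self.mu = nn.Linear(32 * 16 * 16, latent_dim)
        self.logvar = nn.Linear(32 * 16 * 16, latent_dim)

    def forward(self, frames):
        h = self.conv(frames)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

# Encode a sequence of T frames into a T x latent_dim latent sequence; the
# segmenter (HVGH in the paper) would then divide it into spatial categories.
frames = torch.rand(100, 3, 64, 64)          # 100 synthetic 64x64 RGB frames
z, mu, logvar = ConvVAEEncoder()(frames)
print(z.shape)                               # torch.Size([100, 8])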

4.
Front Robot AI ; 6: 115, 2019.
Article in English | MEDLINE | ID: mdl-33501130

ABSTRACT

Humans perceive continuous high-dimensional information by dividing it into meaningful segments, such as words and units of motion. We believe that such unsupervised segmentation is also important for robots to learn topics such as language and motion. To this end, we previously proposed the hierarchical Dirichlet process-Gaussian process-hidden semi-Markov model (HDP-GP-HSMM). However, an important drawback of this model is that it cannot segment high-dimensional time-series data directly; low-dimensional features must be extracted in advance. Segmentation depends largely on the design of these features, and designing effective features is difficult, especially for high-dimensional data. To overcome this problem, this study proposes the hierarchical Dirichlet process-variational autoencoder-Gaussian process-hidden semi-Markov model (HVGH). The parameters of HVGH are estimated through a mutual learning loop between the variational autoencoder and our previously proposed HDP-GP-HSMM. Hence, HVGH can extract features from high-dimensional time-series data while simultaneously dividing them into segments in an unsupervised manner. In an experiment, we used various motion-capture data to demonstrate that the proposed model estimates the correct number of classes and produces more accurate segments than baseline methods. Moreover, we show that the proposed method learns a latent space suitable for segmentation.
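
The mutual learning loop can be pictured with the following toy sketch, in which a linear projection stands in for the VAE encoder and a crude quantile-based labeler stands in for the HDP-GP-HSMM segmenter; only the alternating structure reflects the abstract, and every function name here is hypothetical.

import numpy as np

def encode(x, proj):
    # Toy "VAE encoder": a linear projection to a low-dimensional latent space.
    return x @ proj

def segment_latents(z, n_classes=3):
    # Toy "segmenter": label each time step by binning the first latent dimension;
    # a real HDP-GP-HSMM would infer segment lengths and classes jointly.
    bins = np.quantile(z[:, 0], np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(z[:, 0], bins)

def refit_encoder(x, z, labels):
    # Toy "VAE update": re-fit the projection so each frame maps toward the mean
    # latent of its segment class (a crude analogue of the GP emission prior).
    targets = np.zeros_like(z)
    for c in np.unique(labels):
        targets[labels == c] = z[labels == c].mean(axis=0)
    proj, *_ = np.linalg.lstsq(x, targets, rcond=None)
    return proj

rng = np.random.default_rng(0)
x = rng.standard_normal((200, 10))           # synthetic high-dimensional time series
proj = rng.standard_normal((10, 2))

for it in range(3):                          # mutual learning loop
    z = encode(x, proj)                      # 1) extract low-dimensional features
    labels = segment_latents(z)              # 2) segment the latent sequence
    proj = refit_encoder(x, z, labels)       # 3) update the encoder using the segments
    print(it, np.bincount(labels))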

5.
Front Neurorobot ; 11: 67, 2017.
Article in English | MEDLINE | ID: mdl-29311889

ABSTRACT

Humans divide perceived continuous information into segments to facilitate recognition. For example, humans segment speech waves into recognizable morphemes and, analogously, segment continuous motions into recognizable unit actions. People can divide continuous information into segments without explicit segmentation points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time-series data into segments in an unsupervised manner. The proposed method is a generative model based on the hidden semi-Markov model (HSMM) whose emission distributions are Gaussian processes (GPs): continuous time-series data are generated by concatenating segments drawn from the GPs. Segmentation is achieved by estimating the model's parameters, including the lengths and classes of the segments, using forward filtering-backward sampling. In an experiment on the CMU motion capture dataset, we tested GP-HSMM with motion-capture data containing simple exercise motions, and the results showed that GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion-capture data, which is more complex than the exercise data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, outperforming other methods.
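
The generative process described above (an HSMM draws a class and a duration for each segment, a class-specific GP draws the segment's trajectory, and the segments are concatenated into one time series) can be sketched as follows; the RBF kernel, transition matrix, and Poisson duration distribution are illustrative assumptions rather than the paper's exact choices.

import numpy as np

def rbf_kernel(t, length=3.0, var=1.0):
    # Squared-exponential covariance between time points of one segment.
    d = t[:, None] - t[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
n_classes = 3
trans = np.full((n_classes, n_classes), 1.0 / n_classes)   # class transition probabilities
means = [0.0, 2.0, -2.0]                                    # class-specific GP mean levels

series, c = [], 0
for _ in range(6):                                          # generate 6 segments
    c = rng.choice(n_classes, p=trans[c])                   # next segment class (HSMM)
    length = rng.poisson(15) + 5                            # segment duration (HSMM)
    t = np.arange(length, dtype=float)
    cov = rbf_kernel(t) + 1e-6 * np.eye(length)             # jitter for numerical stability
    seg = rng.multivariate_normal(means[c] * np.ones(length), cov)  # GP emission
    series.append(seg)

data = np.concatenate(series)                               # observed continuous time series
print(data.shape)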
