ABSTRACT
Procedural knowledge space theory (PKST) was recently proposed by Stefanutti (British Journal of Mathematical and Statistical Psychology, 72(2), 185-218, 2019) for the assessment of human problem-solving skills. In PKST, the problem space formally represents how a family of problems can be solved, and the knowledge space represents the skills required for solving those problems. The Markov solution process model (MSPM) by Stefanutti et al. (Journal of Mathematical Psychology, 103, 102552, 2021) provides a probabilistic framework for modeling the solution process of a task via PKST. In this article, three adaptive procedures for the assessment of problem-solving skills are proposed that are based on the MSPM. Besides execution correctness, they also consider the sequence of moves observed in the solution of a problem, with the aim of increasing the efficiency and accuracy of assessments. The three procedures differ from one another in the assumption underlying the solution process, named pre-planning, interim-planning, and mixed-planning. In two simulation studies, the three adaptive procedures were compared to one another and to the continuous Markov procedure (CMP) by Doignon and Falmagne (1988a), which accounts for dichotomous correct/wrong answers only. Results show that all the MSPM-based adaptive procedures outperform the CMP in both accuracy and efficiency. These results were obtained in the framework of the Tower of London test, but the procedures can be applied to all psychological and neuropsychological tests that have a problem space. Thus, the adaptive procedures presented in this paper pave the way to adaptive assessment in the area of neuropsychological tests.
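For readers unfamiliar with adaptive assessment over a knowledge structure, a minimal sketch of a generic half-split questioning loop may help. This illustrates only the general idea behind dichotomous procedures such as the CMP, not the MSPM-based procedures of the paper; the function name, the uniform prior, and the error rates `beta` (careless error) and `eta` (lucky guess) are illustrative assumptions.

```python
import numpy as np

def adaptive_assessment(states, items, answer, beta=0.1, eta=0.1, steps=5):
    """Generic adaptive assessment on a knowledge structure.

    Keeps a plausibility distribution over the states; each step asks the
    unasked item whose mastery probability is closest to 1/2 (a half-split
    rule) and updates the distribution given the correct/incorrect answer,
    allowing for careless errors (beta) and lucky guesses (eta).
    """
    pi = np.full(len(states), 1.0 / len(states))
    asked = set()
    for _ in range(min(steps, len(items))):
        # mastery probability of each item not asked yet
        masters = {q: sum(p for K, p in zip(states, pi) if q in K)
                   for q in items if q not in asked}
        q = min(masters, key=lambda x: abs(masters[x] - 0.5))
        asked.add(q)
        x = answer(q)  # True iff the observed response is correct
        like = np.array([(1 - beta if x else beta) if q in K
                         else (eta if x else 1 - eta) for K in states])
        pi = pi * like
        pi = pi / pi.sum()
    return states[int(np.argmax(pi))], pi
```

On a small chain of states, querying a simulated examinee whose true state is {a, b} recovers that state after the three questions.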
Subjects
Algorithms, Problem Solving, Humans, Mathematics, Computer Simulation, Markov Chains, Neuropsychological Tests
ABSTRACT
In practical applications of knowledge space theory, knowledge states can be conceived as partially ordered clusters of individuals. Existing extensions of the theory to polytomous data lack methods for building "polytomous" structures. To this aim, an adaptation of the k-median clustering algorithm is proposed. It is an extension of k-modes to ordinal data in which the Hamming distance is replaced by the Manhattan distance, and the central tendency measure is the median, rather than the mode. The algorithm is tested in a series of simulation studies and in an application to empirical data. Results show that there are theoretical and practical reasons for preferring the k-median to the k-modes algorithm, whenever the responses to the items are measured on an ordinal scale. This is because the Manhattan distance is sensitive to the order on the levels, while the Hamming distance is not. Overall, k-median seems to be a promising data-driven procedure for building polytomous structures.
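A minimal sketch of the k-median idea described above, assuming responses coded as integers on an ordinal scale; the function name and defaults are illustrative, not taken from the paper.

```python
import numpy as np

def k_median(X, k, n_iter=50, seed=0):
    """Cluster ordinal response patterns with the Manhattan distance.

    X: (n_subjects, n_items) integer array of ordinal response levels.
    Returns (centers, labels). Centers are item-wise medians, so they
    respect the order on the levels (unlike the mode used by k-modes).
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Manhattan distance of every pattern to every center
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([
            np.median(X[labels == j], axis=0) if np.any(labels == j)
            else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```

Because the item-wise median minimizes the sum of absolute deviations, each center update cannot increase the within-cluster Manhattan cost, which is the reason for pairing this distance with this central tendency measure.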
Subjects
Algorithms, Cluster Analysis, Humans, Knowledge
ABSTRACT
If automatic item generation is used for generating test items, the question of how the equivalence among different instances may be tested is fundamental to ensure an accurate assessment. In the present research, the question was dealt with by using the knowledge space theory framework. Two different ways of considering the equivalence among instances are proposed: the former is at a deterministic level and requires that all the instances of an item template belong to exactly the same knowledge states; the latter adds a probabilistic level to the deterministic one. The former type of equivalence can be modeled by using the BLIM with a knowledge structure assuming equally informative instances; the latter can be modeled by a constrained BLIM, which assumes equality constraints among the error parameters of the equivalent instances. An approach is proposed for testing the equivalence among instances, based on a series of model comparisons. A simulation study and an empirical application show the viability of the approach.
Subjects
Electronic Data Processing/standards, Knowledge Bases, Statistical Models, Probability, Evaluation Studies as Topic, Humans, Research
ABSTRACT
The methodologies for the construction of a knowledge structure mainly comprise the querying of experts, skill maps, and data-driven approaches. The last of these is of growing interest in recent literature. In this paper, an iterative procedure for building a skill map from a set of data is introduced. The procedure is based on the minimization of the distance between the knowledge structure delineated by a given skill map and the data. The accuracy of the proposed method is tested through a number of simulation studies in which the amount of noise in the data is manipulated, as well as the kind of structure to be reconstructed. Results show that the procedure is accurate and that its performance tends to remain sufficiently stable even with high error rates. The procedure is compared to two existing methodologies for deriving knowledge structures from a set of data. The use of the corrected Akaike Information Criterion (AICc) as a stopping criterion of the iterative reconstruction procedure is tested against the app criterion introduced by Schrepp. Moreover, two empirical applications on clinical data are reported, and their results show the applicability of the procedure.
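As an illustration of the objects involved, the sketch below (assuming the standard disjunctive skill-map model; all names are hypothetical, not the paper's code) delineates the knowledge structure of a skill map and computes a symmetric-difference distance of observed data from that structure, the kind of quantity such a procedure minimizes.

```python
from itertools import combinations

def delineate(skill_map, skills):
    """Knowledge structure delineated by a skill map (disjunctive model).

    skill_map maps each item to the set of skills each of which suffices
    for solving it; the state delineated by a subset S of skills is the
    set of items whose skill sets intersect S.
    """
    states = set()
    for r in range(len(skills) + 1):
        for combo in combinations(skills, r):
            s = set(combo)
            states.add(frozenset(q for q, t in skill_map.items() if t & s))
    return states

def distance_to_data(structure, patterns):
    """Mean minimum symmetric-difference distance of the observed
    response patterns (sets of solved items) from the structure."""
    def d(pattern):
        return min(len(pattern ^ set(k)) for k in structure)
    return sum(d(p) for p in patterns) / len(patterns)
```

For instance, the skill map {a: {1}, b: {1, 2}, c: {2}} over skills {1, 2} delineates a four-state structure, and a data set lying close to those states yields a small distance.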
Subjects
Knowledge, Mental Processes/physiology, Neuropsychological Tests, Algorithms, Computer Simulation, Humans, Psychological Models
ABSTRACT
Assessing executive functions in individuals with disorders or clinical conditions can be challenging, as they may lack the abilities needed for conventional test formats. The use of more personalized test versions, such as adaptive assessments, might be helpful in evaluating individuals with specific needs. This paper introduces PsycAssist, a web-based artificial intelligence system designed for neuropsychological adaptive assessment and training. PsycAssist is a highly flexible and scalable system based on procedural knowledge space theory and may potentially be used with many types of tests. We present the architecture and adaptive assessment engine of PsycAssist and the two currently available tests: Adap-ToL, an adaptive version of the Tower of London-like test to assess planning skills, and MatriKS, a Raven-like test to evaluate fluid intelligence. Finally, we describe the results of an investigation of the usability of Adap-ToL and MatriKS: the evaluators perceived these tools as appropriate and well-suited for their intended purposes, and the test-takers perceived the assessment as a positive experience. To sum up, PsycAssist represents an innovative and promising tool for tailoring evaluation and training to the specific characteristics of the individual, and it is well suited to clinical practice.
ABSTRACT
The present work aims at showing that the identification problems (here meant as both issues of empirical indistinguishability and unidentifiability) of some item response theory (IRT) models are related to the notion of identifiability in knowledge space theory. Specifically, the identification problems of the 3- and 4-parameter models are related to the more general issues of forward- and backward-gradedness in all items of the power set, which is the knowledge structure associated with IRT models under the assumption of local independence. As a consequence, the identifiability problem of a 4-parameter model splits into two parts: the first is the result of a trade-off between the left-side added parameters and the remainder of the item response function, e.g., a 2-parameter model; the second is the already well-known identifiability issue of the 2-parameter model itself. Application of the results to the logistic case appears to provide both a confirmation and a generalization of the current findings in the literature for both fixed- and random-effects IRT logistic models.
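For reference, the 4-parameter logistic item response function under discussion has the standard form, with discrimination $a_i$, difficulty $b_i$, lower asymptote $c_i$ (lucky guessing), and upper asymptote $d_i$ (careless slipping); setting $c_i = 0$ and $d_i = 1$ recovers the 2-parameter logistic core mentioned in the trade-off:

```latex
P(X_i = 1 \mid \theta) \;=\; c_i + (d_i - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}}
```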
Subjects
Psychometrics, Humans, Psychometrics/methods, Statistical Models, Logistic Models, Knowledge
ABSTRACT
Recent literature has pointed out that the basic local independence model (BLIM) when applied to some specific instances of knowledge structures presents identifiability issues. Furthermore, it has been shown that for such instances the model presents a stronger form of unidentifiability named empirical indistinguishability, which leads to the fact that the existence of certain knowledge states in such structures cannot be empirically tested. In this article the notion of indistinguishability is extended to skill maps and, more generally, to the competence-based knowledge space theory. Theoretical results are provided showing that skill maps can be empirically indistinguishable from one another. The most relevant consequence of this is that for some skills there is no empirical evidence to establish their existence. This result is strictly related to the type of probabilistic model investigated, which is essentially the BLIM. Alternative models may exist or can be developed in knowledge space theory for which this indistinguishability problem disappears.
Subjects
Knowledge, Statistical Models
ABSTRACT
The Polytomous Local Independence Model (PoLIM) by Stefanutti, de Chiusole, Anselmi, and Spoto is an extension of the Basic Local Independence Model (BLIM) to accommodate polytomous items. The BLIM, a model for analyzing responses to binary items, is based on knowledge space theory, a framework developed by cognitive scientists and mathematical psychologists for modeling human knowledge acquisition and representation. The purpose of this commentary is to show that the PoLIM is simply a paraphrase of a DINA model in cognitive diagnosis for polytomous items. Specifically, the BLIM is shown to be equivalent to the DINA model when the BLIM items are conceived as binary single-attribute items, each with a distinct attribute; thus, the PoLIM is equivalent to the DINA model for polytomous single-attribute items, each with a distinct attribute.
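The claimed equivalence is easy to see at the level of response-pattern probabilities. The sketch below (illustrative code, not taken from the commentary) computes the probability of a pattern under the BLIM and under a DINA model; identifying knowledge states with attribute profiles, careless errors with slips, and lucky guesses with guesses, the two coincide when every item requires a single, distinct attribute.

```python
def blim_pattern_prob(pattern, states, pi, beta, eta):
    """P(pattern) under the BLIM: sum over knowledge states of the state
    probability times item-wise careless-error/lucky-guess terms."""
    p = 0.0
    for K, pK in zip(states, pi):
        lik = 1.0
        for q in sorted(pattern):
            if q in K:  # item mastered in state K
                lik *= (1 - beta[q]) if pattern[q] else beta[q]
            else:       # item not mastered: only lucky guesses succeed
                lik *= eta[q] if pattern[q] else (1 - eta[q])
        p += pK * lik
    return p

def dina_pattern_prob(pattern, profiles, pi, slip, guess, q_matrix):
    """P(pattern) under DINA: an examinee masters item j iff the profile
    contains every attribute in row j of the Q-matrix."""
    p = 0.0
    for alpha, pA in zip(profiles, pi):
        lik = 1.0
        for j, req in q_matrix.items():
            pc = (1 - slip[j]) if req <= alpha else guess[j]
            lik *= pc if pattern[j] else (1 - pc)
        p += pA * lik
    return p
```

With a single-attribute Q-matrix assigning a distinct attribute to each item, both functions return the same value for every pattern.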
Subjects
Knowledge, Statistical Models, Humans, Psychometrics
ABSTRACT
In self-determination theory (SDT), multiple conceptual regulations of motivation are posited. These forms of motivation are viewed mostly qualitatively by SDT researchers, and there are situations in which combinations of these regulations occur. In this article, instead of the commonly used numerical approach, this is modeled more versatilely by sets and relations. We discuss discrete mathematical models from the theory of knowledge spaces for the combinatorial conceptualization of motivation. In doing so, we constructively add insight into a dispute in the SDT literature over the unidimensionality vs. multidimensionality of motivation. The motivation order derived in our example, albeit doubly branched, was approximately a chain, and we could quantify the combinatorial details of that approximation. Essentially, two combinatorial dimensions reducible to one were observed, which could be studied in other, more popular scales as well. This approach allows us to define the distinct, including even equally informative, gradations of any regulation type. Thus, we may identify specific forms of motivation that may otherwise be difficult to measure or not be separable empirically. This could help to resolve possible inconsistencies that may arise in applications of the theory in distinguishing the different regulation types. How to obtain the motivation structures in practice is demonstrated by relational data mining. The technique applied is inductive item tree analysis, an established method of Boolean analysis of questionnaires. For a data set on learning motivation, the motivation spaces and co-occurrence relations for the gradations of the basic regulation types are extracted, thus enumerating their potential subforms. In that empirical application, the underlying models were computed within each of the intrinsic, identified, introjected, and external regulations, in autonomous and controlled motivations, and in the entire motivation domain.
In future studies, the approach of this article could be employed to develop adaptive assessment and training procedures in SDT contexts, and for dynamical extensions of the theory in which motivational behavior evolves over time.
ABSTRACT
Approximately counting and sampling knowledge states from a knowledge space is a problem that is of interest for both applied and theoretical reasons. However, many knowledge spaces used in practice are far too large for standard statistical counting and estimation techniques to be useful. Thus, in this work we use an alternative technique for counting and sampling knowledge states from a knowledge space. This technique is based on a procedure variously known as subset simulation, the Holmes-Diaconis-Ross method, or multilevel splitting. We make extensive use of Markov chain Monte Carlo methods and, in particular, Gibbs sampling, and we analyse and test the accuracy of our results in numerical experiments.
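For context on why approximate counting is needed: a knowledge space is closed under union, so its states are exactly the unions of subfamilies of a basis. The exact enumeration sketched below (an illustration of the problem's scale, not the subset-simulation technique of the paper) produces a state set whose size can grow exponentially in the basis size, which is precisely what the Holmes-Diaconis-Ross style estimators avoid.

```python
def span_of_basis(basis):
    """All unions of subfamilies of a family of sets: the knowledge
    space spanned by a basis. Exact, but the number of states can be
    exponential in len(basis), hence the need for approximate counting
    and sampling on realistically sized spaces."""
    states = {frozenset()}
    for b in basis:
        # adding basis element b doubles the candidates: with and without b
        states |= {s | b for s in states}
    return states
```

For example, the basis {1}, {2}, {1, 3} spans a space of six states, including the empty state.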
Subjects
Algorithms, Computer Simulation, Markov Chains, Monte Carlo Method
ABSTRACT
In recent years a number of articles have focused on the identifiability of the basic local independence model. The identifiability issue usually concerns two model parameter sets predicting an identical probability distribution on the response patterns. Both parameter sets are applied to the same knowledge structure. However, nothing is known about cases where different knowledge structures predict the same probability distribution. This situation is referred to as 'empirical indistinguishability' between two structures and is the main subject of the present paper. Empirical indistinguishability is a stronger form of unidentifiability, which involves not only the parameters, but also the structural and combinatorial properties of the model. In particular, as far as knowledge structures are concerned, a consequence of empirical indistinguishability is that the existence of certain knowledge states cannot be empirically established. Most importantly, it is shown that model identifiability cannot guarantee that a certain knowledge structure is empirically distinguishable from others. The theoretical findings are exemplified in a number of different empirical scenarios.
Subjects
Knowledge, Probability
ABSTRACT
A probabilistic framework for the polytomous extension of knowledge space theory (KST) is proposed. It consists of a probabilistic model, called the polytomous local independence model, developed as a generalization of the basic local independence model. Algorithms for computing maximum likelihood (ML) and minimum discrepancy (MD) estimates of the model parameters are derived and tested in a simulation study. Results show that the algorithms differ in their capability of recovering the true parameter values. The ML algorithm correctly recovers the true values, regardless of the manipulated variables; this is not entirely true for the MD algorithm. Finally, the model is applied to a real polytomous data set collected in the area of psychological assessment. Results show that it can be successfully applied in practice, paving the way to a number of applications of KST outside the area of knowledge and learning assessment.
Subjects
Algorithms, Statistical Models, Psychometrics, Computer Simulation, Knowledge
ABSTRACT
Knowledge space theory (KST) structures are introduced within item response theory (IRT) as a possible way to model local dependence between items. The aim of this paper is threefold: firstly, to generalize the usual characterization of local independence without introducing new parameters; secondly, to merge the information provided by the IRT and KST perspectives; and thirdly, to contribute to the literature that bridges continuous and discrete theories of assessment. In detail, connections are established between the KST simple learning model (SLM) and the IRT General Graded Response Model, and between the KST Basic Local Independence Model and IRT models in general. As a consequence, local independence is generalized to account for the existence of prerequisite relations between the items, IRT models become a subset of KST models, IRT likelihood functions can be generalized to broader families, and the issues of local dependence and dimensionality are partially disentangled. Models are discussed for both dichotomous and polytomous items and conclusions are drawn on their interpretation. Considerations on possible consequences in terms of model identifiability and estimation procedures are also provided.
Subjects
Knowledge, Statistical Models, Psychometrics, Algorithms, Humans
ABSTRACT
The clinical assessment of mental disorders can be a time-consuming and error-prone procedure, consisting of a sequence of diagnostic hypothesis formulation and testing aimed at restricting the set of plausible diagnoses for the patient. In this article, we propose a novel computerized system for the adaptive testing of psychological disorders. The proposed system combines a mathematical representation of psychological disorders, known as the "formal psychological assessment," with an algorithm designed for the adaptive assessment of an individual's knowledge. The assessment algorithm is extended and adapted to the new application domain. Testing the system on a real sample of 4,324 healthy individuals, screened for obsessive-compulsive disorder, we demonstrate the system's ability to support clinical testing, both by identifying the correct critical areas for each individual and by reducing the number of posed questions with respect to a standard written questionnaire.
ABSTRACT
In knowledge space theory, existing adaptive assessment procedures can only be applied when suitable estimates of their parameters are available. In this paper, an iterative procedure is proposed that updates its parameters as the number of assessments increases. The first assessments are run using parameter values that favor accuracy over efficiency. Subsequent assessments are run using new parameter values estimated from the incomplete response patterns of previous assessments. Parameter estimation is carried out through a new probabilistic model for missing-at-random data. Two simulation studies show that, as the number of assessments increases, the performance of the proposed procedure approaches that of gold standards.
Subjects
Educational Measurement, Knowledge, Adolescent, Child, Humans, Likelihood Functions, Theoretical Models, Psychometrics
ABSTRACT
The basic local independence model (BLIM) is a probabilistic model for knowledge structures, characterized by the property that lucky guess and careless error parameters of the items are independent of the knowledge states of the subjects. When fitting the BLIM to empirical data, a good fit can be obtained even when the invariance assumption is violated. Therefore, statistical tests are needed for detecting violations of this specific assumption. This work provides an extension to theoretical results obtained by de Chiusole, Stefanutti, Anselmi, and Robusto (2013), showing that statistical tests based on the partitioning of the empirical data set into two (or more) groups are not adequate for testing the BLIM's invariance assumption. A simulation study confirms the theoretical results.