Results 1 - 20 of 28
1.
Sci Rep ; 14(1): 15698, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38977712

ABSTRACT

The visual attentional deficits in delirium are poorly characterized. Studies have highlighted neuro-anatomical abnormalities in the visual processing stream but have not quantified these abnormalities at a functional level. To identify these deficits, we undertook a multi-center eye-tracking study in which we recorded 210 sessions from 42 patients using a novel eye-tracking system built specifically for free viewing in the intensive care unit (ICU); each session lasted 10 min and was labeled with the delirium status of the patient using the Confusion Assessment Method for the ICU (CAM-ICU). To analyze these data, we formulate the task of visual attention as a hierarchical generative process that yields a probability distribution over the location of the next fixation. This distribution can then be compared with the patient's measured fixation to produce a correctness score, which is tallied and compared across delirium status. This analysis demonstrated that the visual processing system of patients suffering from delirium is functionally restricted to a statistically significant degree. This is the first study to explore the potential mechanisms underpinning visual inattention in delirium, and it suggests a new target for future research into a disease process that affects one in four hospitalized patients with severe short- and long-term consequences.
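
As an illustration of the scoring idea in this abstract, here is a minimal sketch (not the authors' implementation): given a model-predicted fixation-density map over the screen and a measured fixation, it computes a correctness score as the probability mass at locations no more likely than the observed one, so per-session tallies can be compared across delirium status. The grid size and random data are placeholders.

```python
import numpy as np

def fixation_correctness(pred_density: np.ndarray, fixation_rc: tuple) -> float:
    """Score one measured fixation against a predicted next-fixation distribution.

    pred_density : 2D array over screen locations (normalised internally).
    fixation_rc  : (row, col) index of the measured fixation.

    Returns the fraction of probability mass at locations no more likely than
    the observed one; 1.0 means the patient looked at the modal prediction.
    """
    p = pred_density / pred_density.sum()
    p_obs = p[fixation_rc]
    return float(p[p <= p_obs].sum())

# Hypothetical example: tally scores for one session, then compare the
# per-session tallies between delirium-positive and delirium-negative labels.
rng = np.random.default_rng(0)
density = rng.random((108, 192))  # placeholder density map over a 192x108 grid
scores = [fixation_correctness(density, (int(rng.integers(108)), int(rng.integers(192))))
          for _ in range(50)]
print(f"median correctness for this session: {np.median(scores):.2f}")
```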


Subjects
Attention , Delirium , Visual Perception , Humans , Delirium/physiopathology , Delirium/diagnosis , Male , Female , Attention/physiology , Aged , Prospective Studies , Visual Perception/physiology , Middle Aged , Eye-Tracking Technology , Aged, 80 and over , Eye Movements/physiology
2.
R Soc Open Sci ; 11(1): 221620, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38179073

ABSTRACT

The ear is well positioned to accommodate both brain and vital-signs monitoring via so-called hearable devices. Consequently, ear-based electroencephalography has recently garnered great interest. However, despite the considerable potential of hearable-based cardiac monitoring, the biophysics and characteristic cardiac rhythm of ear-based electrocardiography (ECG) are not yet well understood. To this end, we map the cardiac potential on the ear through volume conductor modelling and measurements on multiple subjects. In addition, to demonstrate the real-world feasibility of in-ear ECG, measurements are conducted throughout a prolonged simulated driving task. As a means of evaluation, the correspondence between the cardiac rhythms obtained via the ear-based and standard Lead I measurements, with respect to the shape and timing of the cardiac rhythm, is verified through three measures of similarity: the Pearson correlation and measures of amplitude and timing deviations. A high correspondence between the cardiac rhythms obtained via the ear-based and Lead I measurements is rigorously confirmed through agreement between simulation and measurement, while real-world feasibility is conclusively demonstrated through efficacious cardiac rhythm monitoring during prolonged driving. This work opens new avenues for seamless, hearable-based cardiac monitoring that extends beyond heart-rate detection to offer cardiac rhythm examination in the community.
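
A rough sketch of the kind of similarity evaluation described above. The Pearson correlation is named in the abstract; the peak-to-peak amplitude deviation and cross-correlation lag below are plausible stand-ins for the paper's unspecified amplitude and timing measures.

```python
import numpy as np
from scipy.signal import correlate

def compare_cardiac_rhythms(ear_ecg: np.ndarray, lead_i: np.ndarray, fs: float):
    """Illustrative similarity measures between ear-ECG and Lead I waveforms.

    Returns the Pearson correlation of the standardised traces, the relative
    peak-to-peak amplitude deviation, and the timing lag in seconds that
    maximises their cross-correlation.
    """
    ear = (ear_ecg - ear_ecg.mean()) / ear_ecg.std()
    ref = (lead_i - lead_i.mean()) / lead_i.std()
    pearson_r = float(np.corrcoef(ear, ref)[0, 1])
    amp_dev = float(abs(np.ptp(ear_ecg) - np.ptp(lead_i)) / np.ptp(lead_i))
    lag = (int(np.argmax(correlate(ear, ref, mode="full"))) - (len(ref) - 1)) / fs
    return pearson_r, amp_dev, lag
```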

3.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10535-10554, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37015127

ABSTRACT

Visible and infrared image fusion (VIF) has attracted considerable interest in recent years owing to its application in many tasks, such as object detection, object tracking, scene segmentation, and crowd counting. In addition to conventional VIF methods, an increasing number of deep learning-based VIF methods have been proposed in the last five years, including CNN-based, autoencoder-based, GAN-based, and transformer-based approaches. Deep learning-based methods have undoubtedly become the dominant methods for the VIF task. However, while much progress has been made, the field will benefit from a systematic review of these deep learning-based methods. In this paper, we present a comprehensive review of deep learning-based VIF methods. We discuss the motivation, taxonomy, recent development characteristics, datasets, and performance evaluation methods in detail. We also discuss future prospects of the VIF field. This paper can serve as a reference for VIF researchers and those interested in entering this fast-developing field.


Subjects
Deep Learning , Algorithms
4.
Sci Robot ; 7(65): eabm6010, 2022 04 06.
Article in English | MEDLINE | ID: mdl-35385294

ABSTRACT

Assistive robots have the potential to support people with disabilities in a variety of activities of daily living, such as dressing. People who have completely lost their upper-limb movement functionality may benefit from robot-assisted dressing, which involves complex deformable garment manipulation. Here, we report a dressing pipeline intended for these people and experimentally validate it on a medical training manikin. The pipeline is composed of the robot grasping a hospital gown hung on a rail, fully unfolding the gown, navigating around a bed, and lifting up the user's arms in sequence to finally dress the user. To automate this pipeline, we address two fundamental challenges: first, learning manipulation policies that bring the garment from an uncertain state into a configuration that facilitates robust dressing; second, transferring the deformable object manipulation policies learned in simulation to the real world to leverage cost-effective data generation. We tackle the first challenge by proposing an active pre-grasp manipulation approach that learns to isolate the garment grasping area before grasping. The approach combines prehensile and nonprehensile actions and thus alleviates grasping-only behavioral uncertainties. For the second challenge, we bridge the sim-to-real gap of deformable object policy transfer by bringing the simulator closer to real-world garment physics. A contrastive neural network is introduced to compare pairs of real and simulated garment observations, measure their physical similarity, and account for inaccuracies in the simulator parameters. The proposed method enables a dual-arm robot to put back-opening hospital gowns onto a medical manikin with a success rate of more than 90%.
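
A minimal sketch of the contrastive-similarity idea mentioned above, assuming flattened garment observations of a placeholder dimension; the architecture, dimensions, and training loss are illustrative and not the authors' network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GarmentSimilarityNet(nn.Module):
    """Toy contrastive encoder: maps a garment observation (e.g. a flattened
    depth image or point-cloud feature vector) to a unit-norm embedding; the
    cosine similarity of a (real, simulated) pair acts as the physical-similarity
    score used to select simulator parameters."""

    def __init__(self, obs_dim: int = 512, emb_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.encoder(obs), dim=-1)

def physical_similarity(net: GarmentSimilarityNet,
                        real_obs: torch.Tensor,
                        sim_obs: torch.Tensor) -> torch.Tensor:
    """Similarity in [-1, 1] between paired real and simulated observations."""
    return F.cosine_similarity(net(real_obs), net(sim_obs), dim=-1)

# Training would pull matched real/sim pairs together and push pairs produced
# with mismatched simulator physics apart, e.g. with an InfoNCE or margin loss.
```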


Subjects
Disabled Persons , Robotics , Activities of Daily Living , Bandages , Clothing , Humans , Policies
5.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 9561-9573, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34813470

ABSTRACT

We propose a novel Dispersion Minimisation framework for event-based vision model estimation, with applications to optical flow and high-speed motion estimation. The framework extends previous event-based motion compensation algorithms by avoiding the computation of an optimisation score based on an explicit image-based representation, which provides three main benefits: i) the framework can be extended to perform incremental estimation, i.e., on an event-by-event basis; ii) besides purely visual transformations in 2D, the framework can readily use additional information, e.g., by augmenting the events with depth, to estimate the parameters of motion models in higher-dimensional spaces; iii) the optimisation complexity depends only on the number of events. We achieve this by modelling the event alignment according to candidate parameters and minimising the resultant dispersion, which is computed using a family of suitable entropy-based measures. Data whitening is also proposed as a simple and effective pre-processing step that makes the accuracy of the framework, as well as of other event-based motion-compensation methods, more robust. The framework is evaluated on several challenging motion estimation problems, including 6-DOF transformation, rotational motion, and optical flow estimation, achieving state-of-the-art performance.
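
To make the dispersion idea concrete, here is a toy batch sketch: events are warped by a candidate constant optical flow, and a single Gaussian-entropy surrogate (the log-determinant of the warped events' spatial covariance) stands in for the paper's family of entropy-based measures. The synthetic data, constant-flow model, and batch (rather than incremental) optimisation are all simplifications.

```python
import numpy as np
from scipy.optimize import minimize

def warp_events(xy: np.ndarray, t: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Motion-compensate 2D event locations with a constant optical-flow candidate."""
    return xy - t[:, None] * flow[None, :]

def dispersion(flow: np.ndarray, xy: np.ndarray, t: np.ndarray) -> float:
    """Entropy-style dispersion of the warped events: log-determinant of their
    spatial covariance (one simple member of the entropy-measure family)."""
    w = warp_events(xy, t, flow)
    cov = np.cov(w.T) + 1e-9 * np.eye(2)
    return float(np.log(np.linalg.det(cov)))

# Synthetic events drifting with a known flow; minimising the dispersion over
# the flow parameters recovers it. (Whitening the event coordinates first, as
# the abstract suggests, would be a further pre-processing step.)
rng = np.random.default_rng(1)
true_flow = np.array([3.0, -1.5])
t = np.sort(rng.random(2000))
xy = rng.normal(size=(2000, 2)) + t[:, None] * true_flow
res = minimize(dispersion, x0=np.zeros(2), args=(xy, t), method="Nelder-Mead")
print("estimated flow:", res.x)
```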

6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 1869-1873, 2021 11.
Article in English | MEDLINE | ID: mdl-34891652

ABSTRACT

Delirium, an acute confusional state, is a common occurrence in intensive care units (ICUs). Patients who develop delirium have globally worse outcomes than those who do not, and thus the diagnosis of delirium is important. Current diagnostic methods have several limitations, leading to the suggestion of eye-tracking for its diagnosis through inattention. To ascertain the requirements for an eye-tracking system in an adult ICU, measurements were carried out at Chelsea & Westminster Hospital NHS Foundation Trust. Clinical criteria guided the empirical requirements on invasiveness and calibration methods, while accuracy and precision were measured. A non-invasive system was then developed utilising a patient-facing RGB camera and a scene-facing RGBD camera. The system's performance was measured in a replicated laboratory environment with healthy volunteers, revealing an accuracy and precision that exceed what is required while the system remains non-invasive and calibration-free. The system was then deployed as part of CONfuSED, a clinical feasibility study in which we report aggregated data from 5 patients as well as the acceptability of the system to bedside nursing staff. To the best of our knowledge, this is the first eye-tracking system to be deployed in an ICU for delirium monitoring.


Assuntos
Delírio , Tecnologia de Rastreamento Ocular , Adulto , Cuidados Críticos , Delírio/diagnóstico , Estudos de Viabilidade , Humanos , Unidades de Terapia Intensiva
7.
IEEE Trans Haptics ; 14(1): 44-56, 2021.
Article in English | MEDLINE | ID: mdl-32746376

ABSTRACT

Contact-driven tasks, such as surface conditioning operations (wiping, polishing, sanding, etc.), are difficult to program in advance to be performed autonomously by a robotic system, especially when the objects involved are moving. In many applications, human-robot physical interaction can be used for teaching, especially in learning-from-demonstration frameworks, but this solution is not always available. Robot teleoperation is very useful when the user and the robot cannot share the same workspace due to hazardous environments, inaccessible locations, or ergonomic issues. To this end, this article introduces a novel dual-arm teleoperation architecture with haptic and visual feedback to enhance the operator's immersion in surface treatment tasks. Two task-based assistance systems are also proposed to control each robotic manipulator individually. To validate the remote assisted control, usability tests were carried out using Baxter, a dual-arm collaborative robot. Analysis of several benchmark metrics shows that the proposed assistance method helps to reduce the task duration and improves the overall performance of the teleoperation.


Subjects
Robotics , Equipment Design , Ergonomics , Feedback, Sensory , Humans , User-Computer Interface
9.
Front Robot AI ; 6: 118, 2019.
Article in English | MEDLINE | ID: mdl-33501133

ABSTRACT

Social and humanoid robots that aim at pervasive and enduring human benefits, such as child health, hardly ever show up "in the wild." This paper presents a socio-cognitive engineering (SCE) methodology that guides the ongoing research & development of an evolving, longer-lasting human-robot partnership in practice. The SCE methodology has been applied in a large European project to develop a robotic partner that supports the daily diabetes management processes of children aged between 7 and 14 years (the Personal Assistant for a healthy Lifestyle, PAL). Four partnership functions were identified and worked out (joint objectives, agreements, experience sharing, and feedback & explanation), together with a common knowledge base and interaction design for the child's prolonged disease self-management. In an iterative refinement process of three cycles, these functions, the knowledge base, and the interactions were built, integrated, tested, refined, and extended so that the PAL robot could increasingly act as an effective partner for diabetes management. The SCE methodology helped to integrate into the human-agent/robot system: (a) theories, models, and methods from different scientific disciplines, (b) technologies from different fields, (c) varying diabetes management practices, and (d) last but not least, the diverse individual and context-dependent needs of the patients and caregivers. The resulting robotic partner proved to support the children on the three basic needs of Self-Determination Theory: autonomy, competence, and relatedness. This paper presents the R&D methodology and the human-robot partnership framework for prolonged "blended" care of children with a chronic disease (children could use it for up to 6 months; the robot in the hospitals and diabetes camps, and its avatar at home). It represents a new type of human-agent/robot system with an evolving collective intelligence. The underlying ontology and design rationale can be used as a foundation for further developments of long-duration human-robot partnerships "in the wild."

10.
IEEE Trans Pattern Anal Mach Intell ; 40(12): 2920-2934, 2018 12.
Article in English | MEDLINE | ID: mdl-29989982

ABSTRACT

In this paper, we present a novel framework for finding the kinematic structure correspondences between two articulated objects in videos via hypergraph matching. In contrast to appearance- and graph-alignment-based matching methods, which have been applied to pairs of similar static images, the proposed method finds correspondences between two dynamic kinematic structures of heterogeneous objects in videos. Thus, our method allows matching the structure of objects which have similar topologies or motions, or a combination of the two. Our main contributions can be summarised as follows: (i) casting the kinematic structure correspondence problem into a hypergraph matching problem by incorporating multi-order similarities with normalising weights, (ii) introducing a structural topology similarity measure by aggregating topology-constrained subgraph isomorphisms, (iii) measuring kinematic correlations between pairwise nodes, and (iv) proposing a combinatorial local motion similarity measure using geodesic distance on the Riemannian manifold. We demonstrate the robustness and accuracy of our method through a number of experiments on synthetic and real data, outperforming various other state-of-the-art methods. Our method is not limited to a specific application or sensor, and can be used as a building block in applications such as action recognition, human motion retargeting to robots, and articulated object manipulation, among others.

11.
IEEE Trans Haptics ; 2018 Feb 12.
Article in English | MEDLINE | ID: mdl-29994370

ABSTRACT

An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user's previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e., in one shot. Our technique is evaluated in a one-of-a-kind experimental study, where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique successfully emulates human shared control by matching the location and amount of offered assistance on different trajectories. We observed that the effort requirements of the users were comparable between the human-robot and human-human settings. Under the learned policy, the jerkiness of the user's joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant's commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed within ranges similar to those of human assistance.
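
A minimal sketch of a GP-regulated shared-control blend of this kind, with scikit-learn standing in for whatever GP implementation the authors used; the feature layout, assistance target, and training data below are placeholders rather than the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# One recorded demonstration: each row holds placeholder features such as
# [env_state..., previous_user_action, current_user_action]; the target is the
# assistance level the human assistant applied at that instant (in [0, 1]).
demo_features = np.random.rand(200, 6)
demo_assistance = np.random.rand(200)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(demo_features, demo_assistance)   # learned from the single demonstration

def blend_command(user_cmd: np.ndarray, robot_cmd: np.ndarray,
                  features: np.ndarray) -> np.ndarray:
    """Continuously regulate authority: alpha=0 -> pure user, alpha=1 -> pure robot."""
    alpha = float(np.clip(gp.predict(features.reshape(1, -1))[0], 0.0, 1.0))
    return (1.0 - alpha) * user_cmd + alpha * robot_cmd
```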

12.
IEEE Trans Pattern Anal Mach Intell ; 40(9): 2165-2179, 2018 09.
Article in English | MEDLINE | ID: mdl-28880158

ABSTRACT

In this paper, we present a novel framework for unsupervised kinematic structure learning of complex articulated objects from a single-view 2D image sequence. In contrast to prior motion-based methods, which estimate relatively simple articulations, our method can generate arbitrarily complex kinematic structures with skeletal topology via a successive iterative merging strategy. The iterative merging process is guided by a density-weighted skeleton map, which is generated by a novel object boundary generation method from sparse 2D feature points. Our main contributions can be summarised as follows: (i) an unsupervised complex articulated kinematic structure estimation method that combines motion segments with skeleton information; (ii) an iterative fine-to-coarse merging strategy for adaptive motion segmentation and structural topology embedding; (iii) a skeleton estimation method based on novel silhouette boundary generation from sparse feature points using an adaptive model selection method; and (iv) a new highly articulated object dataset with ground-truth annotation. We have verified the effectiveness of our proposed method in terms of computational time and estimation accuracy through rigorous experiments with multiple datasets. Our experiments show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.


Subjects
Biomechanical Phenomena/physiology , Image Processing, Computer-Assisted/methods , Movement/physiology , Pattern Recognition, Automated/methods , Algorithms , Databases, Factual , Humans , Video Recording
13.
Front Robot AI ; 5: 22, 2018.
Article in English | MEDLINE | ID: mdl-33500909

ABSTRACT

Generating complex, human-like behavior in a humanoid robot like the iCub requires the integration of a wide range of open-source components and a scalable cognitive architecture. Hence, we present the iCub-HRI library, which provides convenience wrappers for components related to perception (object recognition, agent tracking, speech recognition, and touch detection), object manipulation (basic and complex motor actions), and social interaction (speech synthesis and joint attention), exposed as a C++ library with bindings for Java (allowing iCub-HRI to be used within Matlab) and Python. In addition to the previously integrated components, the library allows for simple extension to new components and rapid prototyping by adapting to changes in interfaces between components. We also provide a set of modules which make use of the library, such as a high-level knowledge acquisition module and an action recognition module. The proposed architecture has been successfully employed in a complex human-robot interaction scenario involving the acquisition of language capabilities, the execution of goal-oriented behavior, and the expression of a verbal narrative of the robot's experience in the world. Accompanying this paper is a tutorial which allows a subset of this interaction to be reproduced. The architecture is aimed at researchers familiarizing themselves with the iCub ecosystem, as well as expert users, and we expect the library to be widely used in the iCub community.

14.
User Model User-adapt Interact ; 27(2): 267-311, 2017.
Article in English | MEDLINE | ID: mdl-32063681

ABSTRACT

Personalised content adaptation has great potential to increase user engagement in video games. Procedural generation of user-tailored content increases the self-motivation of players as they immerse themselves in the virtual world. An adaptive user model is needed to capture the skills of the player and enable automatic game-content-altering algorithms to fit the individual user. We propose an adaptive user modelling approach that uses a combination of unobtrusive physiological data to identify strengths and weaknesses in user performance in car racing games. Our system creates user-tailored tracks to improve driving habits and user experience, and to keep engagement at high levels. The user modelling approach adopts concepts from the Trace Theory framework; it uses machine learning to extract features from the user's physiological data and game-related actions, and clusters them into low-level primitives. These primitives are transformed and evaluated into higher-level abstractions such as experience, exploration, and attention. These abstractions are subsequently used to provide track-alteration decisions for the player. Collection of data and feedback from 52 users allowed us to associate key model variables and outcomes with user responses, and to verify that the model provides statistically significant decisions personalised to the individual player. Tailored game content variations between users in our experiments, as well as the correlations with user satisfaction, demonstrate that our algorithm is able to automatically incorporate user feedback in subsequent procedural content generation.
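
A small sketch of the "cluster features into low-level primitives" step described above, with scikit-learn's KMeans standing in for whatever clustering the authors used; the feature names and cluster count are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Placeholder per-window features, e.g. mean heart rate, skin conductance,
# steering corrections, off-track events (one row per gameplay window).
features = np.random.rand(500, 4)

primitives = KMeans(n_clusters=6, n_init=10, random_state=0)
labels = primitives.fit_predict(StandardScaler().fit_transform(features))

# Higher-level abstractions such as "experience" or "attention" would then be
# derived from how often each primitive occurs for a given player, and fed to
# the track-alteration logic.
```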

15.
IEEE Trans Image Process ; 24(12): 5916-27, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26452283

ABSTRACT

We present the spatio-temporal attention relocation (STARE) method, an information-theoretic approach for the efficient detection of simultaneously occurring structured activities. Given multiple human activities in a scene, our method dynamically focuses on the currently most informative activity. Each activity can be detected without complete observation, as the structure of sequential actions plays an important role in making the system robust to unattended observations. For such systems, the ability to decide where and when to focus is crucial to achieving high detection performance under resource-bounded conditions. Our main contributions can be summarized as follows: 1) an information-theoretic dynamic attention relocation framework that allows multiple activities to be detected efficiently by exploiting the activity structure information, and 2) a new high-resolution dataset of temporally structured concurrent activities. Our experiments on applications show that the STARE method performs efficiently while maintaining a reasonable level of accuracy.
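
To make the "focus on the most informative activity" idea concrete, here is a crude sketch that uses the entropy of each activity's current belief as a proxy for expected informativeness; the paper's actual criterion operates over structured activity models, so this is only a stand-in.

```python
import numpy as np

def entropy(p: np.ndarray) -> float:
    """Shannon entropy of a categorical belief (nats)."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def next_focus(beliefs: list) -> int:
    """Attend to the activity stream whose belief over action labels is most
    uncertain, i.e. where a new observation is expected to be most informative."""
    return int(np.argmax([entropy(b) for b in beliefs]))

beliefs = [np.array([0.90, 0.05, 0.05]),   # activity already well explained
           np.array([0.40, 0.35, 0.25])]   # activity still ambiguous -> attend here
print("attend to activity", next_focus(beliefs))
```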

16.
IEEE Trans Neural Netw Learn Syst ; 26(3): 522-36, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25720008

ABSTRACT

Successful biological systems adapt to change. In this paper, we are principally concerned with adaptive systems that operate in environments where data arrive sequentially and are multivariate in nature, for example, sensory streams in robotic systems. We contribute two reservoir-inspired methods: 1) the online echo state Gaussian process (OESGP) and 2) its infinite variant, the online infinite echo state Gaussian process (OIESGP). Both algorithms are iterative fixed-budget methods that learn from noisy time series. In particular, the OESGP combines the echo state network with Bayesian online learning for Gaussian processes. Extending this to infinite reservoirs yields the OIESGP, which uses a novel recursive kernel with automatic relevance determination that enables spatial and temporal feature weighting. When fused with stochastic natural gradient descent, the kernel hyperparameters are iteratively adapted to better model the target system. Furthermore, insights into the underlying system can be gleaned from inspection of the resulting hyperparameters. Experiments on noisy benchmark problems (one-step prediction and system identification) demonstrate that our methods yield high accuracies relative to state-of-the-art methods and standard kernels with sliding windows, particularly on problems with irrelevant dimensions. In addition, we describe two case studies in robotic learning-by-demonstration involving the Nao humanoid robot and the Assistive Robot Transport for Youngsters (ARTY) smart wheelchair.
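
For readers unfamiliar with the reservoir side of the OESGP, here is a minimal leaky echo state reservoir sketch; in the actual method the reservoir state would feed a Bayesian online Gaussian-process readout, which is not reproduced here, and all hyperparameters below are placeholders.

```python
import numpy as np

class EchoStateReservoir:
    """Minimal leaky echo state reservoir (illustrative only)."""

    def __init__(self, n_in, n_res=100, spectral_radius=0.9, leak=0.3, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale the recurrent weights to the desired spectral radius.
        self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))
        self.leak = leak
        self.x = np.zeros(n_res)

    def step(self, u):
        """Update the reservoir state with one multivariate input sample."""
        pre = self.W_in @ np.atleast_1d(u) + self.W @ self.x
        self.x = (1.0 - self.leak) * self.x + self.leak * np.tanh(pre)
        return self.x

# Streaming use: feed each incoming sample, then hand the state to an online
# (fixed-budget) regressor for one-step-ahead prediction.
esn = EchoStateReservoir(n_in=3)
state = esn.step(np.array([0.1, -0.2, 0.05]))
```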


Subjects
Machine Learning , Neural Networks, Computer , Normal Distribution , Robotics/methods , Humans , Pattern Recognition, Automated/methods , Time Factors
17.
IEEE Trans Haptics ; 7(4): 512-25, 2014.
Article in English | MEDLINE | ID: mdl-25532151

ABSTRACT

Human beings not only possess the remarkable ability to distinguish objects through tactile feedback but are also able to improve their recognition competence through experience. In this work, we explore tactile-based object recognition with learners capable of incremental learning. Using the sparse online infinite echo state Gaussian process (OIESGP), we propose and compare two novel discriminative and generative tactile learners that produce probability distributions over objects during object grasping/palpation. To enable iterative improvement, our online methods incorporate training samples as they become available. We also describe incremental unsupervised learning mechanisms, based on novelty scores and extreme value theory, for when teacher labels are not available. We present experimental results for both supervised and unsupervised learning tasks using the iCub humanoid, with tactile sensors on its five-fingered anthropomorphic hand, and 10 different object classes. Our classifiers perform comparably to state-of-the-art methods (C4.5 and SVM classifiers), and our findings indicate that tactile signals are highly relevant for making accurate object classifications. We also show that accurate "early" classifications are possible using only 20-30 percent of the grasp sequence. For unsupervised learning, our methods generate high-quality clusterings relative to the widely used sequential k-means and self-organising map (SOM), and we present analyses of the differences between the approaches.


Subjects
Artificial Intelligence , Models, Theoretical , Online Systems , Pattern Recognition, Automated/methods , Touch , Adult , Algorithms , Cluster Analysis , Discriminant Analysis , Female , Hand Strength , Humans , Male , Palpation , Robotics/methods , Young Adult
18.
Neuroinformatics ; 12(1): 63-91, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24085487

ABSTRACT

The mirror neuron system in primates matches observations of actions with the motor representations used for their execution, and it is a topic of intense research and debate in biological and computational disciplines. In robotics, models of this system have been used to enable robots to imitate and learn how to perform tasks from human demonstrations. Yet existing computational and robotic models of these systems are found at multiple levels of description, and although some models offer plausible explanations and testable predictions, the differences in the granularity of the experimental setups, methodologies, computational structures, and selected modeled data make principled meta-analyses, common in other fields, difficult. In this paper, we adopt an interdisciplinary approach, using the BODB integrated environment to bring together several different but complementary computational models by functionally decomposing them into brain operating principles (BOPs), each of which captures a limited subset of the model's functionality. We then explore links from these BOPs to neuroimaging and neurophysiological data in order to pinpoint complementary and conflicting explanations and to compare predictions against selected sets of neurobiological data. The results of this comparison are used to interpret mirror system neuroimaging results in terms of neural network activity, to evaluate the biological plausibility of mirror system models, and to suggest new experiments that can shed light on the neural basis of mirror systems.


Subjects
Computer Simulation , Mirror Neurons/physiology , Models, Neurological , Robotics , Animals , Databases, Factual , Hand Strength/physiology , Humans , Macaca , Primates
19.
Bioinspir Biomim ; 8(3): 035002, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23981534

ABSTRACT

Exploratory gaze movements are fundamental for gathering the most relevant information about a partner during social interactions. Inspired by the cognitive mechanisms underlying human social behaviour, we have designed and implemented a system for dynamic attention allocation that is able to actively control gaze movements during a visual action recognition task by exploiting its own action-execution predictions. During the observation of a partner's reaching movement, our humanoid robot is able to contextually estimate the goal position of the partner's hand and the location in space of the candidate targets. This is done while actively gazing around the environment, with the purpose of optimizing the gathering of information relevant for the task. Experimental results in a simulated environment show that active gaze control, based on the internal simulation of actions, provides a relevant advantage with respect to other action perception approaches, both in terms of estimation precision and of the time required to recognize an action. Moreover, our model reproduces and extends some experimental results on human attention during action perception.


Subjects
Attention/physiology , Biomimetics/instrumentation , Fixation, Ocular/physiology , Models, Biological , Pattern Recognition, Visual/physiology , Recognition, Psychology/physiology , Robotics/instrumentation , Artificial Intelligence , Computer Simulation , Computer-Aided Design , Equipment Design , Equipment Failure Analysis , Humans , Robotics/methods
20.
IEEE Trans Pattern Anal Mach Intell ; 35(6): 1523-34, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23599063

ABSTRACT

Sequential data labeling is a fundamental task in machine learning applications, with speech and natural language processing, activity recognition in video sequences, and biomedical data analysis being characteristic examples. The conditional random field (CRF), a log-linear model representing the conditional distribution of the observation labels, is one of the most successful approaches for sequential data labeling and classification, and it has lately received significant attention in machine learning as it achieves superb prediction performance in a variety of scenarios. Nevertheless, existing CRF formulations can capture only one- or few-timestep interactions and neglect higher-order dependences, which are potentially useful in many real-life sequential data modeling applications. To resolve these issues, in this paper we introduce a novel CRF formulation based on the postulation of an energy function which entails infinitely long time dependences between the modeled data. The building blocks of our novel approach are: 1) the sequence memoizer (SM), a recently proposed nonparametric Bayesian approach for modeling label sequences with infinitely long time dependences, and 2) a mean-field-like approximation of the model marginal likelihood, which allows for the derivation of computationally efficient inference algorithms for our model. The efficacy of the so-obtained infinite-order CRF (CRF(∞)) model is experimentally demonstrated.
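
For orientation, a hedged sketch of how the infinite-order idea generalises the usual linear-chain CRF; the notation is illustrative and not taken from the paper.

```latex
% Standard first-order linear-chain CRF: label interactions span one timestep.
p(\mathbf{y}\mid\mathbf{x}) \;=\; \frac{1}{Z(\mathbf{x})}
  \exp\!\Big(\sum_{t=1}^{T}\sum_{k}\lambda_k\, f_k(y_{t-1}, y_t, \mathbf{x}, t)\Big)

% Infinite-order variant (sketch): the label potential conditions on the whole
% label history y_{1:t-1}, which CRF(infinity) models with the sequence memoizer.
p(\mathbf{y}\mid\mathbf{x}) \;\propto\; \exp\big(-E(\mathbf{y},\mathbf{x})\big),
\qquad
E(\mathbf{y},\mathbf{x}) \;=\; -\sum_{t=1}^{T}
  \Big(\phi(y_t,\mathbf{x},t) + \psi\big(y_t \mid y_{1:t-1}\big)\Big)
```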
