Mental imagery for a conversational robot.
IEEE Trans Syst Man Cybern B Cybern; 34(3): 1374-83, 2004 Jun.
Article in English | MEDLINE | ID: mdl-15484910
To engage in fluid face-to-face spoken conversation with people, robots must have ways to connect what they say to what they see. A critical aspect of how language connects to vision is that language encodes points of view: the meaning of "my left" and "your left" differs due to an implied shift of visual perspective. The connection of language to vision also relies on object permanence, since we can talk about things that are not in view. For a robot to participate in situated spoken dialog, it must therefore have the capacity to imagine shifts of perspective, and it must maintain object permanence. We present a set of representations and procedures that enable a robotic manipulator to maintain a "mental model" of its physical environment by coupling active vision to physical simulation. Within this model, "imagined" views can be generated from arbitrary perspectives, providing the basis for situated language comprehension and production. An initial application of mental imagery to spatial language understanding for an interactive robot is described.
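The perspective dependence of terms like "my left" and "your left" can be illustrated with a minimal sketch: given poses for two observers in a shared world frame, the same object falls on different sides depending on whose viewpoint is adopted. The `Pose` class, the example coordinates, and the `is_left_of` test are illustrative assumptions, not the paper's actual representations or simulation machinery.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    """An observer's position and facing direction in a shared 2D world frame."""
    x: float
    y: float
    heading: float  # radians; direction the observer faces

def is_left_of(observer: Pose, obj_x: float, obj_y: float) -> bool:
    """True if the object lies to the observer's left."""
    # Vector from observer to object
    dx, dy = obj_x - observer.x, obj_y - observer.y
    # Observer's facing direction as a unit vector
    fx, fy = math.cos(observer.heading), math.sin(observer.heading)
    # 2D cross product: positive means the object is on the left side
    return fx * dy - fy * dx > 0

# Hypothetical scene: robot and person face each other across a table,
# with a cup between them, slightly off the line joining them.
robot = Pose(0.0, 0.0, 0.0)        # at origin, facing +x
person = Pose(2.0, 0.0, math.pi)   # opposite the robot, facing -x
cup = (1.0, 0.5)

print(is_left_of(robot, *cup))   # True  -> the cup is on the robot's left
print(is_left_of(person, *cup))  # False -> but on the person's right
```

Resolving "your left" thus amounts to re-evaluating the same spatial relation from the interlocutor's pose rather than the robot's, which is the kind of imagined perspective shift the mental model supports.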
Collection: 01-internacional
Database: MEDLINE
Main subject: Algorithms / Natural Language Processing / Robotics / Artificial Intelligence / Communication / Imagination / Mental Processes
Type of study: Evaluation studies
Language: English
Journal: IEEE Trans Syst Man Cybern B Cybern
Journal subject: Biomedical Engineering
Year: 2004
Document type: Article
Affiliation country: United States
Country of publication: United States