From egocentric to allocentric spatial behavior: A computational model of spatial development

Abstract

Psychological experiments on children's development of spatial knowledge suggest that experience with self-locomotion and visual tracking are important factors. Yet the mechanism underlying this development is unknown. We propose a robot that learns to mentally track a target object (i.e., to maintain a representation of the object's position when it is outside the field of view) as a model of spatial development. Mental tracking is treated as prediction: given the previous environmental state and the motor commands, the system predicts the environmental state that results from movement. Following Jordan and Rumelhart's (1992) forward-modeling architecture, the system consists of two components: an inverse model mapping sensory inputs to desired motor commands, and a forward model mapping motor commands to predicted sensory inputs (goals). The robot was tested on the "three cups" paradigm, in which children must select the cup containing a hidden object under various movement conditions. Consistent with child development, without the capacity for self-locomotion the robot's errors are egocentric (self-centered); when given the capacity for self-locomotion, the robot responds allocentrically.
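To make the forward-model idea concrete, the following is a minimal sketch (not the paper's implementation) of mental tracking as prediction: given an object's position in the robot's egocentric frame and a motor command, predict where the object will appear after the movement. The state representation, function names, and the (rotation, translation) command format are illustrative assumptions; the paper's model learns this mapping with a neural network rather than computing it in closed form.

```python
import math

def forward_model(obj_xy, motor_cmd):
    """Predict an object's new egocentric (x, y) position after movement.

    obj_xy:    object position in the robot's current frame
               (+x = robot's right, +y = robot's heading) -- an assumption
    motor_cmd: (rotation in radians, forward translation) -- an assumption
    """
    rot, trans = motor_cmd
    x, y = obj_xy
    # When the robot rotates by `rot`, the object appears to rotate by -rot
    # in the robot's frame.
    xr = x * math.cos(-rot) - y * math.sin(-rot)
    yr = x * math.sin(-rot) + y * math.cos(-rot)
    # When the robot translates `trans` along its new heading, the object
    # appears to move backward by the same amount.
    return (xr, yr - trans)

# Mental tracking: the prediction is maintained even if the new position
# falls outside the robot's field of view.
# Example: an object 1 m ahead; after the robot turns 90 degrees left, the
# object should appear 1 m to the robot's right.
pos = forward_model((0.0, 1.0), (math.pi / 2, 0.0))
```

Chaining such predictions across successive movements is what lets the model keep track of a hidden object, which is exactly what the "three cups" task probes.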

Citation

Hiraki, K., Sashima, A., and Phillips, S. (1998). From egocentric to allocentric spatial behavior: A computational model of spatial development. Adaptive Behavior, 6(3/4), 371-391.