Path of images in an expanding room

From A conversation about the brain
As a person walks across the room, their sensory+motivational context (yellow dot) follows a certain path through sensory+motivational space. In an expanding room, the visual input remains the same but the proprioceptive input is different (for a given visual input). This change in sensory input means the yellow dot takes a different path. However, unless a new cell is added that allows the person to notice the change (blue dashed lines), the room will appear the same and the perceptual experience of walking across the expanding room will be indistinguishable from that of walking across a static one.
When people walk across a room in virtual reality, it is possible to expand the room hugely without them noticing, provided that no single, monocular image gives the game away (i.e. all the images they see are ones that they could obtain in a static room)[1]. This is a useful example with which to discuss perception, as is any case in which the observer's perception is indistinguishable in two situations with different sensory input (and where these differences in sensory input would normally be supra-threshold). The figure shows the path through sensory+motivational space as an observer walks across a static room or an expanding room. The paths are very similar because vision dominates the sensory elements of this vector and the visual inputs are essentially matched in the two conditions. The key difference is in the proprioceptive input (e.g. when the room is big you have to walk much further to achieve a given change in the image). Ellen Svarverud, who did her PhD using this stimulus[2], learned to distinguish expanding rooms from static ones. She must have learned to 'listen' to the proprioceptive input and developed more cells, like the one shown in blue. (See here for a warning about dividing up cells like this - it is only shorthand.) In general, if the path across sensory+motivational space taken by the yellow dot (the current context) is the same, in terms of the cells that it occupies along the way, then the perception will be the same.
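The idea of "same cells along the path ⇒ same perception" can be sketched in code. The following is a minimal illustrative model, not anything from the original work: context vectors (e.g. a visual and a proprioceptive component) are binned into grid "cells", and two walks count as perceptually indistinguishable when they occupy the same sequence of cells. All names and the cell_size parameter are assumptions for illustration.

```python
import numpy as np

def cells_along_path(path, cell_size=1.0):
    """Map each context vector to the grid cell (bin) it falls in."""
    return [tuple(np.floor(p / cell_size).astype(int)) for p in path]

def indistinguishable(path_a, path_b, cell_size=1.0):
    """On this account, perception is the same when both paths
    occupy the same cells, in the same order."""
    return cells_along_path(path_a, cell_size) == cells_along_path(path_b, cell_size)

# Two walks across the room; each row is a (visual, proprioceptive) context.
# Visual input is matched; proprioceptive input differs only slightly,
# so coarse cells do not register the difference.
static_room    = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
expanding_room = np.array([[0.0, 0.2], [1.0, 1.3], [2.0, 2.4]])

print(indistinguishable(static_room, expanding_room, cell_size=1.0))   # True
# A finer grid - analogous to a newly developed cell tuned to the
# proprioceptive difference - makes the two paths distinguishable.
print(indistinguishable(static_room, expanding_room, cell_size=0.25))  # False
```

The cell_size parameter plays the role of the blue cell in the figure: adding a finer-grained cell along the relevant dimension is what lets the observer notice that the proprioceptive input no longer matches the visual one.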

Back to Hypotheses


  1. Glennerster, A., Tcheang, L., Gilson, S. J., Fitzgibbon, A. W., & Parker, A. J. (2006). Humans ignore motion and stereo cues in favor of a fictional stable world. Current Biology, 16(4), 428-432.
  2. Svarverud, E., Gilson, S. J., & Glennerster, A. (2010). Cue combination for 3D location judgements. Journal of Vision, 10(1), 5.