A base representation

From A conversation about the brain
[Image: Everest base camp]
It is all very well saying that 'the outside world serves as its own, external, representation'[1], because as soon as the observer wants to know something about the visual scene they can move their eyes there and, 'hey presto!', a high-resolution image appears. But the capacity to move the eyes to the relevant part of the optic array (or to a distant optic array, which requires walking somewhere) depends on stored knowledge. I call this a 'base representation', rather like base camp, since it allows you to get started, even though the algorithms applied to the visual information once it is gathered may vary widely in sophistication. For example, judging the metric (Euclidean) structure of a surface (which requires an estimate of viewing distance and is prone to systematic errors, see task dependency), cognitively computing how to point to Paris, or building a 3D model of a scene (as an architect might do) are all tasks that might seem to imply that the visual system must generate a 3D model of the scene. But an alternative is that a 'base representation', which allows the observer to look around and glean the relevant information, could be sufficient to explain performance in these tasks without forcing the conclusion that the visual system generates 3D models obligatorily, all the time. In an earlier page, I used the term 'universal primal sketch' to refer to the same idea.

See also 3D vision.


References

  1. O'Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939-973.