Some predictions

From A conversation about the brain

An excerpt from Glennerster (2016)[1] describing some predictions in relation to the ideas set out in this wiki.

3.2. Example predictions

When putting forward their stereo algorithm, Marr and Poggio (1979) went to admirable lengths to list psychophysical and neurophysiological results that, if they could be demonstrated, would falsify their hypothesis. Here are a few results that would make the proposals described above (Section 2.4) untenable. Following Marr and Poggio's convention, the number of stars next to a prediction ('P') indicates the extent to which the result would be fatal, while 'A' indicates supportive data that already exist.

• (P***). Coordinate transformations. Strong evidence in favour of true coordinate transformations of visual information in the parietal cortex or hippocampus would be highly problematic for the ideas set out in Section 2.4. If it could be shown that visual information in retinotopic visual areas like V1 goes through a rotation and translation 'en masse' to generate receptive fields with a new origin and rotated axes in another visual area, where these new receptive fields relate to the orientation of the head, hand or body, then the ideas set out in Section 2.4 would be proved wrong, since they are based on a quite different principle. Equally fatal would be a demonstration that the proposal illustrated in Figure 3b is correct, or any similar proposal involving multiple duplications of a representation in one coordinate frame in order to choose one of the set based on idiothetic information. Current models of coordinate transformations in parietal cortex are much more modest, simulating 'partially shifting receptive fields' (Pouget, Deneve and Duhamel, 2002) or 'gain fields' (Zipser and Andersen, 1988), which are 2D, not 3D, transformations. Similarly, models of grid cell or hippocampal place cell firing do not describe how 3D transformations could take place, taking input from visual receptive fields in V1 and transforming it into a different, world-based 3D coordinate frame (Burgess and O'Keefe, 1996; Burgess, 2008; Whitlock, Sutherland, Witter, Moser and Moser, 2008).
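
To make concrete what a 'true' coordinate transformation of visual information would entail, here is a minimal sketch in Python using NumPy. It is purely illustrative geometry, not a model of any cortical mechanism; the function names, the gaze angle and the eye offset are all hypothetical values chosen for the example.

```python
import numpy as np

def rotation_z(theta):
    """Rotation matrix about the z (vertical) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def eye_to_head(points_eye, gaze_azimuth, eye_offset_in_head):
    """Map 3D receptive-field centres 'en masse' from an eye-centred to a
    head-centred frame: rotate by the gaze azimuth, then translate by the
    eye's position in the head (a hypothetical rigid-body transform)."""
    R = rotation_z(gaze_azimuth)
    return points_eye @ R.T + eye_offset_in_head

# Three hypothetical receptive-field centres, roughly 1 m ahead of the eye.
rf_eye = np.array([[0.0, 1.0, 0.0],
                   [0.1, 1.0, 0.0],
                   [0.0, 1.0, 0.1]])

# Re-express every centre in a head-centred frame with a new origin and
# rotated axes (gaze 90 deg to the left, eye 3 cm from the head origin).
rf_head = eye_to_head(rf_eye, gaze_azimuth=np.pi / 2,
                      eye_offset_in_head=np.array([0.03, 0.0, 0.0]))
```

A demonstration that visual receptive fields undergo this kind of wholesale rotation and translation between cortical areas is precisely the result that the prediction says should not be found.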

• (P***). World-centred visual receptive fields. This does not refer to receptive fields of neurons that respond to the location of the observer (O'Keefe, 1979). After all, the location of the observer is not represented in V1 (it is invisible), so no rotation and translation of visual receptive fields from retinotopic to egocentric to world-centred coordinates could make a place cell. A world-centred visual receptive field is a 3D 'voxel', much like the 3D receptive field of a disparity-tuned neuron in V1 but based in world-centred coordinates. Its structure is independent of the test object brought into the receptive field and independent of the location of the observer or the fixation point. For example, if the animal viewed a scene from the South and then moved, in the dark, round to the West, evidence of 3D receptive fields remaining constant in a world-based frame would be incompatible with the ideas set out here. In this example, the last visual voxels to be filled before the lights went out should remain in the same 3D location, contain the same visual information (give or take general memory decay across all voxels) and remain at the same resolution, despite the translation, rotation and new fixation point of the animal. An experiment that followed this type of logic, but for pointing direction, found, on the contrary, evidence for gaze-centred encoding (Henriques, Klier, Smith, Lowy and Crawford, 1998).
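
The South-to-West example reduces to simple plan-view geometry, which can be sketched as follows. The positions, heading convention and voxel location below are hypothetical numbers chosen only to illustrate the prediction: a world-centred voxel keeps the same world coordinates while its observer-centred coordinates change as the animal moves.

```python
import numpy as np

def world_to_observer(p_world, obs_pos, obs_heading):
    """Express a world point (plan view, x East, y North) in an
    observer-centred frame, given the observer's position and heading
    (radians, anticlockwise from North)."""
    c, s = np.cos(-obs_heading), np.sin(-obs_heading)
    R = np.array([[c, -s], [s, c]])
    return R @ (p_world - obs_pos)

# A hypothetical 'visual voxel', fixed at these world coordinates.
voxel_world = np.array([0.5, 0.0])

# Viewed from the South (1 m away, facing North) ...
south_view = world_to_observer(voxel_world, np.array([0.0, -1.0]), 0.0)

# ... then, after moving in the dark, from the West (facing East).
west_view = world_to_observer(voxel_world, np.array([-1.0, 0.0]), -np.pi / 2)
```

The world-based account predicts that the stored voxel stays at `voxel_world` throughout, even though its observer-centred description (`south_view` versus `west_view`) is quite different before and after the move; finding such constancy is what would contradict the ideas set out here.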

• (A*) Task-dependent performance. If all tasks are carried out with reference to an internal model of the world (a 'cognitive map' or reconstruction), then whatever distortions there are in that model with respect to ground truth should be reflected in all tasks that depend on that model. Proof that this is the case would make the hypothesis set out in Section 2.4 untenable. However, there is already considerable evidence that the internal representation used by the visual system is something much looser and that different strategies are used in response to different tasks. Many examples demonstrate such 'task-dependence' (Koenderink, van Doorn, Kappers and Lappin, 2002; Smeets, Sousa and Brenner, 2009; Svarverud, Gilson and Glennerster, 2012; Glennerster et al., 1996; Knill, Bondada and Chhabra, 2011). For example, when participants compare the depth relief of two disparity-defined surfaces at different distances, they do so very accurately while, at the same time, showing substantial biases in depth-to-height shape judgements (Glennerster et al., 1996). This experiment was designed to ensure that, to all intents and purposes, the binocular images the participant received were the same for both tasks, so that any effect on responses was not due to differences in the information available to the visual system. The fact that biases were systematically different in the two tasks rules out the possibility that participants are making both judgements by referring to the same internal 'model' of the scene. Discussing a related experiment that demonstrates inconsistency between performance on two spatial tasks, Koenderink et al (2002) suggest that it might be time to "… discard the notion of 'visual space' altogether. We consider this an entirely reasonable direction to explore, and perhaps in the long run the only viable option."

• (P**) Head-centred adaptation. A psychophysical approach would be to look for evidence of receptive fields that are constant in head-centred coordinates. For example, if an observer fixates a point 20 degrees to the right of the head-centric midline and adapts to a moving stimulus 20 degrees to the left of fixation (i.e. on the head-centric midline), do they show adaptation effects in a head-centric frame after they rotate their head to a new orientation while maintaining fixation (see Figure 6)? A pattern of adaptation that followed the head in this situation would not be expected according to the ideas set out in Section 2.4. As Figure 6 illustrates, this prediction is distinct from either retinal or spatiotopic (world-based) adaptation (Melcher, 2005; Knapen, Rolfs and Cavanagh, 2009; Turi and Burr, 2012). There is psychophysical evidence that gaze direction can modulate adaptation (Mayhew, 1973; Nishida, Motoyoshi, Andersen and Shimojo, 2003), consistent with physiological evidence of 'gain fields' in parietal cortex (Zipser and Andersen, 1988), but these data do not show that adaptation is spatially localised in a head-centred frame as illustrated in Figure 6.
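
The three candidate frames make different predictions about where the adapted region should lie after the head rotation, and the azimuth arithmetic is simple enough to sketch. All angles below are hypothetical degrees chosen for illustration (positive = rightward in the world); the 30-degree head rotation is an assumed value, not one taken from the experiment.

```python
# Adaptation phase: fixation 20 deg right of the head-centric midline,
# adaptor 20 deg left of fixation, i.e. on the head-centric midline.
fixation_world = 20.0                 # fixation direction, world azimuth
head_world = 0.0                      # head midline, world azimuth
adaptor_retinal = -20.0               # adaptor relative to fixation
adaptor_world = fixation_world + adaptor_retinal   # adaptor, world azimuth
adaptor_head = adaptor_world - head_world          # adaptor relative to head

# Test phase: the head rotates 30 deg right while fixation on the same
# world point is maintained.
head_world_after = 30.0

# Predicted world azimuth of the adapted region under each frame:
retinal_pred = fixation_world + adaptor_retinal     # stays fixed to gaze
head_pred = head_world_after + adaptor_head         # follows the head
spatiotopic_pred = adaptor_world                    # stays fixed in the world

# Because fixation on the same world point is maintained, the retinal and
# spatiotopic predictions coincide here; only the head-centred prediction
# moves with the head, which is what makes this manipulation diagnostic.
```

Adaptation found at `head_pred` rather than at the coincident retinal/spatiotopic location would be the head-centred result that the prediction says should not occur.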


  1. Glennerster (2016) A moving observer in a 3D world. Philosophical Transactions of the Royal Society B, 371(1697), 20150265. This review contains all the references cited above.