8Mar2016meeting


This meeting included Bence Nanay, Michael Morgan, Jenny Read, Ghaith Tarawneh, Vivek Nityananda, Bruce Cumming, Sindre Henriksen, Ingo Bojak, Andrew Parker, James Stazicker and Andrew Glennerster.

Jenny raised a number of issues:

  • Until we get down to talking about a specific task, we can't distinguish a reconstruction-type model (one that includes 3D coordinate frames of various sorts) from this 'big W' idea.
    • In the example here (philtrans_eg) there is a simple task with several components. A robot that used a 3D representation would carry out all the components in a fairly similar way (using 3D coordinate transformations), but the proposal is that animals do not. In the final phase of this task, the suggestion is that humans do something quite hard to distinguish from 3D reconstruction, but that this is relatively rare.

Bruce was particularly concerned about 'virtual movement':

  • If the output ($\vec{c}$) can cause a change in the input ($\vec{r}$) then 'the framework is universal'. This is similar to Jenny's point: the proposal says nothing if it is so general that it encompasses everything.
    • There are two replies. First, the transition from one state of $\vec{r}$ to the next ($\vec{r}_{t}$ to $\vec{r}_{t+1}$) is a genuine worry for the proposal set out here if the world is not included in the loop. It seems to entail the brain simulating the world, so that an output $\vec{c}$ gives rise to a new input $\vec{r}$ just as if the world were really there. If that happened via a 3D model it would certainly go against the simplicity of the 'big W' idea and would probably break it altogether. I discuss this briefly in relation to forward models.
    • Second, although we concentrated heavily on this step (the output, $\vec{c}$, is chosen using max($\bf{W}$ $\vec{r}$); see the sketch after this list), it is not the best place to try to 'break' the theory. Many different types of representation could precede this decision stage ('the framework is universal'), including making $\vec{r}$ a rich, voxel-based 3D description of the layout of the scene; provided $\bf{W}$ were arranged to match, a similar decision step might still be appropriate. From my perspective (AG), the advantage is that it becomes possible to avoid steps that are difficult in 3D vision, such as transformations between head-centred, world-centred and other 3D coordinate frames.
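
As a minimal sketch (my own illustrative code, not part of the original discussion), the decision step max($\bf{W}$ $\vec{r}$) can be read as: each row of $\bf{W}$ corresponds to a candidate output, and the chosen output is the row whose weights best match the current sensory/context vector $\vec{r}$. All names and sizes below are hypothetical:

  # Illustrative sketch of the decision step c = argmax(W r).
  # Each row of W can be thought of as the centre of a Voronoi-like cell in
  # r-space; nothing here depends on 3D coordinate frames or transformations.
  import numpy as np

  rng = np.random.default_rng(0)

  n_outputs = 5   # number of candidate outputs (hypothetical)
  dim_r = 12      # dimensionality of the sensory/context vector r (hypothetical)

  W = rng.standard_normal((n_outputs, dim_r))  # weights, one row per output
  r = rng.standard_normal(dim_r)               # current sensory/context state

  scores = W @ r                # one score per candidate output
  c = int(np.argmax(scores))    # chosen output: max(W r)

  print("scores:", np.round(scores, 2))
  print("chosen output c =", c)

The point of the sketch is only that the decision stage itself is simple; the substance of the proposal lies in what $\vec{r}$ and $\bf{W}$ contain.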

Bence questioned the philosophical context in which this proposal has been described in the wiki:

  • The link to O'Regan and Noë (2001)[1] is perhaps inappropriate given their stance against representations in the brain (rather than 'in the world'). Pragmatic representations[2] may be a more appropriate framework with which to compare the 'big W' idea.
    • I (AG) look forward to reading all of this book. There seem to be many overlaps between the 'pragmatic representations' and the idea described in the wiki.

Ingo asked about how the system might learn and how it should deal with sensory errors (compared to the expected feedback) after a movement. Andrew Parker raised a similar point in the meeting. Ingo also raised the connection between the ideas discussed here and 'the currently popular narrative of the brain as “Bayesian prediction machine” '.

  • I have not put much on the wiki about learning, as I would prefer to establish first whether this type of scheme could explain behaviour in a 3D world with a static $\bf{W}$ (i.e. after learning). Broadly, I assume that errors (unexpected sensory feedback) can lead to learning, i.e. changes in the Voronoi tessellation ($\bf{W}$), as one would expect; a schematic sketch of one possible update is given below. I have put up a brief mention of Bayes on the wiki, too. I'll expand on these points in a reply to Ingo.
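
The wiki does not specify a learning rule, so the following is only a hedged illustration of one standard possibility (a competitive-learning-style update): when the sensory feedback after a movement does not match the winning row of $\bf{W}$, that row is nudged towards the observed vector, locally adjusting the Voronoi tessellation. Function names, the threshold and the learning rate are all hypothetical:

  # Hedged sketch of an error-driven update of W (not the author's stated rule).
  import numpy as np

  def update_W(W, r_observed, learning_rate=0.1, error_threshold=0.5):
      """Nudge the winning row of W towards r_observed if the mismatch is large."""
      winner = int(np.argmax(W @ r_observed))          # cell that claims this input
      error = np.linalg.norm(r_observed - W[winner])   # mismatch with that cell's centre
      if error > error_threshold:                      # only 'unexpected' feedback triggers learning
          W[winner] += learning_rate * (r_observed - W[winner])
      return W, winner, error

  rng = np.random.default_rng(1)
  W = rng.standard_normal((5, 12))
  r_observed = rng.standard_normal(12)
  W, winner, error = update_W(W, r_observed)
  print(f"winner cell {winner}, mismatch {error:.2f}")

This is meant only to make the phrase 'changes in the Voronoi tessellation' concrete; whether learning in the proposed framework would actually take this form is an open question.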


References

  1. O'Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939–973.
  2. Nanay, B. (2013). Between perception and action. Oxford University Press.