28Nov2016meeting

The participants at this meeting were Gunnar Sigurdsson, Stuart Golodetz, Jenny Read, James Stazicker and Andrew Glennerster.

Questions we covered included: Is this type of approach taken in computer vision? If not, why not? If so, how many dimensions and how many stored contexts are used? Is the decision rule here similar to the last layer of a deep neural network (e.g. one operating on 4096 dimensions)? How do computer vision and robotics researchers include 'motivation' or task? In relation to this, Gunnar talked about long short-term memory (LSTM) in recurrent neural networks and some recent uses of 'attention' in computer vision, and pointed out the following papers (a sketch relating to the decision-rule question appears after the reference lists):

Recurrent neural network references:
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
http://www.cs.bham.ac.uk/~jxb/INC/l12.pdf
http://www.cs.toronto.edu/~graves/phd.pdf (thesis; see the section on RNNs)
https://arxiv.org/pdf/1308.0850v5.pdf (LSTMs)

Attention modeling references:
https://arxiv.org/pdf/1502.03044v3.pdf
https://arxiv.org/pdf/1511.05234v2.pdf
https://papers.nips.cc/paper/5542-recurrent-models-of-visual-attention.pdf
http://shikharsharma.com/projects/action-recognition-attention/
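
As a point of reference for the decision-rule question above, the sketch below shows what the 'last layer' of a deep network typically amounts to: a linear readout of a 4096-dimensional feature vector (the dimensionality mentioned in the question) followed by a softmax over a set of categories, here labelled 'stored contexts'. The number of contexts, the parameter values and the function names are illustrative assumptions only, not anything proposed at the meeting.

import numpy as np

rng = np.random.default_rng(0)

n_features = 4096      # dimensionality mentioned in the question above
n_contexts = 10        # hypothetical number of stored contexts

# Hypothetical learned parameters of the readout layer.
W = rng.normal(scale=0.01, size=(n_contexts, n_features))
b = np.zeros(n_contexts)

def decide(features):
    """Return a probability distribution over the stored contexts."""
    scores = W @ features + b
    scores -= scores.max()                     # subtract max for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs

features = rng.normal(size=n_features)         # stand-in for a network's penultimate activations
probs = decide(features)
print("most likely context:", int(probs.argmax()), "p =", float(probs.max()))

The point of the sketch is that the 'decision' made at the final layer is nothing more than a weighted comparison of one high-dimensional feature vector against each stored alternative, which is the sense in which it was being compared with the decision rule discussed at the meeting.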

In the meeting and afterwards by email there was discussion about whether the brain has 'some sort of 3D map' without necessarily pinning down a coordinate frame, e.g. by describing 3D relationships between objects. This idea would need to be made more explicit before it can be discussed usefully. There was also discussion (amongst the biologists) about whether the transformations of information that occur between layers of a deep neural network could be considered similar to the 3D transformations that are useful to an agent that continually moves its head and eyes.
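
To make the last point concrete, here is a minimal sketch (purely illustrative, not something specified at the meeting) of the kind of 3D transformation such an agent has to handle: when the head rotates and translates, the coordinates of a fixed point in the world are re-expressed in a head-centred frame by a rigid-body change of coordinates. All the pose values below are made up.

import numpy as np

def rotation_about_z(angle_rad):
    """3x3 rotation matrix for a head turn about the vertical (z) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# A fixed point in world coordinates (metres); values are arbitrary.
point_world = np.array([2.0, 1.0, 0.5])

# Hypothetical head pose: position in the world and yaw angle.
head_position = np.array([0.5, 0.0, 0.0])
head_yaw = np.deg2rad(30.0)

# Express the same point in head-centred coordinates:
# subtract the head position, then undo the head rotation.
R = rotation_about_z(head_yaw)
point_head = R.T @ (point_world - head_position)

print("point in head-centred frame:", point_head)

The question raised at the meeting was whether the layer-to-layer transformations learned by a deep network could be playing a role analogous to this kind of explicit coordinate-frame change, not whether they are literally of this form.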