In Chapter 5, Shea talks about structural correspondence between elements of a representation and the thing that is represented, using place cells as an example. He also discusses Churchland's idea that proximity in a high-dimensional space can indicate similarity between the things that are represented. In a recent paper, we explore the structural correspondence between navigated space and the high-dimensional representation generated by an agent learning to navigate in that space.
The key take-home message of the paper is that the structure of the representation is dominated by the task (the goal image), then by camera orientation, and much less by information about spatial location. The other representation we explore (right-hand side) has a much more direct structural correspondence to space in the real world: brown colours in the t-SNE plot correspond to brown colours of camera locations in the scene. Poster version of Muryy et al.: File:MuryyPoster.pdf.
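One simple way to quantify the kind of structural correspondence discussed here is to compare pairwise distances between camera locations in the scene with pairwise distances between the corresponding points in the agent's high-dimensional representation. The sketch below is illustrative only, not from the paper: the locations, embeddings, and the linear map `W` are all hypothetical stand-ins, and the embedding is constructed so that some correspondence to space exists by design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50 camera locations in a 2-D scene, each paired
# with a 64-D embedding (a stand-in for an agent's hidden state).
locations = rng.uniform(size=(50, 2))

# Construct the embedding as a noisy linear map of location, so some
# structural correspondence to physical space exists by construction.
W = rng.normal(size=(2, 64))
embeddings = locations @ W + 0.1 * rng.normal(size=(50, 64))

def pairwise_dists(X):
    # Euclidean distance between every pair of rows of X.
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

# One crude index of structural correspondence: the correlation between
# pairwise distances in physical space and in the embedding space.
d_space = pairwise_dists(locations)
d_embed = pairwise_dists(embeddings)
iu = np.triu_indices(50, k=1)          # upper triangle, excluding diagonal
corr = np.corrcoef(d_space[iu], d_embed[iu])[0, 1]
print(f"distance correlation: {corr:.2f}")
```

A high correlation means that points nearby in the scene are also nearby in the representation, i.e. the embedding preserves the structure of navigated space; a representation dominated instead by task or camera orientation would show a much weaker correlation with spatial location.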
Back to notes on Shea (2018).
- Zhu, Y., Mottaghi, R., Kolve, E., Lim, J. J., Gupta, A., Fei-Fei, L., & Farhadi, A. (2017). Target-driven visual navigation in indoor scenes using deep reinforcement learning. In 2017 IEEE international conference on robotics and automation (ICRA) (pp. 3357-3364).
- Glennerster, A., Hansard, M. E., & Fitzgibbon, A. W. (2001). Fixation could simplify, not complicate, the interpretation of retinal flow. Vision Research, 41(6), 815-834.
- Shea, N. (2018). Representation in cognitive science. Oxford University Press.
- Churchland, P. M. (2012). Plato's camera: How the physical brain captures a landscape of abstract universals. MIT press.
- Muryy, A., Siddharth, N., Nardelli, N., Glennerster, A., & Torr, P. H. (2019). Lessons from reinforcement learning for biological representations of space. arXiv preprint arXiv:1912.06615. (Vision Research, in press)