Talk:Is Shea asking the right question?

From A conversation about the brain

Comments from Nick:

Wrong question objection.

'What is special about the activity of certain neurons when they are involved in representing something out in the world?' I actually think this question can be answered at more than one level. At the level at which you're dealing with it, it's a matter of working out how the brain computes and performs various tasks (e.g. ones that depend on vision and acting in a 3D world). One example of that question: does the brain use a rate code or a phase code for a certain computation? Another example: how does it manage to compute normalization?
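For concreteness, here is a minimal sketch of the kind of computation the normalization question is about, using the standard divisive-normalization form in which pooled population activity divides each unit's own drive. The function and parameter names are illustrative assumptions, not anything specified in the discussion.

 import numpy as np
 
 def divisive_normalization(drive, sigma=1.0, n=2.0):
     # Standard divisive-normalization form: each unit's driving input,
     # raised to a power, is divided by a constant plus the pooled
     # (summed) activity of the whole population. Parameter names are
     # illustrative, not taken from the discussion above.
     drive = np.asarray(drive, dtype=float)
     pooled = sigma ** n + np.sum(drive ** n)
     return drive ** n / pooled
 
 # The same driving input is suppressed more when the rest of the
 # population is also active -- the signature of normalization.
 print(divisive_normalization([10.0, 0.0, 0.0]))
 print(divisive_normalization([10.0, 8.0, 8.0]))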

I agree that your question is really interesting. But it's not my question. In fact, my project depends on being able to draw on answers to your question. I'm asking (along with a tradition in philosophy, as well as in some more theoretical corners of computational and cognitive science): what special properties does the activity of certain neurons have to have to make it the case that they are representing something out in the world? It's in answering that question that task functions and exploitable relations come into the picture.

So I join with my philosophical colleagues in thinking that's an interesting question. Where I part company with many is in seeing a connection between these two questions. I think the theoretical question has to be answered in the light of the empirical one. Conversely, I think some (very general) constraints on answers to the empirical question derive from having the right answer to the philosophical question. E.g. you shouldn't just look at receptivity; it's important how states are sorted into vehicles (e.g. the puzzle about the single neuron which is a difference-maker for behaviour); vehicles are individuated functionally, in part relative to downstream processing; the function of an internal algorithm is crucially important; etc.

The two questions are quite some distance from one another, so there are no simple transfers from one to the other. But I do see a productive process of co-constraint, a dialogue that can fruitfully go both ways. And even if I'm wrong about that, I do think the philosophical theorising has to listen to - to be carried out in the light of answers to - your question.


Reply from AG: In the end, I realise I don't understand. "What special properties does the activity of certain neurons have to have to make it the case that they are representing something out in the world?" It sounds so close to the neuroscientist's question and yet it is actually very distant. Neurons that are functionally identical but belong to a swamp creature (a molecule-for-molecule duplicate that comes into existence by chance, with no history) do not have the 'special properties' that the philosopher is interested in. So the questions really are very different, and the relationship between the two remains baffling to me (at least, in the direction of philosophy informing the neuroscientific debate).