Talk:Swamp creatures


Emma Borg responded:

Here are some options:

  • 1. Irrealism – intentional mental states are illusions of some kind: we don’t really ‘believe p’ or ‘desire q’. Taking someone else to be in one of these states is a kind of intentional stance, explanatory but not reflecting reality.
  • 2. Internalism – the only sense to be made of representation is in terms of neural weights/rates, and these can remain constant across relevant external changes. So, on the only sense to be made of representation in the brain, the brain can’t be representing those external states (see the sketch after this list).
  • 3. Naturalistic externalism – neural states have content via more than just weights/rates: also via realising task functions (where this requires things like correlation of states and isomorphism of structures/processing, so the external world gets in).
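
To make the premise of option 2 concrete (and the worry quoted as problem 3 below), here is a minimal sketch in Python/numpy. It is my own toy illustration, not anything from the book or the discussion: two ‘brains’ that share the same weights necessarily produce the same firing rates for the same stimulus, so nothing in the weights/rates alone could distinguish what, if anything, the two are representing.

  # Minimal toy illustration (not from the discussion): two "brains" with
  # identical weights compute identically, so nothing intrinsic to the
  # weights/rates distinguishes what they are representing.
  import numpy as np

  rng = np.random.default_rng(0)

  # One set of synaptic weights for a toy two-layer network.
  W1 = rng.normal(size=(4, 8))
  W2 = rng.normal(size=(8, 2))

  def firing_rates(stimulus, w1, w2):
      """Feedforward pass: stimulus -> hidden 'rates' -> output 'rates'."""
      hidden = np.tanh(stimulus @ w1)
      return np.tanh(hidden @ w2)

  # The 'original' and the 'swamp duplicate' share the exact same intrinsic state.
  stimulus = rng.normal(size=(1, 4))
  original = firing_rates(stimulus, W1, W2)
  swamp_duplicate = firing_rates(stimulus, W1.copy(), W2.copy())

  # Identical intrinsic properties give identical activity, bit for bit.
  assert np.array_equal(original, swamp_duplicate)

Any difference in what the two are ‘about’ would have to come from their relational or historical properties, which is exactly where options 2 and 3 part company.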

Problems:

  • 1. Seems to fly in the face of what we know about ourselves.
  • 2. The most successful language of cognitive science is not internalist but externalist.
  • 3. “It is very hard to see how two brains identical in terms of weights/rates could represent things any different from one another”.

The objection to 3 is of course right, but while I think AG means this as a knock-down objection, Shea takes it as a genuine challenge that he’s trying to answer in the book…

Reply from AG: I am clearly in camp 2 (internalist). I am really, really struggling to understand the jump between "weights/rates can remain constant across relevant external changes" (true) and "So...the brain can’t be representing these external states". I guess this will become clear as we discuss more. I think this debate will touch on the really interesting topic of transfer learning (as the reinforcement learning people call it, i.e. "relevant external changes") and the hierarchical structure of knowledge/representation that Anastasia talked about, but I am not sure yet.
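
A rough illustration of the transfer-learning angle, again just my own toy sketch (a frozen-features regression setup, not the reinforcement-learning formulation AG has in mind): the internal ‘representation’ stays constant while the task changes, and only a readout on top of it is re-fit, which is one simple way ‘relevant external changes’ and the hierarchical structure of representation can come apart.

  # Toy sketch of transfer across a task change with a fixed internal
  # representation: only the readout weights are re-fit for the new task.
  import numpy as np

  rng = np.random.default_rng(1)

  # A fixed random "representation": stimulus -> hidden features.
  W_rep = rng.normal(size=(4, 16))

  def features(x):
      """Frozen internal representation; unchanged across tasks."""
      return np.tanh(x @ W_rep)

  def fit_readout(X, y):
      """Least-squares readout trained on top of the frozen features."""
      return np.linalg.lstsq(features(X), y, rcond=None)[0]

  X = rng.normal(size=(200, 4))
  task_a = X @ np.array([1.0, -1.0, 0.5, 0.0])   # original environment
  task_b = X @ np.array([-0.5, 0.0, 1.0, 1.0])   # a "relevant external change"

  w_a = fit_readout(X, task_a)
  w_b = fit_readout(X, task_b)

  # W_rep is untouched; only the task-specific readouts differ.
  print("task A error:", np.mean((features(X) @ w_a - task_a) ** 2))
  print("task B error:", np.mean((features(X) @ w_b - task_b) ** 2))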

Comments from Nick Shea:

Swampman.

I agree that this isn't at all a realistic or practical problem. I also share the empirical scientist's impatience at spending any time at all on this kind of problem. It's not something that's going to arise in practice. For practical purposes, the kinds of things that count as representations based on their intrinsic properties (synaptic weights and firing patterns) will definitely have the kind of history which will make them count as representations according to my theory (agreeing with your point (vi)).

Nevertheless, the philosophical literature discusses this issue a lot, so I felt I had to engage with it. It points up a difference which does tell us something about the nature of representation, just not something that is relevant to any practical project. Psychologists, neuroscientists and many philosophers share the strong intuition that representation is internalist. It only depends on synaptic weights and firing rates (perhaps also other intrinsic properties of the brain, e.g. if glia turn out to perform computations). But that raises some puzzles about the kinds of explanations we typically give when we rely on representational content.

The contrast here is relational or externalist properties. Science relies on lots of these too, e.g. being a certain distance away. That property need not be shared by intrinsic duplicates. We can imagine two asteroids that are absolutely identical in every particular, molecule by molecule, but one is 100 km further away from the sun than the other. As a result of this difference in relational properties, different things will happen to them. Another example is being a parent (e.g. in biology/ecology): we can have intrinsic duplicates, one of whom has descendants and the other does not. A different example is being a £1 coin. There could be an absolutely perfect counterfeit, identical down to every molecule, but if it wasn't made by the Royal Mint it's not a £1 coin.

The philosophical question here is: is representation based only on intrinsic properties (like synaptic weights) or does it also depend on relational properties (like being a £1 coin)? That's not the kind of question that's going to matter for any cognitive science project. But if we're interested in the higher level (theoretical / philosophical) question of understanding how representational explanation works, then I think it is revealing to see that content is actually based on extrinsic properties. (Only in part, of course - I think it depends on vehicle properties too.)

Reply from AG: This is really helpful. As you make clear, all the neuroscientific questions would apply equally well to the real or counterfeit brain (or £1 coin). I have no idea (by definition) whether I am a swamp creature or not but I am equally interested, in either case, in how my neurons give rise to that thought.