Tuesday 1 March 2011

Chemero (2009), Chapter 3: Theories of Representation

The radical part of 'radical embodied cognitive science' is anti-representationalism. Simply put, the claim is that the brain does not internally represent states of the world in any way, nor are such representations the basis for our experience. It's radical because, to a typical cognitive scientist, the mere suggestion is ludicrous: like suggesting you do biology without genes or particle physics without a Large Hadron Collider. But this indignation is hiding a dirty little secret: mental representations are generally just assumed to exist, because of assumptions about the poverty of stimulus we supposedly face. They are theoretical entities designed to solve a problem which, frankly, might not exist. There has been a lot of theoretical work on what it means to be a representation, but most of it has occurred in philosophy, and most modern experimental cognitive psychologists never engage with the issue. Sabrina has talked about this a bit already here and here and here, and it's likely to keep cropping up.

Chemero's third chapter lays out some basics about what a theory of representation has to look like. If you're going to be anti-representational you need something to be against that isn't a straw man. This is unfortunately difficult because the concept of representation is a moving target: if I had a dollar for every conversation I've had that included the phrase 'Oh, that's not what I mean by representation' (even when it really is), I wouldn't need to submit the grant I should be working on just now. This chapter is therefore going to look for a 'minimal case' of representation that doesn't bug too many people but that isn't too easy a target.

Strap in, this chapter's busy.

Some definitions
A representation is a theoretical entity: representations are not directly observed, but are postulated to exist in order to explain the behaviour of a system. To function as parts of a theory of behaviour, they have to have certain properties attributed to them that can be supported empirically. Specifically, in cognitive science, representations bear information about the environment, and the function of a given representation reflects its specific informational content.

Discussion about what counts as a true representation revolves around these terms:
A representation R and its target T are in constant causal contact just in case whenever R is present in a system, T is causing it.
A representation R is decouplable just in case it can at least sometimes perform its function in a system when it is not in constant causal contact with its target T.
A target T is absent just in case T has no local causal effects when a representation R of it is present in a system.
Chemero, 2009, p. 48
Broadly speaking, an internal state is a representation if it stands in for some property of the external environment (the target). Then the question is: to what degree is the representation decouplable from the thing it represents? If you can have the representation when the target is absent, then they are decouplable; if you can only have the representation when the target is present, then they are not. This matters because representations are generally invoked to explain how you can achieve some behaviour in the absence of the information presumed to be required for that behaviour, so everyone assumes that representations need to be at least weakly decouplable. The flip side is that, in order to remain functional, the representation needs to stay about the target, which might change; so there must be the possibility of a flow of information between representation and target.

A traditional view of representation
Chemero describes a fairly traditional view of representation derived from Ruth Millikan (Millikan, 1984). He also talks briefly about Markman & Dietrich (2000a, b; Dietrich & Markman, 2003, which Sabrina has tackled here) as basically a liberal example of this kind of representation:
A feature R0 of a system S is a Representation for S if and only if:
(R1) R0 stands between a representation producer P and a representation consumer C that have been standardized to fit one another.
(R2) R0 has as its function to adapt the representation consumer C to some aspect A0 of the environment, in particular by leading S to behave appropriately with respect to A0, even when A0 is not the case.
(R3) There are (in addition to R0) transformations of R0, R1...Rn, that have as their function to adapt the representation consumer C to corresponding transformations of A0...An.
Chemero, 2009, p. 51
Simply put, something is a representation if it aligns the 'consumer' with some key aspect of the environment. This alignment must allow the system to behave in the appropriate manner for that environmental state of affairs, and there must be a representation for each environmental aspect of interest to the system. The alignment must also work in principle when the aspect is not available; there must be some decoupling possible, although it should also work when causally connected to the target.

Some variations: Registration, effective tracking and emulation
Philosopher Brian Cantwell Smith (1996) proposes that for something to be a representation, it must be (potentially) entirely decouplable from its target; he calls this registration. If this decoupling is not even in principle possible, then you are merely engaged in effective tracking. His example is a frog tracking a fly: the frog is engaged in tracking if its knowledge of the fly's position depends on being in causal contact, and in registration (representation) if and only if that knowledge can survive the loss of that causal contact (if, say, the fly briefly goes out of sight, does the frog 'know' it's the same fly when it reacquires it?).

Smith is about the only person to subscribe to this strict view, but the basic idea (that representations allow you to 'think about things in their absence') is still critical. If you want to support a weaker decoupling but preserve the functional character of the representation (by allowing it some contact with its target), then the next move is emulation (Grush, 1997, 2004; Clark & Grush, 1999). For Grush, the job of a representation is to emulate what happens when you are in causal contact ('presentation'), if and when you lose that contact; the representation is a forward model of a system, which takes 'state-at-time-t' as input and returns 'state-at-time-t+1' as output. These become useful when, for example, things are moving too fast to maintain causal contact (e.g. reaching for an object occurs faster than feedback can be provided, claims Grush).
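
To make the forward-model structure concrete, here is a minimal sketch (mine, not Grush's; the linear dynamics matrix A is a purely hypothetical stand-in for whatever the emulator has learned about, say, the arm). The point is only the input/output shape: current state in, predicted next state out, iterated while real feedback is delayed.

```python
import numpy as np

DT = 0.01  # time step (s); arbitrary illustrative value

# Hypothetical linear dynamics for a 2D state [position, velocity]:
# position advances by velocity * DT, velocity stays constant. This matrix
# stands in for whatever the emulator has learned; it is not Grush's model.
A = np.array([[1.0, DT],
              [0.0, 1.0]])

def emulator_step(state):
    """Forward model: map state-at-time-t to a predicted state-at-time-t+1."""
    return A @ state

# While real sensory feedback is delayed, the controller can iterate the
# emulator to fill in the missing 'presentations'.
state = np.array([0.0, 1.0])   # start at position 0 with velocity 1
for _ in range(10):            # e.g. 10 time steps of feedback delay
    state = emulator_step(state)

print(state)                   # the emulator's best guess at the current state
```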

Regardless of the specific flavour you endorse, a representation's job is to carry information that affects the behaviour of a system by standing in for its target when that target is absent. That absence can be caused by an inability to maintain causal contact (because, for instance, the feedback loop is too slow) or by the target simply not being present in the information flow (as in the case of the retinal image, for instance).

Coupled Oscillator Models
The other thing going on in this chapter is that Chemero introduces the idea of coupled oscillator models. These show up in perception/action dynamical systems work all the time (e.g. Bingham's model of coordinated rhythmic movement) and they are also, apparently, candidate model systems for various representational accounts of things. Oscillators come in two basic flavours in the cognitive sciences: relaxation oscillators, which accumulate voltage and then fire all at once (like neurons), and mass-spring oscillators, based on physical systems which contain, well, a mass on a spring (like Bingham's model, and as illustrated in this post). Because these have mass, they have inertia and preferred frequencies; these are the intrinsic dynamics (which HKB model people talk about all the time). They also have momentum; that is, they can continue to move for a time after they have lost the immediate causal coupling to the target event driving them. You can hybridise these as adaptive oscillators, but these are not all that biologically plausible. The reason Chemero discusses these models is so he can talk about theories of representation in terms he can relate to the dynamics coming up.
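
As a rough illustration of that last point about momentum (my sketch, not Bingham's or Chemero's model; all parameter values are arbitrary), here is a damped mass-spring oscillator driven by an external signal that is switched off halfway through the run. Because the mass has inertia, the oscillator keeps moving at its intrinsic frequency for a while after the causal coupling to the drive is lost:

```python
# A damped, driven mass-spring oscillator integrated with simple Euler steps.
# Parameter values are arbitrary illustrations, not those of any published model.
import math

m, k, b = 1.0, 10.0, 0.2          # mass, stiffness, damping
omega_drive = 2.0                 # frequency of the external driving signal
dt, n_steps = 0.01, 2000
t_off = n_steps * dt / 2          # the drive is switched off halfway through

x, v = 0.0, 0.0                   # position and velocity
trajectory = []
for i in range(n_steps):
    t = i * dt
    drive = math.cos(omega_drive * t) if t < t_off else 0.0
    a = (drive - k * x - b * v) / m   # Newton: m*a = F_drive - k*x - b*v
    v += a * dt
    x += v * dt
    trajectory.append(x)

# After t_off the oscillator has lost causal contact with the drive, but its
# inertia keeps it moving (at its intrinsic frequency) while the motion decays.
print(max(abs(x) for x in trajectory[n_steps // 2:]))
```

The decaying oscillation after the drive is removed is, roughly, the dynamical analogue of weak decouplability: the system's state goes on reflecting the drive for a while without being in causal contact with it.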

What kind of representation should a radical cognitive science reject?
The question is, what flavour of representation can serve as a legitimate 'base case' and a target for RECS? In dynamics terms:
  • traditional representational systems (cf. Millikan) can be implemented with relaxation oscillators (a minimal relaxation oscillator sketch follows this list);
  • emulators can be implemented as relaxation oscillators with a time delay (making them more traditional than, say, Grush wants to suggest);
  • systems supporting strong decouplability require hybrid adaptive oscillators; these are properly different theories of representation but are arguably too restrictive (they rule out candidates for representation that psychologists would like to keep).
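
For concreteness, here is a minimal relaxation oscillator sketch (a leaky integrate-and-fire unit; my illustration with made-up parameters, not anything from Chemero). It accumulates its input, fires when it crosses a threshold, and resets; units like this are the sort of component the first two options above would be built from, with the emulator case adding a time delay on the output.

```python
# A minimal relaxation oscillator: a leaky integrate-and-fire unit that
# accumulates its input, fires all at once when it crosses a threshold,
# and then resets. Parameter values are illustrative only.
threshold, leak, dt = 1.0, 0.1, 0.01
input_current = 0.5

v = 0.0
spike_times = []
for i in range(5000):
    t = i * dt
    v += (input_current - leak * v) * dt   # accumulate, with a leak
    if v >= threshold:                     # fire all at once...
        spike_times.append(round(t, 2))
        v = 0.0                            # ...then reset

print(spike_times[:5])   # regular firing: the oscillator's cycle
```
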
Chemero decides not to pick a fight with either emulation or registration. The former is essentially the traditional form, and was designed to represent parts of agents (e.g. arms during reaching), which leaves unanswered the question of how a representation could come to have content about the environment. The latter is simply too strict (as evidenced by the need for biologically implausible adaptive oscillator models), and there are many cases in cognitive science that people would like to include as representational but that are, for Smith, examples of mere effective tracking.

The best candidate 'base case' of representation for RECS to kick against, then, is the widely accepted traditional view.

Some thoughts
So far, so good. This is more useful background, and it's important to be clear about what you are going after when you're chasing representations. We get into the meat of the book next time, with the dynamical stance followed by the version of ecological psychology Chemero will advocate.

References
Clark, A., and R. Grush (1999). Towards a cognitive robotics. Adaptive Behavior, 7, 5-16.

Dietrich, E., and A. Markman (2003). Discrete thoughts: Why cognition must use discrete representations. Mind and Language, 18, 95-119.

Grush, R. (1997). The architecture of representation. Philosophical Psychology, 10, 5-24.

Grush, R. (2004). The emulation theory of representation: Motor control, imagery, and perception. Behavioral and Brain Sciences, 27, 377-442.

Markman, A. B., and E. Dietrich (2000a). In defense of representation. Cognitive Psychology, 40, 138-171.

Markman, A. B., and E. Dietrich (2000b). Extending the classical view of representation. Trends in Cognitive Sciences, 4, 70-75.

Millikan, R. (1984). Language, Thought, and Other Biological Categories. Cambridge, Mass.: MIT Press.

Smith, B. C. (1996). On the Origin of Objects. Cambridge, Mass.: MIT Press.
