Phil 473/673: Fodor and Twin Earth
- Dretske's solution: Information theory. Anthony presenting.
- Chapter 3 of Explaining Behavior: "Representational Systems"
- The general project: To explain how reasons cause behaviors.
Why is this mysterious?
- Step 1: Notice that there's a difference between 'going to the kitchen'
and a detailed neurophysiological account of the musculature changes that get
you there: a difference between the behavior and the bodily changes that constitute it.
- Step 2: How do reasons count as explanations? To know this, we need to
understand how beliefs, as mental representations, are special. Prima facie,
they are special because their causal role is determined by their content.
- A representational system (RS) is a system that indicates the state of some
other object or condition
- Type I representations: Conventional systems of representation
- No intrinsic powers of representation: basketball example, where arbitrary
objects represent players only by stipulation
- Both their power to perform as an RS and their function (what they
are supposed to represent) are derivative
- The representational elements are symbols
- Type II representations: Natural signs
- RSs that indicate what they do because of their place in nature (tracks,
rings, etc.)
- They have an objective (causal) relation that we exploit
for our representational purposes
- Examples: measuring instruments, thermostats, tree rings, bits of the
world in reliable causal relations
- Grice: natural meaning (indication) vs. non-natural meaning (representation);
nothing can indicate P if P is not true (unlike non-natural meaning). There
is no natural misinformation
- Strong claims?: broken clocks indicate nothing at all; if a dependency
doesn't exist there's no indication (he's just explaining the technical
notion of information here, really).
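The "no misinformation" point can be sketched as a toy model (my gloss, not Dretske's own formalism): treat a signal as indicating a state only when the state obtains in every case covered by the dependency, i.e. the conditional probability of the state given the signal is 1.

```python
# Toy model (my gloss, not Dretske's formalism): a signal s indicates
# a state P only if P holds in every observed case where s occurs,
# i.e. P(P | s) = 1. Anything less is not indication -- hence no
# natural misinformation.

def indicates(observations, signal, state):
    """True iff `state` holds in every observation where `signal` occurs."""
    relevant = [obs for obs in observations if obs["signal"] == signal]
    if not relevant:
        return False  # no dependency at all (the broken-clock case)
    return all(obs["state"] == state for obs in relevant)

# Tracks in the snow: every occurrence of these tracks co-occurs with a quail.
obs = [
    {"signal": "tracks", "state": "quail"},
    {"signal": "tracks", "state": "quail"},
    {"signal": "no-tracks", "state": "none"},
]
print(indicates(obs, "tracks", "quail"))     # perfect dependency: indication
print(indicates(obs, "tracks", "pheasant"))  # no: tracks co-occur with quail only
```

On this toy construal a "broken clock" signal, which bears no dependency to any state, indicates nothing at all, matching the strong claim above.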
- Consider the quail example in more detail; he gets it sort of wrong
(the tracks do indicate that a quail or pheasant was there, since they
limit the possible set of things that were there... to this extent
they indicate that a quail was there). Same with the picture example
- You don't need physical or natural laws (Fodor's 'basic' laws) to
set up these correlations, but you need some law-like conditions (special
science-like?) that explain the correlation.
- Actual representation in type II systems exploits the natural indication
and we then assign it a function (making it representational)
- Example: a gas tank gauge representing force on bolts or amount of fuel...
it indicates both, but has the function of representing only one.
- Difference between Types I and II? (Type I gets its function first, and
then I make it indicate appropriately. Type II indicates already, and I
exploit that indication in assigning a function.)
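The gauge case can be sketched as a toy contrast (my construal; the quantities and numbers are invented for illustration) between what a reading indicates, namely everything it reliably co-varies with, and what it has the function of representing, the one thing we assign it:

```python
# Toy sketch of the gauge example (my construal; numbers are invented).
# The needle angle reliably co-varies with both the fuel level and the
# force on the mounting bolts, so it *indicates* both; the assigned
# *function* (what makes it a Type II representation) picks out one.

def fuel_from_angle(angle_deg):
    """Recover fuel level (liters) from the needle angle."""
    return angle_deg / 2.0

def bolt_force_from_angle(angle_deg):
    """Recover bolt force (newtons) from the same angle: equally indicated."""
    return 5.0 + fuel_from_angle(angle_deg) * 0.3

reading = 40.0  # needle angle in degrees

# Both quantities are recoverable from the reading, so both are indicated.
indicated = {
    "fuel": fuel_from_angle(reading),
    "bolt_force": bolt_force_from_angle(reading),
}

# But the gauge's function -- assigned by us -- is to represent only fuel.
FUNCTION = "fuel"
```

The point of the sketch: indication is symmetric across every reliably correlated quantity, while representation requires the extra step of function assignment.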
- Type III representations: Natural systems of representation
- Have their own intrinsic indicator functions
- For example, functions can be determined by natural selection; biological
functions (hearts, kidneys, etc.) are discovered by us, not assigned
- Example: bacteria with magnetosome representation
- However, it is not (usually?) evolutionary development but individual
development (learning) that matters for underwriting representational function
- Why would he shy away from evolutionary development? (Not precise/fine-grained
enough?)
- Misrepresentation: the capacity to represent something as being so when it is not.
- Why this (negative) aspect of representation? Telling the truth isn't
a virtue if you cannot lie.
- Type III systems uniquely have original/intrinsic misrepresentation,
hence in them we find a source of intentionality
- Functional indeterminacy eliminates error, so it is essential to be able
to state the function of an RS in order for it to be Type III
- Reference (aboutness) and sense (intensionality with an 's'):
- Or topic and comment
- topic: the thing the representation says something of
- comment: what it says about that thing (content)
- Reference is usually causal: looking at the representation itself won't
tell you what it's connected to (which gas tank, e.g.)
- Or is it? De re vs. de dicto reference. De dicto is weird: (according
to D) 'the fourth person is wearing a funny hat' is true iff the 4th
person was wearing a funny hat, regardless of whether John, who you
represent as the 4th person (but who wasn't), was wearing a funny hat.
- The basic intent is to show that you can separate the reference
of the representation from the object causally responsible for it (even
better are predictive representations...)
- very fine-grained (can pick out one of two co-extensional properties);
which is why it is intensional
- this fine-grainedness is underwritten by function; so that is where
all the interest lies in subsequent chapters
- General approach:
- A. There is a Twin Earth problem. The moral is that attitudes
are relationally individuated (i.e., 'wide').
- B. Relational individuation seems to violate supervenience,
because then all physical states can be identical
while the attitudes differ.
- C. Furthermore, all scientific explanation
relies on non-relational individuation, because scientific explanations
are causal, and causal powers don't differ between the relationally-distinct
(Twin Earth) cases.
- D. So for scientific psychological explanations, we need a non-relational
notion of individuation of mental states.
- E. So scientific and
commonsense psychological individuations are different (the former 'narrow',
the latter 'wide').
- F. So is commonsense content relevant? Notably, content
won't determine extension if we go with scientific psychology, and it
seems we won't be able to individuate content as we did before!