Phil 473/673: Content and Representation
- Remaining readings are available (except Harman from week
7; Ryder (2004) from week 11)
- Content can be attributed to events, states or processes (all called 'states'),
and involves reference to objects, properties, or relations.
- Content specifies properties of the referent. What does Peacocke's "highly
generic characterization of content" assume? (It is, btw, "there always exists
a specific condition for a state with content to refer to certain things."
What about non-existent things? How specific is the condition? What about things
you can't in principle know the conditions for?)
- Frege content (senses/mediated/mode of presentation) vs. Russell content
- Ascribing contents: Will rationality help (intentional stance (Dennett) /principle
of charity (Davidson))? But, couldn't there just be something that accidentally
is interpretable as rational (some random number generator)? If so, don't we
need to introduce some connection, e.g. cause?
- What explanation can be given regarding the ascription of content as rational?
(Note the emphasis on linguistic content)
- McDowell: rational intelligibility depends on us. Subjectivist.
- Alternatively, posit more fundamental concept-possession conditions, which constrain
possible interpretations (then assign references to concepts s.t.
rational transitions are truth-preserving, which explains why transitions
are correct). Objectivist.
- Observational/perceptual contents seem to have a different status, but still
count as content
- Teleological theories (referring to natural/proper function) may not pick
out the 'right' (knowable) contents.
- Many conceptual contents are naturally understood as sentence-like. But it
isn't obvious that all content is conceptual.
- Non-conceptual/perceptual content: Perhaps specifiable as a spatial type
(referring to agent-centered representations of surfaces, etc.), which won't have
sentential structure. Critics would note that there is nothing in those types
that can't be captured by conceptual 'that' clauses. Reply: those 'concepts'
can't be elucidated without reference to the types.
- Strong (Evans, McDowell): thinking about 'that pear' will be a 'different
way of thinking' if the pear changes
- Weaker (Peacocke): It's the same way of thinking, if 'way' is identified
with, not the object, but a collective of circumstances (distances, directions, etc.)
- Strong dependence (i.e. failure of supervenience of content on internal states),
championed by Putnam and Burge, seems to fail for perceptual content.
That doesn't refute externalism, but rather shows the supervenience claim to be
too strong. The 'positive' view, i.e. that content *does* rely on external
states (not that it fails to supervene on internal ones), is weaker.
- Fodor (1981): 'methodological solipsism' is essential for explanation,
since molecular doppelgangers must have the same causal powers.
- Reply: But content presupposes a background, so changing the background
(regardless of molecular identity) can change content. This is common in science.
- Against externalism: explaining behavior that employs the concept water
will refer to a mental state that refers to an externally individuated
substance 'water'; this seems too a priori... i.e. why couldn't the same
concept refer to something else (determined through experimentation)?
Replies: 1. explanations involving concepts will also explain the relations
that underwrite that reference; 2. it's OK for some concepts to do that.
- If anything, internalists can't explain behavior well. e.g. walking
towards the cinema, seems to invoke external relations in a description
of the behavior. They can pile on supplemental relations of internal
states to narrow down the possible external relations, but this ends
up being equivalent.
- Neuro: The internalist problem is also a problem for reducing content
explanation to neurophysiology (CE: only if you don't think neural states have
external content, of course; which Peacocke seems to assume when he says retinal
states aren't individuated externally).
- Remaining challenges: If content is externally individuated, how can we know
our own contents (mean what we mean to mean)? Similarly, what sense is there
to be made of externally individuated conscious states? And how do we ascribe
contents to others, not knowing all the external relations?
- The problem: A causal/explanatory role for beliefs (which have contents)
is assumed everywhere, but the contents themselves can be causally unconnected
to the thinker. Contents, then, seem extraneous. Something must be done!
- Interpretational semantics (i.e. isomorphisms between a mechanism and an
assignment of meanings) won't do for people, because isomorphisms are too
cheap. That's fine for machines, since they don't have non-derivative
intentionality, but it's not fine for people.
- Causal semantics:
- Misrepresentation problem/disjunction problem
- Dretske's learning account fails for the usual 3 reasons
- Fodor's asymmetric dependence fails because it's uninformative
- Also, causal theories (like verificationism and rational interpretation)
take it that truth conditions are the conditions under which beliefs are held.
So, beliefs are usually true.
- Teleological semantics
- item F has purpose G iff it is now present and a result of selection
that favoured items with G
- learning is a kind of natural selection
- swampman is a problem, but has three responses
- independent reasons to think narrow contents aren't enough for
representational content
- non-historical accounts of purpose could be available (but there
are no good ones)
- the problem is 'merely' an intuitive conceptual problem. we're redefining
a better notion of content
- it fails to solve the disjunction problem (e.g. flies and black dots),
- beliefs are more relevant to results than causes
- it's a fly-belief because it has advantageous results
- frequency of presentation of flies and black dots is irrelevant,
so long as it is flies (not black dots) that are advantageous (what
about different kinds of flies, if more than one thing is advantageous?
Is teleology fine-grained enough?)
- merely embraces the fact that beliefs don't usually have to be true
- Success semantics is needed to supplement teleology, to explain selectively
advantageous beliefs that are not true (so you can't equate truth conditions
with the conditions under which beliefs serve biological purposes; e.g. not
expecting to die in battle).
- Success semantics: truth conditions are conditions under which beliefs
lead to the satisfaction of desires (then you can have 'secondary' purposes
(making people fight) and it won't harm your ascription of belief; it's
independent of truth).