Phil 473/673: Fodor and Twin Earth
Administrivia
- Update on
who is presenting when
Representation (Schwartz, 1994)
- Kinds of representations: maps, gauges, swatches, words, sculpture, music
- What is a representation: Something that 'stands for' or 're-presents' something
else
- How can you taxonomize the kinds? Peirce (semiotics): Icons (resemblance repn),
Indices (causal dependency), Symbols (arbitrary 'use' relation). Symbols are
sometimes taken to indicate mindfulness
- Goodman proposed a different, finer categorization. Many of these kinds
aren't easy to analyze with standard truth theories.
- reference can be denotation
(converse of possession, 'blue') or exemplification (swatches)
- symbols belong to symbol schemes (sets of symbols) that have a realm of
reference; metaphor is cross-scheme/realm symbol use
- many 'routes' to reference, some are chains or complexes (e.g. a beaver picture
denoting Canada by denoting an animal that is industrious, which it exemplifies)
- symbol systems have schemes (syntactic elements) and rules (grammar),
but also semantic rules (i.e. rules that determine what the symbols refer to)
- In particular, icons have been challenged because 1. resemblance doesn't necessitate
repn (e.g. twins resemble each other without representing each other) and 2. non-iconic
symbols can be isomorphic to what they represent. Both points reflect the fact that
all repn seems to demand a system of interpretation
Intentionality (Schwartz, 1994)
- strange relation (because it can have non-existing relata); resists physicalist
characterization
- limited to humans and maybe a few other animals
- using mental states, we get two weird theories: 1. a symbol refers to mental
states (but then what about the world?) 2. a symbol refers to whatever the mental
state does (but that just moves the problem)
- behaviorist analyses don't work well (e.g. 1. the person behaves toward the symbol
as toward the referent, or 2. the symbol causes behavior appropriate to the referent);
in both cases 'disposed to behave as appropriate' is now just as much a problem, and
there is also little correlation between symbol-elicited and referent-elicited behaviors
- More recent are causal theories (more closely related to indices), e.g. Dretske's. But
everyone knows causation isn't enough (recall why?). Appealing to functions means
'function' takes all the weight. Also hard to handle abstract symbols/relations.
Contemporary Concerns (Schwartz, 1994)
- Cog sci gave rise to cognitivism which talks of repn all the time.
- Qs: 1. What repn are used in cognition 2. What type of computer is a cog system
- Standard/first answers: 1. language-like 2. digital
- What does digital mean? In short, it's a red herring because the digital/analog
distinction is irrelevant to intentionality
- Imagery: Nothing new, really. Mostly concerns about icons moved inside the
head. Still have to explain why an icon/image is referential.
- Ends with a short rant, with which I heartily agree: 1. Representation can't be
divided into two kinds; there are many. 2. Refusing that division challenges many
assumptions (basically everything Fodor says :) )
Naturalizing meaning (Fodor, 1987)
- To be a realist about intentionality seems like you have to be a reductionist
(if it's real, it's really about something else)
- The meaning of attitudes comes from the meaning of the propositions in them.
The meaning of the propositions comes from their interpretation (given a truth
definition). So it's the interpretation that's the focus of naturalization.
- Naturalized theory of meaning: a non-semantic, non-intentional theory that
states sufficient conditions for A to 'be about' B
- It seems most plausible that a causal story will work. But it has to be a story
that works for *at the very least* mental repns
- Naive theory: counterfactual-supporting causal dependence (a nomic relation)
relating HORSE to horses (not to the word 'horse', since that relation is highly
unreliable, as Grice notes, because the causal chain is longer)
Error (Fodor, 1987)
- Misrepresentation: Disjunction example/problem.
- Dretske's solution: Uses correlations instead of nomic relations talk, but
ends up the same. Trick is to distinguish what happens in the learning period
(when correlations are laid down), and what happens afterwards (when correlations
are used). Problems: 1. how to enforce that distinction? (learning doesn't really
end, does it?) 2. then only learned repns are intentional, but that introduces
a dichotomy between indexes (natural repn) and learned repns that Dretske disavows
3. learning errors could re-introduce the disjunction problem
- Millikan's solution (Stampe, and later Dretske): Teleology. Appeals to 'optimal
circumstances' as 'proper functions' to set the real meaning of a mental repn.
Problems: 1. Why think teleology picks out circumstances optimal for truth? (Doesn't
seem a problem if we assume truth is essential for representation... is it?)
2. Defining optimal circumstances seems to presuppose knowing the
character of the content (optimal for hearing vs. seeing).
- Asymmetric dependence is the answer (not! see last week's readings).
- Another problem: How to ensure that we can fill in "Unfamiliar horses would
cause HORSE tokenings if..." And this has to work for abstract terms too.