Joseph Mikhael
#20082998
Oct 25, 2001 (Week 7)
Phil 673 - Mental Representations
Robert Cummins: Representations,
Targets and Attitudes;
Chapters 1-3
CHAPTER 1 – INTRODUCTION
Problem of Mental
Representation
- 4 difficulties surrounding mental representation:
1. Content: Which contents are represented in the mind?
2. Form: What form does mental representation take (images, symbolic structures, activation vectors)?
3. Implementation: How are the mind's representational schemes implemented in the brain?
4. Definition: What is it for one thing to represent another? (p.1)
- The book deals with 4, which is, according to Cummins, a philosophical as opposed to an empirical issue, but it will also look at how 1 – 3 constrain 4.
- My chapters deal mostly with how 1
constrains 4.
- 2 fundamental constraints on a
philosophical theory of mental representation:
- Explanatory Constraint: The theory should underwrite the explanatory appeals that
cognitive theory makes to mental representation (philosophical
constraint).
- Implementation Constraint: The theory should be compatible with the best scientific
stories about what sorts of things actually do the representing in the
mind/brain (scientific constraint). (p.2)
Naturalizing Content
- In defining the content of
representations, we want to use terms that are used in contemporary
science.
- We also want to avoid semantic terms
like ‘means’, ‘refers’, ‘true’ or else the definition would be circular.
- Lastly, we want to avoid intentional terms such as 'believes', 'desires', 'intends', since these raise the same mysteries as the relation between representations and what they represent.
- “We do the foundations of this
science no service, then, if we define representation in mental or
cognitive terms.” (p.4)
CHAPTER 2 –
CONTENTS AND TARGETS; ATTITUDES AND APPLICATIONS
Beginning with Error
- Theories of mental representation
are often criticized for a failure to do justice to misrepresentation, or
what Cummins will henceforth refer to as error.
- Cummins generally looks at the
causal theories in explaining this problem.
- The proposed solutions, according
to Cummins, have been inadequate.
- So, instead of beginning with a theory of representation and then accounting for error, we will begin with a theory of error, which will place useful constraints on further theorizing about representation.
- This should at least remove the
“staleness” of the disjunction problem
Targets and Contents
M = <k-kr1, K-KB4, k-kr2>
P1: the starting position
P2: the position after M
P3: the position RP3 actually represents
RP1: the representation of the starting position
RP2: the representation of the position after M
RP3: a representation of P3; the representation S constructs to represent P2
- S = chess machine.
- Subroutine: LOOK_AHEAD
- Series of possible moves created by
LOOK_AHEAD to prevent a stalemate: M
- S’s use of M should lead to RP2, but instead it
creates RP3, which leads to a stalemate.
- Since M leads to a stalemate in RP3,
S rejects
K-KB5 as a response to k-kr1.
- However, S’s tokening of RP3 is error.
- There is a mismatch between what RP3
needs to represent (P2) and what it does (P3).
- 1st important
distinction: P2, in this case, is the target
of the system, while P3 is the content of the representation.
- “Tokening a representation is error
when the target of tokening it on that occasion fails to satisfy its
content” (p.6)
- R (the representation), in this case, is meant to be a data structure that represents the position after M.
- But RP3 does not represent the position after M; it represents P3 instead.
- “Targets are determined by the
representational function of tokening a representation on a particular
occasion in a particular context, not by the content of the representation
tokened.” (p.7)
- Independence of targets from
contents makes error possible.
- If one determined the other, then
there would be no mismatch, and therefore no error.
- A theory that says what it is for R
to be true of T (targets) is only part of the story about
representations.
- Need a theory of target
fixation, a theory that says what it is for R to be applied to
T.
- 2nd important
distinction: Representations mean their
content, but intend their targets.
- This is the difference between
intentionality and meaning.
- Error occurs when there is a
mismatch between what is meant and what is intended.
- How do targets get fixed?
- Representational function of the mechanism + the current state of the world → Intenders.
- e.g. chess subroutine
CURRENT_POSITION
- Named after its target.
- Mechanism that runs it (chess
program) is an intender.
- One can evaluate whether there is error by checking the content of CURRENT_POSITION and comparing it to the actual current position (although what we are actually doing is comparing its content with our own representation of the current position); see the sketch at the end of this section.
- Therefore, intenders, not the
representations, determine the target.
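- A minimal sketch in Python may help fix the distinction (this is my own illustration, not Cummins'; the class names, the position strings, and the is_error check are all invented): the intender fixes the target of a tokening, the representation tokened supplies the content, and error is a mismatch between the two.

# Hypothetical illustration of Cummins' target/content distinction.
# Positions are stood in for by strings; nothing here models real chess.

class Representation:
    """A data structure that has a content (what it represents)."""
    def __init__(self, content):
        self.content = content

class Intender:
    """A mechanism whose representational function fixes the target of any
    representation it tokens, independently of that representation's content."""
    def __init__(self, name, target):
        self.name = name        # e.g. "POSITION_AFTER_M"
        self.target = target    # what the tokened representation needs to represent

    def token(self, representation):
        # Error: the target of this tokening fails to satisfy the representation's content.
        is_error = representation.content != self.target
        return representation, is_error

# P2 is what S needs represented; P3 is what RP3 actually represents.
P2 = "the position after M"
P3 = "some other position"

position_after_m = Intender("POSITION_AFTER_M", target=P2)
RP3 = Representation(content=P3)

_, error = position_after_m.token(RP3)
print(error)    # True: target (P2) and content (P3) come apart, so the tokening is error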
Falsehood and Error
- 3rd important
distinction: Falsehood and Error.
- e.g. error without falsehood: S represents x as G, but what is needed is a representation of x as F (see the sketch at the end of this section).
- However, if x really is G,
there is no falsehood, but there is error in the intender.
- Targets are determined by the function of tokening a representation on a particular occasion, in a particular context ⇒ Satisfaction conditions vs. Truth conditions.
- Therefore, truth isn’t the opposite
of error, but correctness is.
- “p will be error when the target is
the proposition that q, regardless of the truth values of p and q.” (p.12)
- If r’s target is a false
proposition, but it correctly applies to it, then there is no error,
therefore no misrepresentation.
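- A toy continuation of the same sketch (again my own, hedged illustration; x, F and G are placeholders) separates falsehood from error: the representation can be true of x while the tokening still misses its target.

# Hypothetical illustration of error without falsehood.
x_is_G = True                   # suppose x really is G, so representing x as G is not false

target_predicate = "F"          # the intender needed x represented as F
tokened_predicate = "G"         # but the system tokened a representation of x as G

falsehood = not x_is_G                          # False: no falsehood
error = tokened_predicate != target_predicate   # True: the tokening misses its target

print(falsehood, error)         # False True -> error without falsehood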
Illustration of Distinction
Between Falsehood and Error
- It is also possible to misrepresent
propositions that are true.
- e.g. Suppose you learn the following
two propositions:
1. Letters are more easily recognized in the context of words than alone.
2. In chess, one should develop the queen early.
- 1 is true, but 2 is false, and this would make error seem similar to falsehood. But suppose you were to learn the following proposition:
3. Letters are more easily recognized in the context of superwords (a set of words and pronounceable nonwords) than alone.
- However, if you forget 3, and were
asked on a multiple choice test “Which of the following best describes the
word-superiority effect?” and chose 1 over 3, you would be in error, even
though 1, by itself, is true.
Representations, Applications
and Attitudes
- Applications of representations are
not representations themselves.
- e.g. when POSITION_AFTER_M tokens RP3, that tokening is an application whose content is that the position after M is P3; there is no separate representation whose content is 'that the position after M is P3'.
- Vs. Fodor and Schiffer, who treat attitudes as representations with characteristic cognitive functions.
- e.g. Fodor: ‘S believes that p’ = p is in S’s belief box
- e.g. Cummins: ‘S believes that the position after M is P3’ is an application
whose content is ‘that the position after M is P3’
- Attitudes are relations to
applications.
- The propositional content of an
attitude is the content of an application of a representation.
- Attitudes are distinguished in type
– beliefs vs desires vs intentions
- Applications, not
representations, are correct or incorrect
- Shown by the fact that one can retract an application, but not a representation.
- e.g. can retract that ‘the sky is
blue’, but not ‘sky’ or ‘blue’.
- Applications distinguish between
attitude types.
- Attitude is the result of giving a
cognitive role to an application of a representation to a target
- e.g. the belief that the current auditory stimulation is a bell ⇒ |bell| ⊃ cas.
- There is a representation for bell and one for cas, but their binding is an application because cas is temporally contingent.
- e.g. ‘Socrates was a Greek
philosopher’
- There is an intender pos
(properties of Socrates) that is bound to ‘Greek’ and ‘philosopher’.
- Fodor’s use of representations for
attitudes causes the problem of misrepresentation since one can confuse
content with targets.
- Cummin’s use of representations
removes it – two principles:
- Attitudes are applications with a
characteristic cognitive function.
The semantic content of an attitude is thus the semantic content
of the constituent application.
- Applications are the result of
applying a representation to a target.
The semantic content of an application is that the representation
hits the target. (p.16)
- Representations are true or false,
satisfied or unsatisfied.
Applications, however, are correct or incorrect.
- “The semantic function of a
representation is to represent its content; the semantic function of an
application is to hit its target.” (p.16)
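- The two principles can be put in a short sketch (my own gloss; the dataclass names and the "BELIEF" role string are invented): an application pairs a representation with a target, an attitude is an application given a cognitive role, and correctness attaches to the application rather than to the representation.

from dataclasses import dataclass

@dataclass
class Representation:
    content: str                    # representations mean their content

@dataclass
class Application:
    representation: Representation
    target: str                     # applications intend their target

    def correct(self) -> bool:
        # An application is correct when the representation hits its target.
        return self.representation.content == self.target

@dataclass
class Attitude:
    application: Application
    role: str                       # e.g. "BELIEF", "DESIRE", "INTENTION"

# The belief that the position after M is P3:
rp3 = Representation(content="P3")
app = Application(representation=rp3, target="P2 (the position after M)")
belief = Attitude(application=app, role="BELIEF")

print(belief.application.correct())   # False: the application, not the representation, is incorrect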
Nesting Intenders
- Sometimes, the content for one intender is needed before another intender can be used → nesting.
- e.g. need to satisfy M before you
can use POSITION_AFTER_M
- This shows how there can be an
unlimited number of targets, thus making a system systematic and
productive.
- The target of tokening r is what S expects to find when r is accessed, not what S needs to represent to succeed.
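- Nesting can be sketched as function composition (my own illustration; the function names merely echo the subroutine names above): POSITION_AFTER_M cannot fix its target until M is available, and because intenders compose like this, the stock of possible targets is open-ended.

# Hypothetical illustration of nested intenders.

def look_ahead(position):
    """Stands in for LOOK_AHEAD: produces M, a candidate move sequence."""
    return f"M from ({position})"

def position_after(m):
    """Stands in for POSITION_AFTER_M: its target can only be fixed once M
    is available, so this intender is nested inside LOOK_AHEAD's output."""
    return f"the position after {m}"

# Because intenders compose, arbitrarily many new targets can be generated:
target = "the starting position"
for _ in range(3):
    target = position_after(look_ahead(target))
print(target)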
Can Representations Determine Targets?
- No, for example, the function of
tokening RP3 is not the same as the function of RP3.
- “The Eiffel Tower is my target”: a representation with this content doesn't necessarily have the Eiffel Tower as its target.
Target and Referent
- The problem above results from a
conflation of targets and referents.
- e.g. representation R whose
content is the right hand, whereas intender L’s target is the left hand
- When L generates R, the result is error.
Three Theories of Mental Content
- Therefore, we need three distinct
theories to deal with mental content:
1. A theory of representational content.
2. A theory of target fixation.
3. A theory of application fixation. (p.20)
- We need the three in order to
explain error.
Some Diagnostics
- We can’t just look for a theory of
truth to account for error.
- We instead need at least to ask which truth the system is after on the occasion in question.
- Also come to the realization that
beliefs with the same content can have different targets.
- e.g. contents-of-the-basket
intender generates a representation of a cat, a belief whose content is
that a cat is in the basket, but the target is the contents of the
basket. Location-of-the-cat
intender generates a representation of a basket, a belief whose content
is that the cat is in the basket, but the target is the location of the
cat.
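- The basket/cat case can be made concrete with a small sketch (my own; the Belief dataclass and its field names are invented): the two beliefs have the same content but different targets, which is why a theory of truth alone cannot settle questions of error.

from dataclasses import dataclass

@dataclass
class Belief:
    content: str    # what the constituent representation says
    target: str     # what the intender that produced it was after

# Both beliefs share a content...
b1 = Belief(content="the cat is in the basket", target="the contents of the basket")
b2 = Belief(content="the cat is in the basket", target="the location of the cat")

# ...but have different targets, so they can be in error in different ways.
print(b1.content == b2.content, b1.target == b2.target)   # True False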
CHAPTER 3 – MORE
ABOUT ERROR
Forced Error and Expressive Adequacy
- Forced errors: representational
scheme lacks expressive power to provide correct representations.
- e.g. Euclidean geometry inadequate
to represent the structure of physical space.
- When a system S uses a scheme R to represent a domain T that R is not capable of representing, it is forced to make, at best, the “best” available representation.
- Forced error can occur when one
groups mice and voles as mice when one does not know the distinction, but
when the distinction is known, then the error is unforced.
- Some errors are difficult to
determine whether they are forced or unforced.
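- A sketch of forced error (my own; the species labels and the one-category scheme are invented): a scheme whose expressive power stops at "mouse" has no correct representation available for a vole, so any tokening for that target is forced error.

# Hypothetical illustration of forced error from limited expressive power.

SCHEME = {"mouse"}      # the only category this representational scheme can express

def represent(animal):
    """Returns the best representation the scheme allows for the target animal."""
    return "mouse" if animal in {"mouse", "vole"} else "unknown"

target = "vole"
representation = represent(target)

# The scheme cannot express 'vole' at all, so the error is forced:
forced = target not in SCHEME
print(representation, forced)    # mouse True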
Accuracy:
Degree of Correctness
- Representational error and
correctness come in degrees.
- Replace correctness with accuracy.
- However, accuracy is not necessarily
linked with truth.
- Instead, accuracy is a relation between representation and target.
Seriousness and Inaccuracy
- Seriousness not necessarily related
to accuracy.
- Small errors can be serious, and large errors can be harmless.
- Seriousness instead related to the effectiveness
of a representation.
- Identifying correctness with effectiveness in this way is what drives Conceptual Role Semantics (CRS), but Cummins resists the identification:
- “We require a clean distinction
between accuracy and effectiveness if the concept of representation is to
have the independent explanatory leverage that makes it an important
primitive in cognitive science.” (p.27)
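- A closing sketch of the accuracy/effectiveness distinction (my own; the bridge-length figures and the scoring function are invented): accuracy is a graded relation between a representation and its target, while seriousness depends on what the representation is used for, so the two can come apart.

# Hypothetical illustration: accuracy is graded, seriousness is use-relative.

target_length = 100.0     # the target: a bridge span of 100 m (invented figure)

slightly_off = 99.0       # highly accurate representation of the length
way_off = 250.0           # badly inaccurate representation of the length

def accuracy(rep, target):
    """Graded closeness of a representation to its target (1.0 = exact)."""
    return max(0.0, 1.0 - abs(rep - target) / target)

# A 1 m error is serious if you are prefabricating the final span;
# a 150 m error may not matter if you are only deciding whether to walk or drive.
print(accuracy(slightly_off, target_length))   # 0.99
print(accuracy(way_off, target_length))        # 0.0 (clamped)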