On Schwartz

He starts out by pointing to the vast array of representational systems that humans use.  The point seems to be that many people think explaining mental representation can be narrowed down to one or two categories, but it can't.  He systematically goes through the main ways of looking at the relationship between signs and objects, and at what distinguishes representational items from other things:

 

1.  Icons, Indices, and Symbols (Peirce):  icons resemble the things they represent, indices have a causal relationship with the things they represent, and symbols are related to their objects through use or association, usually arbitrarily

Problems: 

-  this trichotomy rules out many useful systems of representation (Goodman)

-  the concept of resemblance invoked for icons is questionable, both as to its sufficiency for representation and as to its claimed independence from interpretation

 

2.  Mental states of the user:  A symbol # stands for * for S, if it triggers a *-idea in S.

Problems:

- this line of thought fails to explain the link between symbols and the world

- it also fails to give any reason for taking a mental picture or word to have any reference at all


3.  Behaviourist:  ‘# denotes * for S’ is explained in terms of S’s disposition to behave in certain ways when presented with #.

Problems:

- the whole idea of behaviour with respect to a symbol is obscure

- there is no correlation between behaviour towards signs and behaviour towards objects reliable enough to warrant positing a referential relation between the two

 

4.  Causal:  for # to represent * to S, # must play a certain role in the functional economy of S.

Problems: 

- the notion of ‘function’ is fuzzy

- misrepresentation: if # simply tracks whatever causes it, it is hard to see how # could ever misrepresent

- the difficulty of extending the theory to abstract or theoretical symbols

- a causal relation is not sufficient for intentional representation

EXAMPLE: an increase in temperature causes the mercury in a thermometer to rise, yet the mercury does not thereby intentionally represent the temperature…

 

5.  Digital vs. Analogue: 

: discrete vs. continuous. 

: the distinction can apply to systems, to users of a system, to a combination of the two, or to the laws and materials that the system or user is made of.

: drawn either in terms of the syntactic and semantic density of the elements of the system, or in terms of whether the representation is an analogue of what it represents


Analogue:

: meaning either a “1-1 correspondence between sign and object” or “some law-like connection between” the two.

: often rejected on the grounds that the production of, or response to, a sign appears to be ‘triggered’ by some present stimulus.  This claim is problematic, however, as it confuses “structural properties of the representational system with claims about how the system is used.”  The question of the free use of a system is separate from the question of the semantic or syntactic density of the system.

EXAMPLE: a bank machine using (digital) English in a fixed manner, vs. humans using a symbolically dense bee language freely

: often portrayed as non-mental or non-cognitive without good reason

EXAMPLE: a calculator is seen as digital and a slide rule as analogue, yet human mathematical ability counts as cognitive activity regardless of whether it works like a calculator or like a slide rule (see the sketch below)
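
A toy sketch of the contrast (mine, not Schwartz's; the function names are illustrative only): the same multiplication done calculator-style, by manipulating discrete symbols, and slide-rule-style, by combining continuous magnitudes.

    import math

    def multiply_digital(a: int, b: int) -> int:
        # Calculator-style: exact manipulation of discrete symbols (digits).
        return a * b

    def multiply_slide_rule(a: float, b: float) -> float:
        # Slide-rule-style: each number is represented by a continuous
        # magnitude (a length proportional to its logarithm), and
        # multiplication is performed by laying the two lengths end to end.
        length_a = math.log10(a)
        length_b = math.log10(b)
        return 10 ** (length_a + length_b)

    print(multiply_digital(6, 7))     # 42, exactly
    print(multiply_slide_rule(6, 7))  # ~42.0, up to floating-point error

Either procedure computes the product; on Schwartz's point, neither is more or less cognitive in virtue of its digital or analogue character.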

 

6.  Imagery:  ideas are thought of as ‘pictures in the mind’

Problems:

- discussions seem to imply that “images are objects”, yet they “have no mass, physical size, shape, or location”. 

- if ideas are images, and images are thought to be analogue, then the digital-computer model of the mind cannot work, and claims that all mental representation is digital are automatically false

- accounts of what it means for one thing to stand for another are problematic: they assume that pictorial representation can be explained in terms of resemblance or some other notion of 1-1 correspondence, and that, since pictures are like their referents, they require no interpretation

 

7.  Language:  language or language-like symbol systems have always been the focus of these discussions, and this has had a constricting effect on our understanding of human cognitive activities.

 

CONCLUSION: Since “many more than two types of representation are employed in our cognitive activities,” it would be “premature, then, to assume that yet-to-be discovered modes of internal representation must fit neatly into one or two pre-ordained categories” (541).  “[R]ecognizing that a much broader range of representational systems plays a role in our cognitive activities should throw [some] doctrines in the study of mind into question” (541).


On Fodor

He starts out by establishing that for intentional states to be real, we need to take them out of intentional language.  We need to naturalize them: to place them within the physicalistic view of the world.  The point is to explain, in natural terms (not intentional or semantic ones), what it means for a physical system to have intentional states.

ULTIMATE GOAL:  to generate conditions for the semantic evaluation of an attitude by fixing a context for a system of mental representations        

 

To do this, you must “specify an interpretation for items in the primitive nonlogical vocabulary of the language to which the symbols belong” (98).

That is, find a way to fix the meaning of a symbol by selecting a certain method of interpreting it.  Is this really saying anything except “let’s choose a way to figure this out”?

 

The chosen interpretation: Crude Causal Theory of Content

                                                :  “the symbol tokenings denote their causes, and the symbol types express the property whose instantiations reliably cause their tokenings (99)”

                                                : tokenings of a symbol point to their causes, and the symbol type indicates the property of the things that reliably cause tokenings of that symbol

                                                :  “a symbol expresses a property if it’s nomologically necessary that all and only instances of the property cause tokenings of the symbol (100)” (see the schema below)

                                                :  my utterance of ‘horse’ says of a horse that it is a horse
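
A schematic rendering of the crude condition, in my notation (not Fodor's):

    % Crude Causal Theory of Content: 'A' expresses property A iff, as a
    % matter of nomological necessity (the \Box), all and only instances
    % of A cause tokenings of 'A'.  Requires amsmath and amssymb.
    \[
      \text{`}A\text{' expresses } A
      \quad\Longleftrightarrow\quad
      \Box\, \forall x \, \bigl( Ax \leftrightarrow x \text{ causes a tokening of `}A\text{'} \bigr)
    \]

The left-to-right half of the inner biconditional (“all A’s cause ‘A’s”) is what PROBLEM 2 attacks; the right-to-left half (“only A’s cause ‘A’s”) is what PROBLEM 1 attacks.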

PROBLEM 1:  misrepresentation

: “how do you get unveridical ‘A’ tokens into the causal picture?” (101)

: Not ONLY A’s cause ‘A’s

: how does seeing a dog make you think “rat!”?                       

Solution:  disjunctive properties?  NO GO

On this move, when you say “rat!” (caused by a dog) you really mean (rat or dog).  Wrong: if you meant (rat or dog), then you wouldn’t be wrong when a dog made you say “rat!”, and the misrepresentation would disappear.

 

PROBLEM 1 restated: the ‘disjunction problem’: how do you tell the difference between the case where the symbol means ‘AvB’ and the case where the symbol means just ‘A’, but some of the ‘A’s are caused by B’s? (Schematized below.)

So the problem has gone from “how does this happen?” to “how can we tell when this is happening?”
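
The two rival hypotheses, schematically (my notation, not Fodor's):

    % Disjunction problem: the same causal facts fit both readings.
    \[
      \text{(i)}\;\; \text{`}A\text{' means } A \lor B
      \qquad \text{vs.} \qquad
      \text{(ii)}\;\; \text{`}A\text{' means } A
    \]

On reading (i) every tokening is veridical; on reading (ii) the B-caused tokenings are misrepresentations.  A naturalized semantics needs some non-semantic fact that decides between the two.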

 

Solutions:        Dretske:  if this happens after the learning period, then it’s a misrepresentation.  Fodor:  but misrepresentations can also happen during the learning period, at which point you’d be learning ‘A’ to mean ‘AvB’, and thus after the learning period it wouldn’t be a misrepresentation.

 

Note: can innate information be false?

                        Optimal circumstances/Teleological:  only A’s would cause ‘A’s in “optimal circumstances”, circumstances where the mechanisms are working “as they are supposed to”. 

 

Fodor:  the optimal function of a mechanism may not be to deliver truth.

 

                        Asymmetric synchronic dependence:  B’s wouldn’t cause ‘A’s, unless A’s caused ‘A’s, but not vice versa (it is not the case that A’s wouldn’t cause ‘A’s unless B’s caused ‘A’s)

You know a tokening is a misrepresentation when its being able to happen depends on the non-misrepresenting tokenings being able to happen, but not vice versa (see the schema below).
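
A schematic rendering of the asymmetry, reading “X → ‘A’” as “X’s cause ‘A’ tokenings” (my notation, not Fodor's):

    % Asymmetric dependence: the error law depends on the basic law,
    % not conversely.  Both conditionals are counterfactuals.
    \[
      \text{if } A \not\to \text{`}A\text{', then } B \not\to \text{`}A\text{'}
      \qquad \text{but not:} \qquad
      \text{if } B \not\to \text{`}A\text{', then } A \not\to \text{`}A\text{'}
    \]

In the earlier example: break the rat-to-‘rat’ connection and the dog-to-‘rat’ connection goes with it, but not the other way around.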

 

PROBLEM 2: Not ALL A’s cause ‘A’s

                      : there are dogs out there that don’t make you think “dog!”

Solution:  counterfactuals?  A would cause ‘A’ if ______(fill in the blank)

Problem:  how do we fill in the blank?

Solution:  fill in the blank with psychophysically specifiable conditions

Problem:  it looks like psychophysics won’t work for all of our concepts (like proton and horse)

Solution:  extend psychophysics.  Reduce all of the non-psychophysical concepts to psychophysical ones.  NO CAN DO.       

Solution:  maybe psychophysics goes further than we thought. 

Problem:  “what you don’t have you can’t token (117)”.  Psychophysics can guarantee that you’ll see a horse under conditions x, y and z, but it cannot guarantee that you’ll see it as a horse

Solution:  instantiations of non-psychophysical properties (horse) are often causally responsible for instantiations of psychophysical properties (that horsy look)

Problem:  this gets ‘that horsy look’ into the belief box, not ‘horse’.

Solution:  something (theoretical inference, but that doesn’t matter) causes the brain to go from ‘that horsy look’ to ‘horse’

It doesn’t matter how it happens, just as long as it happens.  How is this different from saying “it doesn’t matter how we go from object to symbol, as long as it happens” and skipping this whole discussion?

 

SUMMARY:  The Slightly Less Crude Causal Theory of Content

:  “A sufficient condition for ‘A’s to express A is that it’s nomologically necessary that (1) All instances of A’s cause ‘A’s when (i) the A’s are causally responsible for psychophysical traces to which (ii) the organism stands in psychophysically optimal relation; and (2) If non-A’s cause ‘A’s then their doing so is asymmetrically dependent upon A’s causing ‘A’s (126).”

:  We go from a horse, to that horsy look, to ‘that horsy look’, to ‘horse’.  This chain is reliable, and thus ‘horse’ means horse (diagrammed below).
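
The chain, diagrammed (my notation; the arrow labels are glosses on the quoted condition, not Fodor's terms):

    % The reliable chain behind the Slightly Less Crude Causal Theory of
    % Content.  Requires amsmath for \xrightarrow.
    \[
      \text{horse}
      \;\xrightarrow{\text{causes}}\;
      \text{that horsy look}
      \;\xrightarrow{\text{psychophysics}}\;
      \text{`that horsy look'}
      \;\xrightarrow{\text{inference}}\;
      \text{`horse'}
    \]

Clause (1) of the quoted condition covers the first two links; clause (2)’s asymmetric dependence handles the ‘horse’ tokenings that arrive by any other route.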