Phil 473/673: Cummins, Error, Cause and Holism

Administrivia

Use and Error

'       Groundwork: A representation itself is semantically but not epistemically assessable (i.e. it doesn't have or determine epistemic liaisons except as part of an attitude). That is, representations are not justified, warranted, etc.; attitudes are. Also, you can't simply identify attitude content with epistemic liaisons, because the belief that p and the desire that p would then get different contents even though both involve the same p. So what is the content of p?

'       Start with just attitudes. Epistemic liaisons: the paths in state space that intersect at cA are the liaisons of A in the context of cA. (Total) conceptual role: the set of A's conceptual roles relative to c (i.e. its liaisons in c), for each state c in which A can occur.

'       Now we need representational content: identify the conceptual role of r with the set of attitudes it enables. Thus the meaning of r is identified with the paths through cognitive space that are made possible by its availability to the cognitive system.

'       Result: epistemic relations determined by r are insensitive to which other beliefs a system actually has. Therefore no meaning-incomparability. (Because what you mean doesn't depend on what you currently believe, it depends on the set of possible states licensed by the inclusion of that representation in your system).
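'       A toy way to picture these definitions (a sketch of mine, not Cummins'; the tiny state space and all names are invented): model the system as transitions between total cognitive states, some of which are available only when a representation r is in the system's repertoire. The 'content' of r is then the set of paths its availability adds, and nothing in that set depends on which state the system actually occupies.

    # Toy sketch of the CRS picture above (my construction, not from the text).
    from itertools import product

    # Hypothetical total cognitive states and transitions.
    BASE_TRANSITIONS = {("s0", "s1"), ("s1", "s2")}    # available to the bare system
    ENABLED_BY_R     = {("s0", "s3"), ("s3", "s2")}    # extra transitions r makes available

    def paths(transitions, length=2):
        """All state paths of the given length licensed by a transition set."""
        out = set()
        for steps in product(transitions, repeat=length):
            if all(steps[i][1] == steps[i + 1][0] for i in range(length - 1)):
                out.add(tuple([steps[0][0]] + [s[1] for s in steps]))
        return out

    # 'Content' of r on this toy reading: the paths made possible by its availability.
    content_of_r = paths(BASE_TRANSITIONS | ENABLED_BY_R) - paths(BASE_TRANSITIONS)

    # The system's current state or current beliefs never figure in the computation,
    # so two systems with different actual beliefs but the same possible transitions
    # assign r the same content -- hence no meaning-incomparability.
    print(sorted(content_of_r))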

'       See pp. 38-9, bottom to top. Result: E2 is empty as an explanation of D because it entails D. Why isn't entailment explanation? Because entailment allows: 1. self-explanation, 2. weak premises, 3. irrelevant information, 4. the wrong causal order.

'       So what? CRS gives a 'valence theory' of content. I.e., it takes content to be a useful fiction that simply stands in for talk of complex epistemic relations (like 'valence' in chemical bonding; the claim is that you can't explain why O and H bond 1 to 2 by appeal to valence. Agree?).

'       Why not think content is causally inert? Just look and see: we can tell that representational error, getting content wrong, results in problematic behavior. CRS response: We can tell, but what about the system? The only way we know it has a problem is that it results in bad behavior, and behavior just is the result of epistemic liaisons. C: Well, at least we know CRS will be wrong if we can show that content has a genuine explanatory role.

'       CRS and Error:

'       CRS can't distinguish wrong moves in chess from right moves in schess. This is because CRS assigns content determinations without interpretations, so we don't know whether they're directed at chess or schess. In other words, we know how to get the content but we don't know what it is (e.g. chess). p. 41.
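'       A toy way to see the worry (my construction; the miniature 'rules' and names are invented): the content determinations CRS delivers are fixed by the internal transition structure alone, and that structure is silent between the two games.

    # Toy sketch (my construction): one internal rule structure, two interpretations.
    # CRS-style content is computed from the internal structure alone, so it comes
    # out the same under either reading.

    INTERNAL_RULES = {("r_start", "m1"): "r_mid", ("r_mid", "m2"): "r_end"}

    def crs_content(rules, token):
        """'Content' read off use alone: the transitions the token figures in."""
        return frozenset((pre, post) for (pre, move), post in rules.items() if move == token)

    # Two external interpretations of the very same tokens.
    AS_CHESS  = {"r_start": "chess opening position",  "m1": "a legal chess move"}
    AS_SCHESS = {"r_start": "schess opening position", "m1": "a legal schess move"}

    # crs_content never consults either interpretation, so nothing in it settles
    # whether a deviant m1-tokening is a wrong chess move or a right schess move.
    print(crs_content(INTERNAL_RULES, "m1"))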

'       We can't distinguish what the system uses a representation to represent from what it actually represents (its content), so we can't explain error (since, on this view, there isn't any). CRS defines the content of RP3 as (in part) the actual use of RP3 in the system, so it must always be correctly used.

'       How to fix this? Appeal to ideal use rather than actual use for content fixation. But the appeals to ideal use tend to be circular or otherwise to fail:

1.     Ideal use as correct use: this presupposes a criterion of error, which is just what we were after.

2.     Ideal use as successful use: success can't be evaluated without an interpretation.

3.     Ideal use as rational use: rationality can't be specified in a way that is independent of how cognitive systems actually work, so it isn't independent of actual use.

4.     Ideal use as competent use: the competence/performance distinction is based on breakdown or limited resources, but representational error isn't due to these (rather to incomplete information), so you couldn't distinguish correct use from merely justified use.

5.     Ideal use as adaptive use: correctness and adaptiveness easily come apart. Some representations are adaptive but not correct (fast predator detection); others are correct but not adaptive (a trout with true positions of flies but reactions that compensate for refraction). Also the explanatory order is wrong: representations are adaptive because they represent what they do.

'       Totally different strategy: relativize to interpretations. (It's playing chess wrt one interpretation and schess wrt another). But this dispenses with CRS because we've already fixed contents in advance!

'       Nontriviality of representational explanation

'       Granted, CRS will give an accurate picture of the causal story of interest. But the explanatory role of content is not found in causal role; it's in the ability to talk of representational error.

'       With error you can note that faster systems with many false positives can be better than slower, more accurate ones. Use theories undermine the accuracy/effectiveness distinction and thus can't make such claims.

'       With error you can describe the debate between Piagetians and others over whether error during development is forced or unforced (i.e. due to representational changes or to processing changes). In general (Kant example, p. 48), use theories can't allow forced error, because that requires representations that never correctly apply (e.g. Euclidean space). Response: the Kant case just shows that the theory is wrong about our perceptual content (i.e. it is non-Euclidean, not Euclidean). Reply: we really want to say it's a Euclidean approximation, but a use theory can't, because 'always being in error' isn't acceptable; no systematic misrepresentation!

'       So why do we need representation?

1.     Because it allows dimension shifts: Explanations by appeal to computation need semantic interpretations of their objects; hence representations. CRS gets you this far.

2.     It permits distinguishing reasoning errors from representational errors. CRS can't support this.

Causal Theories

1.     Epistemically optimal conditions theories (E-theories): specify epistemological conditions under which detectors apply representations correctly.

2.     Teleological theories (T-theories): detectors are right when they are functioning properly.

3.     Asymmetric dependence theories (AD-theories): successful detections are cases in which a special (basic) law obtains. (A rough formalization is sketched just below.)
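'       For reference, a rough reconstruction of the Fodor-style asymmetric dependence condition (my gloss, not a quotation from the text): a symbol R means the property C roughly when

\[
\begin{aligned}
&\text{(i)}   && C \to R \ \text{is a law (instances of } C \text{ cause tokenings of } R\text{);}\\
&\text{(ii)}  && \text{some } R\text{-tokens are actually caused by } C\text{s;}\\
&\text{(iii)} && \text{for any other property } D \text{ that causes } R\text{-tokens, } (D \to R) \text{ depends asymmetrically on } (C \to R)\text{:}\\
&             && \text{breaking } (C \to R) \text{ would break } (D \to R)\text{, but not conversely.}
\end{aligned}
\]

On this reading, C-caused tokenings (instances of the basic law) are the correct ones and D-caused tokenings are the errors, which is the claim C presses on below.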

'       Problem 1: Foolproof detectors (when? Never!)

1.     E-theories: It's a fact of life that even under optimal conditions detectors fail (so why call the conditions optimal?). C suggests you can save this view by becoming an antirealist (identify what there is to detect with what actually gets detected). But this is 'perverse', because you started with a theory that supposed there were properties out there causing things inside to detect them.

2.     T-theories: Distinguish (a) psychological proper functioning from (b) semantic proper functioning. (a) is when the detector is playing the right cognitive role; (b) is when it is detecting correctly. A detector could satisfy (a) and not (b). This is a problem for T-theories because they try to define (b) in terms of (a)! That is, according to C, 'TRUE-OF doesn't reduce to GOOD-FOR any more than TRUE reduces to GOOD' (p. 57).

3.     AD-theories: (p. 57) The basic idea is that connections between properties and representations are correct when they are basic (not asymmetrically dependent). C claims that AD must assume that every instance of a basic law of detection is a case of correct representation. This is, of course, true (not by assumption but by assertion); you get error when the law is violated, and that's the point. C offers a case in which Pavlov's dog has its olfactory nerve cut to put pressure on AD. However, Fodor has 'normal conditions' clauses in his formulation of AD, so this is 'sort of' cheating (though we may think Fodor has no right to such clauses). Also, on p. 60 C claims that 'the idea that every basic law is a law governing detection is radically implausible'; but that is harmless, since AD is only concerned with basic laws of detection, as C himself points out two pages earlier (?). This is really a poor discussion of AD. His second attack is better motivated: he points out that AD has to show that representational error is never a basic feature of detection architecture (recall the Cartesian example). AD is committed, as C points out, to idealizing away from resource constraints in detection. This seems highly unusual given that cognitive systems are severely resource-limited. Cummins pushes the point by claiming that natural mechanisms are designed with such constraints in mind and that changing resources won't change detection properties: the mechanisms simply key on one (imperfectly correlated) feature to perform detection, and more resources won't change that. Response: there is another system (reason) that takes the output from those detectors and can always get it right given unlimited resources. Reply: there is no guarantee that we'd always get it right even with unlimited resources (this begins to look like intuition-mongering). C then claims that if getting more resources improves your detection ability, you are obviously relying on a nondemonstrative inference. But this isn't true (e.g. Turing machines). He shouts 'anti-realism' at those who deny this. Why?

'       Problem 2: Content/Attitude Distinction

'       Like CRS, CT suffers from putting its efforts into understanding attitude content, not representational content.

'       This is because detections are attitudes: To detect H is to token an attitude whose content is that there is H present in response to H's being present.

'       But we can have such an attitude, C claims, without being able to represent H! He provides the 'food-status' example: the constituent representation means 'present', not 'food'. Thus we can construct two systems with the same attitudes but different representations. You can't, in other words, infer representational content from attitude content.

'       C says that this happens because having an attitude that p isn't just standing in a relation to a representation that means p. Any theory that identifies the two will not allow a distinction between targets and contents. Why?

'       Problem 3: Explanation

'       We want CT to tell us why representations tokened by detections should matter to understanding cognition.

'       Any explanatory plausibility that CT does have comes from its use of the notion of information. Detectors token representations that carry information about the world; having information is useful; so representations indicate their contents and are useful. But representing and indicating are different.

'       Furthermore, information processing according to CT doesn't preserve information. Complex representations are functions of their constituents, but their information isn't a function of the information of their constituents (p. 65). As a result, |food| does not carry the information that food is around when it results from |bell & (bell -> food)|. (Is that the right information content?) And this kind of inference is the more typical source of a |food| token.
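'       A worked toy version of the point, on a Dretske-style reading of 'carrying information' (a token carries the information that F only if the conditional probability of F given the token is 1); the numbers are invented, and assume the derived |food| token occurs just when the |bell| token does:

\[
P(\mathrm{bell} \mid |\mathrm{bell}|) = 1, \qquad P(\mathrm{food} \mid \mathrm{bell}) = 0.9
\;\Longrightarrow\;
P\big(\mathrm{food} \mid |\mathrm{food}|\ \text{derived from}\ |\mathrm{bell}\ \&\ (\mathrm{bell} \to \mathrm{food})|\big) = 0.9 < 1 .
\]

So even though the |bell| token carries its information perfectly, the |food| token derived from it does not carry the information that food is present: the information carried by a derived representation is not a function of the information carried by its sources.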

'       So we have a dilemma: CTs derive their plausibility from the idea that the basic case of representing is carrying information, but they don't (and can't) hold that representing is carrying information!

'       This, says C, is because information and symbolic constituency don't mix. The idea that computing over representations is information processing is thus abandoned by CT.

'       How is content relevant then? Standard story: content mirrors form and form is processed by underlying mechanisms ('tower bridge').

'       C thinks this initial correlation won't stand up. He uses the argument against 2-factor theories from Fodor and others to show that content, fixed causally, need not track form. Response: alignment is contingent, but systems that don't respect it will get into problems. Reply: No way. Confusing |cow| with |ungulate| or |cow or apparent cow| just won't matter much; such a system would get into only the sorts of problems we actually get into. So misalignment won't be rare.

'       This is because symbols are arbitrary according to LOT (and CT). Thus, as far as the intrinsic properties of a representation are concerned, it's not clear whether detection or processing has gone wrong. This is because r's counting as a |horse| can't be read off of its intrinsic properties. If you read it off detection, processing is wrong; if you read it off conceptual role, detection is wrong. Either way it is your theory that fixes where the problem lies; there's no fact of the matter.

'       From this C concludes that it is probably a representation's structure that matters, not its role in detection (foreshadowing).

'       CT entails LOT: CT implies that representations are arbitrary, because it's causal connections that matter. This is how primitive representations are fixed; complex representations get meaning from combinatorial semantics. Voila.
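'       A minimal sketch of this two-part story (my toy, not from the text; the content table is a stand-in for whatever causal story fixes the primitives):

    # Minimal sketch (my construction): primitive contents fixed 'causally'
    # (stubbed as a lookup table), complex LOT expressions handled by a
    # combinatorial semantics over their constituents.

    PRIMITIVE_CONTENT = {"|cow|": "cow", "|brown|": "brown"}   # assumed causal fixations

    def content(expr):
        """Primitives get content from the (causally fixed) table; complexes get
        content compositionally from the contents of their constituents."""
        if isinstance(expr, str):
            return PRIMITIVE_CONTENT[expr]
        op, *parts = expr
        if op == "AND":
            return "(" + " & ".join(content(p) for p in parts) + ")"
        raise ValueError("unknown connective: " + op)

    print(content(("AND", "|brown|", "|cow|")))   # -> (brown & cow)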

'       Thus, there are 3 ways for symbols to help explanation: 1) as triggers 2) as cues to knowledge 3) as constituents of complex representations. None of these is that interesting; i.e., none addresses the fact that representations of cows tell you something about cows!

'       Since representations just trigger knowledge (ITTs -- internal tacit theories), it is the theories that do all the work. But 1) theories as sentences have computationally relevant structure that is semantically irrelevant, and 2) no one thinks complex mental representations are sets of sentences. Thus whole ranges of problems will be intractable for one system and tractable for another even though, according to CT, the two have semantically equivalent theories.

'       From 2), CT can't accommodate context-sensitive representations (e.g. of faces). Why? Only the faces themselves (not their constituents) have independently specifiable content, so 1) there is no semantic complexity and 2) the primitives are infinite. Furthermore, such representations are common, even in standard symbol systems (see the map example, p. 73)!
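'       A toy illustration of the map point (my construction; the projection numbers are invented): in a map-like scheme the same mark-type has a different content at every position, so content is fixed by a mark's place in the whole scheme rather than by a finite stock of independently interpretable primitives.

    # Toy sketch (my construction) of a context-sensitive, map-like scheme.

    def content_of_mark(x, y, origin_lat=45.0, origin_lon=-75.0, deg_per_unit=0.01):
        """A dot's content is fixed by where it sits on the map, not by its type:
        one mark-type, indefinitely many position-dependent contents."""
        return (origin_lat + y * deg_per_unit, origin_lon + x * deg_per_unit)

    print(content_of_mark(10, 20))   # same dot-type...
    print(content_of_mark(11, 20))   # ...different content one unit over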

Atomism and Holism

'       We have good reasons to be suspicious of any theory (CT or CRS) that suggests that all representational content is intrinsically holistic or atomistic, because we should accept the 'obvious' fact that representational schemes come in both varieties. (Descartes even showed that holistic and atomistic representations can represent the same functions.)
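'       A toy illustration of the Cartesian point (my construction, reading the example as graphs vs. equations): the same function can be carried by an atomistic, symbolic scheme and by a holistic, graph-like one.

    # Toy sketch (my construction): one function, two representational schemes.

    def symbolic_square(x):
        # Atomistic: an expression built from parts ('x', '*') whose contributions
        # are fixed and context-free, combining compositionally.
        return x * x

    # Holistic (graph-like): a set of plotted points; what any one point contributes
    # depends on its place in the whole coordinate layout.
    graph = {x: x * x for x in range(-5, 6)}

    # Over the sampled domain the two schemes represent the same function.
    assert all(symbolic_square(x) == y for x, y in graph.items())
    print("same function, two schemes")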

'       Thus we need a theory that makes content holistic in holistic schemes and atomistic in atomistic schemes.

'       Question: Is C conflating content and representational schemes? Could one be holistic and the other either holistic or atomistic? Do the definitions apply to each? Yes.

'       You could argue that the scheme of mental representation is atomistic or holistic and all representation is parasitic on mental representation. But this won't work because

1.     You have to show that mental meaning has to be atomistic (holistic). You could try to do this, as Fodor has, using the essential-feature strategy (identify some property X that all minds must have and show that only atomistic schemes could have X). Fodor uses systematicity and productivity (p. 80). Unfortunately one of Fodor's premises is false (because of Cartesian representations). But this counterexample doesn't generalize, and C says he can't make it do so. So he considers another argument (p. 81). This also turns on a false premise (that acquiring new beliefs changes meaning -- but not by fCRS). (It seems that, in fact, changing representational power doesn't come through learning but through maturation or trauma.) C claims this is a pattern: any argument that a crucial feature requires atomism will undermine the claim that that feature is essential.

2.     It seems to be contingent, even if true, what the structure of our mental schemes is. If it's contingent, a theory of representational content should leave it an open question, but CRS and CT don't. Thus we need a representational theory that avoids this consequence; see next week.

 
