Knowledge and the Flow of Information - Dretske

 

“An attempt to develop a philosophically useful theory of information.”

 

Information

- The goal of the section: to “develop a semantic theory of information, a theory of the propositional content of a signal”

-        Dretske begins by comparing the mathematical theory of communication to how information is ordinarily understood.

-        Communication theory: “Measures how much information is transmitted from one point to another, how much information there is at point r about what is transpiring at s”. It is concerned with the statistical properties of the ‘channel’ connecting r and s, not individual events or particular signals.

-        Information as it is normally understood: Something associated with, and only with, individual events. The relevant thing here is content, what information a signal carries. However, unlike how much information is carried, what information is carried cannot be averaged out. “There is no meaningful average for the information that my grandmother had a stroke…” So, “the quantities of interest in engineering are not…the quantities of interest” to Dretske, because he’s concerned with what information travels from source to receiver (not the channel between them).

-        However, this analogy does “highlight the relevant objective relations on which the communication of genuine information depends”.

-        The objective relation is as follows: “The amount of information at r about s is a function of the degree of lawful dependence between conditions at these two points.” Only when there is this lawful regularity between two events can one be carrying information about the other.

-        For example: If you dial my number and my phone rings, information is being carried, the information that someone has dialled my number. It doesn’t tell me that someone dialled your number, even if that happened at the same time. Why? Because there is no lawful regularity between your number being dialled and my phone ringing.

-        So, the theoretical definition of a signal’s informational content is: “A signal r carries the information that s is F if and only if the conditional probability of s’s being F, given r (and k), is 1 (but, given k alone, less than 1).”

-        What does that mean? My gas gauge (r) carries the information that the tank (s) still has some gas left (F) if and only if the conditional probability of there being gas left, given the gauge’s reading (r) and what I already know (k), is 1.
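The shape of the definition can be sketched in code; the function name and the probability figures here are mine, for illustration, not Dretske’s:

```python
from fractions import Fraction

def carries_information(p_F_given_r_and_k, p_F_given_k):
    """Dretske's condition: r carries the information that s is F iff
    P(s is F | r, k) = 1 while P(s is F | k alone) < 1."""
    return p_F_given_r_and_k == 1 and p_F_given_k < 1

# Gas-gauge case: given the reading (and knowing what it means), the
# probability of there being gas left is 1; given k alone it is not.
print(carries_information(Fraction(1), Fraction(1, 2)))        # True
# A reading that leaves any room for error carries no information in this sense:
print(carries_information(Fraction(99, 100), Fraction(1, 2)))  # False
```

Note the second clause: if k alone already settles the matter, the signal tells the receiver nothing new, so no information is carried.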

 

Main features of this definition

  1. Three reasons that the conditional probability must be 1:

A – If it’s less than 1, unacceptable results follow. For instance, if there’s a .91 probability that my dog is dead and a .91 probability that my dog is bald, then a signal could relay that my dog is dead, or that my dog is bald, but not that my dog is both dead and bald, because the probability of the two together can fall below .9.

 

B – The Xerox principle: If C carries the information that B, and B carries the information that A, then C carries the information that A. This is essential to the flow of information, because without it there simply is no flow. So the principle is indispensable, but it fails if the conditional probability is set below 1. For example: the conditional probability of B, given C, could be .91, the conditional probability of A, given B, also .91, but the conditional probability of A, given C, less than .9.
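The arithmetic behind reasons A and B can be checked directly (the independence assumption and the .9 threshold are just for illustration):

```python
from fractions import Fraction

# Reason A: each claim clears the .9 threshold, but their conjunction need not.
p_dead = p_bald = Fraction(91, 100)
p_both = p_dead * p_bald            # assuming independence, for illustration
print(p_both)                       # 8281/10000, i.e. .8281 < .9

# Reason B (Xerox principle): chaining two .91-reliable links erodes the same way,
p_B_given_C = p_A_given_B = Fraction(91, 100)
print(p_B_given_C * p_A_given_B < Fraction(9, 10))  # True: the flow breaks
# whereas links with conditional probability 1 compose without loss:
print(Fraction(1) * Fraction(1))    # 1
```

This is why any threshold short of 1 is unstable: conjunction and chaining both push the probability below whatever cutoff was chosen.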

 

C – It is needed to sustain the tie between knowledge and information. For example: if there are 94 juicy suckers and 6 crappy candies in a bowl, then the probability of getting a sweet, juicy sucker is .94. But you can’t say that you know you’re going to get a sweet, juicy sucker when you put your hand in the bowl; you could only say that if you got rid of the crappy candies and had a hundred sweet, juicy suckers.
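A quick check of the bowl arithmetic against the definition (the counts are the example’s own; the comparison to 1 is the definitional requirement):

```python
from fractions import Fraction

suckers, crappy = 94, 6
p_sucker = Fraction(suckers, suckers + crappy)
print(p_sucker)        # 47/50, i.e. .94 -- high, but short of 1

# On Dretske's definition, a draw carries the information "this is a sucker"
# only when that probability is exactly 1, i.e. once the 6 crappy candies
# are removed from the bowl:
print(p_sucker == 1)                 # False: no knowledge
print(Fraction(100, 100) == 1)       # True: knowledge is possible
```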

 

  2. “The definition captures the element that makes information an important epistemic commodity”.

Misinformation and disinformation are not varieties of information. Information is closely tied to truth. It is not the same as meaning, nor is it just anything (true, false, or meaningless) that is stored on a disk.

 

  3. The definition shows information to be objective. It exists whether or not anyone appreciates it or knows how to extract it.

The k in the definition relativizes the information to what the receiver already knows about the possibilities at the source. For example, if I am playing chess and learn that your knight is not on KB-3, then I know it’s on KB-5 only if I know that all the other positions to which your knight could have moved are already occupied by your pieces. However, this does not undermine the objectivity of the information. Without a lawfully regular universe no information is ever communicated, no matter what I know about chess, even if I’m a grandmaster and can beat Deep Blue.

 

  4. The definition shows that there is never just a single piece of information in a signal or structure.

For example: if a signal carries the information that s is a donut, it also carries the information that s has a hole in the middle, that it’s made of pastry, that it’s not a meat sandwich, etc.

While a linguistic meaning may be unique, the information in an utterance isn’t. For instance, when Herman says he’s not coming to the party, it means only that he isn’t coming to the party. Linguistically, it doesn’t mean that he thinks the party sucks or that everyone there is smelly, although his utterance may carry these pieces of information.

 

  5. “The definition of the signal has been relativized to k, what the receiver already knows”.

See the knight example above. The point is that someone who doesn’t know anything about chess won’t get the same information from the signal “the knight is not on KB-3” that a grandmaster will.

 

  6. “The informational content of a signal is a function of the lawful relations it bears to other conditions”.

For example: “The reason my thermometer carries information about the temperature of my room, but not about your room though both rooms are the same temperature, is that the registration of my thermometer is such that it would not read 72 degrees unless my room was at this temperature. This isn’t true of your room.”

This helps to explain how the informational content of a structure exhibits intentional properties. That is, “the informational content of a signal depends not on the reference of the terms used in its sentential expression, but on their meaning.” So, one cannot swap in a co-referring term without altering the content. For example, the information that “I love Gerald” is different from the information that “I love my rhinoceros”, even if Gerald is my rhinoceros.

 

Knowledge

-        The goal of the section: “to apply the theory of information to knowledge”.

-        Knowledge is defined as information-caused (or causally sustained) belief.

-        Dretske’s analysis is restricted to perceptual knowledge.

-        He’s seeking to give a more “realistic picture of what perceptual knowledge is”.

 

1. How does information cause something (a belief, for instance)?

-        For example: Any old knock on the door doesn’t tell you it’s a friend. What tells you it’s a friend is three quick knocks, a pause, and then three more quick knocks (who knows what kind of weird friends Dretske has). It is that particular pattern that constitutes the information-carrying property of the signal. So, when you hear this weird pattern of knocks, the information is that your friend has arrived, and this information causes you to believe that he/she has arrived. The knocks might also wake up the whole neighbourhood, but in that case it is the physical property of the knocks that did it, not the information.

-        So, Dretske’s not denying that there are physical properties of the signal, but he’s clarifying what needs to occur for your belief to be constituted as knowledge. For instance, if you believe your friend has arrived because there is just some random knock at the door not done in the ‘secret friend pattern’ knock, then your belief doesn’t constitute knowledge. A belief is knowledge only if it is caused by the appropriate information (in this case the patterned knock).

-        This theory of knowledge accounts for some of the puzzles that intrigue philosophers, for instance those arising for justificational accounts. On such accounts, one can be justified in believing something that is false, know that Q (which happens to be true) is a logical consequence of what one believes, and come to believe Q as a result. So, one is justified in believing the truth, but one doesn’t know Q. This is a problem for justificational accounts, but not for the information-theoretic model, because “you can get into an appropriate justificational relationship to something false, but cannot get into an appropriate informational relationship to something false”.

-        Also, the lottery paradox is disarmed. The information-theoretic analysis avoids its implications because the information that one is going to lose is absent: there is a slight chance, however negligible, that one might win.

-        An objection: Under Dretske’s theory of knowledge, nobody will ever have knowledge. “If every logical possibility is deemed a possibility, then everything is noise. Nothing is communicated.”

-        Reply: An analogy to assessing the emptiness of containers. If everything is deemed a thing (even dust, molecules, etc.), then no room, pocket, or refrigerator is ever empty. In the same way, there is the slightest weird possibility that Herman could hallucinate an entire football game, or that a voltmeter’s spring could behave like silly putty. Yet “the probability of these things happening is set at 0. If they remain possibilities in some sense, they are not possibilities that affect the flow of information.” He’s calling for a common-sense view of things.

-        Does this really answer the objection? There seem to be good reasons for believing that no knowledge could ever be communicated if the conditional probability is set at 1 (for instance, most of the time our measurement instruments aren’t that precise). But there doesn’t seem to be any good reason to take Herman’s hallucination of the football game seriously; it’s only a logical possibility. It doesn’t seem fair to compare the two. (Dealt with more in the objections section.)

 

Perception

-        Dretske’s attempt to apply the informational-theoretical analysis to perception.

 

-        He begins with the difference between extensional and intensional perceptions:

  1. Extensional perception: I see what happens to be a duck
  2. Intensional perception: I recognize it as a duck

“You can see a duck, get information about a duck, without getting, let alone cognitively processing, the information that it is a duck.”

 

-        Dretske then relates this distinction to beliefs.

-        For example: You glance quickly around a room that happens to have 28 people. Do you believe you saw 28 people? No – though the information was in this sensory representation, it was not cognitively transformed (through digitalization) into a belief.

-        What is digitalization? “A process whereby a piece of information is taken from a richer matrix of information in the sensory representation (where it is held in ‘analog’ form) and featured to the exclusion of all else.” So, when we see an apple in a bowl with a bunch of other stuff, we come to believe that it is an apple only after our mind has stripped away all the other information we take in and featured one component of it: that it is an apple.

-        So, “perception is a process in which incoming information is coded in analog form in preparation for further selective processing by cognitive centers”.

 

Belief

“An information-theoretic analysis of what has come to be called our propositional attitudes – in particular, the belief that something is so.”

 

-        Dretske begins by enumerating the two aspects of belief intentionality:

  1. If beliefs are internal representations then they must be capable of misrepresenting how things stand.
  2. If two sentences mean something different, then the beliefs we express with them are also different. This is the case even if both you and I say “I am sick”: the two statements have different references.

 

- Dretske then discusses why the informational content of a structure fails, by itself, to qualify as a belief:

1.     If nothing can be F without being G, then “no structure can have the informational content that s is F without having the informational content that s is G”. However, we can believe that s is F without believing that s is G, even if there is a lawful relationship between the two.

2.     We can believe that s is F, even if s isn’t F. However, nothing can carry the information that s is F, unless s is F. Example: we can believe all juicy suckers are orange even if all juicy suckers aren’t orange. However, we can never get the information that all juicy suckers are orange unless they are in fact all orange.

 

Dretske then deals with how meanings develop out of informational contents.

-        The example of a map: How does a patch of blue ink mean that there is a body of water in a specific location? It acquires this meaning “by virtue of the information-carrying role that that symbol plays in the production and use of maps. The symbol means this because that is the information it was designed to carry.”

-        Misrepresentation then becomes possible because through inadvertence the mapmakers might put blue ink where there actually is no lake. “Misrepresentation becomes possible, because instances of a structure that has been assigned an information-carrying role may fail to perform in accordance with that role.”

-        In the same way, neural structures acquire an information-carrying role during learning. We teach a child what a bird is by giving him positive and negative instances of the concept until he can successfully identify birds.

-        So, “the thinking that something is so, is characterized in terms of the instantiation of structures (presumably neural), that have, through learning, acquired an information-carrying role”. However, sometimes these structures don’t perform satisfactorily (the child sees an airplane and yells “Bird!”). This is false belief.

-        To learn what a bird is is to learn to recode analogically held information (s is a bird) into a single form that can serve to determine a consistent, univocal response to these diverse stimuli. 

 

Objections and Dretske’s Replies

 

Information theory

Objection: His own account of information theory does not really resemble the mathematical communication theory all that much.

Reply: Granted, but it was only a vehicle to get his point across. He was interested in (among other things):

1.     The idea of information as an objective commodity. “Objective in the sense that it is independent of its potential use, interpretation, or recognition.”

2.     The idea that two signals can be informationally different though they are the same in meaning, and vice-versa.

3.      The idea that informational value is a function of how many possibilities a signal forecloses.

 

Objection: Doesn’t help cognitive psychologists with their problems.

Reply: Who cares, I’m interested in epistemology not the psychological dynamics of cognition.

 

- Other objections about how the ideas embodied in MTC cannot be adapted and applied in the way Dretske uses them. I didn’t really get into them, but we can have lots of fun and discuss if you want.

 

The probability of 1?

Objection: If the communication of information requires a conditional probability of one, then your theory doesn’t allow for much knowledge, if any.

Reply: It’s fine with him to say that our justified beliefs are based on less than complete information. But belief is different from knowledge. There is a difference between believing that something weighs 7 pounds and knowing it weighs 7 pounds. To know something weighs exactly 7 pounds is to have the information that it weighs exactly 7 pounds. Dretske asks, “Is no instrument, no matter how reliable and sensitive, capable of telling us (with a probability of 1) that the object weighs between 6.9 and 7.1 lbs.?” He doubts it, but if there isn’t, then he won’t conclude that we can know something weighs 7 pounds.

 

Kyburg Objection: The theory is incorrect; just look at how information is applied across a series of events. The conjunction principle says that if S knows P and S knows Q, then S knows P and Q. But this doesn’t work in the case of a voltmeter. If we know the voltage drop across each resistor in a series is 7v, then by the conjunction principle we should also know the combined voltage drop across the series. But we don’t know it (with a probability of 1), because of random errors in the measurement tools. Therefore, “so much the worse for absolute knowledge”.

Reply: This is a misapplication of the conjunction principle. The principle says that if I know the voltage drop at resistor A is 7v and at resistor B is 7v, then I know that the voltage drop at A is 7v and at B is 7v. It does not say that I know the combined voltage drop across A and B.

- (aside) How, then, do I get knowledge of the combined voltage drop? It isn’t through the conjunction principle, and it isn’t through the Xerox principle (if C carries the information that B, and B carries the information that A, then C carries the information that A). Can we just not have knowledge of that, and if so, is that acceptable?

 

Ginet Objection: He takes Dretske to mean that the channel conditions necessary for the flow of information need only have a very small probability of changing. So, imagine a long communication chain. As we increase the number of links, the probability that one of the channel conditions will change and skew the flow of information gets higher.

Reply: The conditional probability of the channel conditions’ changing must be zero. In that case, there is no chance that the channel conditions change, and we can therefore rely on the information as knowledge.
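The disagreement can be sketched numerically (the eps value and chain length are made-up figures): any nonzero chance of a channel condition changing compounds over the links, which is Ginet’s worry, while Dretske’s reply sets that chance to zero:

```python
# If each link's channel conditions have even a tiny probability eps of
# changing, reliability decays as the chain grows; with eps = 0 (Dretske's
# reply) it stays at 1 no matter how many links are added.
def chain_reliability(eps, links):
    return (1 - eps) ** links

print(round(chain_reliability(0.001, 1000), 2))  # 0.37: a long chain erodes
print(chain_reliability(0.0, 1000))              # 1.0: no erosion at eps = 0
```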

 

Knowledge

Alston Objection:  Dretske’s theory of knowledge is too weak. There could be “a person (who) is caused to believe s is F, but who is so undiscriminating in his habits that he is disposed to believing this by a variety of signals lacking the requisite information”.

Reply: This belief is caused not by the information s is F, but by s’s looking somewhat F-like. Beliefs qualify as knowledge only if they are produced by properties that carry the relevant information.

 

Lehrer and Cohen example: A visitor goes to a planetarium, falls asleep, wakes up, and thinks he is seeing a real star. In fact he is, because the people in the planetarium opened a window, and you can’t tell the difference between the models and the real thing. In his sleepy state he forgets that he’s in a planetarium and believes he is seeing a real star. His belief is true, but we wouldn’t say he knows it; yet Lehrer and Cohen claim that under Dretske’s account we would have to concede knowledge to the visitor.

Reply: Dretske claims that the visitor does not know he is seeing a real star because he is not getting the information that it is a real star.

-        I fail to see how this answers the objection. He is getting the information that it’s a real star; that’s the problem. Just because he doesn’t know he’s seeing a real star doesn’t mean that the information isn’t reaching him. Or so it seems. Dretske should just concede that the visitor has knowledge, in the same way he does with the Rundle objection.

 

Rundle Objection: “It is still not clear that a belief caused in a specified way must be knowledge.” For instance, say the positions of the planets actually inform people about events in their lives. By observing the planets, someone might come to believe that they will fall in love, and they do. “But we should wish to know more about the way in which this information was extracted before we allowed that this belief, correct though it was, amounted to knowledge.”

Reply: If I find that there is in fact information being passed to people from the way the planets are aligned, then they do have knowledge. “I can accuse them of being unreasonable…(but) the question is whether they knew, not whether they were reasonable in thinking they knew or whether their knowing was a piece of luck.”

 

Barwise Objection: Signals carrying information only inform those “attuned” to the nomic relations that make its flow possible. “When (Dretske) defines the basic notion, he relativizes only to what the receiver knows about the possibilities at the source. More important is what the receiver knows about the nomic relations.” Thus, if you don’t understand English, an utterance won’t inform you of anything.

Reply: “I don’t have to know the laws whose existence enables me to learn – any more than I have to know the principles of logic in order to argue validly.” What matters is what information signals carry, not how they carry it; learning English is learning what information English signals carry.

 

Perception

Armstrong Objection: Why the distinction between beliefs and perception? Perception seems very belief-like. “Perceptions are propositional in character, involving both referential elements and classifications and sortings…It seems that we could think of perceiving as the acquiring of beliefs, even if most of those beliefs fade almost immediately…”

Dretske reply: Yes, perception seems very belief-like, but there is an important difference. I don’t get how Dretske explains it, but this is what I think: perception “is a process in which incoming information is coded in analog form in preparation for further selective processing by cognitive centers”. The further selective cognitive processing goes on to make our beliefs, like the child being given example after example of birds. It seems that Armstrong believes perception already involves a sort of digitalization, but Dretske doesn’t.

 

Belief

Under Dretske’s notion, what makes a gauge ‘mean’ that a tank is, say, half-full is that under certain conditions this is the information it carries. In the case of our minds, the ‘certain conditions’ are those that obtained during learning. If our cognitive system misapplies a concept (a structure whose information-carrying role was acquired under the conditions that obtained during learning), then we have a false belief. For example, if the kid misapplies the concept ‘bird’ to an airplane, then he has a false belief.