Psychosemantics and Neurosemantics
WEEK 12 – November 29, 2001
Statistical dependencies hypothesis
“The set of events relevant to determining the content of neural responses is the causally related set that has the highest dependence with neural states under all stimulus conditions.”
Theory of neurobiological computation
4 theoretical objects:
Vehicles: internal physical states
Referents: external physical things
Contents: "determined by causal relations and transformations."
Systems: neurobiological system
Three relations need to be explained:
1 - vehicle/world (sensation)
2 - inter-vehicular (inner nervous system)
3 - basic/higher order (hierarchical)
Statistical dependencies show causal encoding for 1 and 2. The encoding process is causal, identified by statistical dependence and energy transfer; e.g. light hits rhodopsin and generates an impulse. Sensation is thus physically causal in the nervous system, and cause and computation are isomorphic. In terms of this process there is no strict distinction between 1 and 2.
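A minimal sketch of how such a statistical dependence might be estimated, assuming a simulated light-intensity stimulus and a noisy firing-rate response; the numbers, the channel model, and the function names are illustrative, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stimulus: light intensity over many trials.
light = rng.uniform(0.0, 1.0, size=10_000)

# Toy "rhodopsin -> impulse" channel: the firing rate is a noisy
# function of the stimulus, so energy transfer induces dependence.
rate = 50.0 * light + rng.normal(0.0, 5.0, size=light.size)

def mutual_information(x, y, bins=20):
    """Estimate mutual information (bits) by discretizing both variables."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

print(f"I(stimulus; response) = {mutual_information(light, rate):.2f} bits")

# An unrelated variable carries (approximately) zero information.
unrelated = rng.uniform(0.0, 1.0, size=light.size)
print(f"I(unrelated; response) = {mutual_information(unrelated, rate):.2f} bits")
```

The causally driven response carries information about the stimulus while the unrelated variable does not, which is the sense in which dependence tracks the encoding.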
But this description does not yet say anything directly about the content of neural states.
Solution? Claim: Characterize these relations as transformations.
Basic Vehicle relations (level of individual neural firings and connections):
Biased encoding: a function of the input signal is decoded via connection strengths set up for a specific transformation of the data. Transformations are causal too.
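A minimal sketch of this decoding idea, assuming a toy population with rectified-linear tuning curves; the tuning model and all parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 200)            # input signal values

# Illustrative population: rectified-linear "tuning curves".
n_neurons = 40
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
signs = rng.choice([-1.0, 1.0], n_neurons)
activity = np.maximum(0.0, gains * (signs * x[:, None]) + biases)

# "Biased" decoding: solve for connection weights that decode a
# chosen function of the signal (here x**2) rather than x itself.
target = x ** 2
decoders, *_ = np.linalg.lstsq(activity, target, rcond=None)

estimate = activity @ decoders
print(f"RMS decoding error: {np.sqrt(np.mean((estimate - target) ** 2)):.4f}")
```

The same activity decoded with different weights would implement a different transformation; the "bias" lies in which function the connection strengths are set up to extract.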
Therefore the basic/higher order relations operate by the same causal, statistically dependent encoding, but higher order vehicles are claimed to be different: we choose which higher vehicles map onto which neurons and what the relevant content or information in each is. These choices need to be justified on functional, physiological, and other grounds. This mapping will be a lot of work.
So the content of a specific vehicle depends on transformations and its connections to other vehicles. We can't explain content without regard to computational/transformational relations.
Essentially, the idea is that higher order vehicles are also related to the external world. Basic/higher order relations relate neuroscience to psychology. The whole brain and external world can then be related in terms of statistical dependence, energy transfer, and biased encoding, forming a logically closed system.
Discussion: Epiphenomenalistic? All input comes from the world; are there no internal (mental) causes?
Do we not generate ideas internally? What good is forming these relations? And these objects?
E.g. Descriptionists (discrete) vs. Pictorialists (continuous):
– Vehicles of mental imagery are either one or the other.
Such debates question the identity of basic vehicles' contents. There is as yet no theory that picks out the content of basic vehicles. Therefore, basic vehicles are uncontroversially only functional units, not carriers of content.
The theory is up for grabs. Still, theories of content shouldn't be disconnected from the physical theory of vehicles, which is considered stronger. But the vehicle theory itself depends on identifying contents such as edge detection. Hence the claim that we can use neuroscience to build a theory of content.
Discussion: can we also use theory of content to explain the nervous system? Is it a one-way relation or do both aspects generate the theory?
The highest statistical dependency picks out referents, which are causally related to their vehicles. Referents have energy transfer to or from the relevant vehicles.
But we can't use statistical dependencies alone to pick out the referent. E.g. two people's identical neural states could stand in the highest statistical dependence, yet one's thought is not the referent of the other's. What causation adds is energy transfer from the external referent. Even so, statistical dependence and energy transfer alone are still too weak.
Consider the solipsism problem: e.g. an emulator's feedback both transfers energy to, and is statistically dependent on, other internal neurons, yet we don't want internal states as referents. So we require that computational relations cannot determine referents. External referents then remain candidates, while internal states do not.
We then have three constraints to pick out referents: highest statistical dependence, energy transfer, and no computational relation.
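A toy sketch of how the three constraints might jointly filter candidate referents; the candidates and their attributes are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    dependence: float          # statistical dependence with the vehicle
    energy_transfer: bool      # energy flows to/from the vehicle
    computational: bool        # stands in a computational relation

def referent(candidates):
    # Keep only causally related, non-computational candidates,
    # then pick the one with the highest statistical dependence.
    eligible = [c for c in candidates
                if c.energy_transfer and not c.computational]
    return max(eligible, key=lambda c: c.dependence, default=None)

candidates = [
    Candidate("dog in the world", dependence=0.8,
              energy_transfer=True, computational=False),
    Candidate("another person's neural state", dependence=0.9,
              energy_transfer=False, computational=False),
    Candidate("internal emulator feedback", dependence=0.95,
              energy_transfer=True, computational=True),
]
print(referent(candidates).name)   # -> "dog in the world"
```

The internal emulator state has the highest dependence but is excluded by the no-computational-relations constraint; the other person's neural state is excluded for lack of energy transfer.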
Discussion: Why can't we have internal states as referents? Has the mind been reduced to the solely externally perceptive, with no "mental" objects or referents of its own? What might an internal referent be?
Content
We need to distinguish referents and contents.
Referents can determine behaviour in Twin Earth cases, but non-referent-based meaning is needed to explain Frege cases.
Transformations are also relevant to determining contents, but there are many transformations. Some states have a natural sign and a natural sense; these don't mean much outside the nervous system, and if a state can't affect the behaviour of the system it has no real meaning. So the fact that referents and contents are both related through transformations doesn't imply they are the same.
Then, “the set of relevant transformations is the ordered set of all those causally related transformations that succeed (or precede) the vehicle.”
Therefore potential transformations capture the meaning. We will then have a core meaning based on the most relevant transformations.
Summary: content is determined by the transformations preceding and succeeding a vehicle. The referent is determined by the consequent set together with the highest statistical dependency; the other set determines the vehicle's relations to other vehicles.
A detailed example
Chris?
CH. 8 - Concerns with Content
Can statistical dependence explain content?
Causation requires energy transfer as well as statistical dependence, and such causal relations help fix content. This theory of causation could be wrong, but that shouldn't hurt the theory of content, since the content theory doesn't depend on any particular analysis of the causal relation between referent and vehicle, only on dependencies and energy transfer. We identify when the vehicle and referent are causally related, then use energy and dependencies to fix content.
So we don’t require a causal theory, but this would be good.
The highest dependence accounts for content determination. Higher-order vehicles depend on particular properties of the referent, not on any particular proximal stimulus (e.g. a photon). That is, we choose the dog as referent, not the light that hit the retina, for a number of reasons (e.g. Ockham's razor). So to get the right statistical dependence we need to attend to the properties that transformations may token, and then be careful about choosing the referent.
Good things about statistical dependence:
Strength of the dependence maps onto the precision of the representation (see the sketch after this list).
We can test the hypothesis about proper vehicles by knowing referents are external.
Syntax and semantics (vehicles and content) aren't independent: vehicles, referents, and contents are all related. Vehicles constrain content, which is encouraging; meaning is constrained by what carries it, as expected.
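A minimal sketch of the dependence/precision mapping, assuming a simple linear stimulus-response channel where added noise weakens the dependence; the numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
stimulus = rng.uniform(0.0, 1.0, 5_000)

for noise in (0.05, 0.5):
    response = stimulus + rng.normal(0.0, noise, stimulus.size)
    # Dependence strength (here: correlation) and the spread of
    # stimulus estimates recovered from the response.
    r = np.corrcoef(stimulus, response)[0, 1]
    spread = np.std(response - stimulus)
    print(f"noise={noise}: dependence r={r:.2f}, "
          f"estimate spread={spread:.2f}")
```

The low-noise channel shows strong dependence and a tight spread of estimates; the noisy channel shows weak dependence and a wide spread, i.e. lower representational precision.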
Occurrent and conceptual content
Understanding the nature of concepts:
Occurrent content: the content of currently active vehicles.
Conceptual content has stability. For example, the evolutionary function of representing a thing resides in a brain region; or the reference relation suggests that there is a unique law relating the concept and the thing.
Concepts should follow some criteria:
A) Are mental particulars and function as mental causes and effects
B) Are categories
C) Are learned
D) Are public (people share them).
Consider a neural net weighted to perform transformations: it can categorize or discriminate. Mental particulars are vehicles; causes and effects are accounted for within the net; it can be trained (learn) by allowing the weights to be modified; and public concepts can be accounted for if people have similar networks (as they do). Note there is room for variation in concepts, both across people and across nets.
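A toy perceptron sketch of how a net might meet criteria A-D; the data, learning rule, and parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-D "stimuli" from two categories (criterion B: categories).
n = 200
features = rng.normal(0.0, 1.0, (n, 2)) + np.where(
    rng.random(n)[:, None] < 0.5, [2.0, 2.0], [-2.0, -2.0])
labels = (features.sum(axis=1) > 0).astype(float)

# The weights are the vehicle (criterion A); they cause the output.
weights = np.zeros(2)
bias = 0.0

# Training modifies the weights (criterion C: concepts are learned).
for _ in range(20):
    for x, y in zip(features, labels):
        pred = float(weights @ x + bias > 0)
        weights += 0.1 * (y - pred) * x
        bias += 0.1 * (y - pred)

acc = np.mean((features @ weights + bias > 0) == labels.astype(bool))
print(f"training accuracy: {acc:.2f}")
# Criterion D: two nets trained on similar data end up with similar
# weights, so "public" concepts correspond to similar networks.
```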
Offline mental manipulation: vehicles that are statistically related to an image in the environment can be activated by (higher?) vehicles elsewhere (Kosslyn). Dog-related vehicles could then acquire occurrent content offline. The referent is then the dog in the external world that is usually related to this vehicle.
Reference: How do we refer to (make property ascriptions of) non-existent objects, or objects that are not present?
The reference relation is undefined when the concept cannot be mapped onto any element in the world (e.g. unicorns). Related referents allow a compound to form; this compound depiction is the referent of the non-existent-object (unicorn) vehicle. The signal is still external (e.g. horns and horses).
For a dog outside my light cone, we assign projections to the current referents of the relevant representations. Projections are hypothetical properties assigned by a vehicle to a referent. The referent of the term, whether the ascription is true or not, stays the same. So non-confirmable situations have contents dependent on cause, in the sense of being related to a referent, though the truth condition is not so dependent.
Misrepresentation revisited
Disjunction problem: how we assign the wrong properties to an object.
Ascribing properties to an object that isn’t present.
There are personal, social, and absolute types of misrepresentation; personal misrepresentation is the focus here.
We can see misrepresentation more subtly in terms of accuracy (the difference between a measurement and the correct answer) and precision (the variance of a set of measurements).
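A toy computation of the two notions, with invented measurements of a known true value:

```python
import numpy as np

true_value = 10.0
measurements = np.array([10.4, 10.6, 10.5, 10.5, 10.4])

accuracy_error = abs(measurements.mean() - true_value)  # systematic offset
precision = measurements.var()                          # spread of readings

print(f"accuracy error: {accuracy_error:.2f}")   # off by ~0.48 on average
print(f"precision (variance): {precision:.4f}")  # yet tightly clustered
```

A representation can thus be precise yet inaccurate (consistently wrong in the same way), which is one subtle form of misrepresentation.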
Under certain stimulus conditions, an object will have higher statistical dependence with some other vehicle. How we individuate these stimulus conditions should evolve with experiment: differences in a physical variable could count as differences in stimulus conditions, or the term could be left for other, more focused sciences to define.
The disjunction problem is avoided by the ability of statistical dependencies theory to avoid ‘within language’ disjunctions.
There are analogous social and absolute definitions of misrepresentation for the statistical dependencies hypothesis. The absolute form is in the realm of the metaphysical, since all relevant possible observers are considered. Each version changes what counts as the correct answer (a norm, or a metaphysical fact) and so changes accuracy. These solve the disjunction problem (pp. 79-80).
A hallucination (an example of the other kind of misrepresentation) is a representation of something with no referent present. There must be some naturalistic cause for it to occur. This is handled by claiming that the misrepresentation is very bad at its property ascriptions, while the referent is still the thing in the world.
Statistical dependencies also hold for Millikanian cases of false positives: the probability of calling a non-cat a cat can be higher than the probability of calling a non-cat a non-cat, for example.
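A toy set of detector probabilities illustrating the point; the numbers are invented:

```python
# Conditional probabilities P(response | true state) for a "cat"
# detector, chosen so that false positives outnumber correct
# rejections (illustrative numbers only).
p = {
    ("cat", "cat"): 0.70,          # hit
    ("cat", "non-cat"): 0.30,      # miss
    ("non-cat", "cat"): 0.60,      # false positive
    ("non-cat", "non-cat"): 0.40,  # correct rejection
}

# P(call it a cat | non-cat) exceeds P(call it a non-cat | non-cat),
# yet the vehicle's content can still be "cat" on the statistical
# dependencies account, since dependence is assessed across all
# stimulus conditions, not just the error-prone one.
assert p[("non-cat", "cat")] > p[("non-cat", "non-cat")]
print("false positive rate:", p[("non-cat", "cat")])
```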
The theory of content is still incomplete. Vehicles must be mapped within the neurological system.
Implications for folk psychology, concepts and consciousness are still unclear.