Automatic formal systems (Turing machines and computers)
Automatic formal system: a physical device that automatically manipulates the tokens of a formal system according to the rules of that system
Complications:
Getting the device to obey the rules
The control problem: how does the device select which move to make when there are several legal options?
Characteristics of a Turing machine:
An unlimited number of storage bins (inputs and outputs)
A finite number of execution units (states)
One indicator unit (machine head)
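A minimal sketch in Python of the three ingredients just listed (not from the notes; the function and names are invented for illustration): a dictionary of cells stands in for the unlimited storage bins, a transition table over a finite set of states stands in for the execution units, and a single head index is the indicator unit.

```python
from collections import defaultdict

def run_turing_machine(transitions, tape, state="start", halt="halt", steps=100):
    """transitions maps (state, symbol) -> (next_state, symbol_to_write, move)."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # unlimited storage bins
    head = 0                                           # the single indicator unit
    for _ in range(steps):
        if state == halt:
            break
        # Execution units: look up the current state and scanned token,
        # write a token, change state, and move the head one bin.
        state, cells[head], move = transitions[(state, cells[head])]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example rule table: flip every bit, halt on the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(flip, "1011"))  # -> 0100_
```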
Any automatic formal system can be formally imitated by some Turing machine
Formal imitation: like formal equivalence, but with the following addition
An imitating system has to offer corresponding legal options in each corresponding position AND make the corresponding choice in each case
A system that satisfies this clause is dynamically equivalent
The imitating system is divided into two parts:
The virtual machine: the part which directly corresponds to the system being imitated (an encapsulated formal system within the larger computer system)
The program: another part that works behind the scenes, making everything come out right (it makes the virtual machine obey the rules of the imitated system, with tokens corresponding to that system’s tokens)
Universal Turing machines: imitate any other Turing machine
A universal Turing machine can imitate any automatic formal system
Any standard digital computer can formally imitate any automatic formal system
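The imitation points above can be pictured with a toy sketch (an invented string-rewriting formalism, not anything from the notes): the general interpreter below plays the role of the program, each rule table handed to it defines a different virtual machine, and because the rules are just data, one and the same interpreter can imitate any system expressed this way.

```python
def imitate(rules, position, max_steps=50):
    """Repeatedly apply the first rewrite rule whose pattern occurs in the
    current position; the rule table is data, so this one interpreter can
    stand behind many different virtual machines."""
    for _ in range(max_steps):
        for pattern, replacement in rules:
            if pattern in position:
                position = position.replace(pattern, replacement, 1)
                break
        else:
            return position            # no rule applies: a halting position
    return position

# Two different "virtual machines" running on the same interpreter:
unary_adder = [("+|", "|+"), ("+", "")]   # ||+||| rewrites to |||||
parity_checker = [("||", "")]             # erase strokes in pairs

print(imitate(unary_adder, "||+|||"))     # -> |||||
print(imitate(parity_checker, "|||"))     # -> | (odd number of strokes)
```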
The control problem
How does an automatic formal system decide among several legal options at a given state?
Divide the machine into two submachines:
One machine that generates a number of legal options
One machine that chooses among the legal options (making this choice is the control problem)
Combinatorial explosion: the number of choices at each additional step multiplies the number of possible combinations so far
Solution: consider only the relevant possibilities and bypass worthless moves
Heuristic: method that bypasses worthless moves (a “rule of thumb”)
Instead of making every rule an algorithm (a rule that is guaranteed to give a result meeting certain conditions), we make some rules heuristics (rules of thumb)
A machine can follow a bunch of inconclusive rules of thumb as long as there are algorithms defining the heuristics themselves
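A sketch of the two-submachine split and of a heuristic that is itself defined by an algorithm (the game and all names are invented): with b legal moves per step, exhaustive search over d steps faces on the order of b**d combinations, while the rule of thumb below simply commits to one promising option at each step.

```python
def legal_moves(n):
    """Submachine 1: generate every legal option from position n."""
    return [n + 1, n * 2, n * 3]

def heuristic_choice(options, target):
    """Submachine 2: a rule of thumb - prefer the option closest to the
    target without overshooting. Not guaranteed to be best, but the rule
    itself is defined by a perfectly definite algorithm."""
    safe = [o for o in options if o <= target] or options
    return min(safe, key=lambda o: target - o)

def reach(target, n=1, max_steps=30):
    """Follow the rule of thumb instead of exploring the whole move tree."""
    path = [n]
    while n != target and len(path) <= max_steps:
        n = heuristic_choice(legal_moves(n), target)
        path.append(n)
    return path

print(reach(26))   # one path to 26, found without exhaustive search
```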
Digital and analog
Digital system: self-contained, perfectly definite, and finitely checkable
Eg. automatic formal systems
More advantageous than analog systems
Can simulate analog systems (when the operative relationships of the analog systems can be described in a compact and precise way)
More versatile
Each token is exact/perfectly definite –– no need for judgment calls
Analog systems: relevant factors have not been defined and segregated to the point where it is always perfectly definite what the current state is, and whether it is doing what it is supposed to do
There will often be slight inaccuracies and marginal judgment calls
More continuous or spectrum-like in nature (as opposed to the discrete states of something like a binary system)
Can often be digitally simulated
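A sketch of digitally simulating an analog system, assuming Newton’s law of cooling as the compact, precise description of the operative relationship (the scenario and numbers are invented): the temperature varies continuously, but discrete update steps on perfectly definite tokens approximate it as closely as we like.

```python
def simulate_cooling(temp, ambient=20.0, k=0.1, dt=0.5, steps=60):
    """Discrete-time approximation of dT/dt = -k * (T - ambient)."""
    history = [temp]
    for _ in range(steps):
        temp += -k * (temp - ambient) * dt   # one perfectly definite update
        history.append(temp)
    return history

trace = simulate_cooling(temp=90.0)
print(f"after {len(trace) - 1} steps: {trace[-1]:.2f} degrees")
```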
Semantics
Tokens of formal systems can have interpretations that relate them to the outside world, aka they have meanings
Interpretation: a systematic specification of what all the tokens of a system mean
Semantics: the general theory of interpretations and meanings
Semantic property: what any token means or its truth value
Not formal properties –– formal systems are meaningless
Formal tokens can be regarded in two ways:
Syntactical: meaningless markers moved according to the rules of a self-contained game
Semantic: have meanings and significant relations to the outside world
If you take care of the syntax, the semantics will take care of itself
Semantic engine: an automatic formal system with an interpretation such that the semantics will take care of itself
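An invented illustration of the slogan and of a semantic engine in miniature: the function below shuffles ‘0’ and ‘1’ tokens according to purely syntactic rules, yet under the standard interpretation (the numerals denote numbers) every output comes out true.

```python
def syntactic_add(a, b):
    """Add two binary numerals by token manipulation alone."""
    result, carry = "", "0"
    a, b = a.zfill(len(b)), b.zfill(len(a))          # pad to equal length
    for x, y in zip(reversed(a), reversed(b)):
        ones = (x, y, carry).count("1")              # how many '1' tokens appear
        result = ("1" if ones % 2 else "0") + result
        carry = "1" if ones >= 2 else "0"
    return (carry + result).lstrip("0") or "0"

# Interpretation: read the token strings as numbers and check the claim.
out = syntactic_add("1011", "110")                   # 11 + 6
print(out, int(out, 2) == int("1011", 2) + int("110", 2))   # 10001 True
```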
Interpretation and truth
Q: Why is truth important?
Formal tokens never intrinsically favor one interpretation scheme over any other, so the interpretation that is adopted must be distinctive in some way
A: Truth matters to interpretations because it provides a nonarbitrary choice among candidate schemes; the choice reflects a genuine relation between the system and what it is interpreted to be about, and wild falsehoods would amount to nonsense
Interpretation and pragmatics
Truth is not always all that matters (e.g., in ordinary conversation), so what is our criterion for the adequacy of interpretations?
“Making sense” is a very abstract notion
Criterion for “making sense”:
Rationality: obvious consequences of tokens in the current position should be relatively easy to evoke from the system (as outputs)
The system should have the tendency to root out inconsistencies among the tokens in its positions
Transducer: an automatic encoder or decoder which either reacts to the physical environment and adds tokens to the current position (an input transducer), or else reacts to certain tokens in the current position and produces physical behavior in the environment (an output transducer)
Guarantees reliable interaction with the world
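A sketch of a system wrapped in transducers (the sensor, tokens, and names are all invented): the input transducer reacts to a physical magnitude by adding a token to the current position, the formal rules are pure token-to-token moves, and the output transducer reacts to certain tokens by producing behavior in the environment.

```python
import random

def input_transducer(position):
    """Encode the environment: read a (simulated) temperature sensor and
    add a token describing it to the current position."""
    reading = random.uniform(15.0, 35.0)             # stand-in for a real sensor
    token = "HOT" if reading > 25.0 else "COLD"
    return (position - {"HOT", "COLD"}) | {token}

def formal_rules(position):
    """The formal system proper: purely token-to-token moves."""
    if "HOT" in position:
        return position | {"TURN_ON_FAN"}
    return position - {"TURN_ON_FAN"}

def output_transducer(position):
    """Decode tokens into (simulated) physical behavior."""
    print("actuator: fan on" if "TURN_ON_FAN" in position else "actuator: fan off")

position = set()
for _ in range(3):                                   # a short perceive-act loop
    position = formal_rules(input_transducer(position))
    output_transducer(position)
```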
Conversational cooperativeness: the system’s output is communicative and responds within the scope of the question
Felicity conditions: speech acts have various prerequisites, and if they are not satisfied, there is something wrong with doing the speech act
Emphasizes the importance of considering outputs in relation to one another and the overall context of the semantic system
Interpreting an automatic formal system is finding a way of construing its outputs such that they consistently make reasonable sense in the light of the system’s prior inputs and other outputs
Truth is not always a sufficient condition for the output tokens to make sense
Cognitive science (again)
Basic tenet of cognitive science: intelligent beings are semantic engines –– automatic formal systems with interpretations under which they consistently make sense
People and computers are different manifestations of the same underlying phenomenon
Any semantic engine can be formally imitated by a computer
Is cognitive science misconceived? Are people not semantic engines? Two strategies for arguing that the basic tenet is wrong:
Hollow shell strategy: no matter how well a semantic engine acts as if it understands, it can’t really understand anything because it lacks some X factor
X = consciousness
Since we have no idea what consciousness is, how can we be sure that genuine understanding is impossible without it or that semantic engines won’t ever have it?
X = original intentionality
A semantic engine’s tokens only have meaning because we give it to them; their intentionality is derivative
In other words, computers don’t mean anything by their tokens, they only mean what we say they do
X = caring
A system couldn’t mean anything unless it had a stake in what it was saying
Poor substitute strategy: denies that semantic engines are even capable of acting like they understand