Having reached the end, I find his claim less drastic than I thought it was along the way, and it bothers me less.
The rest of this post is a jumble of tangents my thoughts took.
Chinese Room
Searle's Chinese Room is prominent in Schulman's article.
I don't like it, because understanding Chinese isn't really the question; the question is understanding what the other person is saying. A language like Chinese or English is empty on its own – the incoming notes could be a well-formed but completely nonsensical Chinese statement, and the person in the room wouldn't know what to say back. The Chinese Room example just wraps a known intelligence – a human – in a layer of machine (the rules of translation).
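To make that emptiness concrete, here's a toy sketch – my own hypothetical illustration, not anything from Searle or Schulman – of what the room's rulebook amounts to: a purely formal mapping from input symbols to output symbols. It replies just as fluently to nonsense as to sense, because nothing in it knows the difference.

```python
# Toy model of the room's rulebook: a purely formal symbol-to-symbol
# mapping with no semantics attached. (Hypothetical illustration.)

RULEBOOK = {
    "你好吗?": "我很好。",           # "How are you?" -> "I am fine."
    "绿色的想法睡觉吗?": "是的。",    # "Do green ideas sleep?" -> "Yes."
}                                    # well-formed nonsense gets a well-formed reply

def room(note: str) -> str:
    """Do exactly what the person in the room does: match the incoming
    symbols and hand back the symbols the rules prescribe."""
    return RULEBOOK.get(note, "请再说一遍。")  # "Please say that again."

print(room("你好吗?"))           # sensible question, fluent answer
print(room("绿色的想法睡觉吗?"))  # nonsense question, equally fluent answer
```

The rules just pass symbols around; whether the exchange means anything is decided entirely outside the room.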
This relates to the misstep that I see Schulman taking: he talks about "intelligence" or "understanding a children's story" as if such things exist separately from the humans exhibiting them. We have no real definition of "intelligence". I believe that's because it doesn't exist separately from us – we are machines interacting, and we think we do it in a special way called intelligence. The Chinese Room, like the Turing Test before it, involves a human in the intelligence test because we need to sprinkle some "real intelligence" into the scenario, since we cannot otherwise characterize how to pass the test.
What?
I don't even know where he's going with this, but I disagree with it. He eventually squirrels it into relevance with one of Searle's apparent contradictions.
But it would be incorrect to take the notion of a hierarchy to mean that the lowest layer—or any particular layer—can better explain the computer’s behavior than higher layers. Suppose that you open a file sitting on your computer’s desktop. The statement “when I clicked the mouse, the file opened” is causally equivalent to a description of the series of state changes that occurred in the transistors of your computer when you opened the file. Each is an equally correct way of interpreting what the computer does, as each imposes a distinct set of symbolic representations and properties onto the same physical computer, corresponding to two different layers of abstraction. The executing computer cannot be said to be just ones and zeroes, or just a series of machine-level instructions, or just an arithmetic calculator, or just opening a file, because it is in fact a physical object that embodies the unity of all of these symbolic interpretations. Any description of the computer that is not solely physical must admit the equivalent significance of each layer of description.
The layers are not equivalent. He earlier spoke of the duality between the symbols manipulated by a formal system (as implemented by a computer, for example) and the real object they represent. I think he 1) chose a bad example in "opening a file", since it's a very computer-centric concept, and 2) left out the notion that the monitor connects all the electronic state changes back to the real world – the file is open because we see it presented to us on the screen. That is the result: showing the file's contents to the user. All the lower layers are just how it was done – a distinction he had previously pounded away on.
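A toy sketch of the asymmetry I mean (the function names are hypothetical, not anything from the article): each lower layer only describes mechanism, and the top of the stack is the one place where the computation reconnects to a person looking at a screen.

```python
# Hypothetical layers of "opening a file". Only the top layer
# produces the result a human cares about; the rest is the "how".

def flip_states(data: bytes) -> bytes:
    """Bottom-layer stand-in: raw state changes shuffling bits around."""
    return bytes(data)

def read_blocks(path: str) -> bytes:
    """Middle layer: pull the file's bytes off storage."""
    with open(path, "rb") as f:
        return flip_states(f.read())

def open_file(path: str) -> None:
    """Top layer: the only step that touches the real world --
    the contents appear on a screen in front of a user."""
    print(read_blocks(path).decode("utf-8", errors="replace"))

# open_file("notes.txt")  # the result; everything below is machinery
```

Nothing in the two lower functions mentions a user; that only enters at the top, which is why the layers don't carry equal significance.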
Ray Kurzweil
When I say the brain is a machine, I mean there's nothing unnatural about it – no magical piece (nothing playing the role of the man in the Chinese Room). I do not mean it needs to be digital. In fact, I doubt it is. Schulman points out that Kurzweil doubts it too. Maybe I should read his book.
An old friend
Also, Schulman doesn't smell the problem of induction.
As an empirical hypothesis, the question of whether the mind can be completely described procedurally remains open (as all empirical hypotheses must), but it should be acknowledged that the failure thus far to achieve this goal suggests that the answer to the question is no—and the longer such a failure persists, the greater our confidence must be in that answer.
There's no basis for this claim. A persistent failure to describe the mind procedurally tells us almost nothing about whether it can be done – drawing ever-growing confidence from that track record is exactly the inference the problem of induction warns against.