aZerogodist wrote:Damn, now I feel guilty turning off this machine (one day it will have rights)
No need to feel guilty; think of it as the computer equivalent of something lower than a sea sponge (in an evolutionary sense) - it is too primitive to feel either pain or distress.
Of course, you can still give your computer a reassuring hug from time to time. I do.
mkaobrih wrote:How would you get around the Chinese room problem? Where the computer is just identifying “shapes” and has no comprehension as to what it’s doing.
Thanks for linking to that; I must admit it's the first time I've heard of it.
I’ve only had a brief look at the argument and its counter-arguments, but here are my initial thoughts on Searle’s Chinese Room argument. (For anyone reading this thread who, like me, hasn’t come across it before: it’s probably the most famous argument against the possibility of genuine machine understanding.)
I don’t think that his end position is radically different from mine. I was speculating on the possibility that eventually AI would successfully and faultlessly be able to simulate human intelligence. The question was how one would be able to distinguish the two; my reply was that under those conditions the distinction was irrelevant.
My position is (apparently) called the weak AI view, and what Searle is arguing against is the strong AI view (where a computer would actually be able to understand and have other human mental abilities - sort of like C-3PO or Twiki), so in that sense we wouldn’t have an argument. Where he and I would differ is that I don’t see why the weak AI view has to preclude the strong AI position.
One of the problems with the Chinese Room argument is that it is old enough to predate all the recent advances in the Theory of Mind - bearing in mind (no pun intended) that we don’t yet have a complete Theory of Mind for humans, let alone for other species such as higher mammals (bonobos, dogs) or corvids, which do display evidence of some intelligence. It seems to me to fall into the trap of the Cartesian theatre (Daniel Dennett’s phrase) by making it appear that human consciousness and understanding are somehow significantly different and distinct from the algorithms and scripts that run a computer “brain”. Certainly ours are more complex at the moment, but that is not what is being argued here.
When looking at the actual thought experiment contained in the argument, a number of issues occurred to me. The analogy doesn’t work because the “human” in this scenario is the equivalent of a robot arm in a factory. I wouldn’t dispute that the human has no comprehension of Chinese; but it does not follow that the computer in the room does not. Given the complex nature of language acquisition, for this setup to work (it appears to everyone outside the room that someone inside the room understands Chinese), the computer must have a masterful command of the language. Does it “understand” the language?
I don’t know.
But how would you prove that it does not?
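To make the intuition concrete, here is a deliberately crude sketch of the room as pure symbol manipulation. Everything in it (the rulebook entries, the fallback reply) is invented for illustration; Searle's actual setup imagines a vastly larger rulebook, but the principle - matching shapes to shapes with no meaning consulted - is the same.

```python
# A toy "Chinese Room": the responder matches incoming symbols against
# a rulebook it does not comprehend. It never interprets the characters,
# it only pattern-matches their shapes. All entries are made up.

RULEBOOK = {
    "你好": "你好!",            # "if you see these shapes, emit these shapes"
    "你会下棋吗?": "会一点。",
    "你懂中文吗?": "当然懂。",   # the room *claims* an understanding it lacks
}

def room_reply(symbols: str) -> str:
    """Follow the rulebook mechanically; no meaning is ever consulted."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "please repeat"
```

From outside the room, a sufficiently rich version of this lookup is indistinguishable from comprehension, which is exactly where the burden-of-proof question above bites.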
Let’s look at a simpler and dramatically more primitive comparison: Deep Blue (IBM’s chess-playing computer) beat the human world champion Kasparov in 1997 in a six-game match. Now, I’m not arguing that Deep Blue has self-awareness, or anything that amounts to human consciousness. It was designed to play chess, nothing else. It couldn’t tie its own shoelaces if it tried.
My question is:
Does Deep Blue understand chess?
Before you answer, consider this:
Does Garry Kasparov understand chess?
How do you know?
As far as their ability to play chess goes, what is the difference between Garry and Blue?
How would you prove that either of them is merely following instructions without “understanding” chess?
My point is that although they may perform by significantly different routes, as far as chess goes, both the human and the computer display similar mastery of the game. In fact (warning: anecdotal evidence!) one of the most fondly remembered moments of their encounter was when Deep Blue “took back”, or reversed, a move it had just made: it decided it had made a mistake and corrected it on its next turn. The incident made everyone laugh, because it was generally agreed that a human would have been too embarrassed to admit in public that he had made a mistake.
OK, true, Kasparov can out-perform Deep Blue on every other front. No argument there. Deep Blue was a one-trick pony in every respect. But my point is that, in the matter of chess playing, it would be hard to establish that Kasparov has chess abilities that Deep Blue does not.
Corvid intelligence: http://news.nationalgeographic.com/news ... _apes.html
Canine vocabulary: http://www.sciencedaily.com/releases/20 ... 072744.htm
Bonobo language ability: http://www.smithsonianmag.com/science-n ... onobo.html
The Chinese Room Argument (Stanford Encyclopedia of Philosophy)