Anarchic Fox on 2/1/2024 at 19:30
Quote Posted by demagogue
The problem with what you're saying is that Searle's argument is that it's impossible for humans to have the experience of speaking or translating a language by executing a computer program. You're making the same point he's making with that thought experiment. He's saying that whatever language is, it's not the brain running a computer program, because that's, as you say, impossible; or he's inviting an explanation of why the brain's natural operation is different in kind from a person doing the same work intentionally, since the latter can't be like natural language.
Searle argues that a computer cannot understand language because a human being mimicking a computer would not understand the language being translated. I argue, rather, that Searle's thought experiment demonstrates nothing; in particular, it demonstrates nothing about what a computer could or could not understand.
Quote:
That said, I think if you gave someone 60 years working 364 days a year, they would be able to accomplish a translation program within that time frame. I don't think it'd take much more than several million operations, and a person can count to 1 million in ~11.5 days.
...No, I do not think that is within human capabilities, particularly once you recall that only minimal errors in execution are allowed, and the person also has to frequently and reliably look up information within a dataset whose size is on the order of gigabytes.
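For a rough sense of the scale involved, here's a back-of-envelope sketch. The per-step rate and per-sentence operation count are my own assumptions for illustration, not figures from anyone's post or from Searle:

Code:
# Back-of-envelope sketch: how long would hand-executing a translation
# program take? The rates and operation counts below are assumptions
# for illustration only.

SECONDS_PER_DAY = 24 * 60 * 60            # 86,400

# Sanity check on the "count to 1 million in ~11.5 days" figure,
# assuming one count per second with no breaks:
days_to_count_a_million = 1_000_000 / SECONDS_PER_DAY
print(f"{days_to_count_a_million:.1f} days")   # ~11.6 days

# Total hand-executed steps available in 60 years at 364 working days/year,
# assuming one step per second, 24 hours a day (wildly generous):
budget = 60 * 364 * SECONDS_PER_DAY        # ~1.9 billion steps

# If each translated sentence needs, say, millions of low-level steps plus
# lookups in a multi-gigabyte dataset, the budget is exhausted after only a
# few hundred sentences -- nowhere near a lifetime of conversation.
steps_per_sentence = 5_000_000             # assumed, for illustration
print(f"sentences in a lifetime of work: {budget // steps_per_sentence}")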
Quote:
But the real point is, if you imagine the person could do it, you still can't imagine the point where they'd have consciousness of the language, right up until the end. You don't even have to get to the end of the process to see that much, which is what his point stands on.
Here's a different way I can phrase my point. A being capable of executing computer code accurately enough and for long enough to run translation software would not be a human being. As such, we can make no claims about what it would or would not understand.
Also, I regard with suspicion any argument that relies on me being unable to imagine something, because that may simply be a limit of my imagination. It's like Schopenhauer's nonsense, where he claims that everything is essentially Will, because we cannot imagine it being anything else.
Quote Posted by heywood
I just tried to read Searle's 1990 paper and it's terrible.
At some point in the class I mentioned, I developed an allergy to Searle's writing. He is... tedious at best.
heywood on 2/1/2024 at 22:03
The thought experiment doesn't require the person in the room to literally act out a program for a binary computer system, executing low level operations like a CPU. We humans operate at a much higher level of intent, so the instructions in the book would be much higher level than machine code. Searle states the rule book is in English and contains sufficient information for a person to pass a Turing test for Chinese. But he also says the database is in Chinese. How is a person going to utilize that without learning any Chinese? It looks like a poor thought experiment from multiple angles. I'm surprised this made it into Scientific American back then.
heywood on 2/1/2024 at 22:20
Quote:
But the real point is, if you imagine the person could do it, you still can't imagine the point where they'd have consciousness of the language, right up until the end. You don't even have to get to the end of the process to see that much, which is what his point stands on.
Suppose it's you. You're given a rule book in English for answering questions in Chinese, using a database in Chinese. You can't imagine being conscious of the Chinese language? The book doesn't have to tell you it's Chinese and you don't need to know the purpose of the experiment, but how could you not be conscious of the fact that you are communicating in a foreign language, when the instructions you are following are for doing just that?
Anarchic Fox on 3/1/2024 at 01:05
Quote Posted by heywood
The thought experiment doesn't require the person in the room to literally act out a program for a binary computer system, executing low level operations like a CPU.
Indeed. For instance, my dictionary and grammar satisfy the original paper's criteria. They're a (quite hefty) set of rules, and they're "formal" in the sense that they require no prior understanding of the target language, only a knowledge of its glyphs. However, Searle also goes on to say that the set of rules is a program, which brings in additional, unstated attributes.
And that brings us back to my other point, which is that executing a translation program is beyond human capabilities, even if the program is in a high-level programming language. And since it's beyond human capabilities, a hypothetical successful execution of the program demonstrates nothing. "If a human being could move faster than the speed of light, they could send messages to the past" is a true statement, but it does not demonstrate that one can send messages to the past.
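To make concrete what a "formal" set of rules means here, a toy sketch of a rule-book-style responder that matches symbols it does not understand. The table, symbols, and fallback are invented placeholders for illustration; they stand in for the idea of Searle's rule book, not for any real translation program:

Code:
# Toy "rule book" responder: it maps input symbol strings to output symbol
# strings by pure pattern matching, with no access to meaning.
# The rules and symbols below are invented, not Searle's examples.

RULE_BOOK = {
    "你好吗": "我很好",          # the operator never learns these form a greeting
    "你饿了吗": "我想要一个汉堡",  # ...or that this one orders food
}

def room(input_symbols: str) -> str:
    """Return whatever output string the rule book pairs with the input."""
    # Exact-match lookup only; anything not in the book gets a fixed fallback.
    return RULE_BOOK.get(input_symbols, "对不起")

print(room("你好吗"))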
Quote:
We humans operate at a much higher level of intent, so the instructions in the book would be much higher level than machine code. Searle states the rule book is in English and contains sufficient information for a person to pass a Turing test for Chinese. But he also says the database is in Chinese. How is a person going to utilize that without learning any Chinese? It looks like a poor thought experiment from multiple angles. I'm surprised this made it into Scientific American back then.
The bulk of the work has been put in by other philosophers, who have made more reasonable versions of the argument in a less awful writing style. However, at this point there are enough variants that I'm not sure we can talk about THE thought experiment anymore; what gets discussed is no longer the original, with all its deficiencies.
demagogue on 3/1/2024 at 02:06
Just to answer that: your input is questions in Chinese characters and your output is the answer or an appropriate response in Chinese characters; you don't understand Chinese characters, and ex hypothesi nothing in the code links any character or sequence with any hint of a meaning in English. You can probably deduce that you're communicating in a foreign language, but you still don't know a thing about what you're saying.
heywood on 3/1/2024 at 03:37
Again, you couldn't pass the Turing test for Chinese without having the equivalent background knowledge of a typical native speaker. That knowledge must exist in the book and/or database and the person doing the job would be exposed to more and more of it over time. The idea that they wouldn't learn any Chinese by doing it is absurd.
Sulphur on 3/1/2024 at 04:04
Over X amount of time and Y interactions, sure, a person making guesses about what they're doing in response to questions might pick up a smattering of semantic knowledge. But I don't think that means anything for a single transaction in isolation; that is, isolated from all other transactions.
So are we saying if a computer has a state record, and it's given components of awareness and pattern matching, and a will to use those in a manner similar to inquisitiveness, that constitutes consciousness for an entity? Because that seems to be part of an extrapolation to the problem that TCR is saying machines aren't capable of.
Anarchic Fox on 3/1/2024 at 04:29
Quote Posted by Sulphur
So are we saying if a computer has a state record, and it's given components of awareness and pattern matching, and a will to use those in a manner similar to inquisitiveness, that constitutes consciousness for an entity? Because that seems to be part of an extrapolation to the problem that TCR is saying machines aren't capable of.
Dunno if that was directed at me, but if so, I'm agnostic on the whole question. It's a subject where I'm incapable of unbiased reasoning. I criticize TCR because I think it's a terrible argument, not because I believe something contrary to its conclusion.
heywood on 3/1/2024 at 12:32
Quote Posted by Sulphur
Over X amount of time and Y interactions, sure, a person making guesses about what they're doing in response to questions might pick up a smattering of semantic knowledge. But I don't think that means anything for a single transaction in isolation; that is, isolated from all other transactions.
We don't learn a language all at once.
All the necessary semantic knowledge to answer general questions has to be in the book or database. The book has to introduce enough Chinese for the person to parse the input, determine what the question is, form queries for the required information, operate the database, and assemble the response. There is no way you could do that job and remain totally ignorant of Chinese, as Searle insists. That's just one of the issues I have with this thought experiment.
Quote:
So are we saying if a computer has a state record, and it's given components of awareness and pattern matching, and a will to use those in a manner similar to inquisitiveness, that constitutes consciousness for an entity? Because that seems to be part of an extrapolation to the problem that TCR is saying machines aren't capable of.
I'm not saying anything about consciousness, and I don't think it's worth speculating on that until there is a testable definition, a point you made earlier. Same goes for sentience. These are empty words. When it comes to natural languages though, we do have a practical test proposed by Turing. If that's inadequate, we would want to know why and what's missing.
I would not approach it as Searle did, with a problem of parsing questions and retrieving information, which computers are designed for. Instead, I would look at the aspects of language that are used for purely human tasks we don't look to computers to help us with: managing a conversation, adapting your language to someone's personality, recognizing someone's mood, using humor appropriately, providing moral support, making a pitch, persuasion, etc.
demagogue on 3/1/2024 at 17:20
For the record, Searle's go-to situation for his Chinese Room argument is looking up at a menu, walking up to the clerk behind the counter of a fast food joint, and properly ordering a hamburger.
He's saying computers may be able to be programmed to say the right thing, but they don't understand what a hamburger is, or that what they're saying is going to motivate a hamburger being delivered to them, or what the point of having a hamburger delivered to them is (because they're hungry and the whole reason for coming into this situation was to get one, take it back to a table, and eat it), and then a whole world of social norms surrounding the situation, like that you have to stand in a line, there's a certain way you have to phrase your order so it'll be understood, etc. His starting point I think lines up pretty exactly with your last sentence. When we walk up to that situation, there's a whole host of conscious experiences at play that get us to say the right thing and we're satisfied.
When the Chinese Room walks up to that situation, the person is inside the box and doesn't see any of this. All they know is squiggle squiggle comes in and then squiggle squiggle goes out. It doesn't even really matter if it took 4000 years and has a ton of mistakes, because the person in the box doesn't even know that there's another person on the other side, what kind of person, nothing about a restaurant, much less what kind of food is involved, much less what's being said about that food and why.
I guess the insight that helps in understanding why Searle is framing it this way at all is the context of formal linguistics at the time, which was the context of formal Chomskyism and the rush of over-optimism that first-generation AI was going to "solve language" by purely formal means very soon. The early peak of that was around Eliza, in the mid-1960s. The AI scene was already pretty tempered by the late 1970s. Searle wrote the Chinese Room paper in 1980 as a critique, so he was already behind the curve for people in linguistics, AI, and formal cognitive science, but you know academic philosophy is always about 10 years behind the curve.
But also, even if people were clear that formal Chomskyism had collapsed in its vanilla form -- the idea that there's only syntax and truth tables in some formal predicate calculus and that's all a language needs to work, going all the way back to Carnap and the logical positivists in the 1920s, and really back to Frege in the 1890s, who was rabidly anti-psychology and thought logic was a formal system in set theory where you only needed a few rules for symbol manipulation, what Turing would later formalize for computers, and had nothing to do with human psychology or experience -- by 1980 it was becoming clear that something was wrong with that Fregean legacy, but not everybody was convinced yet. Hence Searle writes this paper to sharpen the argument; but even more deeply, people weren't sure, if it was wrong, what exactly was wrong with it.
Also, while it's technically making a point in formal linguistics about the logic and necessary elements for semantics, it's really a proxy for what brains do in constructing language. There were also people being deflationist about the brain, saying that neurons or neural systems were basically glorified logic gates triggering logical operations just as Frege would have liked, and that there's no need for consciousness or experience in that. You could in principle code a computer to exactly copy those logical gate operations and, while it might require more time and capacity than we have with our present tech, in principle that's all you need. Searle wanted an argument to say that that wasn't sufficient: you needed consciousness and real-world experience connecting language to images in practice.
What's interesting about it is that, while many philosophers agreed that Searle missed the mark in his criticism or with this thought experiment, there was still a lot of disagreement at the time about why his argument collapsed.
But what I found interesting, or kind of ironic, is that Searle's paper was a few years short of the 2nd AI revolution (connectionism) and the sea change that happened from the late 1980s up through the mid-1990s, when consciousness itself became a valid subject for scientific inquiry. There are, of course, features of pure consciousness subject to scientific scrutiny, like introspection studies (e.g., how fast can you rotate an object in your imagination), and fMRI tech then allowed scientists to start linking introspection reports with regions of the brain. One of the first big salvos in that movement was David Marr's book on Vision, which developed the field of computational neuroscience (brain regions aren't blind logical gate operators like Frege would have liked; they have a holistic function you can convert into an algorithm -- he kind of cut down Searle's starting point). Then you had first-gen connectionist systems (neural nets) that could do things like identify faces, or whether something was a person or a dog, and it peaked with Chalmers's The Conscious Mind in 1996, which was even flirting with dualism (a wholly different physics is involved in consciousness than "mere logical gates"), which Searle had been trying to say for ages by that point.
Searle was trying to prefigure that movement in this 1980 paper, but he didn't really have the resources that were going to explode onto the scene a decade later. He did go on to publish a lot once he had those resources, although I don't think he ever really caught up; he got stuck in his early-1980s way of thinking.
Anyway, I wrote that out because I think understanding the context in which he wrote his paper can help in understanding his motivation and the point he's trying to make. I mean the fact that it was written at a time when formal semantics was much more of a dogma than it is now -- give me enough logic gates and I can do anything a human can -- and there was more resistance to criticism of it, and there hadn't been that many holes poked into it yet.