Pyrian on 23/12/2023 at 22:05
Quote Posted by Sulphur
Just coded responses to any possible question with a specific, coded answer. ...the way TCR is constructed means it should have a pre-constructed realistic response for every possible question ever posed.
"What was the last question I asked you?" I kinda suspect that Turing's incompleteness theorem can be used to prove this isn't
possible even in an abstract thought-experiment sort of way.
zombe on 24/12/2023 at 11:14
Quote Posted by Nicker
Are hive-minds sentient?
That one is easy. You are a hive-mind of neurons (~ really crappy ants). Are you sentient?
Usually people just stumble over what to call sentient in a way that preserves their own self-importance - i.e. where and on what grounds to draw the arbitrary line of separation. Is my perpetually drunk neighbor sentient? Is a dog sentient? A bird? A fish? Plants? Plate tectonics? Rocks/crystals? F'n "empty" space? Google search results (we being the ants that, through Google Inc, form a feedback loop that could give rise to a sentience neither party is aware of - I am only half joking - think of memetics)?
Most of that being un-bloody-likely for any level of reasonable "sentience" I would be willing to accept ... but not inconceivable.
It is hard to judge sentience without being able to directly inspect it - especially without having settled on what sentience exactly is supposed to mean.
I feel like this is one of those questions that define their answer and are therefore not a question to begin with (i.e. to answer it you have to define what you are asking till the question becomes the answer).
Nicker on 24/12/2023 at 21:38
Quote:
I feel like this is one of those questions that define their answer and are therefore not a question to begin with (i.e. to answer it you have to define what you are asking till the question becomes the answer).
That was kind of my point. We can't even offer workable definitions of terms like consciousness and sentience. We conflate intelligence with humanity, humanity with consciousness, emotions with consciousness, hominids with humanity. Two legs conscious - everything else NOPE! And the only reference we have is ourselves. Used to be that humans were distinguished by being the only creatures that used tools and made war. But now we are not alone in that.
So if we decide that a machine cannot be conscious or a hive cannot be conscious, no matter how complex they become, by what right do we do that? Aren't we just saying that they are not similar enough to us? We can't even point at the thing in us that makes us conscious/sentient. We just know it's there. We think.
We are just processing information and rendering outputs. Semantic processors. Emotional processors. Biochemical processors. We feel singular but are we?
SO... complexity. If consciousness is not just an emergent property of complexity, then where does it come from (assuming you can define what it is)?
Cipheron on 26/12/2023 at 22:39
Quote Posted by Pyrian
"What was the last question I asked you?" I kinda suspect that Turing's incompleteness theorem can be used to prove this isn't
possible even in an abstract thought-experiment sort of way.
That's a little confused. It's Gödel's Incompleteness Theorem. Turing proved the Halting Problem is not solvable.
And neither of those things seems to apply to the situation you describe. The Incompleteness Theorem is about how some truths cannot be proven within a specific set of axioms. But if you add more axioms they can be proven. However, the new, bigger set of axioms will always have more constructible "unprovable truths" of its own.
The Halting Problem is about whether you can write an algorithm which will tell you whether any other algorithm will end in finite time. That might be more applicable, but you could always construct the rules of TCR so that it halts. The Halting Problem doesn't mean you can NEVER tell if a specific system will halt, you just can't universally decide this for all theoretical systems.
As for the question, it would not be solvable if the book doesn't have state. As soon as you allow any sort of marks, bookmarks, counters or tokens to be used by the man in TCR then it's a solvable question.
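To make the state point concrete, here's a toy sketch in Python (my own names, purely illustrative, nothing to do with any real implementation): a fixed lookup table plus a single slot of memory, which is all you need to answer "what was the last question?":

# Toy rule book: a fixed response table plus one slot of state (the "bookmark").
RULES = {
    "hello": "Hello to you too.",
    "is the room conscious?": "That is not for me to say.",
}
last_question = None  # the single mark/counter the man in the room keeps

def answer(question):
    global last_question
    key = question.strip().lower()
    if key == "what was the last question i asked you?":
        reply = f'You asked: "{last_question}"' if last_question else "You have not asked anything yet."
    else:
        reply = RULES.get(key, "I do not understand.")
    last_question = question  # update the bookmark after replying
    return reply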
However, the main issue is that being able to write answers to questions has ZERO to do with whether or not a system is actually conscious. You can write a system that only uses basic statistical language modelling to write fake responses and make them realistic and adaptive. See ChatGPT.
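For a cartoon of what "basic statistical language modelling" means (ChatGPT is enormously more sophisticated, but the principle is the same: next-word statistics with no understanding anywhere in the loop):

# Toy bigram model: replies are generated purely from which-word-follows-which counts.
import random
from collections import defaultdict

corpus = "the room follows rules and the rules produce answers and the answers look fluent".split()
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(word, length=8):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(babble("the"))  # e.g. "the rules produce answers and the answers look fluent"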
Sulphur on 27/12/2023 at 02:34
Yep. If anything, TCR illustrates that we haven't quite sussed out how to test for consciousness with any sort of universally accepted theoretical model yet.
Kamlorn on 27/12/2023 at 07:19
Quote Posted by Cipheron
However, the main issue is that being able to write answers to questions has ZERO to do with whether or not a system is actually conscious. You can write a system that only uses basic statistical language modelling to write fake responses and make them realistic and adaptive. See ChatGPT.
Why do you think that ChatGPT is not actually conscious?
Pyrian on 27/12/2023 at 08:26
Quote Posted by Cipheron
Turing proved the Halting Problem is not solvable.
Sorry, Halting Problem.
Quote Posted by Cipheron
And neither of those things seems to apply to the situation you describe.
Well, I thought it would be obvious, but I'll elaborate.
Quote Posted by Cipheron
...you could always construct the rules of TCR so that it halts. The Halting Problem doesn't mean you can NEVER tell if a specific system will halt, you just can't universally decide this for all theoretical systems.
As for the question, it would not be solvable if the book doesn't have state. As soon as you allow any sort of marks, bookmarks, counters or tokens to be used by the man in TCR then it's a solvable question.
We agree that if the book doesn't have state, it can't answer even a simple question like what the last question was. If it has state, it can answer what the last question was. But I posit that it cannot be built to answer any question (in finite time, which IMO is effectively the same thing as "can't"), even with state, on the grounds that questions are essentially algorithms, TCR is literally an algorithm, and per the Halting Problem, you can't make an algorithm that can reliably decide whether there's a finite result for all arbitrary other algorithms.
You could still do it, if you put a limit on the size of the question. But if the question can be arbitrarily large, a TCR to answer it can't exist.
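A loose Python-flavoured sketch of the diagonalisation I have in mind (tcr_answer stands in for the imagined all-answering rule book; it's hypothetical by construction, so this is only the shape of the argument):

# Hypothetical: suppose the rule book could answer ANY question, however large.
def tcr_answer(question: str) -> str:
    ...  # the imagined perfect rule book; no such thing can exist, per below

def halts(program_source: str) -> bool:
    # If the room answered every question, it would answer this one too,
    # turning it into a halting-problem oracle.
    return tcr_answer("Does this program halt? " + program_source) == "yes"

# The classic spoiler: a program that does the opposite of whatever the room predicts.
TROUBLE = """
if halts(TROUBLE):
    while True:   # loop forever if the room says it halts
        pass
"""
# Whatever the room says about TROUBLE is wrong, so an unbounded-question TCR can't exist.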
Anarchic Fox on 31/12/2023 at 18:22
Ah, thank you for digging it up. Searle describes a human being executing a computer program, which is impossible for all but the simplest programs, and certainly impossible for translation. And a thought experiment which is impossible demonstrates nothing.
Even in the limited realms where thought experiments are useful, they are useful as didactic aids, like those used to demonstrate the relativity of simultaneity. Stories are not arguments.
demagogue on 31/12/2023 at 19:48
The problem with what you're saying is that Searle's argument is precisely that it's impossible for humans to have the experience of speaking or translating a language by executing a computer program. You're making the same point he's making with that thought experiment. He's saying that whatever language is, it's not the brain running a computer program, because that's, as you say, impossible; or he's inviting an explanation of why the brain's natural operation is different in kind from a person doing the same work deliberately, which could never be like natural language.
Or to be more specific, you're making an a fortiori point. He's saying even if a person could do it, it wouldn't be real language. You're saying they can't even do it, so all the more it's not real language.
That said, I think if you gave someone 60 years working 364 days a year, they would be able to work through a translation program by hand within that time frame. I don't think it'd take much more than several million operations, and a person can count to 1 million in ~11.5 days.
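(For scale: at one operation per second, counting to a million takes 1,000,000 / 86,400 ≈ 11.6 days of non-stop counting, and 60 years of 364 days is roughly 21,800 days, so several million hand-executed steps fit inside that window with plenty of room to spare.)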
But the real point is, if you imagine the person could do it, you still can't imagine the point where they'd have consciousness of the language, right up until the end. You don't have to even get to the end of the process to see that much, which is what his point is standing on.
My critique of his argument would handwave at complexity theory. I think a program could in principle "understand" language like a human does, although the computer would need to be a good number of orders of magnitude more powerful. But I don't think it's different in kind (as Searle argues: "computers operating on code can never, in principle, be conscious and understand language"). When you get to that very high order of magnitude, the difference in scale becomes like a difference in kind, the way complex systems can manifest higher-order phenomena like hydrodynamics that you can no longer explain in terms of the parts because complexity gets involved. Looking at it, that might be similar to or the same as the point you're making, with a different focus.
heywood on 2/1/2024 at 15:42
I just tried to read Searle's 1990 paper and it's terrible.
Let's start with his thought experiment. If the book of rules is sufficient to produce responses indistinguishable from those of a native Chinese speaker, then it must include all the necessary knowledge, including the meanings of symbols and the common knowledge needed to employ them in context. Thus the person learning the rule book is learning Chinese.
Axiom 1 says computer programs are formal, which is incorrect, and it misses the point. Computer programming languages are formal. But most computer programs are not. Formal methods can only be applied to certain limited classes of problems, and as any systems programmer will attest, it is impossible to define the state machine of even a moderately complex program. He also misses the point because computer programs are more than code. In most applications, the program's behavior is determined more by the data than by the code, especially in the context of natural language processing.
Axiom 2 says human minds have mental contents (semantics). So do computers. In the case of LLMs, the computer's mental contents are enormous, containing much more semantic information than any human brain can hold. I could give Searle a pass for not having the foresight to imagine a system that large, but even the ELIZA program from the 1960s had some semantic knowledge programmed into its DOCTOR script.
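For reference, a DOCTOR-style rule boils down to a keyword pattern plus a canned reassembly template; a rough Python sketch of the idea (not the actual ELIZA source, just the flavour of it):

import re

# Keyword pattern -> reassembly template. Whatever "semantic knowledge" there is
# lives entirely in these hand-written pairs.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me more about your {0}."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am worried about the exam"))  # -> "How long have you been worried about the exam?"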
Axiom 3 says syntax is neither constitutive of nor sufficient for semantics. The first part of that is wrong. We think in our native languages. The second part is obvious, but irrelevant since a computer program is more than syntax.
The rest of it is him just repeating the same incorrect and irrelevant points in different ways.