Cipheron on 20/12/2023 at 18:57
Quote Posted by Nicker
Are hive-minds sentient?
Well, you could imagine a system where every person was asked to do and share calculations, and the sum total of the calculations added up to a realistic simulation of a human brain. The fact that the calculations aren't all happening in one place shouldn't have *anything* to do with whether the end result is that "something" was sentient and aware. It would just be a completely disembodied mind.
So my view is that if one "box" can be sentient, then you could just distribute the work done by the box across a network of boxes. And when you take the broader picture, we'd realize we were just being silly and anthropomorphic by thinking the box was sentient while doubting whether the same interactions, happening outside a box, could somehow be sentient.
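A toy sketch of that idea (purely illustrative; the weights and the three-worker split are made up): the same network update computed on one "box" versus spread across several workers, each doing a small share of the arithmetic. The result is identical either way, so the locality of the computation tells you nothing about what was computed.

Code:
weights = [[0.2, -0.5, 0.1],
           [0.7, 0.0, -0.3],
           [-0.1, 0.4, 0.6]]
state = [1.0, 0.5, -1.0]

def step_one_box(w, s):
    # the whole update, done in one place
    return [sum(w[i][j] * s[j] for j in range(len(s))) for i in range(len(w))]

def step_distributed(w, s, n_workers=3):
    # same update, but each "worker" computes only its assigned rows
    # and mails the partial results back to be reassembled
    results = {}
    for worker in range(n_workers):
        for i in range(worker, len(w), n_workers):
            results[i] = sum(w[i][j] * s[j] for j in range(len(s)))
    return [results[i] for i in range(len(w))]

assert step_one_box(weights, state) == step_distributed(weights, state)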
Yeah, I have a lot of issues with the Chinese Room argument. It falls down because of a category error, basically. The human in the room is acting like the "CPU", and it's wrong to ask whether the CPU "knows" how to do anything.
Also, you can replace "knows Chinese" with "knows" anything else, or even just "can do" anything else, thus showing it didn't prove anything about awareness or sentience at all. For example, you could state that the room cannot "play chess" because the man in the room doesn't know how to play chess. I got ChatGPT to write up a counter-example:
Quote:
Imagine a room with a person inside, let's call him Alex. Alex does not know how to play chess; in fact, he has no understanding of the rules, strategies, or even the pieces involved. However, in the room, there is a vast collection of chess moves written in a rule book, and Alex has a set of instructions that tell him which moves to make in response to specific board positions.
Now, someone outside the room passes chess positions and moves through a slot in the door. Alex, following the instructions, manipulates the chess pieces on the board accordingly, producing responses that are indistinguishable from those of a skilled chess player. Observers from outside the room might be convinced that Alex is a grandmaster chess player, given the quality of his moves.
The external observers perceive that whatever is in the room is a competent chess player, because the moves are accurate and appropriate. However, Alex has no comprehension of the game, strategy, or even the meaning of the moves; he is merely following instructions.
But since Alex doesn't understand chess, we conclude that the room cannot genuinely "play chess."
Yet *something* in the room just whooped your ass at chess, it just wasn't Alex. Focusing on Alex's role was a complete red herring.
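Here's a toy sketch of the Alex setup (the moves and the table are invented for illustration): the "rule book" is just a lookup table from positions to moves, and Alex is the loop that matches shapes against it.

Code:
# the rule book: canned replies keyed by the move history so far
rule_book = {
    "":                     "e2e4",
    "e2e4 e7e5":            "g1f3",
    "e2e4 e7e5 g1f3 b8c6":  "f1b5",
}

def alex(position):
    # Alex can't play chess; he only matches the incoming squiggle
    # against the book and hands back whatever squiggle it pairs with
    return rule_book.get(position, "resign")

print(alex(""))  # "e2e4" -- the chess knowledge lives in the book, not in Alex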
But if they say "yes but you haven't proved the room is conscious!" ... that's the point. The Chinese Room argument cannot prove OR disprove whether the room is conscious. Putting a conscious human into the role of the CPU is complete misdirection. Imagine a "neuron room" where a human carries out the operations of each neuron. We could argue that since the human is not aware of what the neurons in aggregate are doing, the collection of neurons cannot be conscious.
Anarchic Fox on 20/12/2023 at 20:47
Quote Posted by Cipheron
But if they say "yes but you haven't proved the room is conscious!" ... that's the point. The Chinese Room argument cannot prove OR disprove whether the room is conscious. Putting a conscious human into the role of the CPU is complete misdirection. Imagine a "neuron room" where a human carries out the operations of each neuron. We could argue that since the human is not aware of what the neurons in aggregate are doing, the collection of neurons cannot be conscious.
The Chinese Room argument is also facile for another reason. Stick a human in a room to translate, and let their instruction set be... an English/Mandarin dictionary and grammar. They'll translate slowly, _and_ they'll understand what they translate.
Sulphur on 21/12/2023 at 06:08
Quote Posted by Cipheron
Yeah, I have a lot of issues with the Chinese Room argument. It falls down because of a category error, basically. The human in the room is acting like the "CPU", and it's wrong to ask whether the CPU "knows" how to do anything.
I don't particularly agree or disagree with TCR, because I don't think it answers everything myself. But there's still something to the question it poses. So if it's not the thing performing the operations that is conscious, then what are you proposing it is, when it comes down to it, and is that machine-replicable?
Quote:
Also, you can replace "knows Chinese" with "knows" anything else, or even just "can do" anything else, thus showing it didn't prove anything about awareness or sentience at all. For example, you could state that the room cannot "play chess" because the man in the room doesn't know how to play chess.
...
Yet *something* in the room just whooped your ass at chess, it just wasn't Alex. Focusing on Alex's role was a complete red herring.
Well, let's go deeper into that. If you're talking about chess, and it's a set of instructions to be followed that have been written down, are we saying that the instructions themselves are a sign of conscious intelligence? That the actual intelligence was offloaded into the pre-prepared moves in the manual, and whatever did that was the actual conscious intelligence? Because that makes sense and doesn't change anything about the experiment's conclusion.
Quote:
But if they say "yes but you haven't proved the room is conscious!" ... that's the point. The Chinese Room argument cannot prove OR disprove whether the room is conscious. Putting a conscious human into the role of the CPU is complete misdirection. Imagine a "neuron room" where a human carries out the operations of each neuron. We could argue that since the human is not aware of what the neurons in aggregate are doing, the collection of neurons cannot be conscious.
Let me preface by saying I'm probably not getting this completely, and I haven't formally studied philosophy or philosophy of mind in the past, so that's my fault, and feel free to clarify if I am missing something.
I think that the last sentence you said is, in fact, the point. Neurons by themselves aren't conscious, because an additional something needs to develop - that is to say, intentionality, or affect per Dema's note of HOT. For this we'll have to chew through some theories of consciousness and agree on what consciousness is and which theory we believe works best, I think, before we can pin down whether TCR's conclusion makes sense or doesn't. Or maybe there's purely logical reasoning that can circumvent all of that which you're getting at, and I haven't quite grokked it yet.
Cipheron on 21/12/2023 at 06:14
Quote Posted by Sulphur
I don't particularly agree or disagree with TCR, because I don't think it answers everything myself. But there's still something to the question it poses. So if it's not the thing performing the operations that is conscious, then what are you proposing it is, when it comes down to it, and is that machine-replicable?
My point is that to even ask if the CPU is "conscious" is a meaningless question.
Can the "CPU" play chess? no it cannot. I can't do ANYTHING, because all the connections to do those things are held in data, not in the CPU. A CPU is a simple adding machine that does low-level computations. So pointing at the CPU and saying "is that conscious? I DON'T THINK SO" is dumb, it's like pointing at neurons and asking which one is the conscious neuron.
It's an inherently idiotic question to even ask. It's like asking which atoms in your brain are conscious. None of them are, because that's not what atoms do. We just have this lame idea that you point at a piece of inert matter and ask whether consciousness resides in that piece of matter. But that's not how consciousness works.
Consciousness is not a property of a specific lump of atoms, it's an emergent property of a process of interactions, in the same way that "playing chess" isn't a property of the CPU, it's an emergent property of running a program from data storage through the CPU.
So you can point to a CPU and ask if it's "being conscious" right now, assuming you had enough memory and time and ran a brain simulation through it. No, it's not "being conscious", it's just adding up numbers any time you look at it. But by the same exact logic, you can point to the CPU while it's running a chess program and ask if it's "playing chess" right now, and you get the same answer. No, it's not "playing chess", it's just adding up numbers.
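To make that concrete, here's a minimal sketch (the instruction set and the game are invented for illustration): a toy "CPU" that only knows a handful of primitive operations, while the game it appears to play exists entirely in the program data.

Code:
def run(program, inputs):
    # the "CPU": it only sets, reads, subtracts, compares and emits;
    # it has no concept of whatever game the program encodes
    regs, out, pc = {}, [], 0
    inputs = iter(inputs)
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":                      # regs[a] = constant
            regs[args[0]] = args[1]
        elif op == "read":                   # regs[a] = next input
            regs[args[0]] = next(inputs)
        elif op == "sub":                    # regs[a] = regs[b] - regs[c]
            regs[args[0]] = regs[args[1]] - regs[args[2]]
        elif op == "jz":                     # jump if regs[a] == 0
            if regs[args[0]] == 0:
                pc = args[1]
                continue
        elif op == "jmp":                    # unconditional jump
            pc = args[0]
            continue
        elif op == "emit":                   # append a constant to output
            out.append(args[0])
        pc += 1
    return out

# the "game" -- a number-guessing game -- lives entirely in this data
game = [
    ("set", "secret", 7),                # 0
    ("read", "guess"),                   # 1
    ("sub", "diff", "guess", "secret"),  # 2
    ("jz", "diff", 6),                   # 3: correct guess -> jump to 6
    ("emit", "wrong"),                   # 4
    ("jmp", 7),                          # 5: halt
    ("emit", "you got it"),              # 6
]

print(run(game, [7]))   # ['you got it']
print(run(game, [3]))   # ['wrong']

Point at run() mid-execution and all you'll ever catch it doing is subtracting registers or comparing to zero; "playing the guessing game" only exists at the level of the program flowing through it.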
Anarchic Fox on 21/12/2023 at 16:18
Please address my point too, Cipheron.
Kamlorn on 21/12/2023 at 20:53
Quote Posted by mxleader
AI can already beat chess masters so it has already won.
White? Black? I am the guy with the plug.
Nicker on 22/12/2023 at 05:47
Quote Posted by Qooper
What do you mean "allowed"? What type of "allowed" are you referring to here?
I think I was musing that, as non-hive minds, we may not be qualified to decide that on behalf of another potential mind. We simply have no frame of reference. But we do know that we are quick to dismiss the humanity of other species and even of other humans. And once we have decided something, we are difficult to move. We cannot conceive that a nest of scurrying ants has a meta-awareness, but that nest of ants might equally scoff at the notion that a single, independent, giant, bipedal organism could possibly be uniquely self aware. It's preposterous!
Which raises the question: what is it about our beings that convinces us we are individuals? What objective evidence do we have, and how legitimate is our conclusion? If we were ourselves nodes in a hive mind, would we even know it?
Quote:
... I don't believe just any complex system becomes "sentient" just because it's complex.
I agree, but it does seem that self awareness is correlated with complexity, perhaps an emergent property of it. If so, when does an increasingly complex system make that transition? Is it some sort of higher-octave complexity, producing novel cognitive overtones? Is there a highest octave, or are we like a polygon adding segments, approaching, imitating, but never actually becoming a real circle? Are we just simulating self awareness (sentience)?
Or is sentience just a word we created to describe our particular perceptual configuration, and then elevated to appease our egos?
Another option is theistic: that our humanity is from an external source, a character skin applied by a designer. But this is unsatisfactory, as it just defers the questions: what really is sentience, and how does it arise?
Quote:
Yet *something* in the room just whooped your ass at chess, it just wasn't Alex. Focusing on Alex's role was a complete red herring.
Just a better algorithm for chess. So what is the difference between being the best at winning chess and "knowing" how to play? I know how to play chess but I am crap at it. And when I say know, I mean I am aware of a game called chess and I can cite the rules of play but I really don't get it. There are billions of people who know chess better than me but do I know chess better than a supercomputer, even one who can beat every human player?
Ah, words defining other words. It's so incestuous.
Cipheron on 23/12/2023 at 03:25
Quote Posted by Anarchic Fox
Please address my point too, Cipheron.
The one about the English/Mandarin dictionary? Not sure how I'm supposed to answer that. That changes Searle's argument too much, because the whole point he was making is that the symbols being manipulated were not ones the person understood.
Putting the internal states into English changes the argument, but doesn't really say anything about whether the algorithm the ROOM is carrying out could be conscious. Like I said before, it's a complete red herring to ask about what the HUMAN knows.
The human is taking the role of the CPU. Nobody is claiming a CPU can be conscious. That's not what the strong-AI argument is claiming. Consciousness is an emergent property of patterns of information, it's not a property of the lump of rock that is the CPU.
Quote Posted by Nicker
Just a better algorithm for chess. So what is the difference between being the best at winning chess and "knowing" how to play?
The difference isn't the point. My point was that you can use Searle's exact argument to prove that the "room" can't "X the Y" for any verb X and any noun Y.
So we can follow his exact logic to "prove" the room cannot "play" "chess". But, we can see in that case that *something* played chess.
So Searle hasn't actually demonstrated how the concept of "knowing" is any different from that. His argument is therefore reliant on his own conclusions about what "knowing" is, and that no system can do it other than the human brain. Circular logic, basically.
Adding a conscious human into the room and asking about what the human knows just confuses the issue. As I said before, the human is just working as the CPU, and a CPU only knows the exact math operations it's doing at that precise moment. It doesn't get the big picture of what the program or computer is actually doing, whether that's calculating tax returns, playing chess, or literally anything else.
Anarchic Fox on 23/12/2023 at 14:34
Quote Posted by Cipheron
The one about the English/Mandarin dictionary? Not sure how I'm supposed to answer that. That changes Searle's argument too much, because the whole point he was making is that the symbols being manipulated were not ones the person understood.
Putting the internal states into English changes the argument, but doesn't really say anything about whether the algorithm the ROOM is carrying out could be conscious. Like I said before, it's a complete red herring to ask about what the HUMAN knows.
The human is taking the role of the CPU. Nobody is claiming a CPU can be conscious. That's not what the strong-AI argument is claiming. Consciousness is an emergent property of patterns of information, it's not a property of the lump of rock that is the CPU.
For context, two decades ago I took a philosophy of mind class which included reading the original Searle paper. Said paper was annoyingly vague about what went on in the Room. I'm having a hard time finding the original right now, and I'll hunt it down if you like, but for now I'll go from memory. As I recall it describes the person in the room executing "formal rules" in order to translate. The argument falls apart on three points: (1) If the rules are a grammar and a dictionary, the human being translates and understands. That class's professor said these aren't "formal" rules, without ever defining the word "formal." (2) If the rules are computer code, then the human being never finishes translating, never even completes a thousandth of the steps needed to translate; so any entity that sits in the room and produces a translation, is not a human being. (3) A story is not an argument.
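Point (2) is easy to put rough numbers on (all figures below are assumptions for illustration, not from Searle's paper): hand-executing the rule set of even a modest translation program is hopeless on a human timescale.

Code:
ops_per_reply = 1e12     # assumed arithmetic steps to produce one translated reply
human_rate = 0.1         # assumed: one pencil-and-paper step per 10 seconds
seconds = ops_per_reply / human_rate
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:,.0f} years per reply")   # roughly 317,098 -- hundreds of millennia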
Sulphur on 23/12/2023 at 17:12
Quote Posted by Anarchic Fox
For context, two decades ago I took a philosophy of mind class which included reading the original Searle paper. Said paper was annoyingly vague about what went on in the Room.
Here are the words:
Quote Posted by "Searle"
Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles.
Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that 'formal' means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch "a script," they call the second batch "a story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call "the program."
Not the most articulately worded, but that was his 1980 version. Not grammar, not a dictionary. Just coded responses to any possible question with a specific, coded answer. Not physically feasible, of course, but it's a thought experiment. Here's the paper if you want it: https://web-archive.southampton.ac.uk/cogprints.org/7150/1/10.1.1.83.5248.pdf. The one from 1990 is a bit clearer: https://www.cs.princeton.edu/courses/archive/spr06/cos116/Is_The_Brains_Mind_A_Computer_Program.pdf
Quote Posted by Cipheron
My point is that to even ask if the CPU is "conscious" is a meaningless question.
Can the "CPU" play chess? No, it cannot. It can't do ANYTHING, because all the connections to do those things are held in data, not in the CPU. A CPU is a simple adding machine that does low-level computations. So pointing at the CPU and saying "is that conscious? I DON'T THINK SO" is dumb, it's like pointing at neurons and asking which one is the conscious neuron.
It's an inherently idiotic question to even ask. It's like asking which atoms in your brain are conscious. None of them are, because that's not what atoms do. We just have this lame idea that you point at a piece of inert matter and ask whether consciousness resides in that piece of matter. But that's not how consciousness works.
Right, so in my admittedly superficial search of the responses to TCR, yours is what's called the systems reply. So this ascribes the feature of consciousness to the system as a whole, where the system includes everything in the room - and that's definitely one perspective to take. But there's a problem, because even if you discount the man and just replace him with an actual CPU performing lookup operations, there's a philosophical quandary here: for theory of mind to apply, there has to be something intelligent performing dynamic, on-the-fly evaluations of context with semantic understanding, but the way TCR is constructed means it should have a pre-constructed realistic response for every possible question ever posed. Fundamentally, it boils down to a variant of the p-zombie issue from this perspective.
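A quick back-of-envelope on why that pre-constructed table can't physically exist (the alphabet size and lengths are illustrative assumptions): the number of possible questions explodes combinatorially with question length.

Code:
alphabet = 27            # assumed: letters plus space, ignoring punctuation
for length in (10, 20, 40):
    print(length, alphabet ** length)   # distinct strings of that length

# at length 40 that's already about 1.8e57 entries -- far more than the
# number of atoms in the Earth (~1e50) you could store them in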
Quote:
Consciousness is not a property of a specific lump of atoms, it's an emergent property of a process of interactions, in the same way that "playing chess" isn't a property of the CPU, it's an emergent property of running a program from data storage through the CPU.
Somewhat tangential: while that makes sense intuitively, I don't think we've been able to make a definitive claim that that is, in fact, how consciousness as we understand it arises. It's still a theory - the emergent one - and the thing is that proving it is a hell of a lot more difficult than posing it; at least, there's a lack of strong evidence for it (or against it, for that matter), as far as I can tell. If you've got sources that say otherwise on this, do share.