Anarchic Fox on 3/1/2024 at 23:08
Quote Posted by demagogue
What's interesting about it is that, while many philosophers agreed that Searle missed the mark in his criticism or with this thought experiment, there was still a lot of disagreement at the time about why his argument collapsed.
There are also various attempts to amend or update the argument, which is why I try not to phrase my objections too strongly. For all I know, some of the variations resolve my criticisms.
Also, I object to your characterization of Frege. What I gathered of him (different class this time, one on analytic philosophy) was that he was far from "rabid." When Russell wrote Frege a letter pointing out the paradox that caused his entire endeavor to collapse, Frege thanked Russell for his discovery. Also, Frege would have had zero influence on analytic philosophy had Russell not continued his work, so it's wrong to attribute to him the influence that you did. Carnap was the really influential one there, with his Vienna Circle including Wittgenstein and Gödel.
Quote:
Anyway, I wrote that out because I think understanding the context in which he wrote his paper can help in understanding his motivation and the point he's trying to make. I mean the fact that it was written at a time when formal semantics was much more of a dogma than it is now--give me enough logic gates and I can do anything a human can--and there was more resistance to criticism of it, and there hadn't been that many holes poked into it yet.
For whatever it's worth, Hilary Putnam works in areas similar to Searle's, but is a far better writer and philosopher, and an accomplished logician to boot. Anyone interested in critiques of the computational model of consciousness should read Putnam, not Searle.
heywood on 4/1/2024 at 00:26
Quote Posted by demagogue
For the record, Searle's go-to situation for his Chinese Room argument is looking up at a menu, walking up to the clerk behind the counter of a fast food joint, and properly ordering a hamburger.
He's saying computers may be able to be programmed to say the right thing, but they don't understand what a hamburger is, or that what they're saying is going to motivate a hamburger being delivered to them, or what the point of having a hamburger delivered to them is (because they're hungry and the whole reason for coming into this situation was to get one, take it back to a table, and eat it), and then a whole world of social norms surrounding the situation, like that you have to stand in a line, there's a certain way you have to phrase your order so it'll be understood, etc. His starting point I think lines up pretty exactly with your last sentence. When we walk up to that situation, there's a whole host of conscious experiences at play that get us to say the right thing and we're satisfied.
When the Chinese Room walks up to that situation, the person is inside the box and doesn't see any of this. All they know is squiggle squiggle comes in and then squiggle squiggle goes out. It doesn't even really matter if it took 4000 years and has a ton of mistakes, because the person in the box doesn't even know that there's another person on the other side, what kind of person, nothing about a restaurant, much less what kind of food is involved, much less what's being said about that food and why.
I understand Searle's assertions. But they are contradictory, which breaks the experiment. Responding convincingly is necessary to pass Turing's test, which is what he's trying to refute, so he stated that the book is written such that the person in the room can respond convincingly and pass the test. He also repeatedly pointed out that using natural language involves semantics, and that requires background knowledge. So to be self-consistent, his hypothetical book and database have to contain that knowledge for the person in the room to draw from. But he also denies that the person can acquire any of that knowledge. You simply can't have it both ways. Either the information content in the room is sufficient to allow an untrained person to communicate convincingly in Chinese, in which case that person is effectively learning Chinese, or you deny the person the necessary information and they can't pass the Turing test.
Anyway, a computer isn't going to have biochemical motivations that make it want to eat a hamburger, but it can certainly know what a hamburger is and order one to your liking. And keep track of what you eat when, keep you stocked, track your vitals, assess the healthiness of your diet, make suggestions, develop meal plans, and order for you. And with expanding automated food prep and automated delivery, in the plausibly near future a person who isn't particularly interested in quality food could have a computer agent take care of nearly all but the eating part.
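Just to make concrete how unmysterious that part is, here's a toy Python sketch of the bookkeeping-plus-rules side of such an agent. Everything in it is made up for illustration: the foods, the calorie numbers, the threshold, and the order_delivery stand-in.

[code]
# Toy sketch of the non-eating part: log meals, judge the day, restock staples.
from datetime import date

food_db = {                       # what the agent "knows" about each food
    "hamburger": {"calories": 550, "protein_g": 25},
    "salad":     {"calories": 150, "protein_g": 4},
}
meal_log = []                     # (date, food) entries
pantry = {"hamburger": 0, "salad": 2}

def log_meal(food):
    meal_log.append((date.today(), food))
    pantry[food] = max(0, pantry.get(food, 0) - 1)

def assess_today():
    calories = sum(food_db[f]["calories"] for d, f in meal_log if d == date.today())
    return "ok" if calories <= 2200 else "over target"

def restock(order_delivery):
    for food, count in pantry.items():
        if count == 0:
            order_delivery(food)  # stand-in for a real delivery API call

log_meal("hamburger")
print(assess_today())
restock(lambda food: print("ordering more " + food))
[/code]

None of that requires the agent to want a hamburger; it's bookkeeping and thresholds all the way down.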
A more interesting form of Turing test (to me) is whether a computer can be trained to do social engineering. Because if it succeeds even a small amount of the time, it could be a very powerful tool or weapon for all kinds of nefarious purposes.
Vae on 8/1/2024 at 07:10
Care for some coffee?
[video=youtube;Q5MKo7Idsok]https://www.youtube.com/watch?v=Q5MKo7Idsok[/video]
Qooper on 8/1/2024 at 10:26
Quote Posted by Nicker
We are just processing information and rendering outputs. Semantic processors. Emotional processors. Biochemical processors.
But "just processing information" isn't the important part I think. Computers can do simple if-statements on boolean values, and they can also run neural networks, in both cases "just processing information". But they're not the same kind of processing. Reducing to basic building blocks is a useful tool for certain things, but I'm not sure where that gets us here.
Also, emotions need to be looked at in more detail. What's the difference between a person experiencing pain and a concentration of biology that looks human and is physically exactly like us, but where the pain is simply an input that gets processed and causes an output, without anybody inside experiencing anything? This philosophical zombie says it experiences pain and lives inside a body, because that's the output it generates.
Imagine you're playing Quake. You connect with your client to a Quake server and occupy a player character. Some of the players there on the server are bots, some are real players. Let's say the bots are so well written that they replicate a lot of the nuances of real players, and their behaviour looks human. Quake-wise it makes no difference whether they're bots or human. But Quake isn't the whole world. You are you, and in addition to producing a set of movement commands for your Quake character, you also experience playing Quake. You, playing Quake, are not merely an IO system. The input from the screen and speakers goes to YOU, and you experience it. Then you make choices, and from you the output goes to your keyboard and mouse. With the bots the diagram is identical from the perspective of Quake, but from the perspective of the real world it goes to and from a CPU instead of someone experiencing it.
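As a toy illustration of the "identical from the perspective of Quake" part, here's a little Python sketch of the server-side view. It has nothing to do with real Quake netcode; all the class and method names are made up for the example.

[code]
# The server only ever sees "game state in, movement commands out".
# Whether a rule or a person's experience sits behind produce_commands()
# is invisible at this level.

class BotPlayer:
    def produce_commands(self, game_state):
        # an explicit rule: chase the nearest enemy
        return {"move": "forward", "aim_at": game_state["nearest_enemy"]}

class HumanPlayer:
    def __init__(self, read_input):
        # read_input stands in for the real path:
        # screen and speakers -> the person's experience -> keyboard and mouse
        self.read_input = read_input

    def produce_commands(self, game_state):
        return self.read_input(game_state)

def server_tick(players, game_state):
    # from here, bots and humans are the same kind of thing
    return [p.produce_commands(game_state) for p in players]

players = [
    BotPlayer(),
    HumanPlayer(lambda state: {"move": "strafe_left", "aim_at": state["nearest_enemy"]}),
]
print(server_tick(players, {"nearest_enemy": (12, 40, 3)}))
[/code]

The point is only that the interface can't tell them apart; the difference, if there is one, lives outside the diagram.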
Quote:
SO... complexity. If consciousness is not just an emergent property of complexity, then where does it come from (assuming you can define what it is)?
I've given my thoughts on what I think consciousness is. What about you, what do you mean by the word 'consciousness'?
Nicker on 9/1/2024 at 04:42
Thank you for reviving this aspect of the discussion. The discussion of thought experiments and such is fascinating, but it's a bit like playing horseshoes in a ten-hectare field of chest-high wheat. The rules we invented tell us there is a target out there somewhere, but where it is, what it is, and whether it actually exists are all unknown, IMO.
Quote Posted by Qooper
But "just processing information" isn't the important part I think.
Agreed. The ingredients are not the cake but without the ingredients you can't have cake, unless you assert the disembodied "soul", an external existence, the real you, which temporarily inhabits the baked ingredients, bestowing cakiness upon them until it leaves to inhabit a fresh cake or goes to the Great Bakery In The Sky...
That doesn't work for me. At best, it only defers the matter.
I believe that the qualities which make us identifiable individuals emerge from the integration of semantic, emotional, biochemical (etc.) ingredients, and that those ingredients are undeniably consolidated in the configuration of organic chemicals called our bodies. Materialism. And if there is anything which appears immaterial about us, it is either due to ignorance or fancy.
We know that if we disrupt that organic configuration, especially the brain, we can significantly alter, inhibit or even terminate the individual associated with it. But when do the organic ingredients become an individual? And if we replicate those ingredients, using inorganic materials, why can't an individual emerge from that configuration?
Thought experiments can tell us a lot about where consciousness is not found, but nothing about where it is or what it is. It's like art or pornography; you can't define it, but you know it when you see it. The importance of the Chinese Room or the Turing Test is not in determining when we have created an artificial being, only in describing the limitations of our anthropomorphism.
Quote Posted by Qooper
Also emotions need to be looked at in more detail. What's the difference between a person experiencing pain and a concentration of biology that looks human and physically is exactly like us but where the pain is simply an input that gets processed and that causes an output without anybody inside experiencing anything?
You are kind of begging the question, asserting that there is an individual inside the vessel experiencing the pain, when the existence and nature of that individual is what we are trying to define. We know that there is a person in there only because the person in there tells us they are there.
It certainly feels like I am a person, not a meat-puppet, but I can't justify that feeling objectively. I can only assert it by agreeing that we both share that quality.
But we still can't isolate or define that quality. It's just an agreement we have. We only recently imagined that other animals might share that sense of being. Many hominids still refuse to recognize humanity in other hominids with different skin colour.
Quote Posted by Qooper
What about you, what do you mean by the word 'consciousness'?
I thought I knew but I am more and more certain that I have no clue. Other than my feelings.
Our language is inadequate to break through the armour of our arrogance. We assume that a true AI is the same as an artificial person, mostly because intelligence is the one quality we possess in obvious abundance compared to other creatures. And yet there is far more to being than just processing information.
heywood on 9/1/2024 at 15:44
I don't see the reason why we have to agree there's a target out there. AI is a class of tools. Let it be judged by what it can accomplish for us, not whether it can produce an unobservable quality that we can't define.
Vae on 9/1/2024 at 19:31
NVIDIA ACE Brings Digital Characters to Life with Generative AI...
[video=youtube;psrXGPh80UM]https://www.youtube.com/watch?v=psrXGPh80UM[/video]
Nicker on 9/1/2024 at 19:54
Quote Posted by heywood
I don't see the reason why we have to agree there's a target out there. AI is a class of tools. Let it be judged by what it can accomplish for us, not whether it can produce an unobservable quality that we can't define.
The poll leading this thread proposed that an autonomous AI (i.e. capable of forming the intent to help, harm or ignore humans) was possible. I am responding in the spirit of that proposition.
Vae on 9/1/2024 at 20:58
Pattern recognition is observable and measurable in relation to the spectrum of human intelligence. Alignment and intention are consequential, whether it be a programmed response or a conscious, self-aware action.
heywood on 9/1/2024 at 21:27
The answers to the poll question, and the positive and negative effects of AI technology on humanity, don't depend on whether an AI system can have human consciousness.
I was motivated to respond primarily by this statement:
Quote Posted by Nicker
We assume that a true AI is the same as an artificial person
I don't assume that. I think artificial persons are an inconsequential topic compared to other uses of AI. We don't really need or want machines to simulate all the behaviors of humans; we want them to do useful things for us, and they only need to emulate humans to the extent necessary to accomplish something, usually communication.