Pyrian on 10/1/2024 at 01:23
Worth noting specifically that a lot of the difficulties AIs encounter in accomplishing those tasks stem from a lack of understanding of context, precisely because they aren't humans and don't have our shared experiences.
Nicker on 10/1/2024 at 05:53
Quote Posted by heywood
The answers to the poll question, and the positive and negative effects of AI technology on humanity, don't depend on whether an AI system can have human consciousness.
So are you saying that these two propositions: “AI will seek to destroy humanity.” and “AI will carry out goals without regard to human consequence.” - do not in any way suggest that the hypothetical AI in the poll might possess a human-like autonomy, in the form of desires, goals and intent?
Rightly or wrongly, the conflation of AI with artificial people is an inescapable feature of any casual discussion of the topic. To that end I have attempted to untangle some of the semantic knots that confound the issue. If you have a complaint with the construction of the poll, you should take it up with the creator.
Quote Posted by heywood
I was motivated to respond primarily by this statement: “We assume that a true AI is the same as an artificial person.”
That was a bit lazy of me. How about...
Many people conflate AI with artificial beings, simulacra, cyborgs and self-aware machines.
Quote Posted by heywood
I think artificial persons are an inconsequential topic compared to other uses of AI.
I heartily disagree.
How organic matter achieves self-awareness, and whether inorganic matter could eventually do the same, seems infinitely more important and fascinating than evolutions in the utility of next-level information processing. It appears I am not alone in that fascination. I see no reason why the two perspectives cannot be respectfully explored in parallel.
Sulphur on 10/1/2024 at 07:42
Autonomy and goals are not implicit indicators of consciousness, depending on how you define consciousness. A Roomba isn't conscious as far as most people would care to categorise it, but if you attach a chainsaw to it and remove its safeguards, it's going to shred your feet and your pets because it's following its designers' mandated goals, not because of malevolence or some internalised personality-based worldview.
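To put that in the most mechanical terms, here's a toy sketch in Python (purely illustrative, not any real robot's firmware) of an agent "pursuing a goal" as nothing more than minimizing a number; every name in it is made up for the example:

```python
# Purely illustrative: a 'goal-seeking' agent that is nothing but a
# number-minimizing loop. There is no experience, malevolence, or
# worldview anywhere in here, only state and arithmetic.

def step_toward(position: float, target: float, speed: float = 1.0) -> float:
    """Move one step toward the target; the 'goal' is just a number."""
    if abs(target - position) <= speed:
        return target
    return position + speed if target > position else position - speed

position, target = 0.0, 5.0
while position != target:
    position = step_toward(position, target)
    print(f"at {position}")  # behaves 'purposefully' with no one home
```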
Artificial people are... what? They don't exist, as far as I know, except in science fiction. Not sure what that even means here - if you're going to make a robot to mimic a human being, at least in the immediate future its brain will be constructed with several manners of ANNs chained into several subsystems, which in aggregate will be an AI. Is the physical shell the same as its neural components? No, of course not. But are both required to make a hypothetical 'android'? Well, yes? I'm not sure why anyone would commit a conflation error there.
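And for what "several ANNs chained into subsystems" could look like at its absolute simplest, a hypothetical sketch; the subsystem names (perception, planner, motor) are placeholders of mine, and each "network" is one random layer standing in for a separately trained model:

```python
# Minimal sketch of chained subsystems, each standing in for a
# separately trained network. Names are illustrative only, not a
# reference to any actual android architecture.
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in: int, n_out: int):
    W = rng.normal(size=(n_out, n_in))
    return lambda x: np.tanh(W @ x)  # one toy 'network' per subsystem

perception = make_layer(16, 8)   # raw senses -> features
planner    = make_layer(8, 4)    # features   -> intentions
motor      = make_layer(4, 2)    # intentions -> actuator commands

senses = rng.normal(size=16)
commands = motor(planner(perception(senses)))  # the aggregate 'AI'
print(commands)
```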
heywood on 11/1/2024 at 19:44
Quote Posted by Nicker
So are you saying that these two propositions: “AI will seek to destroy humanity.” and “AI will carry out goals without regard to human consequence.” - do not in any way suggest that the hypothetical AI in the poll might possess a human-like autonomy, in the form of desires, goals and intent?
Those propositions don't require AI to possess any human-like qualities. Autonomy, goals, and intent are slam dunks. Most of our computing systems have all three. Desire is unnecessary. Also, AI is not a singular system. We're talking about a class of interconnected technologies and systems collectively called AI, not a movie villain.
Quote:
Rightly or wrongly, the conflation of AI with artificial people is an inescapable feature of any casual discussion of the topic. To that end I have attempted to untangle some of the semantic knots that confound the issue. If you have a complaint with the construction of the poll, you should take it up with the creator.
It is only inescapable for those who can't let go of sci-fi tropes.
Quote:
How organic matter achieves self-awareness, and whether inorganic matter could eventually do the same, seems infinitely more important and fascinating than evolutions in the utility of next-level information processing. It appears I am not alone in that fascination. I see no reason why the two perspectives cannot be respectfully explored in parallel.
I'd rather concern myself with what's happening in the world than with religious and philosophical arguments about what makes humans special. I agree there's room for both. My whole point to you was don't assume that copying humans is the goal or the pinnacle of AI. It's not.
Qooper on 14/1/2024 at 01:02
I find the topic very interesting!
A quick preface
To make it a bit clearer what I refer to when I say 'consciousness', I mean the am-ness of a human. The experiences are routed to the am-ness. To use crude and clumsy language, sensory input goes to the am-ness and the am-ness gives output through actuators. This is blocky, but I'll try to make more sense in context.
Quote Posted by Nicker
I believe that the qualities which make us identifiable individuals, emerge from the integration of semantic, emotional, biochemical (etc.) ingredients, and that those ingredients are undeniably consolidated in the configuration of organic chemicals called our bodies. Materialism. And if there is anything which appears immaterial about us it is either due to ignorance or fancy.
I respect your belief, and the first point I'd like to make is that in order to approach this topic, one needs to have a presupposition about the nature of reality. They're very closely tied together.
By this I mean that if it were possible to measure an am-ness by objective instruments, it would place the am-ness in the physical realm at least partly, and it wouldn't require a belief in the same sense. But the only way to measure an am-ness is to actually be an am-ness and note that you have a body you can command and can get sensory input from. In other words, an am-ness can only be directly observed by itself.
Quote:
We know that if we disrupt that organic configuration, especially the brain, we can significantly alter, inhibit or even terminate the individual associated with it. But when do the organic ingredients become an individual? And if we replicate those ingredients, using inorganic materials, why can't an individual emerge from that configuration?
My second point is this: what if reality is such that even if an am-ness completely destroys its own brain, it still finds itself existing? Sorry about the macabre example. We are all born into this world, and we all die. But once again, the only way to measure what happens to an am-ness once its body dies is for the am-ness' body to die. So same as with point number one, the only one that can observe what happens to an am-ness after its death is the am-ness itself.
Quote:
You are kind of begging the question, asserting that there is an individual inside the vessel experiencing the pain, when the existence and nature of that individual is what we are trying to define. We know that there is a person in there because the person in there tells us they are there.
Consciousness certainly doesn't automatically follow from something saying it is conscious. We don't know there is a person, but we might become convinced. It is a matter of belief and trust.
But to reiterate, consciousness can only be observed by the particular consciousness itself, and all consciousnesses will measure death. The question is not scientific, because there is no way to measure if something is conscious or not. So if we create a system of AIs that can behave similarly to us humans, we still have no way to measure if an am-ness has emerged to experience anything.
Note: an am-ness is not necessary for anything that we humans do. It is not necessary for there to be an am-ness experiencing life. Technically the human body could just as well be an automaton and behave exactly like we do and build exactly the same society and all the tech that we have. But the reality is that I am here, I read your posts and think about them, and I ponder and drink coffee and then I sometimes type. I also make games.
Quote:
It certainly feels like I am a person, not a meat-puppet, but I can't justify that feeling objectively. I can only assert it by agreeing that we both share that quality.
Exactly. You know you are there in the pilot seat. Now imagine not being there yet still your body doing all the stuff that you do and having the conversation you have. That's what I mean when I say that technically and physically there isn't a need for an am-ness, but the reality is that there simply is an am-ness inside every human body.
Quote:
bestowing cakiness upon them until it leaves to inhabit a fresh cake or goes to the Great Bakery In The Sky...
This made me chuckle!!! :D I'm not sure why but it fits my sense of humor. Kinda reminds me of a Dexter's Lab episode, but can't say which one.
Nicker on 14/1/2024 at 06:21
Quote:
Those propositions don't require AI to possess any human like qualities.
That kind of depends on what one means by "seeking" and "goals". You could constrain those words to functional objectives, but it seems clear to me, and to others here, that some sort of self-generated intent is implied in the wording of the poll. Just because I read that into the question doesn't mean I hold "sci-fi tropes" as gospel or that I wholesale subscribe to the many straw men you have offered in place of my actual words. Don't taze me, Bro.
I haven't been advocating any sort of woo-woo or displaying a worshipful attitude to any sci-fi tropes. Human consciousness arises from the mechanisms of our bodies. Whether a similar, human-like self-awareness might eventually emerge from machines is not a sci-fi fantasy; it's a serious question. Not least, it has serious implications for what it really means to be human.
Quote:
My whole point to you was don't assume that copying humans is the goal or the pinnacle of AI. It's not.
Pretty sure I haven't done that. One of my first acts in this thread was trying to differentiate between two commonly conflated uses of the term AI. That said, you might agree that engineering an artificial person would be a defining achievement of AI, not simply a marginal improvement?
Have I offended you in some way? Is this because I said, "We assume that a true AI is the same as an artificial person."? You seemed to take offense at that statement ("I don't assume that."), and I hastily apologised for my lazy figure of speech. But you appear to know it's just decorative, since you used the very same construction later in your post - "We don't really need or want machines to simulate all the behaviors of humans..."
Should I be miffed about you including me in your "we"? Colour me confused but I stand ready to apologise again, if necessary.
Nicker on 14/1/2024 at 07:25
Quote Posted by Qooper
To make it a bit clearer what I refer to when I say 'consciousness', I mean the am-ness of a human.
Am-ness. I like it. When I was crafting my reply to heywood, I originally used the term be-ing to label that ineffable sense of individuality. It sounds a bit new-agey and I figured that might drive us further apart, so I ditched it. I think the closest formal term is "theory of mind" but I am not sure if that entirely fits the bill. The problems with language are immense. I agree with Wittgenstein that "Philosophy is just the by-product of misunderstanding language." If we had precise terms for am-ness we wouldn't have to keep inventing new ones.
Quote:
By this I mean that if it were possible to measure an am-ness by objective instruments, it would place the am-ness in the physical realm at least partly, and it wouldn't require a belief in the same sense. But the only way to measure an am-ness is to actually be an am-ness and note that you have a body you can command and can get sensory input from. In other words, an am-ness can only be directly observed by itself.
Reality. Add that to love, art and rock'n'roll as things we are convinced we know but cannot define. I largely agree with you on the above, but with the added caution that we cannot know if our perception of am-ness is reliable or delusional. But then doesn't it require some aspect of us to be tapped into reality in order to be deluded?
Quote:
My second point is that what if the reality is such that even if an am-ness completely destroys their own brain, they still find themselves existing?
That is unanswerable. I think the best bet is that once we are physically de-configured, that's it. We are not vessels. We are generators. Energy is eternal, configurations are temporary.
Quote:
Consciousness certainly doesn't automatically follow from something saying it is conscious.
Which is one reason why we might not recognise an artificial being if we made one. If it doesn't even declare itself, how could we know? Would it have a "theory of mind"? Would it know that it was so it could tell us? We can't even define or locate our own awareness. As I said previously, humans have only recently seriously considered that there might be other persons in the animal kingdom.
Quote:
But to reiterate, consciousness can only be observed by the particular consciousness itself, and all consciousnesses will measure death. The question is not scientific, because there is no way to measure if something is conscious or not. So if we create a system of AIs that can behave similarly to us humans, we still have no way to measure if an am-ness has emerged to experience anything.
Yes and no. Consciousness (am-ness / be-ing) seems more like an agreement between similar organic creatures. Similar enough to agree that we belong in the same category but distinct enough to assume we are separate individuals.
I don't understand what you mean by this - "and all consciousnesses will measure death." All organisms are aware of death as something to be avoided but humans take it personally.
Quote:
Note: an am-ness is not necessary for anything that we humans do. It is not necessary for there to be an am-ness experiencing life. Technically the human body could just as well be an automaton and behave exactly like we do and build exactly the same society and all the tech that we have. But the reality is that I am here, I read your posts and think about them, and I ponder and drink coffee and then I sometimes type. I also make games.
I disagree. If am-ness is something that distinguishes us from most other animals, then it is absolutely a critical component of our nature. I don't think we could invent and imagine the things we have without that core of self, like the way a pearl needs a grain of sand at its center. I think that our obsession with our mortality has a lot to do with it.
If a blob of protoplasm can become self-aware, why not eventually a machine? What might be the grain of sand in that pearl?
Fire Arrow on 17/1/2024 at 19:39
Would it be a good assumption that most people here dislike Hubert Dreyfus? I hope you'll forgive me if someone has already talked about him, I did skim the thread first.
demagogue on 17/1/2024 at 20:00
We can talk about him. I think Dreyfus is fantastic when he comments on Heidegger, like in his famous (https://www.youtube.com/watch?v=QBMySi3veVs) course on B&T. When he starts commenting on AI, I think he's way out of his element. I mean he's concluding that consciousness is impossible to construct computationally on less than empirical grounds.
But I think you can reconstruct some of his point, though you'd have to do a lot of work, and my intuition is he wouldn't sanction the result; still, I think there's something in there worth salvaging.
He's not going to think AI has a lifeworld in which its discourse on things is embedded (Heidegger would put it the other way around, "language is the house of Being", but anyway the point is there's no meaning if there's no Being), because he thinks AI can't have conscious experience, so there's no real meaning in what they're talking about. In a way he's not too far from Searle on intentionality (you need conscious experience to ensure that words are "genuinely about" objects in the world), except Heidegger & Dreyfus don't want to talk about "objects", but I think he's saying basically the same thing in the structure of MH's system & Dreyfus's reconstruction of it.
I think that basic point is right, but I also think what they're talking about can be computationally modeled, and people like Radu Bogdan (who is the reconstructed Donald Davidson) and the Decision Neuroscience & Neurolinguistic people, etc., help give us the theory to find out how.
---
Edit: That's a long story, but a short version is two pieces: (1) Bogdan's system is that language is a skeleton that gives instructions to the mind to flesh its meaning out in terms of its own intentions, understanding of social norms, and construction of the world (the classic Self-Norm-World triangle), and (2) the neurolinguistic people are suggesting that inferior and medial parietal lobe are core places doing the fleshing out (in the context of a very big and tangled system), where a coherent narrative and "world" the agent is embedded in are being constructed out of the elements of perception in a very systematic way we can be cautiously optimistic about modeling. If you really dug into the details, I think Dreyfus would begin to respect how many of the things in the "discourse-lived experience" connection that he cares about are accounted for.
This is not what Deep Learning is modeling though, at least not directly, though one can debate the extent to which it gets into the structure of the weights through the backdoor. It's also very different from the trend of formalistic analytic philosophy of language up to the mid-1980s, when Donald Davidson really started the attack and computational neuroscience, brain mapping techniques, etc. started becoming credible.
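If it helps, here's one toy way to read that "language as skeleton" point as a data structure (Python, purely illustrative; the field names are my own gloss on the Self-Norm-World triangle, not Bogdan's formalism):

```python
# Toy illustration only: an utterance is a bare "skeleton" and the
# agent fleshes its meaning out from three places at once: its own
# intentions, its grasp of social norms, and its model of the world.
# All names and example strings here are invented for the sketch.
from dataclasses import dataclass

@dataclass
class Agent:
    intentions: dict  # Self: what the agent itself wants
    norms: dict       # Norm: shared social expectations
    world: dict       # World: its constructed model of the situation

    def flesh_out(self, skeleton: str) -> str:
        """Interpret a bare utterance against all three corners of the triangle."""
        return (f"'{skeleton}' -> "
                f"self: {self.intentions.get(skeleton, '?')}; "
                f"norm: {self.norms.get(skeleton, '?')}; "
                f"world: {self.world.get(skeleton, '?')}")

agent = Agent(
    intentions={"pass the salt": "season my food"},
    norms={"pass the salt": "a polite request; comply if easy"},
    world={"pass the salt": "the shaker is within the listener's reach"},
)
print(agent.flesh_out("pass the salt"))
```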
Fire Arrow on 17/1/2024 at 21:05
Quote Posted by demagogue
We can talk about him. I think Dreyfus is fantastic when he comments on Heidegger, like in his famous (https://www.youtube.com/watch?v=QBMySi3veVs) course on B&T. When he starts commenting on AI, I think he's way out of his element. I mean he's concluding that consciousness is impossible to construct computationally on less than empirical grounds.
That's a pleasant surprise; often when I try to bring up phenomenology it's an uphill struggle. I usually expect educated people to go along with Dennett's criticism of intentionality. But I understand the onus is on those of us who think phenomenology is worthwhile to demonstrate it to those who don't.
Quote:
He's not going to think AI has a lifeworld in which its discourse on things is embedded (Heidegger would put it the other way around, "language is the house of Being", but anyway the point is there's no meaning if there's no Being), because he thinks AI can't have conscious experience, so there's no real meaning in what they're talking about. In a way he's not too far from Searle on intentionality (you need conscious experience to ensure that words are "genuinely about" objects in the world), except Heidegger & Dreyfus don't want to talk about "objects", but I think he's saying basically the same thing in the structure of MH's system & Dreyfus's reconstruction of it.
I feel quite ambivalent about Searle. Intuitively, I like the idea that syntax isn't sufficient for semantics, but his ideas seem confused.
Quote:
I think that basic point is right, but I also think what they're talking about is able to be computationally modeled, and people like Radu Bogdan (who is the reconstructed Donald Davidson) and the Decision Neuroscience & Neurolinguistic people, etc., help give us the theory to find out how.
This may be a dumb question, but would you know if there are any researchers that make use of Spinoza? My admittedly crude day-dream is: Husserl = Cartesian = Dualist, and that Spinozist monism would somehow fix a lot of the problems (not just in artificial intelligence, but also in cognitive science).
Quote:
Edit: That's a long story, but a short version is two pieces: (1) Bogdan's system is that language is a skeleton that gives instructions to the mind to flesh its meaning out in terms of its own intentions, understanding of social norms, and construction of the world (the classic Self-Norm-World triangle), and (2) the neurolinguistic people are suggesting that inferior and medial parietal lobe are core places doing the fleshing out (in the context of a very big and tangled system), where a coherent narrative and "world" the agent is embedded in are being constructed out of the elements of perception in a very systematic way we can be cautiously optimistic about modeling. If you really dug into the details, I think Dreyfus would begin to respect how many of the things in the "discourse-lived experience" connection that he cares about are accounted for.
Again, I'll make a point that I expect I'll have a lot of push-back on (which is OK with me). But isn't part of the problem that this view of social norms is implicitly relative? If you look at how people actually question or recreate social norms, usually there is implicitly an idea that there is a "better" or "worse" independent of what the norms actually are. (Incidentally, this is my theory of why the internet has gradually gotten more difficult: in the past we had an idea of "normal", which wasn't actually true, that we could push people towards; now that we're all aware of how low the bar can be, it's difficult to criticise anything. But I digress.)
If there were such a thing as "objective good", you wouldn't have to explain the variety of norms; you would just say that many of them are mistaken.
Quote:
This is not what Deep Learning is modeling though, at least not directly, though one can debate the extent to which it gets into the structure of the weights through the backdoor. It's also very different from the trend of formalistic analytic philosophy of language up to the mid-1980s, when Donald Davidson really started the attack and computational neuroscience, brain mapping techniques, etc. started becoming credible.
I remember reading about connectionism, and thinking it was odd that people related it to phenomenology. Merleau-Ponty (whether or not you agree with him) didn't really agree with the "bottom-up" approach to the mind. One of the first things he talks about in the Phenomenology of Perception is the (https://en.wikipedia.org/wiki/Moon_illusion) Moon Illusion, and how rather than being the result of the quantity of sense data, it was the result of seeing things as gestalts.
Now gestalt psychology is something very different from something "bottom-up" in my opinion. Still, I'd be interested if there is a counter-argument.