Nicker on 14/6/2022 at 18:50
No AI developed to imitate or serve humans is likely to become sentient. We will always place deliberate or unconscious limitations on it.
Before an artificial being can become sentient, it must first become self-willed. We probably wouldn't even recognize the first truly sentient machine.
demagogue on 14/6/2022 at 22:29
Quote Posted by Jason Moyer
It's not an AI, it's an algorithm pulling results out of a database.
This is suggesting that human thought is different.
I'm not sure it is.
Anyway, long story short, I don't think this is what we'd call human sentience because the basic model is still some type of recurrent neural net, and the basic model that I think is closer to humans is Grossbergian, plus some neuroeconomics a la Paul Glimcher, and some other pieces that neuroscientists are working on but AI researchers haven't touched much as far as I can tell. I think the model is the important factor, more than the brute force itself.
But speaking of brute force, the number of nodes is still on the order of neurons in an earthworm, which granted is one of the smarter worms, but we're not even to fish or amphibians yet. Mammals are still far off, and primates galactically far off. In theory we'll get there soon enough, but we're already running into the physical limits of Moore's Law. That said, I wouldn't bet too much against people finding clever ways around that. But as above, the main issue isn't the power but the models.
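Just to make that concrete against the "pulling results out of a database" framing: stripped down to a toy sketch (random weights, toy vocabulary, nothing like the real system, purely illustrative), the core of a vanilla recurrent net is a state update plus sampling from a learned distribution, not a lookup.

# Toy sketch only: random weights, toy sizes, not LaMDA or any real model.
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden = 50, 16                    # toy vocabulary and state sizes
W_xh = rng.normal(size=(hidden, vocab))   # input-to-hidden weights
W_hh = rng.normal(size=(hidden, hidden))  # hidden-to-hidden recurrence
W_hy = rng.normal(size=(vocab, hidden))   # hidden-to-output weights

def step(h, token_id):
    # Fold the new token into the running state, emit a distribution over the next token.
    x = np.zeros(vocab)
    x[token_id] = 1.0
    h = np.tanh(W_xh @ x + W_hh @ h)      # state carries everything said so far
    logits = W_hy @ h
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # softmax: a learned distribution, not a lookup
    return h, probs

h = np.zeros(hidden)
token = 3                                 # arbitrary starting token
for _ in range(5):
    h, probs = step(h, token)
    token = rng.choice(vocab, p=probs)    # the "reply" is sampled, token by token

The point of the sketch is only the mechanism: the next word comes from a distribution conditioned on everything seen so far, not from a stored answer. Whether that mechanism can ever add up to sentience is the actual question.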
But there are a few punchlines beyond that. One is that AI cognition doesn't have to be like human cognition. And it's an emergent phenomenon. So it may be hard to "measure"; it's something that may just suddenly and surprisingly be there once it reaches a certain point one may not be able to predict. They can reach their own level of whatever they are, and may deserve a term like sentience, even though it has nothing to do with the human brand. I think it's something we'll recognize in the output.
That's why I think people should take this a little more seriously than what I'm seeing. This is the guy who works with these things every day, and if he feels like he's talking to a self-aware system, then he's the one who would know. It's possible he's so emotionally invested in it that he wants it to be true -- it validates his work and his efforts -- but then I think, in other areas of computing we're reaching the limits of human perception. Games and video processing are basically photoreal in real time now, if an artist has the time to actually make it so. I think AI doesn't have to reach the Full Monty of human brilliance. It's enough that it's beyond humans' capacity to meaningfully tell the difference, again on AI's own terms, not necessarily our own (if you've ever read ethology papers you know the difference).
But the bigger point is, if this guy who works with them every day feels like this, I think masses of people who know much less about what's happening under the hood will feel the same way for the same reason when they start actually interacting with these things, and that will be its own kind of social and spiritual revolution, when we have these AI all around us communicating with us in a way most people feel is meaningful. That's the part I think is coming quite soon, and that's the revolutionary part, not so much one's position in the debates on node density or what the parietal lobe or the insula are really doing.
lowenz on 14/6/2022 at 23:30
The problem is: are humans - conversation-wise - actually sentient, or do they mostly act by presets, backtracking and adjusting themselves as the conversation goes?
It's one thing to express or show sentience while engaged in a "word contest", and another to actually be a sentient partner in a dialogue.
That's the problem with this kind of AI (and with humans too.......)
Tocky on 15/6/2022 at 05:15
Quote Posted by demagogue
But the bigger point is, if this guy who works with them every day feels like this, I think masses of people who know much less about what's happening under the hood will feel the same way for the same reason when they start actually interacting with these things, and that will be its own kind of social and spiritual revolution, when we have these AI all around us communicating with us in a way most people feel is meaningful. That's the part I think is coming quite soon, and that's the revolutionary part, not so much one's position in the debates on node density or what the parietal lobe or the insula are really doing.
I think he just got too immersed in the conversations to be properly skeptical. I've had some amusing conversations with chat bots and been surprised at the sudden turns they took, but it's nowhere near sentient even when it claims to be. As for the second part, the potential for misuse is enormous. Russia would no longer have to pay Africans or Macedonian teens for propaganda on Facebook, for one. They could also bypass the Republicans in manipulating American public opinion and go direct. Businesses could create demand for a product without using the same channels they have now. Gun producers could bypass the NRA and instill fear and paranoia directly in the public to sell more. People would have to think independently to arrive at the truth, and we all know how difficult that is for many.
MriyaMachine on 15/6/2022 at 07:45
Quote Posted by Brethren
According to the article, he was suspended, and it was for sharing proprietary info (the chat logs with the AI). That's quite a bit different than the way you stated it.
They will find a way to fire him for saying something they aren't comfortable with. Humans aren't the most honest creatures.
Sulphur on 15/6/2022 at 10:25
Quote Posted by demagogue
That's why I think people should take this a little more seriously than what I'm seeing. This is the guy who works with these things every day, and if he feels like he's talking to a self-aware system, then he's the one who would know. It's possible he's so emotionally invested in it that he wants it to be true -- it validates his work and his efforts -- but then I think, in other areas of computing we're reaching the limits of human perception. Games and video processing are basically photoreal in real time now, if an artist has the time to actually make it so. I think AI doesn't have to reach the Full Monty of human brilliance. It's enough that it's beyond humans' capacity to meaningfully tell the difference, again on AI's own terms, not necessarily our own (if you've ever read ethology papers you know the difference).
I would love to take him seriously. But what we have from him is some output and a declaration of sentience without a meaningful explanation of how he ended up with that conclusion. It could be that his definition of sentience is different from ours, that he believes sentience can be generated by any model regardless of neuron quantity, or that he's simply proud of the work and wanted to get it publicity, and this was the quickest path.
Regardless, there's missing information that needs to be supplied to hold up his assertion, and that's where actual discussion on whether it's true or not can get productive. At the moment, what we know for sure is that LaMDA has impressive NLP capabilities and an impressive dataset to pull responses from. What we don't have is proof of anything else.
I suppose the further question is: now that the Turing test is proving insufficient for our needs, what do we need to do to test for actual sentience, and in a way that's 100% foolproof?
Edit: I realise I'm ignoring large swathes of your post that address this to various degrees, but part of me baulks at the idea that humans being unable to tell the difference between a human and an AI means that they're functionally equivalent. You probably aren't exactly saying that, but it's something that's giving my brain an itch.
lowenz on 15/6/2022 at 10:29
Just separate "NLP" from "sentience". For humans too.
The AI is just trying to "not lose" a word contest (and this is true for "skillful" humans too).
"Story conjuring" is another thing.....that's the interesting aspect (toward sentience, but we can't say if AI want to express its desire to evolve in some kind of protector like the owl nor to willingly choose to become it). And yes, it has learned that human acts like an apex predator as a species :p
rachel on 15/6/2022 at 10:29
Quote Posted by Sulphur
I suppose the further question is: now that the Turing test is proving insufficient for our needs, what do we need to do to test for actual sentience, and in a way that's 100% foolproof?
That's an interesting question: what could be the equivalent of the rouge test (https://en.wikipedia.org/wiki/Mirror_test) for digital entities?
lowenz on 15/6/2022 at 10:46
First things first: if sentience is an emergent phenomenon, just throw more AIs together and have them interact with one another, NOT with humans.
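Something like this toy loop, just to show the shape of the idea -- reply() here is a trivial stand-in for whatever a real model's sampling call would be, purely hypothetical:

import random

random.seed(0)
# Canned phrases standing in for model output; the point is only the loop structure.
PHRASES = ["I notice a pattern here.", "Say more about that.", "That surprises me.", "Why do you think so?"]

def reply(transcript):
    # Hypothetical stand-in for a model conditioned on the shared transcript.
    return random.choice(PHRASES)

transcript = ["Agent A: Hello."]
speakers = ["Agent B", "Agent A"]
for turn in range(6):
    speaker = speakers[turn % 2]
    transcript.append(speaker + ": " + reply(transcript))

print("\n".join(transcript))

No human in the loop: each model conditions only on what the other one said.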
demagogue on 15/6/2022 at 11:19
To keep it simple, I can say two things.
One, I didn't actually read what the guy was saying, or what other people were saying about him, when I said that. I still haven't read that much. But he's at least being portrayed as a bit on the mystical, loopy end of the spectrum. I figure if he got a job at Google as a software engineer he should have the chops, but that's not inconsistent with him being a crackpot eccentric on the humanities end of the spectrum. Anyway, I probably shouldn't say anything about his credibility one way or another until there's a good summary of the whole situation by somebody who knows what they're talking about. So I'll even retract that bit.
But two, yeah, my main punchline wasn't that he was saying something about actual sentience, or any fair definition of it that neuroscientists or whoever would give. I was saying something more like: if it felt like sentience to this guy, I bet it's going to feel (mistakenly) like sentience to masses of the population. I guess you could have said the same thing about Eliza back in the 1960s, but I think the difference is that the illusion of Eliza fell apart at the slightest scrutiny.
But now I wonder if we've crossed a line where these things are going to do a much better job of sustainably faking people out, to the point that people just start treating them as if they were sentient, and the line between sentience and the illusion of it gets blurred out altogether. There can still be a difference between next-gen AI and sentient AI, but if the masses can't tell, then I'm wondering if there won't be much meaningful difference between society's attitude towards next-gen AI and sentient AI. It's close to the same social impact with different tech. In the big scheme of things, people's attitude to the tech may be a bigger deal practically than what AI are really doing. I don't know though. I'm just talking off the cuff.