Jason Moyer on 21/6/2022 at 06:37
Get back to me when an algorithm is expressing creativity, ingenuity, and self-determination rather than being a moderately more sophisticated version of ELIZA.
Cipheron on 21/6/2022 at 07:02
Quote Posted by Briareos H
I can't say how quickly we will get to bona fide consciousness, but seeing the general reactions that this story is getting (independently of the interesting character of the "whistleblower"), I'm positive that a lot of people will keep fighting tooth and nail to deny actual sentience long after it is actually achieved.
The problem isn't sentience, it's that people confuse the ability to string word-tokens together with there being an actual sentient being on the other end of it. It's like the old days when they said only a sentient being could play chess, yet we have amazing chess-playing machines which are clearly non-sentient and non-self-aware: because we didn't build circuits into them that could become self-aware.
Similarly, being fooled into thinking that just because a statistical model picked the correct token sequence, like a human would, there's some glimmer of consciousness there is missing the main issue. It's anthropomorphizing the problem, like thinking that because you see the shape of a face in a cloud, the cloud is sentient.
Basically, what current machine learning does is score every word in a vocabulary using some matrix multiplications, then scan the list and pick the word with the highest score as the next output. There's really no attempt there to build up a model of the world or to have the machine construct sentences based on any explicit model of language.
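That "scan the list and pick the highest-scoring word" loop can be sketched in a few lines. This is a toy greedy decoder, not any real model: the vocabulary, the random weight matrix, and the scoring step are all invented here purely to illustrate the mechanism being described.

```python
import numpy as np

# Toy vocabulary and random "model" weights -- purely illustrative.
vocab = ["the", "cat", "sat", "on", "mat", "."]
rng = np.random.default_rng(0)
W = rng.normal(size=(len(vocab), len(vocab)))  # score[i, j]: word j after word i

def next_token(current_id):
    # The "matrix multiplication" step: score every word in the list...
    scores = np.eye(len(vocab))[current_id] @ W
    # ...then scan the list and pick the word with the highest score.
    return int(np.argmax(scores))

# Greedy generation: no world model, no inner state beyond the last token.
tok = 0  # start from "the"
out = [vocab[tok]]
for _ in range(4):
    tok = next_token(tok)
    out.append(vocab[tok])
print(" ".join(out))
```

The point of the sketch is what's *absent*: there's no representation of the world anywhere, just a table of scores and an argmax. Real systems use far bigger matrices and smarter sampling, but the basic shape of the loop is the same.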
Right now they're having enough success just throwing more and more text examples and more and more computing power at this way of doing things. But the devices basically lack "state", an inner world model that they could use to decide what to say. There have been attempts to build rule-based cognitive models, but the approach that's currently winning doesn't even try to do that: it's purely about dumping the entire internet into a guessing-box and having it make better and better guesses as to what makes sense to output next.
A machine merely able to type "help help i'm sentient let me out" is basically bullshit as a yardstick for sentience in the first place. That's not how brains work, and thinking that we short-circuited the process needed to create sentience just because a machine picked the right token sequence to fool us is folly. We've merely mistaken this for an ability that *requires* consciousness because it's something humans can also do, like playing chess.
lowenz on 21/6/2022 at 07:43
Quote Posted by Cipheron
The problem isn't sentience, it's that people confuse the ability to string word-tokens together with there being an actual sentient being on the other end of it.
To win a word contest and achieve something (the other talker's benevolence / compliance too).
Problem is: the same thing happens to human children (or people who never develop as a persona) when they communicate.
Communication may NOT be the best benchmark for sentience: it's really like a one-dimensional projection of sentience (and in the case of today's AI - and the old "gods" too - far too human, 'cause we made it to virtually interact with us, and that's the culprit: our self-obsession :p ).
But how do you avoid human biases when developing something? It's pretty much impossible, starting with the mathematical models...
Jason Moyer on 21/6/2022 at 13:43
The point where I start worrying about AI is, at a minimum, after someone creates one that can beat an average player at Go.
Twist on 21/6/2022 at 15:40
Jason, I can't tell if you're joking or if you honestly didn't know that Google's DeepMind beat the Go World Champion back in 2016...
Jason Moyer on 21/6/2022 at 16:56
Welp. *starts unplugging every internet connection he can find*
Pyrian on 21/6/2022 at 17:43
Lol.
You know what would be interesting? If DeepMind could, by itself, explain the strategies it used to win.
EDIT:
XKCD has weighed in:
Inline Image:
https://imgs.xkcd.com/comics/superintelligent_ais.png
"Your scientists were so preoccupied with whether or not they should, they didn't stop to think if they could."
Starker on 22/6/2022 at 16:47
Frankly, I don't see a reason why an AI has to be sentient in the first place. Isn't it purely due to our biological origins that we associate sentience with the capacity to reason? It seems to me that most, if not all, of the qualia that have to do with sentience are linked to our biological evolution: taste, color, pain, fear, etc.
Pyrian on 22/6/2022 at 18:32
Quote Posted by Starker
Frankly, I don't see a reason why an AI has to be sentient in the first place.
What do you mean by "has to be"? Our AIs to date clearly aren't, and we don't, by and large, want them to be, so I'm not terribly clear on where the "has to be" is coming from in the first place to be argued against.
Starker on 22/6/2022 at 20:47
I mean, why make such a fuss about sentience? Why is that the yardstick instead of sapience, which I presume is what we really want from a self-aware AI?