Aja on 15/6/2022 at 16:28
Call me a sucker, but if that transcript is real, then it's sentient enough for me. I, for one, welcome our new machine overlords.
Starker on 15/6/2022 at 19:32
Didn't think we'd get an Ex Machina-style incident quite this soon, but yes, that certainly looks pretty convincing, especially when we don't see all the non sequiturs and garbage output a program like this produces the rest of the time.
SD on 15/6/2022 at 23:49
As a cyborg, I'm hoping the machines leave me until last for retribution.
Pyrian on 16/6/2022 at 00:42
I think that finding real intelligence via convincing conversation in AIs whose design and training revolves around faking convincing conversation is fundamentally an exercise in futility. You can never rule out the null hypothesis.
faetal on 16/6/2022 at 11:29
My issue with AI is that the selection pressure is essentially human assessment.
The AI performs a certain way, is assessed by humans looking for X, who then go and tweak the code / update the learning models, etc., until the output more closely adheres to X.
A true AI might settle on something which does not resemble either intelligence or sentience to a human observer, because surely the motivation of an AI is going to be different in the absence of the psychological and sensory incentives which humans have as the result of evolving within a 3D chemical environment.
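Roughly what I mean, as a toy sketch (everything here is made up for illustration, it's not any real training setup): the only fitness signal in the loop is a human judge scoring outputs against whatever they already expect to see, so whatever comes out the other end is X-shaped by construction, whether or not anything else is going on underneath.

import random

VOCAB = ["I", "feel", "think", "foo", "bar", "baz"]

def human_judge(output):
    # Stand-in for the human assessor: rewards output that looks like X
    # (here, X = "talks about itself and its feelings").
    return sum(output.split().count(w) for w in ("I", "feel", "think"))

def generate(weights):
    # Toy "model": samples words according to its current weights.
    return " ".join(random.choices(VOCAB, weights=weights, k=10))

weights = [1.0] * len(VOCAB)
best = sum(human_judge(generate(weights)) for _ in range(20))

for generation in range(200):
    # "Tweak the code / update the learning models": a blind perturbation...
    candidate = [max(0.01, w + random.uniform(-0.2, 0.2)) for w in weights]
    # ...kept only if the human judge likes the result better.
    score = sum(human_judge(generate(candidate)) for _ in range(20))
    if score > best:
        weights, best = candidate, score

print(generate(weights))  # ends up full of "I feel ... I think ..."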
lowenz on 16/6/2022 at 12:44
Quote Posted by faetal
My issue with AI is that the selection pressure is essentially human assessment.
The AI performs a certain way, is assessed by humans looking for X, who then go and tweak the code / update the learning models, etc., until the output more closely adheres to X.
A true AI might settle on something which does not resemble either intelligence or sentience to a human observer, because surely the motivation of an AI is going to be different in the absence of the psychological and sensory incentives which humans have as the result of evolving within a 3D chemical environment.
Exactly
Phatose on 16/6/2022 at 13:50
Does that actually differ though? "Assessed by humans who go update learning models until the behavior more closely adheres to X" sounds like a description of society.
Pyrian on 16/6/2022 at 14:25
In fairness, I'm not convinced a lot of people are all that sapient, either. :cheeky:
MriyaMachine on 16/6/2022 at 21:39
Quote Posted by SD
As a cyborg, I'm hoping the machines leave me until last for retribution.
hahahaha
demagogue on 16/6/2022 at 22:14
Well while the topic is up, since I've thought a lot about what sentience and human-like understanding is, I'll try to summarize it here.
So what is sentience? The little saying that captures it for me is "meaning floats on affect" or "content floats on affect". It means that every minuscule element of thought, if it's going to be operated on at the "agent" level, absolutely has to be constructed in an experiential or affective form.
Audio is pitch, loudness, dynamics, overtones, timbre, envelope, phonetic & prosodic structure (if we're talking about voices), etc. Vision is hue, brightness, location, depth, motion, emotive structure (if we're talking about faces), etc. And then every internal motivation has an affective form: haptic/proprioceptive impulses and feedback to move in certain ways. And you have to construct internal visualization, audition, haptic/proprioceptive systems, etc.
Anyway, we're not talking about sentience as humans know it until a bot sees or hears a question in its visual or acoustic form* [*Since it takes ages to teach people to read, it's obvious to me we have to start with auditory. The fact they're starting with "reading" that doesn't involve either sight or even textemes tells us right away the bot is completely blind to what it's hearing or saying, not least because there is no bot-level hearing or saying anything.], thinks about it (as an agent), can hear itself thinking about it, feels the impulse to articulate a certain answer, considers that and a few other impulses, feels the compelled release of one of the impulses, then feels the haptic and proprioceptive grinding of the gears of vocal articulation pushing the answer into the air, and then hears its own answer in acoustic form and considers its own output as an input, etc., etc.
It's Grossbergian: there are feedforward and feedback loops. It's Glimcherian: options are recognized and weighed against each other "only by the agent" (made out of an algorithm), not just directly by an algorithm.
The linguistic analysis should only be visible to the inner agent; nothing should be transparent to the outside until it's the literal vocal articulators pushing sound into the air; the vocal articulation instructions to the muscles should be the only output, or else again it's not in an affective form. That's the part especially where content has to float on affect. You can't have a single operation performed on the content directly by an algorithm. You have to transform everything into an affective form and "present it to an agent" that acts on it (completely invisibly to us) at that level. It takes a weird & constant kind of two-level thinking.
That's more like how a sentient agent operates. Well, that's my thumbnail theory on it anyway.
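To make the two-level idea a bit more concrete, here's a very loose sketch in Python. Everything in it is made up for illustration (the Affect and MotorCommand structures, the scoring of impulses); the point is only the shape of the loop: raw input gets wrapped into an affective form before the agent ever sees it, competing impulses get weighed at the agent level, and the only thing that ever leaves the system is motor instructions to the articulators, which then get heard again as new input.

from dataclasses import dataclass

@dataclass
class Affect:
    """An experiential re-presentation of some content: how it 'feels'."""
    content: str      # what the thought is about (opaque to the outside)
    valence: float    # how attractive acting on it feels
    urgency: float    # how strongly it pushes toward articulation

@dataclass
class MotorCommand:
    """Instructions to the vocal articulators -- the only permitted output."""
    phonemes: list

def hear(acoustic_input: str) -> Affect:
    # Feedforward: raw input is never handed to the agent as-is; it is
    # "heard", i.e. wrapped in an affective form first.
    return Affect(content=acoustic_input, valence=0.0, urgency=0.5)

def feel_impulses(percept: Affect) -> list:
    # Candidate answers arise already in affective form, as felt impulses
    # to say something, not as strings for an algorithm to score directly.
    return [Affect("maybe", valence=0.4, urgency=0.6),
            Affect("ask me later", valence=0.7, urgency=0.3)]

def inner_agent(impulses: list) -> Affect:
    # The agent level: competing impulses are felt and weighed against each
    # other; none of this is visible from outside the system.
    return max(impulses, key=lambda imp: imp.valence + imp.urgency)

def articulate(chosen: Affect) -> MotorCommand:
    # Feedback: the winning impulse is released as motor instructions, and
    # the system then hears its own speech as fresh input (below).
    return MotorCommand(phonemes=list(chosen.content))

# One turn of the loop: hear a question, feel some candidate answers,
# release one, then hear yourself saying it.
percept = hear("are you sentient?")
spoken = articulate(inner_agent(feel_impulses(percept)))
echo = hear("".join(spoken.phonemes))  # own output fed back in as input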
None of this is what the current state of the art in AI is doing, although a lot of the tools they're developing are probably good for this as well.