Cipheron on 2/8/2022 at 17:33
Quote Posted by Pyrian
Random thought: Do we overrate self-awareness as a litmus test of intelligence?
Following on from what you wrote, how do we even know that our "self-awareness" is the pinnacle of self-awareness? There's a huge amount of detail about how our brains function that we're completely unaware of. You could imagine a scale of increasingly self-aware species, with insects near the bottom and us at the top. However, we have no reason to end the scale with ourselves, so you could envision an entity whose clarity of mind makes us look like hamsters on a hamster wheel in comparison.
And that's not even getting into the possibility of completely different forms of cognition or intelligence as you hinted at.
Pyrian on 2/8/2022 at 17:59
Oh yeah. People will do something, and upon being asked why they did it, make up an answer. We don't seem to be inherently aware of why we make decisions at all, even though we're mostly aware of the observations and thoughts that went into making them. And we're great at "rationalizing", or making up "reasons" to do what we actually want to do for other reasons. A lot of this is socialization; our real reason may not be socially acceptable to express. But it seems to happen even internally.
demagogue on 3/8/2022 at 01:22
Quote Posted by Pyrian
Random thought: Do we overrate self-awareness as a litmus test of intelligence?
This somehow reminds me of this great quote:
Quote:
“There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.” Yellowstone Park Ranger on why it is hard to design a bear-proof garbage can.
To be technical about it, I think the leading theory of intelligence, or the one I buy into, frames it as the ability of bottom-up working memory to dredge up the best links given the context and pass them, along with a vague idea of what to do with them, on to top-down attention, which acts on them. The size of the field of stuff it can grab from is supposed to be the measure of intelligence.
Since passing it on to top-down attention (consciousness) is kind of the key to the whole thing, that level of self-awareness is built in. But not necessarily the level where the person reflects on what they're actually doing, on where this stuff is really coming from, or even on themselves as an actor doing it. From what you've been saying, I think it's clear you're talking about the former. Edit: Oh, but in your last post you're talking about the latter. So it's a mixed bag, which I think you're saying too.
As for instinctive action, the decision-making book I read says there are four basic systems of action: reflexes, Pavlovian responses, habits, and "cold calculation". From habits down, there are definitely fewer top-down attention resources following what you're doing, at best sometimes a veto power, and with reflexes none at all. The catch is that it's hard to lump those into the intelligence box. They may be "intelligent", as in getting you the most bang for your buck because you trained yourself earlier, but I think general intelligence at some point has to go through the cold-calculation, learning, training stage. So that's my counter-argument to what you're talking about.
---
BTW I think I said this above, but this is part of why I think "consciousness", as in this interplay between bottom-up and top-down attention, is necessary for general artificial intelligence. I think the intentional directedness is what makes it general. If you don't have it, I think you can still have specific AI, not just GOFAI but also whatever new brand they're cooking up with Deep Learning ML, and I think it can be really powerful. But I don't think it can be general in the way humans use the term "intelligence".
I think that's even fine. Why should machine AI have to follow exactly our template? And people can use the term "intelligence" for it for practical reasons. I just think they're using a different meaning of intelligence from how we normally mean it applied to humans.
EvaUnit02 on 20/8/2022 at 10:56
Quote Posted by Azaran
TTLG Forums is an Anime/Minecraft community according to this AI
They have AI art programs that are really great now. There's a real concern that many artists will be put out of work because it'll be far cheaper and faster to have machines generate the art. They have AIs that can replicate a particular artist's entire style. There will be pushes from human artists to have daddy government come in and try to regulate these AIs.
Cipheron on 20/8/2022 at 12:26
Quote Posted by EvaUnit02
They have AI art programs that are really great now. There's a real concern that many artists will be put out of work because it'll be far cheaper and faster to have machines generate the art. They have AIs that can replicate a particular artist's entire style. There will be pushes from human artists to have daddy government come in and try to regulate these AIs.
Not just artists; that would impact every creative field. For example, you could argue that procedural generation is already putting level designers out of work.
Say you make an AI that can illustrate a news story; now you don't need the graphic designer who worked for the publication. But then, publications are also already templating the writing of stories with both boilerplate and AI-driven rules to increase the word output of their journalists.
Next, the AI writes the entire article, a reality that is fast approaching. Right now it's entirely feasible for the author to just provide some bullet points that need to be covered, include a bunch of quotes, let GPT-3 run wild, and prune any excess. You and I can already write stuff like this, so don't tell me the big publications aren't playing with the tech.
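To make that concrete, here's a minimal sketch of the bullet-points-plus-quotes workflow. It assumes the (pre-1.0) openai Python client and an OPENAI_API_KEY environment variable; the model name, prompt wording, and the draft_article function are just illustrative, not anything a real newsroom uses.
Code:
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def draft_article(headline, bullet_points, quotes):
    """Expand human-supplied bullet points and quotes into a draft story."""
    prompt = (
        f"Write a short news article.\nHeadline: {headline}\n"
        "Cover these points:\n"
        + "\n".join(f"- {p}" for p in bullet_points)
        + "\nWork these quotes in verbatim:\n"
        + "\n".join(f'- "{q}"' for q in quotes)
        + "\n\nArticle:\n"
    )
    response = openai.Completion.create(
        engine="text-davinci-002",  # a GPT-3 model available in 2022
        prompt=prompt,
        max_tokens=600,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

# The human's remaining job is the pruning step: read the draft,
# cut anything the model made up, and publish.
print(draft_article(
    "Local council approves new park",
    ["vote passed 7-2", "construction starts in spring"],
    ["It's a win for the whole community."],
))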
Next, of course, the AI itself works out what bullet points are needed and data-mines the quotes etc. Then they just wire that up to the "writer assisting" stuff they already have, and the journalist is literally a guy screening auto-generated news stories and throwing out the ones that are nonsensical. Eventually they fire that guy and you've got 100% algorithm-based news sites.
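Spelled out, that fully automated version is just a loop. Every function below is a hypothetical placeholder, stubbed out so the sketch runs; a real system would plug in actual data-mining, a generator like the GPT-3 call above, and a real screening step.
Code:
def extract_bullet_points(item):
    # Placeholder: a real system would data-mine these from wire feeds.
    return item["facts"]

def mine_quotes(item):
    # Placeholder: a real system would pull quotes from sources/transcripts.
    return item["quotes"]

def generate_story(item):
    # Placeholder: this is where the GPT-3 call from the earlier sketch goes.
    points = "; ".join(extract_bullet_points(item))
    quotes = " ".join(f'"{q}"' for q in mine_quotes(item))
    return f"{item['headline']}. {points}. {quotes}"

def passes_screening(story):
    # Placeholder for the one remaining human: throw out nonsensical output.
    return len(story) > 0

def newsroom(feed):
    # The 100% algorithmic news site: generate, screen, publish.
    return [s for s in (generate_story(i) for i in feed) if passes_screening(s)]

feed = [{
    "headline": "Local council approves new park",
    "facts": ["vote passed 7-2", "construction starts in spring"],
    "quotes": ["It's a win for the whole community."],
}]
print(newsroom(feed))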
I mean, *I* could build that today with completely free tools, except that most of the stories would be bullshit. So it's only a matter of getting the "bullshit quotient" down to that of regular news, and then: no more journalism.