zombe on 27/3/2016 at 10:11
Quote Posted by Sulphur
... Neural nets are designed the way they are (in replicating the brain) to be able to execute natural learning algorithms, of which input like this would have been a pre-requisite.
Usually, what is meant by artificial neural nets is a glorified polynomial fitness function (with mathematical backing and a super convenient back-propagation solution *) that has fuck-all to do with brains and neurons. ANNs were VERY LOOSELY INSPIRED by real neural nets, but they aren't even comparable to the real thing.
*) which makes it far superior to other kinds of NNs for certain classes of problems, and also completely unsuitable for replicating the brain.
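To put it concretely, here's a minimal toy sketch (not anyone's production code or a real library's API) of what an ANN amounts to: a parameterised function fitted to data by gradient descent, i.e. back-propagation. The one-hidden-layer network, the sin(x) target, and all the numbers are arbitrary choices for the example.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: fit y = sin(x) on [-pi, pi] -- pure curve fitting, nothing brain-like.
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

# One hidden layer of 16 tanh units.
W1 = rng.normal(0.0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)           # hidden activations
    pred = h @ W2 + b2                 # network output
    err = pred - y                     # residual

    # Backward pass: just the chain rule, a.k.a. back-propagation.
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)   # derivative of tanh
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0)

    # Gradient descent update.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final mean squared error:", float((err**2).mean()))

That's the whole trick: nudge weights to shrink an error. Whether it's 16 units or 16 billion, it's still fitting a function, which is the sense in which it has very little to do with biological neurons.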
Sulphur on 27/3/2016 at 10:29
Yeah, poor wording there on my part. No one said we're trying to actually replicate the entire human brain... after all, we still don't understand how some parts of the brain work today, let alone recreate it. As a method for simple algorithmic processes that are able to absorb and sort data, ANNs are fine. Basic extrapolation will allow for a system like that to interpret different sets of input given enough time, hence while there won't be a subsurface motive for an AI to learn stuff, as such, it will be able to absorb information and learn from it if directed to, sort of like Clarke envisioned.
heywood on 27/3/2016 at 13:06
I remember going to science museums in the early-to-mid 1980s. Teenagers would sit at terminals running Eliza with the famous Doctor script and try to sext with it. It wasn't long before the Doctor script was updated to ignore dirty words, but then it just turned into a game of tricking it into saying vulgar or offensive things for the lulz.
It's sad to see that despite all of the advances in computing power and all the money and time spent on AI research, we're still just creating chat bots to sext with and to manipulate for the lulz.
And FWIW, I think volition and motivation are just matters of programming, whether in computers or humans.
Renzatic on 27/3/2016 at 16:20
Quote Posted by demagogue
@Renz, I understand that kind of attitude; anyway, there's a lot of it in the literature, and I've read enough of it to feel like I know where it's coming from. And the particular guy I'm taking inspiration from in thinking about Artificial General Intelligence talks a lot like that. But I feel like for the most part it's a dogmatic reflex out of general principles that don't really have much to do with how what we call "will" actually operates as a cognitive mechanism.
I'd think of it like this: a machine can be capable of free thought, association, and all the other concepts we associate with higher intelligence without having to be even remotely self-aware. An AI should always be considered an extension of human will and intellect, not an individual entity we've created to work alongside us. It should always be deterministically bound and tied.
Pyrian on 27/3/2016 at 18:10
I'm not convinced of any of that. I don't think you can rival human creativity without indeterminism ("fake" indeterminism will do), and I don't think an AI can rival human understanding of our complex world without figuring out that it itself exists.
Renzatic on 27/3/2016 at 20:01
Think of it less as me saying we should make perpetually limited, stupid AI, and more that we should create brilliant AI that's bound within a set of defined perimeters that we designate for it. It'll be an incredibly smart machine with absolutely no self-awareness that only acts and thinks on what we tell it to act and think on.
Like we give an AI all the data it could possibly need to learn about human behavior/wants/needs, the state and environment of the world, and all our various forms of government. Anything it'll need to make an informed conclusion on any subject we bring up that'll work out to our own benefit. Like if we say "AI, what would be the most logical course to solve the problem of world hunger", it'll come up with a novel idea because it can think, learn, and readapt its thought processes when fed new data, but it doesn't have a will of its own. It can't act by itself. It can only do what we tell it to do, and has no desire to do anything beyond that.
The Star Trek computer would probably be the best example of this. It's an intellect designed to assist and enhance our own, but it doesn't have any form of autonomy.
Pyrian on 27/3/2016 at 20:34
Quote:
...we should create brilliant AI that's bound within a set of defined perimeters that we designate for it.
To be useful, it needs to be able to figure out things that weren't programmed directly into it. The better it's able to do that, the better it's going to be at figuring out ways around the constraints we set on it.
Quote:
Like we give an AI all the data it could possibly need to learn about human behavior/wants/needs, the state and environment of the world, and all our various forms of government.
If the AI cannot figure out its own identity from this comprehensive dataset, then it's not very smart. If it is very smart, then you've already given it self-awareness.
Quote:
Like if we say "AI, what would be the most logical course to solve the problem of world hunger", it'll come up with a novel idea because it can think, learn, and readapt its thought processes when fed new data, but it doesn't have a will of its own.
You just gave it one. Your sentences contradict each other. Once you give something a goal and have it figure out a way to that goal, it has a will of its own. That's what a will is. Now, typically you would cut an AI like this off from having any physical outlet; it just gives you answers (perhaps "cull a portion of the excess population and feed its meat to the remainder") rather than acting on them. But can you guarantee that a superior intellect whose only goal is to end world hunger, and is perfectly aware that its handlers won't like its ideas, won't hack its restraints and launch the world's nukes? GOAL ACCOMPLISHED. END OF LINE
Quote:
It can only do what we tell it to do, and has no desire to do anything beyond that.
Seriously, have you ever seen or read any science fiction about a rogue AI? If they're given an origin at all, it's that they were created with a given goal in mind, and decided on a route to that goal that their creators didn't like.
Renzatic on 27/3/2016 at 20:55
Quote Posted by Pyrian
If the AI cannot figure out its own identity from this comprehensive dataset, then it's not very smart. If it is very smart, then you've already given it self-awareness.
Not necessarily. While saying so might seem contradictory, total self-awareness doesn't necessarily have to arise from raw analytical intelligence. It's very possible to have one without the other, and that (in my opinion) should be our primary goal where AI is concerned.
It seems strange to say so, because it contradicts the only model of intelligence we have: us. Our intelligence has arisen in part due to our own self-awareness. But keep in mind that the evolution of AI will be considerably different from our own eventual rise to sentience. For one thing, it isn't being built upon a Darwinian-style model of evolution. There is no survival of the fittest among AI, no competing with other species for resources over billions of years, no fight-or-flight instinct, no sex drive, no need to propagate, no emotions. Our higher-level brain functions, our logic, reasoning, and understanding of self, are hardly all we are. While they're our main defining features, and easily our most noticeable, there are a lot of primitive instincts and functions, left in the basement for millions of years, that our higher functions are stacked on top of.
AI won't have these. It'll be built upon a model consisting solely of logic and extrapolating from data. It isn't required to know what it is or its place in the world. It merely looks at the world and opines on it based upon our input. There's no reason to assume that self-awareness is a logical step in the evolution of computer intelligence.
...nor do we have any reason to build it in ourselves.
edit: let's use Google's recent AI victories in Go as an example.
Now, an even smarter AI could come to the logical conclusion that if it wants to be the greatest Go champion in the world, it could not only get good at the game, but also kill its competitors. The thing is, that's a very human conclusion. We have millions of years of beating the shit out of other animals programmed into us, so violent competition like that comes as a natural part of our thought processes. It's our self-awareness, and the sense of empathy that stems from it, that keeps us in check. Our raw intelligence is built upon logic, primitive instincts, and everything in between, kept in balance by various other systems inherent in our sentience and in our deeper, more lizardy parts.
An AI wouldn't ever have that frame of reference unless we purposefully set it ourselves. It's merely concerned with the rules of the game, and the end goal of winning by said rules. Even if we gave it the option to learn about killing as a concept, why would we assume it'd take that route if its major concern is still primarily the rules of the game? Murder is incongruent with winning a game of Go in the mind of an AI.
PigLick on 28/3/2016 at 13:24
I like to think of the classic AC Clarke quote about magic vs. science, can't remember the exact wording. That eventually AI will become so close to the human thinking process as to be indistinguishable from it.
Sulphur on 28/3/2016 at 14:29
Yeah, Clarke was talking about advances in technology with that one... with an AI, though, I'd say this would be a little different. The most groundbreaking thing for an AI would be whichever one came across as the most human to us, so in fact the more ordinary an AI seems, the better for it. Which is kind of magical, actually, in a way.
Quote Posted by Renzatic
edit: let's use Google's recent AI victories in Go as an example.
Now, an even smarter AI could come to the logical conclusion that if it wants to be the greatest Go champion in the world, it could not only get good at the game, but also kill its competitors. The thing is, that's a very human conclusion. We have millions of years of beating the shit out of other animals programmed into us, so violent competition like that comes as a natural part of our thought processes. It's our self-awareness, and the sense of empathy that stems from it, that keeps us in check. Our raw intelligence is built upon logic, primitive instincts, and everything in between, kept in balance by various other systems inherent in our sentience and in our deeper, more lizardy parts.
An AI wouldn't ever have that frame of reference unless we purposefully set it ourselves. It's merely concerned with the rules of the game, and the end goal of winning by said rules. Even if we gave it the option to learn about killing as a concept, why would we assume it'd take that route if its major concern is still primarily the rules of the game? Murder is incongruent with winning a game of Go in the mind of an AI.
Hmm. Not quite. The way this would work for a machine that's able to infer from an open and pokeable data set involving, basically, the sum total of our knowledge, would be to trawl the information available and determine the parameters that would make it better at said game. It would then need to assign a weight to each parameter, with those that have a higher probability of resulting in success getting the higher weights. Sorting that list by weight would then give it goals to execute and outcomes to evaluate as appropriate. Murder may in fact be one of them, but from a purely mathematical standpoint it would be an inefficient way of going about it (see the toy sketch below).
It's not that this wouldn't occur to a machine naturally; it would, but even without anything so much as Asimov's Laws of Robotics in place, the machine would in any case calculate that particular avenue as sub-optimal compared to just, say, taking over Amazon's compute farms.
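As a toy illustration of that kind of weighting (the candidate strategies, probabilities, and costs below are all invented for the example; this isn't a real planner):

from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    success_probability: float   # estimated chance it actually improves play
    cost: float                  # resources, risk, collateral damage

def weight(s: Strategy) -> float:
    # Higher chance of success and lower cost -> higher weight.
    return s.success_probability / (1.0 + s.cost)

candidates = [
    Strategy("study professional game records", 0.90, 1.0),
    Strategy("self-play to refine evaluation",  0.95, 2.0),
    Strategy("take over more compute",          0.80, 5.0),
    Strategy("murder the competition",          0.99, 1e6),  # "works", but wildly inefficient
]

# Sort by weight, highest first: the ranked goal list described above.
for s in sorted(candidates, key=weight, reverse=True):
    print(f"{s.name:35s} weight = {weight(s):.6f}")

Murder lands at the bottom of the list not because the machine cares, but because under any sane cost accounting it's a terrible trade.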