Yakoob on 29/3/2016 at 02:11
@Renz - I'm still confused as to why you're so vehemently against a self-aware AI. I agree with you that it may not naturally "evolve" into one, but why should we be against it, aside from the risk of it getting out of hand?
No, we don't have an obligation to do it, but we don't have an obligation to do a lot of things we do. Heck, even our own species' survival is ultimately pointless when you think about it in the grand scheme of things. What makes us any different from the pixels in a game of Life (religion aside)?
So I say let's go and try it. I agree it is risky since we do not even understand our own consciousness, BUT developing a conscious AI would be one way of actually grasping what consciousness is. Or at least, a very valuable psychological / philosophical tool.
Assured robotic destruction aside...
Quote Posted by Renzatic
But keep in mind that the evolution of AI will be considerably different than our own eventual rise to sentience. For one thing, it isn't being built upon a Darwinian style model of evolution. There is no survival of the fittest among AI, no competing for resources among other species for billions of years, no fight or flight instincts, no sex drive, no need to propagate, no emotions.
IIRC, the Go bot that just beat the world champion learned in part by repeatedly playing against itself :) So "virtual evolution" could very well be how AI will learn. The Tay fiasco and the whole concept of machine learning are already grasping at the roots of evolution. Sadly, it went extinct prematurely due to a comet crash labeled "Microsoft's PR team".
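To make the "virtual evolution" idea concrete, here's a minimal, hypothetical sketch - not how AlphaGo or any real system is actually trained; the Agent class, the toy game, and the hidden target are all invented for illustration. Two copies of an agent play each other, and whichever set of parameters wins survives, slightly mutated, into the next round.

import random

# Toy sketch of self-play as "virtual evolution" (illustration only).
class Agent:
    def __init__(self, weight=None):
        # One number stands in for a whole network's worth of parameters.
        self.weight = weight if weight is not None else random.uniform(0.0, 100.0)

    def mutated(self):
        # The "offspring" is the parent plus a little random variation.
        return Agent(self.weight + random.gauss(0.0, 5.0))

def play(a, b, target=42.0):
    # Toy game: whoever's weight is closer to the hidden target wins.
    return a if abs(a.weight - target) < abs(b.weight - target) else b

champion = Agent()
for generation in range(200):
    challenger = champion.mutated()   # the agent plays against a variant of itself
    champion = play(champion, challenger)

print(f"after 200 generations the champion's weight is about {champion.weight:.1f}")

Even this trivial loop homes in on the hidden target without anyone telling it the answer, which is roughly the intuition behind self-play - though real systems use gradient-based learning rather than blind mutation.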
Quote:
There's no reason to assume that self awareness is a logical step in the evolution of computer intelligence.
I really like your point here, and you're right - our understanding of what AI will be is entirely limited by what our intelligence is, but it could very well evolve in a way completely foreign to us; we may not even be able to grasp its "logic" fully, or even recognize it as living/intelligent.
Quote Posted by demagogue
Neural nets as I learned them are still glorified functions, good for linking really messy analog data sets to an output, like if you converted the pixel info of a mugshot into a big matrix, it'd kick out a name, or male or female, or more like a confidence level for different answers. But, sticking to my mantra, by themselves they're still missing the volition part. The math doesn't care why it should want to call this matrix set a male vs female, only it's really good at doing it if it's directed to.
True, and that is their current state, but nothing prevents them from growing almost infinitely in size to account for amazing complexity. If they are entirely self-adjusting and can grow new connections, who knows if somewhere in this messy cobweb of virtual neurons a thought, a will, could not spontaneously arise?
Didn't life really start by a lucky accident in a purely scientific theory?
Sulphur on 29/3/2016 at 03:28
Quote Posted by Yakoob
True, and that is their current state, but nothing prevents them from growing almost infinitely in size to account for amazing complexity. If they are entirely self-adjusting and can grow new connections, who knows if somewhere in this messy cobweb of virtual neurons a thought, a will, could not spontaneously arise?
Didn't life really start by a lucky accident in a purely scientific theory?
Which theory is that?
Current AIs are, as demagogue and zombe stated, glorified functions. They can grow to amazing proportions if they're fed enough data, but the programming currently only allows them to get better at one thing, which is whatever they were designed to do. Keep in mind this self-adjusting only accounts for better outcomes given a dataset - fundamentally, the program itself doesn't change, it just optimises based on input. The more input to work on, the better. This doesn't give the program any additional complexity; it just makes it more effective at solving a certain set of math problems.
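A toy sketch of what "it just optimises based on input" looks like in code (a hypothetical example, not taken from any real system): the structure of the program below never changes, only the two numbers w and b get nudged to fit the data better.

# Toy illustration: a fixed function y = w*x + b whose two parameters are
# nudged by gradient descent to fit some data points (which lie on y = 2x + 1).
data = [(1, 3), (2, 5), (3, 7), (4, 9)]

w, b = 0.0, 0.0
learning_rate = 0.01

for step in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # converges towards 2 and 1
# Feed it ten times more points and it fits lines even better - but it will
# never become anything other than a line-fitter.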
A program like this will not, and cannot, grow beyond its initial purpose. It's not made to be able to reassess its goals and reprogram itself, which would be a fundamental requirement for anything resembling machine 'consciousness'. If we did one day make a program that could generate its own goals by interpreting any set of data and use that to change its own algorithms, and ran that on something resembling a decent approximation of a real-life neural network, that would be the event that sets us up for the technological singularity, for all we know.
Tocky on 30/3/2016 at 05:05
Quote Posted by Sulphur
Which theory is that?
A program like this will not, and cannot, grow beyond its initial purpose. It's not made to be able to reassess its goals and reprogram itself, which would be a fundamental requirement for anything resembling machine 'consciousness'. If we did one day make a program that could generate its own goals by interpreting any set of data and use that to change its own algorithms, and ran that on something resembling a decent approximation of a real-life neural network, that would be the event that sets us up for the technological singularity, for all we know.
This. All else is just a complex tool driving toward a conclusion. What we are is not simply intelligence. Awareness is much more complex. We are a singular perspective. I know how simple that sounds. It's not. We could create a duplicate of ourselves and still not create ourselves. Singular perspective is non-transferable. The free will which many deny we have is still lacking. This ability to reprogram based on need still lacks an initial impetus. That initial impetus transcends even survival. I'm not sure you can program a goal we ourselves don't understand.
I know you think it is group-based - the need to fit into a larger society for survival. I've seen this stated as evolutionary logic. It isn't for survival. Individually that goes against the self and seems to support group behavior. That is true, but the logic still doesn't hold when ultimate entropy means no logic is equal to any logic - the group as a whole dies in the end. How would a machine understand the present and apply importance to it when it runs on conclusion-based logic?
Present existence. We live for that. We find import in that. How do you impart that to conclusion-based systems?
Sulphur on 31/3/2016 at 04:17
True.
Also, yow. I hope I didn't kill the conversation. It's an interesting topic, to say the least.
demagogue on 31/3/2016 at 05:10
There's a lot I want to add, but I was finding it hard to boil it down into something that's not a wall-post.
Renz's position (as I read it) is really close to that of a guy called Ben Goertzel, who is making a great AI system - except, IMO, for the goal system, on exactly this issue. So I've been writing a lot of responses to exactly that position, and have a lot to say about it.
And then I have to back up and explain that I did philosophy of mind in university, the cognitive science/AI end of it. And my big takeaway from that experience, including what I wrote my thesis on, was the relationship between motivation and understanding language/culture. To me, understanding is driven by motivation. An agent can only "understand" a text as far as it cares about what it's talking about, e.g., the text "stay off the grass" only means something if the person/bot recognizes they're going to get in trouble if they step on the grass. That means, if the goal system simply shoves in the goal by fiat - "don't step on the grass when you see that sign" - it's not going to understand why the sign says that, or what it actually means to it.
The sign doesn't give the bot any reasons to do anything, because the bot doesn't recognize reasons, because reasons only exist for a bot that can consider different reasons speaking to different actions, which is what motivation *is*. And that's the part Goertzel has taken away (and it seems that's what Renz is saying too), and then Goertzel wonders why he can't get his bot to understand even simple sentences.
That might capture my basic position. But actually this is a rabbit hole problem, and this is just the tip of the, uh ... hole. The contribution of the sign to giving a bot reasons to walk on the grass or not, and the bot being able to recognize those reasons, points to a system that IMO is involved in almost every aspect of what language is doing. It's giving reasons for action left and right, and even reasons for its own grammatical construction, so even things like grammar need that reason-recognizing system. And open reason-recognition is what we mean by an open goal or motivation system that the explicit goal system throws out from the get-go.
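Just to pin the grass-sign example down with something runnable - a deliberately crude, invented sketch, not dema's actual architecture or anyone else's: the first bot has the rule shoved in by fiat and can't say why it follows it; the second only follows the sign because the sign connects to concerns it already has.

# Invented toy contrast between a goal "by fiat" and a reason-recognising goal.

def fiat_bot(sign):
    # The rule is hard-coded; there is nothing behind it for the bot to understand.
    if sign == "stay off the grass":
        return "avoid grass"
    return "carry on"

# The second bot weighs the sign against concerns it already has.
concerns = {"getting in trouble": -1.0, "taking the shortcut": +0.3}

def reasoning_bot(sign):
    if sign == "stay off the grass":
        # The sign matters only because stepping on the grass now predicts trouble.
        expected = concerns["taking the shortcut"] + concerns["getting in trouble"]
        return "avoid grass" if expected < 0 else "take the shortcut"
    return "carry on"

print(fiat_bot("stay off the grass"), "/", reasoning_bot("stay off the grass"))
# Both print "avoid grass", but only the second has anything you could call a
# reason for it - and only the second would change its behaviour if its concerns changed.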
Well, I have to add a bit more. A "reason" isn't whatever stims a bot into a direct response, because stim/response is not understanding. A "reason" means the bot considers it as something giving various volitional impulses towards taking or not taking an action, plus other impulses to consider other reasons with their own volitional impulses. It has to "feel" the pull of the reasons towards action for it to be a real reason.
Incidentally, giving a bot a conscious experience--like the feel of the pull of a reason, which is a key part in my system--isn't really as mysterious as people say, IMO. It just means the systems planning bot action are given an analog-like data set that's chunked towards certain manipulation purposes that their motivation acts on. No more or less, really. It's only a mystery if you come in with a Pavlov/Sherrington brand of stim/response, where the bot doesn't itself consider what it wants to do with the data set, but the programmer just stims action by fiat. Then they think it's some big mystery why the bot doesn't really "recognize" data, but it's because you didn't give the bot any way to consider the data set itself. You took the consideration of experience away from it and just stim'd action directly.
Now I'd have to go another level down to explain why motivational planning is different from stim/response, since in both cases we're talking about functions in algorithms... Ugh, that's another rabbit hole. The first thing we'd have to do to wiggle into that hole is get clear about what people mean by "free will". What free will definitely is not is the ability to have done something differently in the past, which is an incoherent position - but anyway it doesn't matter, because that's not what most people who say they believe that really mean by it, if you push them on the details. It's only what they think it means.*
If you push it, it still means that action is absolutely caused: X always causes Y. The difference between an X always causing a Y with "free will" and an X always causing a Y like a billiard ball hitting another (no free will) isn't that the reaction could have happened otherwise. It's that in the first case "X" is "me"--I'm the one absolutely determining my action by sheer force of will and not some outside alien force--and in the latter case "X" isn't anything that recognizes itself as "me" or as acting on its "own reasons" rather than "outside reasons": billiard balls rolling into each other because they were hit, not because they "want" to.
So then you have to get into personal identity. Free will is decision-making that is run through all the systems a bot needs to recognize the features of personality and self-recognition, and most importantly, to freely consider different courses of action and the reasons that apply to them, and to select the one that it feels, for its own reasons, suits it best. There's a way you can code for that in AI. It's not what most AI systems want to do. To do this right, I'd now go back and talk about what this has to do with understanding the meaning in language, but this is enough for now.
Edit: Oh, and I haven't even gotten to the counterargument part. Ben & Renz's positions seem to be about bad bot behavior, so then we should get into the motivation for criminal action, and criminology. Criminals act on incentives to do bad things, thinking they'll improve their situation, but even then it's usually because they misunderstand their own interests. OR they lose touch with their interests and are stim'd by pure emotion that skirts reason. Open consideration of the reasons for doing X is what would have *stopped* them from the criminal action. These are things we can check for in AI. For one thing, you don't have to stim a response by fiat for the action you want. One easy thing to do, e.g., is to just plunge a bot into misery contemplating a certain course of action, so that it just can't bring itself to do it. Then it'd at least understand why it won't do it. I could go on, but there's another rabbit hole topic there. The problem is that all of these issues are both really broad, with lots of topics to talk about, and really deep, each one requiring a lot to say to make any headway. And now here's my wall (-_-)"
*Footnote on this [edit: added after Sulphur's next post, so the "last paragraph" he means is the one above. Sorry!] People say "Free will should mean I could have chosen otherwise in that situation." What they really mean is: if I had thought about it differently and given more weight to this or that consideration (that means something different to me), or if I had simply acted on this or that other impulse at this or that moment, then my behavior would have been different. That's perfectly compatible with a causally closed universe, and is a real thing that humans can do that billiard balls or snails can't. But what they can't mean, because it's incoherent, is that the person absolutely decides X by force of will, it happens in the world, you record it, then you run back time, rewinding the video and then replaying it... If someone expects to rewind a video, replay it (which is what replaying time really means), and have it show a different scene, then they don't understand how time or will works. That's temporal and behavioral chaos, where people make two decisions branching into two realities running on top of each other at the same time. IMO that's not what most people really mean when they use that argument for "free will", even though that's what the argument they're using literally says. So you have to translate it to what their intuition means, which is that, put in similar circumstances with an opportunity to think about it a little differently, the person is perfectly capable of acting in a different way according to that different way of thinking - understanding that at the end of it they make an absolute choice for X or Y, because that's what "will" means. That's what free will is, and that's how you give it to a bot.
Sulphur on 31/3/2016 at 06:36
Yup, that is certainly a wall. Regarding the paragraph on the counterargument to Ben & Renz's positions, though, dema:
The motivation for criminal action assumes a human personality attributed to the bot. That is unnecessary, IMO. We can't really ascribe personality and emotional context to an AI yet, and attempting to simulate this as well opens up a deeper set of rabbit holes. Even if we tried at this point, it'd just be a set of flags in the program logic. To wit, an incredibly simple emotional state rule: 'X makes me happy (assuming X is the set of factors increasing physical/emotional integrity). I should try to attain more X/maintain X to allow for %stability.' 'Y makes me unhappy (assuming Y is the set of factors that destroy physical/emotional integrity). I should try to minimise Y to maintain emotional stability.' 'Emotional/physical requirements are calculated by %stability, which is X/Y, and it should not fall below 1.0.' 'Z falls into a state between X and Y and hence is uncertainty. Further data is required to ascertain the probability of where this will fall, but for now it does not impede overall stability. If Z is greater than X or Y, however, then priority is required in categorising and quantifying the unknown factors.'
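For what it's worth, here's that rule set translated literally into code (a toy sketch - X, Y, Z and the 1.0 threshold are straight from the rules above, everything else is invented):

# Toy translation of the emotional-state rules above (illustration only).
# X = total of factors increasing physical/emotional integrity
# Y = total of factors decreasing it
# Z = total of uncertain factors not yet categorised as X or Y

def stability(x_total, y_total):
    # %stability is defined above as X/Y; guard against division by zero.
    return x_total / y_total if y_total > 0 else float("inf")

def decide(x_total, y_total, z_total):
    actions = []
    if stability(x_total, y_total) < 1.0:
        actions.append("seek more X / reduce Y to restore stability")
    if z_total > max(x_total, y_total):
        actions.append("prioritise categorising the unknown factors")
    return actions or ["maintain current behaviour"]

print(decide(x_total=3.0, y_total=4.0, z_total=1.0))   # unstable -> chase X
print(decide(x_total=5.0, y_total=2.0, z_total=6.0))   # too much uncertainty -> investigate Z

Which immediately shows the problem described in the next paragraph: nothing in those three rules ever tells the system when to stop solving for threats, and nothing in them feels like anything.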
Obviously, we would then end up with a system that never does anything but solve for all issues that could potentially destroy it that exist in the universe, so it needs better health/sanity checks. But anyway - this simulates motivation, but it does not quite quantify what emotion is, what it feels like, or even whether the AI can ever relate to it, which is the fundamental problem. And even then, what would the utility of simulating it be? Attempting to simulate a machine 'morality' so to speak? There would be easier ways to do this (cf. Asimov), and attempting to give emotions to an AI (misery, happiness, etc.) is going to be difficult to achieve when we're creatures who haven't quite pinned down our own behaviours and motivations as an easily approximated mathematical certainty yet.
demagogue on 31/3/2016 at 07:01
"Motivation for criminal action" I think only requires enough human perspective to understand what a "crime" is... like they have to understand the knife-y thing in their hand is a solid object that a plunging action with their hand holding it into the solid gut of another person will destroy the agency of that person, and that's an irreparable and total loss of their contribution to the world. All of those things have to be from a human perspective, or an agent (bot or whatever) won't understand the "crime" part about it. If the bot can't even understand the knife and gut are consistent solid objects that react in physical ways with each other, they can't even get to the part where they understand a knife is going into a gut, much less what that causes, much less the moral consequences.
Can we ascribe feeling to an AI? This is flipping my way of thinking around.
No, of course not right now, exactly because AI isn't doing it. You have to specifically code for these things.
We can't ascribe it to AI until we code for it, which is what I'd call for.
I disagree with whoever was saying above that this kind of feeling or understanding can come for free from general learning mechanisms, like you just hook an AI to the web & let it run and it "gets it." I think perspective things have to be specifically coded for. I agree they can't be ascribed to current gen AI, but because they haven't been coded for. Do the coding, and you can. And the way of framing the problem tells you how to do the coding IMO.
Take a state rule like "X makes me happy", where happy is tied to expected utility. It's good that you picked a really clear example like this.
I'll use the example of what makes me happiest in the universe, which is to push my fingers into a sheep's wool. (Not kidding. The feeling of squeezing your fingers into sheep fluff is off the fucking hook.) So my system is that feelings are represented at the lowest perceivable atomic level, which is sub-symbolic, waaaay before you can put it into words. You need to represent the whole swirling machine of different feelings. So you first have to have the visual and proprioceptive data that you're in a field, and there's a sheep object within reach. Then you have running through your imagination the satisfaction of squeezing your fingers in the wool, which is primed from past experience, and that triggers the pangs of volition of your hand wanting to reach out to the sheep wool in the sequence of actions it takes to do that.
None of this has reached the level of the language it takes to say "that makes me happy," and other animals share it, like the monkeys that are happy throwing rocks at trees. To get to the language part, you need a whole host of volitional urges that were drilled into you over a long period of language learning with your parents, in school, and through social experience visiting grandparents on the farm and playing with the animals. Then the urge is to utter what, from the system's perspective, are arbitrary syllables saying "X makes me happy," which don't mean anything by themselves. To the agent they only mean something in terms of the urges to utter each word following the different conventions of the language we have learned. E.g., to translate the pleasing urges we feel from imagining pleasing things, we have the convention/urge to use the word "happy." For the situations where we can receive such urges, we use the convention "makes me." And then for the source of the urges, we use the convention of noun-subject, the X... Grammar then gives us a conventional way (a set of instructions) to put those elements together, SVO(NP), "That makes me happy" - and now we get to your starting point, a state where a bot can tell itself "X makes me happy." It's what you end up with after decades of processing and learning, not what you start off with.
Edit: Or I guess to put it another way, there's never a single monolithic "state" register "I'm happy." There is always this swirling universe of different feelings and urges, and the agent is able to express, via different conventions it's learned over its life, that "I'm happy" in a language form, and it's committed to believing what it just said, which we like to treat like a monolithic state, but it's just the wrapping for all this swirling stream of consciousness that the agent has picked out for itself.
Sulphur on 10/4/2016 at 07:03
Phew. Been a while since your post, but it's been at the back of my mind. Just didn't have the time to parse it and respond.
Anyway, yep, I see the context behind your approach to it. In my uber-simple state example, the register of 'happy' itself is tied to quantifiable state descriptions - physical and emotional integrity checks - and to the system's over-arching directives, so it's not an organic, self-willed process.
In contrast, you're looking at a volitional utterance of 'I am happy' being an organic outcome of starting from a base set of sensory data that the system threads together and analyses to inform its 'personality'. Now, what's measured as physical and emotional integrity in my example is where the learning process input you speak about may be required if we're looking at some sort of organic developmental faculties.
But I guess my issue is that the system can't actually do anything with sensory data that parses as 'feeling' unless we specify how the system should feel based on all of that various data - visual, audio, proprioceptive, the whole lot. I think it's clear that an organic response of, say, feeling wool as soft and 'wanting' to run your hand through it requires an emotional trigger - something positive attributed to it that elicits a positive 'stroke'*, say, from performing the action of feeling the wool, even if there's no net positive or negative affect to it. And that sort of special-purpose programming is going to have to be exhaustive for the system to work.
For instance: why do some people have that wiring, anyway? Perhaps the material reminds them of warmth and comfort from huddling under a blanket during winter as a kid, and that experience colours how they perceive it. That's not a scenario a machine would be able to relate to, then, but certainly one that could be programmed in. The question, however, is whether on doing that you can generalise it enough for the system to learn this sort of thing and apply it broadly - which means also describing 'comfort' and 'security' to it. And how do you describe those again, and so on, and deeper and deeper the rabbit hole goes.
*Apologies for applying transactional analysis and psychology to the example; but I think it's useful to have some sort of psychological structure or basis to go with in a topic like this.
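As a rough illustration of how crude that special-purpose programming looks when written down (a toy sketch - the feature names, affect numbers, and the "blanket" memory are all invented for the example): past experiences attach affect values to sensory features, and a new object inherits a pull from whatever features it shares with them.

# Toy sketch of feature-based affect association (everything here is invented).

experiences = [
    ({"soft", "warm"}, +0.8),    # e.g. the childhood-blanket memory from the post above
    ({"sharp", "cold"}, -0.6),
]

def learned_affect(features):
    # Average the affect of every remembered experience sharing at least one feature.
    relevant = [affect for feats, affect in experiences if feats & features]
    return sum(relevant) / len(relevant) if relevant else 0.0

wool = {"soft", "warm", "fluffy"}
pull = learned_affect(wool)
print(f"affect towards wool: {pull:+.2f}")
if pull > 0:
    print("generate urge: reach out and squeeze it")

# The rabbit hole is exactly where the posts above point: who decides what counts
# as "soft" or "warm", where the affect numbers come from, and how far any of this
# generalises to "comfort" or "security".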
Tocky on 11/4/2016 at 02:40
How does one go about programming emotion into a logic system when emotion is itself illogical?
And wasn't this all a Star Trek episode?
Yakoob on 11/4/2016 at 20:46
var emotion = Math.random() < 0.5 ? "happy" : "psychopathic";
;p