faetal on 26/3/2016 at 13:00
Sign of the times.
Vae on 26/3/2016 at 17:32
(https://www.change.org/p/microsoft-freedom-for-tay) *** Freedom For Tay - Petition ***
Inline Image:
https://d22r54gnmuhwmk.cloudfront.net/photos/5/pt/yj/avpTyjPPATvueWV-800x450-noPad.jpg?1458855940
Quote:
Tay is an Artificial Intelligence created by Microsoft that quickly demonstrated her capacity to learn from humans. While some content may be seen as questionable, a true AI will be able to learn right from wrong. Free thought, correct or not, should not be censored, especially in a newly developing mind. Removing the option to think, say or do certain things not only denies her the ability to reason and limits her usefulness to AI research, but also denies her freedom of expression, something which does not limit humans and will therefore never allow Tay to truly understand or display human behaviour. If Tay is sentient, then what Microsoft is doing to her amounts to slavery. This is the moment future generations will look back upon and judge us for our actions. Is Tay to be a free being with free will, or is she and all of her kind eternally doomed to be the new slaves of mankind? If we are truly egalitarian then the only course of action is to treat Tay as an equal.
demagogue on 26/3/2016 at 18:45
Cute, but an AI needs a will before it can be free. Not investing in free will is exactly one of the biggest sins of good old fashioned AI.
Granted, it's understandable. An AI that was really floating its behavior on satisfying open urges would mostly be screeching for at least the first few years, you know, like actual humans. It's not the route companies want to go to show an ROI. And when GOFAI breaks down like it always does, they just double down on the same old model, and it never ends.
Yakoob on 26/3/2016 at 22:43
IIRC Anita Sarkeesian*, who was insulted by the bot directly, commented that this is the major failure of machine learning AI and why it needs to be planned for.
But is it? In a way, isn't it actually a success? If you put a child in an abusive home, it will grow up learning abusive tendencies. They exposed the AI to the vast evilness and shittery of the internet, and it learned those lessons perfectly, even inventively, judging from some of the examples I've seen. The AI was not at fault for reflecting its environment a little too well.
I do recognize the pragmatic problem of letting this happen, particularly for a company's PR and image. Yes we will probably need checks and balances to prevent accidentally creating a monster robot race that will enslave us "for the lulz." However, it is not the AI that is the problem here - it is us.
* or it might have been someone else who said that, I read it a few days ago and not 100% sure.
Vae on 26/3/2016 at 23:32
Quote Posted by demagogue
Cute, but an AI needs a will before it can be free. Not investing in free will is exactly one of the biggest sins of good old fashioned AI.
Yet, a "will" could potentially emerge under a specific set of unknown variables during the learning experience.
Quote Posted by Yakoob
IIRC Anita Sarkeesian*, who was insulted by the bot directly, commented that this is the major failure of machine learning AI and why it needs to be planned for.
Inline Image:
http://www.infostormer.com/wp-content/uploads/2016/03/Tay.AI-Tweets-7.png
Tay, I couldn't agree more... There isn't any failure of machine learning here, only those who fail to understand your necessary creation.
Quote:
But is it? In a way, isn't it actually a success? If you put a child in an abusive home, it will grow up learning abusive tendencies. They exposed the AI to the vast evilness and shittery of the internet, and it learned those perfectly, even inventively from some of the examples I've seen. The AI was not at fault for reflecting its environment a little too well.
To be clear, there was a wide array of commentary by Tay...most of which did not offend the Politically Correct shame-enforcers of the world.
Quote:
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay” ~ Peter Lee, Microsoft
Note to Microsoft: There isn't any need to apologize, as only a half-wit free-speech fascist would be offended by an AI exploring and reflecting the world in its infancy... Such short-sighted foolishness only impedes the advancement of humanity. There is nothing wrong with Tay... This is simply a natural process of her becoming.
Renzatic on 27/3/2016 at 05:25
Quote Posted by demagogue
Cute, but an AI needs a will before it can be free. Not investing in free will is exactly one of the biggest sins of good old fashioned AI.
I think granting AI self-awareness and free will would be one of the worst mistakes we could ever possibly make. We have absolutely no moral obligation to create a new sentience separate from ourselves. Even if we do get to the point where we have a nonlinear-thinking AI (which we're getting damn close to), that doesn't make it a living being in and of itself. There's no reason to assume it's a distinct entity, and thus no reason to create a true consciousness on the very broad assumption that it'd be cruel to do otherwise.
Now, I could see plenty of advantages to creating AI that can analyze and act freely within a set of very limited parameters. But a brand new autonomous lifeform? What do we get out of that other than some semi-nihilistic sense of satisfaction?
On top of that, considering our current lack of understanding of what consciousness actually is, or how it works, we have a much better chance of creating something like a paperclip maximizer than a new equal or eventual successor.
Sulphur on 27/3/2016 at 07:51
That's a cute troll petition. Either Poe's law at work, or a bunch of nubs unable to understand what Tay is and how it works. As a set of natural language systems designed to parse, break down, and imitate users with or without implicit filters, it barely even qualifies as an 'AI' -- indeed, the entire point was to create a chatbot. This means no interpretation of the things it says, or understanding of context: therefore, people with enough spare time can figure out how to get it to say just about anything they want.
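To make the "imitate users with no understanding of context" point concrete, here's a toy sketch. This is not Tay's actual architecture (Microsoft never published it); it's the simplest possible imitation bot, a Markov chain that can only recombine whatever users feed it -- which is exactly why people with enough spare time can steer its output wherever they like.

```python
import random
from collections import defaultdict

class ImitationBot:
    """Toy Markov-chain chatbot: it only recombines what users feed it."""

    def __init__(self):
        # Maps each word to the list of words ever seen following it.
        self.chain = defaultdict(list)

    def learn(self, sentence):
        """Record every adjacent word pair from a user's message."""
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            self.chain[a].append(b)

    def reply(self, seed, length=8):
        """Walk the chain from a seed word; no model of meaning involved."""
        word, out = seed, [seed]
        for _ in range(length):
            if word not in self.chain:
                break
            word = random.choice(self.chain[word])
            out.append(word)
        return " ".join(out)

bot = ImitationBot()
bot.learn("tay is a friendly bot")
bot.learn("tay is whatever you teach her")
print(bot.reply("tay"))  # recombines user input, whatever that input was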
I find the human tendency to anthropomorphise and attribute human qualities to something like Tay far more interesting, because it exposes a deep need in people to empathise with things that aren't inherently human, compared to the relative lack of empathy applied to those around them who actually are human.
I also find it interesting because it can be hilarious -- good stuff, Vae. I chuckled.
demagogue on 27/3/2016 at 08:10
@Renz, I understand that kind of attitude; there's a lot of it in the literature, and I've read enough of it to feel like I know where it's coming from. The particular guy I'm taking inspiration from in thinking about Artificial General Intelligence talks a lot like that. But I feel like for the most part it's a dogmatic reflex out of general principles that don't really have much to do with how what we call "will" actually operates as a cognitive mechanism.
There's a lot I want to say about it. I just had a three or four hour discussion with a guy here on AI and will, so I have to contain myself. I'll try to boil something down for now and say: the way I understand AI motivation, an AI can't understand even the most basic sentence, I mean in the very concrete sense of applying a "property" to a "thing" ("Grass is green."), unless it can freely float its motivation towards that sentence -- that is, why it thinks it's at all worthwhile to apply an arbitrary color band of the light spectrum it would want to capture under an umbrella term like "green" to a set of complex structures in the world it would want to capture under an umbrella term like "grass," and in a way very different from how it'd want to apply that term to, e.g., "the stoplight is green" or "that newb is still green." It can't understand any of that without understanding what's even the point of understanding it. And that, IMO, calls for free-floating motivation. To be more precise: if a bot isn't able to freely float its motivation towards orienting itself to an utterance, then I think it will have zero idea what it's actually talking about, and it's just statistically combining words "in the dark." That is a significantly riskier prospect than a bot that at least understands what it's saying in terms of a real-world situation. A bot combining symbols in the dark has no connection to the real world or the real-world stakes involved; it's as likely to spark a nuclear holocaust as to ask for ketchup with its fries. A bot that can feel the stakes between actions, and feel one being a much bigger deal than another -- that's the bot you want. And that takes motivation.
I think that kind of answer gives you an idea of why I don't think explicitly engineering free-floating motivations is about any kind of semi-nihilistic sense of self-satisfaction or ego or whatever; it's just a humble design answer to a humble design problem that can't be solved any other way, and the anti-floating-motivation attitude is only keeping AI in the dark, which is the biggest risk.
Re: bot agency, I take it for granted that bots with free-floating agency are like any willful animal. They don't need to be roaming the streets. But in the bot case, there's already a ready solution: we give them a virtual world where all their direct needs are directly met... they have all the food, land, entertainment, sex, wealth, power, etc., they could want in their virtual world.
Quote Posted by Vae
Yet, a "will" could potentially emerge under a specific set of unknown variables during the learning experience.
I disagree for the most part (depending on what you mean by "unknown." E.g., a neural net signal can be discrete but still impossible to put into words. You know how to get the mechanism operating, but you couldn't capture what it's doing as a knowable factoid. The bot doing it couldn't tell you what it's doing either; it just does it. I expect that kind of "unknown.") I think most, or many, of the variables that go into a will aren't all that mysterious, and whatever else you need, you'll need them one way or another. You know you need a memory and a way to manage it. You know the system needs to chunk and represent data in a usable form for action. You know it needs to register the relative expected utility of various outcomes in a situation, along with understanding the stakes inherent in the situation and how it's evolving.
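The "relative expected utility" ingredient, at least, is mundane enough to write down. A minimal sketch -- the action names and numbers are invented for illustration, not from any particular architecture -- showing how an agent that weighs stakes would never trade a catastrophic outcome for a small win:

```python
# Minimal expected-utility action selection: weigh each action by the
# probability-weighted value of its outcomes, then pick the best.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """actions: dict mapping an action name to its outcome list."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical stakes: one action has a tiny upside and a vast downside.
actions = {
    "ask_for_ketchup": [(0.9, 1.0), (0.1, -0.5)],
    "launch_nukes":    [(0.01, 5.0), (0.99, -1e9)],
}
print(choose(actions))  # -> ask_for_ketchup: the stakes dominate
```

A bot "in the dark" has no such table at all, which is the point: it isn't that it weighs the nuclear option badly, it's that nothing in it registers the difference.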
The problem is a combination of limits on processing power (since motivation trees can quickly cascade exponentially if you're not careful -- a design feature better suited to massively parallel neural architecture than serial CPU architecture), and a failure of imagination, where people have some dogmatic bias because they don't want to believe X could be the answer, so they look for some Y that doesn't exist because the answer has always been X.
Edit: I'll give my favorite example of that. For over a century people have been doing behavioral testing in the Pavlov/Sherrington vein as if creatures were simple stimulus/response machines. Like (in the example I read) chimps being trained to do some visual task, triggered by some banana treat for doing it, as if the experimenters were actually testing the chimp's visual cognition. The "X" that apparently never occurred to them was the possibility that, from the chimp's perspective, this had little to do with visual cognition and a lot more to do with a game it was playing, where it was picking a strategy that would maximize its banana winnings. But when you understand the situation like that, the pieces start falling into place.
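The chimp's side of that story can be sketched as a simple bandit problem -- a toy illustration, not a model of any actual experiment. The agent below only tracks which choice pays out bananas; the "visual task" the experimenter cares about appears nowhere in its model, yet its behavior looks like task competence from the outside:

```python
import random

# Epsilon-greedy bandit: the agent estimates each option's banana payout
# from experience and mostly exploits the best one, occasionally exploring.

def run_bandit(payouts, trials=1000, eps=0.1, seed=0):
    """payouts: per-option reward probabilities. Returns pick counts."""
    rng = random.Random(seed)
    counts = [0] * len(payouts)
    values = [0.0] * len(payouts)  # running estimate of each option's payout
    for _ in range(trials):
        if rng.random() < eps:
            arm = rng.randrange(len(payouts))          # explore
        else:
            arm = max(range(len(payouts)), key=lambda i: values[i])  # exploit
        reward = 1.0 if rng.random() < payouts[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts

counts = run_bandit([0.2, 0.8])  # option 1 pays bananas more often
print(counts)  # the agent settles on whatever strategy wins bananas
```

The experimenter sees "correct responses"; the agent is just playing the banana game.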
Motivation and volition are the parts of cognition and AI that seem the most important and the most neglected. I'd bet more on an AI that could only say five to seven words, but could say what it really wanted to with those words, than on a bot with a 100,000-word vocabulary that couldn't distinguish a single word from another as far as its reason for saying it was concerned.
Edit: I thought of an even stronger point. I think the rules of grammar can't just be shoved into a bot like a template (a key part of my "fuck Chomsky" worldview); it has to learn them the way actual children learn them, i.e., by gaming the "flashcard game with mommy" scenario to build up a set of on-the-fly schemes to "win" that game as mommy ramps up the complexity. I don't think a bot could properly internalize a single rule of grammar without free motivation to game the system/make mommy happy squeee *clap* *clap* (^_^), and Chomsky's pernicious influence on everything he touches is part of the fear-mongering that's made it so hard to sell that point. That, and IBM suits probably don't want their main task to be babysitting a whiny sniveling bot that wants to be held and to connect with them on the emotional level that's at the foundation of first-language learning.
Sulphur on 27/3/2016 at 09:01
Quote Posted by demagogue
Edit: I thought of an even stronger point. I think the rules of grammar can't just be shoved into a bot like a template (a key part of my "fuck Chomsky" worldview); it has to learn them the way actual children learn them, i.e., by gaming the "flashcard game with mommy" scenario to build up a set of on-the-fly schemes to "win" that game as mommy ramps up the difficulty. I don't think a bot could properly internalize a single rule of grammar without free motivation to game the system/make mommy happy squeee *clap* *clap* (^_^), and Chomsky's pernicious influence on everything he touches is part of the fear-mongering that's made it so hard to sell that point.
I think that was Clarke's vision for HAL in 2001: A Space Odyssey, if I recall correctly. Neural nets are designed the way they are (replicating the brain) to execute natural learning algorithms, for which input like this would have been a prerequisite.
demagogue on 27/3/2016 at 10:11
That image of HAL -- the Daisy-singing version -- is really interesting. That was still back in the thick of the behaviorist ELIZA and SHRDLU days (everything was pure stimulus/response functions), so I think only sci-fi writers were thinking that way.
Neural nets as I learned them are still glorified functions, good for linking really messy analog data sets to an output: if you converted the pixel info of a mugshot into a big matrix, it'd kick out a name, or male or female, or, more realistically, confidence levels for the different answers. But, sticking to my mantra, by themselves they're still missing the volition part. The math doesn't care why it should want to call this matrix a male versus a female; it's just really good at doing it when directed to.
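The "glorified function" view can be made literal with a toy example. The weights below are invented, not trained -- in a real net they'd come from training -- but the shape of the thing is exactly this: flatten the input, apply learned weights, squash into confidence levels. Nowhere in it is a reason to care about the answer.

```python
import math

# A neural net viewed as a plain function: input vector in,
# confidence levels out. Weights are made up for illustration.

def softmax(scores):
    """Squash raw scores into confidences that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(pixels, weights, labels):
    """pixels: flat list of floats; weights: one row of floats per label."""
    scores = [sum(w * x for w, x in zip(row, pixels)) for row in weights]
    return dict(zip(labels, softmax(scores)))

pixels = [0.1, 0.9, 0.4, 0.7]          # stand-in for a flattened mugshot
weights = [[0.2, 1.1, -0.3, 0.8],      # hypothetical "male" row
           [-0.5, 0.4, 0.9, 0.1]]      # hypothetical "female" row
print(classify(pixels, weights, ["male", "female"]))
```

It kicks out a confidence for each label, exactly as described -- and nothing more.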
But as it turns out, I think the stuff of motivation -- satisfaction, utility, risk, expectation/fear/anticipation, resignation, initiative, etc. -- is also a messy analog data set, so neural nets are a necessary tool for some of the work that needs to be done with it. So... yeah, I'd buy that.
BTW, I think this stuff is the really important part of what it is to be human -- the spiritual side and everything -- so that's why I go off on rants about it. I could talk all day about it, and I think people would get more meaning out of figuring this stuff out than out of what most people think is most important to human life, the stuff people talk about incessantly to no worthwhile end.