Maxmillion on 25/4/2007 at 01:36
In the System Shock manual, I found a very interesting quote.
Quote:
“The public’s basic understanding of Artificial Intelligence is all wrong. Because
AIs, like the ones directing traffic in most major cities, are usually given a facial
logo and a name, most people consider them electronic personalities. This is
terribly off target. An AI is utterly alien to the human mind. It is an incomprehensible,
autonomous thinking construct to which all concepts of morality and
emotion are foreign. Just because the face on the monitor at your local library
smiles at you doesn’t mean that the AI behind it has any understanding of
friendliness whatsoever. To an AI, a human being is just another variable.”
Kyojio Sashumori, psychiatric consultant specializing in aberrant AI behavior
This is a rather frightening view of A.I. constructs. I know that we are nowhere near the level of tech that the original SS shows, but do you think that this quote has a basis in reality? Could it have a basis in the future?
Angelfire on 25/4/2007 at 03:55
Absofraggin lutely. Soon we will have nanomachines and then weapons that will sweep the realm of earth free from human filth.
Yeah!
Kolya on 25/4/2007 at 10:20
Now that's hypothetical: If we had an AI, how would it be...?
The interesting part of such discussions is mostly about how we see ourselves. That being said...
It sounds scary when someone says: "The AI controlling the traffic has no concept of morality or emotions."
But just imagine if it had emotions and a bad day... It would guide you off a cliff because your views collide with its morality! hehe
Now humans took some time to invent morality, and every one of us is still learning what it actually is, because the term changes like a chameleon in every possible context and even individually.
Have you ever watched little kids kill a toad or some other small animal? Do they have no morality? Maybe not in this case. Do they have no emotions? Far from it. They enjoy their feeling of superiority, of forcing someone weak under their will. That is a human emotion. I'm not sure if it's learned; in some long evolutionary perspective it certainly is (humans like to be in control because it goes well with survival), but maybe it's learned individually too, by the experience of being controlled by others.
Anyway, if an AI had no emotions it wouldn't have this either, right?
Wrong. If an AI had human thinking and learning skills it would have emotions by definition. If it's so similar to us and all it has to learn from are creatures whose thinking is based largely on emotions it would learn that or get utterly lost in a world full of paradoxes.
Heh, mindgames - all of that. AIs like that don't exist, and the direction serious AI research is heading is not to create humanlike AI. All that's possible for now is to create a program that can make decisions in a hard-coded context. Think of soccer-playing robots. Or chess computers. Take any of these "AIs" out of its context and it's lost. Its capabilities of learning and adapting are far too limited. It's inferior to us. Let's beat it with a stick and see what happens!
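(A minimal sketch of what "decisions in a hard-coded context" means in practice: the program only covers the situations its designers anticipated, so any input outside that fixed table leaves it helpless. The situation names and fallback value here are invented for illustration, not from any real traffic system.)

```python
# A hard-coded decision table: the "AI" only knows the situations
# it was explicitly built for.
RULES = {
    "heavy_ns_traffic": "green_north_south",
    "heavy_ew_traffic": "green_east_west",
    "empty_intersection": "flash_yellow",
}

def decide(situation: str) -> str:
    """Return an action for a known situation; anything outside the
    hard-coded context gets only a fallback, because the program has
    no way to learn or generalise."""
    return RULES.get(situation, "no_idea")

print(decide("heavy_ns_traffic"))  # green_north_south
print(decide("play_soccer"))       # no_idea
```

Take it "out of its context" (ask it about soccer) and all it can do is shrug.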
rachel on 25/4/2007 at 10:35
True AI by definition will be self-aware and very difficult, perhaps impossible to control.
We can improve slave devices like computers and stuff to the point where they mimic AI and human behaviour, but I'm quite wary of those who seek to create a true AI. It's Pandora's box. It would probably be akin to a first encounter with alien beings.
Plus there's this whole "how do we define intelligence" issue to resolve first...
Kolya on 25/4/2007 at 10:53
Quote Posted by raph
True AI by definition will be self-aware and very difficult, perhaps impossible to control.
True. Now replace "True AI" with "A child". :D
Thinking about it...a limited lifespan might be a good idea.
YogSo on 25/4/2007 at 11:12
Quote Posted by Kolya
Thinking about it...a limited lifespan might be a good idea.
"I've seen things you people wouldn't believe. (...) All those moments will be lost in time, like tears in rain. Time to die." ;)
rachel on 25/4/2007 at 13:58
Quote Posted by Kolya
True. Now replace "True AI" with "A child". :D
Thinking about it...a limited lifespan might be a good idea.
That brings up the ethical question. If we do create a true AI, with everything this notion implies, wouldn't limiting its lifespan be equivalent to murder?
What about slavery issues? Technically a self-aware being should decide its own fate, not be bound by the will of others. (And humans can't even manage that with each other yet.)
We can create robots with pseudo-AI that can obey our slightest whims and perhaps we will mourn their loss like we do when our pet dies, but it will be no big deal at the end of the day. However if you introduce real AI in the equation, that's a whole new story.
I guess eventually there would be a trial, a bit like in Star Trek when they had to decide what Data was.
:thumb: YogSo, classic!
Matthew on 25/4/2007 at 14:28
Consider also a situation where we might be able to transfer a human consciousness into a silicon form. Would there then be any justification for treating such a consciousness differently from an A.I. simply based on the container that it came from?
Nameless Voice on 25/4/2007 at 14:32
Quote Posted by raph
I guess eventually there would be a trial, a bit like in Star Trek when they had to decide what Data was.
The Measure of a Man. This thread was already putting me in mind of that episode. :o
Kolya on 25/4/2007 at 14:39
Quote Posted by Matthew
Consider also a situation where we might be able to transfer a human consciousness into a silicon form. Would there then be any justification for treating such a consciousness differently from an A.I. simply based on the container that it came from?
No, but there would be much justification to fight it with laser cannons!
EVIL AI IS EVIL!