Syndy/3 on 10/2/2010 at 08:39
An alien civilization might not be so obsessed with size.
Muzman on 12/2/2010 at 00:54
Kurzweil's stuff is fun in a "Whoah cool man" kinda way, but it's really terribly thin when you think about it. Progress is not a series of converging lines. We can look back at history and say one development was required for subsequent ones, but that doesn't tell us how things are going to go.
The talk treats energy and processing power like resources in an RTS; get more of them and you can do what you need to do. But look around. We have pretty awesome energy and computing power available to us now, which has been spent on things like, say, creating a zillion of the same mobile phone with slightly variant features (and incompatible chargers) so people get new ones year in, year out. This is what we do with our finest technological achievements.
How do we jump that track? Well, of course, AIs will appear and show us the error of our ways and do everything for us. How will they appear? Well, computing power will be so massive by then that they'll be possible. Yeah, but computing power might be so massive by then that we won't need complex AI, as brute force solutions will be just as effective and a lot easier to program.
Kurzweil's brand of extrapolation is kind of mechanically naive, and it has to be because it's so all-encompassing. Which isn't to say he's stupid, but there are a lot of jumps in his work that don't seem all that large to the lay reader but involve almost insurmountable problems and probably require whole new fields of knowledge, technology and techniques. His historical progressions are very convenient in a lot of ways. A closer look at a lot of innovations doesn't show these bursts of movement and paradigm shifts converging toward some point, but weaving lines that miss each other for decades on end, some doing backflips, some vanishing for hundreds of years. Extrapolating from what present technology seems to offer frequently misses the great hurdles, diversions and misreadings of what something can actually do that the past is full of.
DNA is a good example of this; its discovery was supposed to unlock the door to life and our species and let us do all kinds of things. It really sort of didn't, being too complex and delicate to do a lot with at first. It helped our understanding, but it probably wasn't quite as earth-shattering as people were hoping. But that's OK: one day technology would improve, and then, baby, then. The Human Genome Project was going to unlock people, and we'd be right around the corner from reshaping humanity. Well, no. It too helped a hell of a lot and made for great advances in medicine. But what it did more than anything else was point out how little we know. Enthusiasm about the vast potential of the project waned before it even finished (unless you were a geneticist, of course). Instead it made it obvious that, as huge and complicated as the human genome is, it's barely the germ of what goes into making us (well, it literally is the germ, but anyway). This is partly why clones aren't common (and no one really cares that much about clones anymore). If you really want to hurt your brain, look up stuff on epigenetics and protein mapping research. They make DNA seem fairly straightforward.
This goes into a few criticisms of his historical progression anyway:
(http://scienceblogs.com/pharyngula/2009/02/singularly_silly_singularity.php)
There are a few other futurist notions that I find amusing, regarding progress and ET life. One is that once technology reaches a certain level of simulation, an intelligent species will just slap on the VR goggles/plug in and amuse themselves forever. The reason we never find Dyson spheres and whatnot is that they never left home. They're watching TV.
Also: (http://baetzler.de/humor/meat_beings.html) They're Made Out of Meat
Renzatic on 12/2/2010 at 02:54
Quote Posted by Muzman
How do we jump that track? Well of course AIs will appear and show us the error of our ways and do everything for us. How will they appear? Well computing power will be so massive by then they'll be possible.
This is a subject I've always found a bit fascinating. How we're gonna discover AI, how it's gonna discover sentience, how it's gonna totally whup up on us on the intelligence front, and how we're gonna do it within the next 50 years. While I have no doubt we'll eventually create a brilliant AI, I do have my doubts it'll be much more than a very clever machine we use to model the weather or do other relatively singular problem-solving tasks. An actual honest-to-God sentient AI, an AI that's capable of first holding our hand and then eventually discarding us as inferior trash while it becomes an entity in and of itself during the Technological Singularity, is more than likely hundreds and hundreds of years away.
In human terms, I believe the first AI capable of sentience will be akin to an autistic, mute, deaf, blind, and completely paralyzed child. This child will have a brilliant mind tethered to a body that's nothing more than life support for this brilliance. Its world is a dark and silent place, completely walled off from the world outside itself. Or at least that's the case until scientists find a way to feed information directly to its brain.
Now from this information, this child will probably come to form a very rudimentary sense of self based on the information it's being fed. What it is, where it is, and how it came to be. They'll try to teach it the concepts of self, of self-preservation, and creativity. But ultimately, how can you explain such concepts to a creature that's incapable of understanding itself through its own means? How can you explain such concepts when we, who are fully capable and sentient, don't truly understand the concepts ourselves? If you want a basic analogy, it's like explaining colors to someone who's been blind since birth in such detail that they can imagine them perfectly based on the description. We all know what red is. What it looks like, the mechanisms that cause us to see that particular shade as red. But try to explain it. Saying it's an "angry color" isn't really explaining red.
Sentience isn't based on intelligence or computational power alone. It's a combination of mind and body, experiences and communication. To give an AI sentience as we ourselves know it, we'd have to give it as human an experience as possible. And we're surely not within 50 years of doing that. The best we can do in the near future is make a machine that can compute mathematical equations far more efficiently than we ever could, but that lacks our brilliance and creativity.
Edit: Hell, I got another neat idea.
Concepts. That's the way your average human being thinks. In concepts. Even the most hardcore mathematician always bases his work off a concept. Einstein didn't discover relativity by shouting out "345464411...holy shit, that's how light works". No, he had a concept. A beam of light, and how it travels through the universe. The concept was used as the catalyst for the formation of the hard data, and the resulting hard data was used to discover more concepts.
Now take current AI technology. Even the most sophisticated AI is only able to process data. You give it a number, it gives you back another number. You give it a task, it completes the task. It has no true idea of what it's doing, no concept of what it's working towards. Now someday someone will devise an AI that is so amazingly competent at problem solving that it'll be self-sufficient. Whoever creates this AI will probably claim to have built a computer smarter than a human being. Quite simply, it won't be, and I'll tell you why.
Now say I'm asked what an orange is. Well, I eat it when I'm hungry, and it gives me energy. Without this orange I might die. They have seeds in them I can use to plant more oranges. I also kinda like the color and the texture. They smell nice, too. I might like putting a whole bunch of them on a windowsill because they look pretty up there. I also like throwing them and watching them splatter. I might like to throw one against someone's head because that'd be funny.
In short, I have a concept of an orange.
Now this clever AI, based on current computer technology, will see this orange as energy and might store it up. Maybe even as a potential projectile if it's programmed to think of it as such. Thing is, no matter how many lines of code you add to this AI, it still doesn't have the concept of an orange. Just a bunch of mathematical equations that tell it the potential uses of an orange. It isn't able to think beyond this, to the more philosophical side of an orange. Take this and extrapolate it to everything, and you'll see why it'd be hard for computers as we know them to gain sentience, let alone equal the intelligence of even the dumbest human being on the face of the earth.
Anyway, this is my longest post since forever. Take it apart and pick at it at your convenience.
Renzatic on 12/2/2010 at 08:45
No. I've always known what they were since I saw that one Star Trek episode. You know, the one with what's-his-face in it.
doctorfrog on 12/2/2010 at 10:10
Quote Posted by Renzatic
things
I think very little of this post will ever happen, but it does have the virtue of making me want to try playing A Mind Forever Voyaging again.
And also read I Have No Mouth and Must Scream.
At the same time.
OnionBob on 12/2/2010 at 10:59
Yesssssssssss

Raymond Kurzweil's work (like that of most of his starry-eyed teleological futurist contemporaries) has always been rather philosophically boneheaded. Retrograde liberal humanist teleology masquerading as posthumanism. Hmm, I wonder why a man in his 60s suffering from diabetes would constantly assert that human immortality is just around the corner???
Anyway, it's gratifying to see that there are serious problems with his science, too. Too many people (notably the kind of people who won't listen to philosophical or theoretical arguments outside of the current scientific paradigm; the (http://isxkcdshittytoday.com) randall munroes of this world) really think that his work on the human subject is adequately rigorous when it's really very, very teenage.
DDL on 12/2/2010 at 13:19
@Renzatic, your concept of an orange basically IS a long list of physical properties; it's just a much, much more comprehensive list than that of your proposed AI. You essentially have enough information regarding "the nature of oranges" to create a virtual model of an orange in your mind. All it takes is "more processing power" and the AI is eminently capable of the same thing.
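To make that concrete, here's a toy sketch in Python (every name and property value here is invented purely for illustration; this is obviously not how any real AI system represents objects): the "concept" is just a property list plus rules that derive uses from it, and the only difference between your orange and the machine's is how long and well-connected that list is.

# Toy sketch: a "concept" as a property list plus derived affordances.
# All names and values invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Thing:
    name: str
    properties: dict = field(default_factory=dict)

orange = Thing("orange", {
    "edible": True,
    "energy_kcal": 47,
    "colour": "orange",
    "smell": "citrus",
    "contains_seeds": True,
    "throwable": True,
    "splatters_on_impact": True,
})

def possible_uses(thing: Thing) -> list:
    """Derive uses from the property list alone; a longer list of
    properties (and rules) yields a richer 'concept'."""
    uses = []
    if thing.properties.get("edible"):
        uses.append("eat it for energy")
    if thing.properties.get("contains_seeds"):
        uses.append("plant the seeds to grow more")
    if thing.properties.get("throwable") and thing.properties.get("splatters_on_impact"):
        uses.append("throw it at someone's head because that'd be funny")
    return uses

print(possible_uses(orange))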
There's no significant fundamental difference between a human brain and a computer processor running a simulation of a human brain: it's simply that the latter requires a fuckton more power. There's no real conceptual barrier, therefore, to simply generating a neural network that effectively IS a human brain, and which, like a human brain, would be capable of self-awareness (and guilt, lies, and all that good stuff). Your comments re: the input are valid, admittedly, so depending on how well connected it is, it'd be a severely handicapped human brain, but a brain nonetheless (there is in fact a neuroscience group attempting pretty much exactly this).
BUT: it requires a hell of a lot of computing power to accurately simulate a job which we do effortlessly in a much more compact, fleshy lump, because it's simulating rather than "being". Where we simply have glutamate receptors remodelling at the cell surface to reinforce a memory, it has simulations of glutamate receptors being simulatedly remodelled at a simulated cell surface to do the same thing. So it's... well: impractical, essentially.
What will be interesting to discover is exactly to what extent this simulation can be simplified: biology is massively based on essentially random events, stochastic interactions occurring in an environment established to ever so slightly favour a given outcome, rather than directed forcing. In a cell, the problem is generally not really how to make a given reaction happen on demand, but how to stop all other possible reactions happening all the damn time.
If we can get away with simulating a lot of this with simplifications (i.e. instead of modelling an entire glutamate receptor, you can model a box with inputs, outputs and "activity"), the job is easier, but it's still complex. If we can't (and I suspect we won't in a large number of cases), it's a horribly expensive task.
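For what it's worth, here's a toy sketch in Python of what that "box" simplification might look like (all parameter values are invented; a real model would be calibrated against actual receptor kinetics): the whole synapse collapses to a unit with inputs, an output, and one "activity" number, instead of thousands of simulated receptor molecules.

# Toy sketch: a synapse reduced to a box with inputs, an output,
# and a single 'activity' number. All parameters are invented.
class SynapseBox:
    def __init__(self, weight=0.5, decay=0.9, threshold=1.0):
        self.weight = weight        # stands in for receptor density/remodelling
        self.decay = decay          # passive decay of activity per tick
        self.threshold = threshold  # activity level needed to fire
        self.activity = 0.0

    def step(self, input_signal):
        """One tick: integrate the input, decay, fire if over threshold."""
        self.activity = self.activity * self.decay + input_signal * self.weight
        if self.activity > self.threshold:
            self.activity = 0.0
            self.weight *= 1.01     # crude stand-in for reinforcement
            return 1.0              # output spike
        return 0.0

box = SynapseBox()
print([box.step(s) for s in [0.8, 0.9, 1.0, 0.2, 1.0]])  # [0.0, 0.0, 1.0, 0.0, 0.0]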
So there's that.
The other approach, I guess, is some sort of non-biological brain architecture system, something built from scratch (presumably by simulated evolution) according to no predefined rules, which may be capable of creating a self-aware AI using less computational power. But then this of course means we're unlikely to understand how the hell it works, as it would be based on entirely de novo architecture. And whether it would view the world in anything like the same way as organic minds do... fuck knows.
And it'd probably still require a ton of computing power.