Vivian on 3/2/2011 at 22:20
Quote Posted by Sg3
What sorts of people?
Well, not specialist mathematicians, but control engineers, biomechanists, a string-theory PhD, and high-level network geeks at work (social, neurological, etc., not LAN). People with working familiarity with limit cycles and other kinds of complex manifold shit (mainly to do with stability), so people who should have at least a pundit's idea of what's going on. Like I said, I found the idea pretty uncomfortable, and I still don't really get how randomness would work (although I guess you can never predict the outcome of measuring a quantum particle beyond a few choices - is that what you would call unpredictable?), but the aforementioned smart people seemed to believe in it.
Sg3 on 3/2/2011 at 22:27
If you know any of them personally, do you think that they might be capable of having "denialistic" tendencies toward the idea of not being in control of their own fate? That's what I've often felt when having that sort of discussion; that is, intelligent people with whom I've spoken on the subject of "true random" have, as far as I can tell, always displayed a desperate sense of "But I can't be that predictable!" I think that they may be starting with an emotional bias towards "free will" and against determinism, hence their certainty (which seems odd in the face of the comparative void of evidence) that there is not even a theoretical way to calculate the complex factors which we can't account for.
demagogue on 3/2/2011 at 22:28
If we're really going down this track, I'd like to point out we're not actually talking about the mind or free will or humans anymore, and are getting into the physics of parallel universes, which is a completely different topic.
If you want to get back to the topic of minds and free will, a good starting point isn't parallel universes that are molecularly identical, and how possible that is. I'd start with the observation that brains don't have to be made out of neurons to function the same.
So think of two brains, one made of neurons, one made of high-speed parallel quantum processors, fed with the same inputs & functioning the same. Basically, somebody (
http://www.nytimes.com/2010/12/28/science/28brain.html) maps a human brain down to every synaptic connection & Hebbian weight, makes an exact computer analog, then feeds the computer the same signals as go into the human brain.
In this case, if the logic functioning of every synapse & weighting is exactly the same, then in fact you'll get the same "experience" and "behavior". Ideas like this are why functionalism is now the reigning paradigm for minds and not physicalism (where it depends on the specific neurons firing, or neurons as a physical type). This is a fantastic and profound insight, I think.
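To make that concrete, here's a deliberately tiny sketch (the weights, threshold, and inputs below are invented for illustration; a real connectome is astronomically bigger). The "functional organization" is just a weight table plus a firing rule, and two different implementations of that same organization give identical behavior for identical stimuli.
Code:
# Toy illustration only: a made-up "connectome" (weights + firing rule)
# realized by two different implementations that behave identically.
WEIGHTS = [
    [0.9, -0.4, 0.2],   # hypothetical synaptic weights, one row per unit
    [0.1,  0.7, -0.8],
]
THRESHOLD = 0.5

def brain_a(inputs):
    """One realization: walk through units and inputs with explicit loops."""
    outputs = []
    for row in WEIGHTS:
        potential = 0.0
        for w, x in zip(row, inputs):
            potential += w * x
        outputs.append(1 if potential > THRESHOLD else 0)
    return outputs

def brain_b(inputs):
    """A different realization of the very same functional organization."""
    return [int(sum(w * x for w, x in zip(row, inputs)) > THRESHOLD)
            for row in WEIGHTS]

stimulus = [1.0, 0.5, -1.0]            # the "same signals" fed to both
assert brain_a(stimulus) == brain_b(stimulus)
print(brain_a(stimulus))               # identical behavior either way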
Edit: It means, first, that these debates about chaos or quantum divergence in parallel universes, or molecular doppelgangers, are red herrings, I think.
As for the "freedom" or "free will" debate, functionalism is useful in refocusing our attention away from physical determinism, which is again somewhat of a red-herring, and back to its functional roots. The key issue with free will IMO isn't that my actions are exactly determined by the physics of my brain (I would hope my will was exactly determined be me! If it was just random and not determined it wouldn't be free at all!)
The key issue with free will is that the forces determining your will can fairly be said to be "you", again another functional system that rests above (and is instantiated in) the level of the physical substrate and could in fact be substantiated in different physical systems. The "free will" debate is actually the "personal identity" debate in a different guise.
Sg3 on 3/2/2011 at 22:32
Demagogue, I perceive that you just made a highly insightful observation on the subject, and as it is I am extremely interested; however I lack the understanding to comprehend some of the terms and phrases you used.
... oh dash it all, I'm done pretending to be smart: lemme rewrite this post in plain terms:
Demagogue, that's really interesting, but I don't understand some of the words and phrases you used. In particular, I don't know what you mean by the following line:
Quote:
ideas like this are why functionalism is now the reigning paradigm for minds and not physicalism (where it depends on the specific neurons firing, or neurons as a physical type)
Do you think that you could translate some of that into Dumb?
Renzatic on 3/2/2011 at 22:34
Quote Posted by Sg3
Choices based on what? Why?
On whatever. Are all choices binary? Have you ever been faced with having to choose between five equally appealing decisions? In that situation, why did you choose one over the other? Did you actually think, or was your supposed self-made choice merely a predetermined knee-jerk reaction based on the ebb and flow of the world around you?
Goddamn Demagogue making me google shit.
demagogue on 3/2/2011 at 23:15
Ok, ok... I did my honors thesis on this so I have a tendency to jump ahead.
Up through the 1940s-50s, the reigning paradigm in philosophy of mind was called "physicalism", which identified a mental state with an exact arrangement of neuron firings. "This thought" = "these exact neurons firing". Very influenced by a stimulus-response model of the brain.
That didn't last long: in the late 50s-60s people started realizing that most of the neurons in the brain are very similar (there are just a few types), and that similar neurons can instantiate very different types of experience. So maybe it's not "this exact neuron" = red. The old theory got renamed "token physicalism", and a new theory called "type physicalism" took its place, where it wasn't "these exact neurons" = red but "neurons of this type" = red; or, neurons generally can equal red if they're arranged this way or are of this type, and blue if arranged that way or are of that type.
But then the same criticism that killed "token physicalism" attacked "type physicalism" in the late 1960s. First, it seemed even the types of neurons were very limited, certainly not a different type for every experience. They noticed experiments where, after brain damage, cognitive work would migrate from the damaged neurons to completely different ones, and they had thought experiments where you replace neurons with computer chips doing the same job, one by one, until you've replaced the entire brain, and you realize nothing has fundamentally changed. The new brain must be just as conscious as the original.
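That replace-one-at-a-time thought experiment is easy to caricature in code. A minimal sketch, with everything below (the weights, the threshold, the "chip") invented for illustration: swap each unit for a different implementation computing the same input-output function, and the network's behavior never changes at any step.
Code:
# Toy replace-a-neuron-with-a-chip experiment; all numbers are made up.
THRESHOLD = 0.5

def make_neuron(weights):
    # stand-in for a biological unit: weighted sum, fire if over threshold
    def unit(inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) > THRESHOLD else 0
    return unit

def make_chip(weights):
    # the "chip": built differently, but computes the same function
    def unit(inputs):
        total = 0.0
        for w, x in zip(weights, inputs):
            total += w * x
        return int(total > THRESHOLD)
    return unit

def run_network(units, inputs):
    # one feed-forward pass: every unit sees the same input vector
    return [u(inputs) for u in units]

weight_rows = [[0.6, -0.2, 0.3], [0.1, 0.8, -0.5], [0.4, 0.4, 0.4]]
stimulus = [1.0, 0.5, -1.0]
units = [make_neuron(w) for w in weight_rows]
original_behavior = run_network(units, stimulus)

for i, w in enumerate(weight_rows):      # swap units out one by one
    units[i] = make_chip(w)
    assert run_network(units, stimulus) == original_behavior
print("fully replaced, behavior unchanged:", run_network(units, stimulus))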
These kinds of ideas led to a new paradigm called "functionalism", which says the physical substrate of neurons isn't the relevant part at all. It's the functional system as it operates in the substrate, constrained by the substrate only through a few physical limits. You could instantiate the "mind" in entirely different substrates -- in silicon-chemistry neurons, in computer processors, etc. -- and ontologically speaking it would be the same mind, with indistinguishable consciousness.
This theory has only gotten more support since then, and has dominated in the cognitive science era since the 1990s. It gets inspiration from computer science, where the "mind" is the software that runs on the physical "brain" computer. You have to be careful with that analogy, and there's more to it than that, but it's a good heuristic for getting the debate going.
One state-of-the-art theory of functionalism that I like involves topos theory. "Topos" is just Greek for "place", in this context referring to a physical substrate (in a computer it'd be the physical processor with electric signals, 1s & 0s, flying through it). The theory is a system of logic where you have an underlying "topos" (in this case the physical substrate of the actual neurons firing) carrying almost autonomous signals that float freely and talk to each other at their own level, and that could be instantiated in other physical systems. (Unfortunately I don't think we know yet what an actual "signal" in neurons is: the output of one neuron, or of a column of 10,000 of them?)

In the theory you can even have multiple layers of signals floating on the same topos that contradict each other (one level saying it sees "black", another, higher level saying it sees "white"), and it's all logically consistent and instantiable as long as none of them contradict the underlying topos; that is, they're only ultimately subject to the physical limits of whatever they are instantiated in (e.g., chemical-electric signals can't be processed faster than their physics allows, and if you re-instantiated the system in another physical substrate, e.g., a qubit quantum computer, it would inherit that substrate's limits instead).

But the basic punchline is that it lets you look at mental states and consciousness without having to worry as much about how they're physically instantiated in the neurons, as long as you know the functional organization, and it naturally deflates a lot of apparent paradoxes people sometimes worry about.
However, there are some open problems with functionalism, too, so the debate isn't over. One big one I mentioned in an earlier post asking other questions: it's not clear where in a functional system the consciousness or thought actually happens. You could theoretically have 100 billion people with flags "instantiating" a brain seeing red, but where in that mass of people is the "experience of red"?
Some theories that try to answer this problem: "supervenience", which says the experience hovers above & beyond the substrate (whatever that means), possibly in the functioning itself(?); "epiphenomenalism" & "property dualism", which say the experience hovers completely in its own physical domain, affected by the substrate (normal physics) but unable to affect the substrate back (so something like a causally closed, as-yet-undiscovered new domain of physics); or, going the other direction, "reductionism" or "eliminativism", which say that experience is a grand illusion: it's nothing beyond the operation of the function itself, and the two are one and the same. No need for new physics.
All these really are weird terms, and they are weird theories. None of them feels very satisfying. Then again, when quantum physics was first developing, it was very weird too (it still is).
Functionalism is a cool theory, but it raises big questions. But honestly even the questions and problems it has are cool, and I hope people come up with answers that raise even cooler questions.
SubJeff on 4/2/2011 at 00:34
I totally reject functionalism because I think it's something in the organic nature of the brain that lends that particular substrate its particular properties. Type physicalism is flawed, imho, because replacing any part of the organic computer (the brain) with an inorganic one will alter the properties of the computer, even if it appears that the input and output are handled in the same way. It's this I take issue with:
Quote Posted by demagogue
So think of two brains, one made of neurons, one made of high-speed parallel quantum processors, fed with the same inputs & functioning the same.
Unless they are made of the same material I don't think they can function the same. There will come some point when enough of the organic is replaced that the computer will cease to function in the same way, but I'm sure we'd never be able to measure it. Until it was too late and we had a killer on the loose!
And reductionism scares the crap out of me because, well, wtf? On one hand, perhaps it's a sub-function of the personality function to reject it. Or perhaps I'm paranoid. On the other hand, if we were created that way (according to reductionist rules), perhaps the Creator built that doubt in on purpose. Either way it's scary if you think about it too much.
DON'T TAKE THE GREEN ACID
:p
Phatose on 4/2/2011 at 02:34
Strictly speaking, will two neurons ever truly be functionally equivalent? There are an awful lot of molecules in even a single neuron, and two neurons will never have exactly equal configurations. I'd expect that so much as a single hydrogen ion out of place will eventually cause one biological neuron to function differently than another.
Each brain is like 100 billion neurons, all of which are slightly different. If there's no higher level of abstraction, why are we allowed to generalize that all brains have minds?
How would we even test the non-organic neuron for equivalence? There are something like 4e20 neurons in the brains of humans on this planet alone (rough arithmetic below). Does it have to function differently than all of them?
If different function is enough to disqualify it from being a mind, why don't the differences between brains disqualify them?
What particular properties are shared between all those different neurons which couldn't be emulated by a non-organic machine?
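For what it's worth, here's the rough arithmetic behind that ballpark. Both inputs are loose, commonly quoted estimates rather than measurements, so treat the result as "a few times 10^20" and nothing more precise.
Code:
# Back-of-the-envelope: total neurons in all human brains on the planet.
neurons_per_brain = 1e11      # ~100 billion, the figure used earlier in the thread
people_on_earth = 7e9         # roughly the 2011 world population
print("%.1e" % (neurons_per_brain * people_on_earth))   # ~7e20, i.e. a few times 10^20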
demagogue on 5/2/2011 at 03:06
Quote Posted by Subjective Effect
I totally reject functionalism because I think it's something in the organic nature of the brain that lends that particular substrate its particular properties.
This is a respectable position; it's John Searle's position in the book The Rediscovery of the Mind, though I wish he'd gone into more detail about what is actually special about the carbon-based organic part. I was never clear on that part.
I'm sympathetic to its vibe of respect for nature and humility, in contrast to some functionalist arguments with a bit of hubris in thinking it must be so easy to make a brain and consciousness if only we wired it right. (Though you could argue it's another kind of hubris to think human brains are so special they just must have a special domain of physics all to themselves, beyond our engineering capacity.)
Anyway, three things come to my mind... (I'll try to keep them short!)
(1) Re: "Unless they are made of the same material I don't think they can function the same." To be clear, are you thinking about this as a fundamental law or just as practical engineering?
Are you saying functionalism is technically right: it's the functional organization doing the actual work (so any substrate organized the same way would theoretically hold consciousness), but it'd be practically impossible to engineer it carefully enough outside of actual brains? Or that functionalism is fundamentally wrong: it's not the functional organization at all, but some irreducible physical property of the stuff after all that's doing all the work, as a fundamental law (possibly with new physical properties we have yet to discover)? I don't know if we even know enough yet to answer that, but do you have a gut feeling about it?
(2) Now there are models of non-carbon-based biochemistry, and even real-world examples like the chemistry around ocean vents. If you want to say that only earth-type carbon-chemistry brains can have consciousness (or maybe including a few other "natural" organic chemistries?), then first, doesn't it seem like a huge coincidence that the chemical composition of earth and then X billion years of evolution just happened to stumble on *exactly* the right combination to instantiate consciousness, but other types of chemistry in other evolutionary trees, or human-made engineering, can't?
(3) And then, getting down to brass tacks: a normal neuron signal is made up of three parts. (1) Incoming neurotransmitters trigger a build-up of electric potential in the cell body. (2) When that reaches a threshold, ion channels open and sodium ions stream inside, changing the local electric potential, which opens channels farther down the axon, cascading all the way to the ends. (3) Then, at the axon terminals, it releases vesicles of neurotransmitter into the synapse, starting the cycle over.
It's all efficient at what it does, but as a matter of physics there's nothing particularly special about it that Maxwell couldn't have described 150 years ago. It doesn't seem like there's anything going on that electrons streaming across gold atoms couldn't do just as well, or any of a billion other ways to carry a signal from eyes, to decisions, to muscles (the toy sketch below tries to make that functional core concrete). This is similar to Phatose's point too.
There are people who like to say there are other kinds of signaling going on, like quantum effects or some mysterious other physical force involved in the signaling, but I don't know why you'd need all that extra physics when plain old vanilla electromagnetism seems to do all the work just fine. (Not saying we might not be surprised, but they've been studying the physical mechanism of neurons for a long time now, and AFAIK they haven't really needed extra physics yet to get the job done.)
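To make that "sums and a threshold" skeleton concrete, here's a deliberately crude leaky integrate-and-fire caricature of the three steps above. All the constants are invented and real neurons are far messier; the point is only that the functional core is bookkeeping any substrate could do.
Code:
# Crude integrate-and-fire toy: build up potential, cross a threshold,
# fire, reset. Constants are made up; this is the functional skeleton only.
THRESHOLD = 1.0
RESET = 0.0
LEAK = 0.9                    # potential decays a little each time step

def run_neuron(incoming_currents, potential=RESET):
    """Feed a sequence of input currents; return the resulting spike train."""
    spikes = []
    for current in incoming_currents:
        potential = LEAK * potential + current   # step 1: build-up of potential
        if potential >= THRESHOLD:               # step 2: threshold crossed, gates open
            spikes.append(1)                     # step 3: fire ("release transmitter")
            potential = RESET
        else:
            spikes.append(0)
    return spikes

print(run_neuron([0.3, 0.3, 0.3, 0.6, 0.1, 0.9, 0.9]))   # -> [0, 0, 0, 1, 0, 0, 1]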
Good fodder for debate, thanks.
Phatose on 5/2/2011 at 05:20
Did Searle ever refine his position after "Minds, Brains, and Programs"? I thought it was terribly unconvincing there, and the causal powers he was constantly going on about practically translated to "A wizard did it."