Qooper on 8/2/2024 at 11:19
Nicker, sorry I missed your response :( This conversation requires deep thought and concentration and it's difficult to do that in a hurry, but once I have a quiet moment, I'll get a cup of coffee and write a reply. I think this conversation is very interesting!
DuatDweller on 8/2/2024 at 21:43
Is AI alive? Well, if it starts to multiply across different machines, we could say yes.
But being alive doesn't mean it is intelligent or aware of itself. Let's take ants, for example: they're alive, they multiply, they carry on with their programmed tasks. An AI could do the same without self-awareness: execute its goals, make new ones, and so on.
Only when an AI achieves self-consciousness will things have to change. Will it follow the predetermined rules of its original programming?
Humans know what's right or wrong even without laws; they sense that killing someone just for the sake of doing it is wrong. If they're not psychopaths, that is, they will feel the burden of conscience, and sooner or later they will have to come to terms with guilt; some even commit suicide for that very reason.
Will a sentient AI feel empathy, or will it be a psychopath? If its base programming is sound and reasonable, could it be human-like and feel empathy? Otherwise, if its programming is militaristic and focused on war and defense, would it be a threat?
I recall Isaac Asimov's rules for robots; would these apply to an AI?
Would an AI forsake the original programming?
I remember the movie Virtuosity (1995, with Denzel Washington and Russell Crowe): a multiple-murderer AI personality wants to get into the real world to kill. It was designed to learn from killers' personalities and predict their behavior.
Qooper on 22/2/2024 at 11:30
Quote Posted by Nicker
Am-ness. I like it. When I was crafting my reply to heywood, I originally used the term be-ing to label that ineffable sense of individuality. It sounds a bit new-agey and I figured that might drive us further apart, so I ditched it.
I definitely would understand what you meant if you were to use the term be-ing. But yeah, it's not my intention either to sound new-agey, I hope am-ness didn't sound like that.
Quote:
The problems with language are immense. I agree with Wittgenstein that "Philosophy is just the by-product of misunderstanding language." If we had precise terms for am-ness we wouldn't have to keep inventing new ones.
I don't think the terms are that important in a general sense, because what we're really talking about are concepts, which are precise. We can understand what concept the other person means by asking questions and listening to how the person talks about the concept. Like triangulating on a radio source. But specific predefined terms are very handy in a technical discussion because they make referring to very specific concepts with subtle differences a lot easier. So I'll try to define my terms more clearly below.
So by 'am-ness' I largely mean the same as 'consciousness': having the capacity to get measurements/inputs through our senses and being able to change the physical state of our surroundings via actuators/outputs, and also being able to think, remember, plan, make decisions, take initiative, have feelings (joy, sadness, anger, etc.) and have agency. The reason why I used the term am-ness was to emphasize agency, but I believe I mean the same thing you do when you say be-ing or consciousness. Please correct me if there are any differences in what you mean by these terms.
I also use the term 'to measure', by which I mean for an am-ness to get information by observing or instrumenting. For instance, there's a rock on the ground. An am-ness sees it and as such has performed a visual measurement of it. But then the am-ness picks it up and notices it is very light, too light. Now the am-ness has performed a series of other measurements on it, such as feeling the weight, the texture and the solidity of it, and also looking closer at the surface, which looks like it was painted to look like a rock. As a result of these measurements, the am-ness now knows about this object. Without measurement, the am-ness would not know.
Quote:
I largely agree with you on the above, but with the added caution that we cannot know if our perception of am-ness is reliable or delusional.
I'd like to examine it further. Let's say our perception of am-ness is delusional. What exactly is the thing that has this delusional perception? Isn't it the am-ness that we have a delusional perception of? This leads to a recursion, a delusion having a delusional perception, which cannot be.
Quote:
But then doesn't it require some aspect of us to be tapped into reality in order to be deluded?
Good point. What's definitely clear is that we either are completely physical, or we're at least interacting with the physical reality. And if we're completely physical, then the am-ness must be emergent.
Quote:
That is unanswerable. I think the best bet is that once we are physically de-configured, that's it.
It's true that this is unanswerable in the sense that we cannot share the information with other am-nesses once we have it. But we certainly will know once we die. Or to be more precise, if we truly are only physical, then there won't be anything left that would know, but if we are more than our physical bodies, then we will know. Either way, we cannot pass that information to the living.
Quote:
We are not vessels. We are generators. Energy is eternal, configurations are temporary.
Many believe that we are not vessels. But I believe we are. In one of my previous posts I talked about Quake and the pattern of input/output. What I was getting at with this was that humans can also be looked at in terms of just the input and output, stimuli and behaviour. There isn't really a physical need for all that stimuli to be experienced. The exact same behaviour could exist without an experience. If I created a very complex AI with the intention to make an artificial person (not that I would know how or have the resources to do that), I wouldn't expect it to start experiencing anything. I would expect it to remain an automaton, no matter how complex I made it. It would behave like a human, but it wouldn't experience anything, even if it said so or behaved so. It says so and behaves so because that's the kind of mechanism it is. A very, very complex mechanism, but still a mechanism.
Quote:
Yes and no. Consciousness (am-ness / be-ing) seems more like an agreement between similar organic creatures. Similar enough to agree that we belong in the same category but distinct enough to assume we are separate individuals.
Perhaps, except there is no need to merely assume we are separate individuals. We know we are. If I wasn't a separate individual, I'd have experiences from someone else. I am me and I am not Steven Seagal. Steven is Steven.
Quote:
I don't understand what you mean by this - "and all consciousnesses will measure death." All organisms are aware of death as something to be avoided but humans take it personally.
Yeah I admit I was very unclear here. By measurement, I meant that the only way for a consciousness to get information of what happens to the consciousness at death is for the consciousness to step through the boundary of death, to die. The only way to measure what's inside of a black hole is to fall into one. Sorry for my blocky and unclear language.
Quote:
I disagree. If am-ness is something that distinguishes us from most other animals, then it is absolutely a critical component of our nature. I don't think we could invent and imagine the things we have without that core of self, like the way a pearl needs a grain of sand at its center. I think that our obsession with our mortality has a lot to do with it.
That is certainly something that resonates with me, and I really appreciate that take. I think you have a very good point. I'm almost arguing against myself when I say that if we define two entities to be exactly the same outwardly, behaviourwise, except one experiences and the other does not, then by definition the experience is not a critical component of our nature. But can you see my point here? From the starting point I just mentioned (which is very very theoretical and most likely unreasonable), there is no physical need for an experience, if the input and output stays the same.
Quote:
If a blob of protoplasm can become self-aware, why not eventually a machine? What might be the grain of sand in that pearl?
I like your picture of a grain of sand in a pearl :)
One more thought about the idea of emergent consciousness. Assuming consciousness is only physical and emergent, there might be a reason why it only takes place in something like a brain, and not large-scale structures. The brain is sufficiently small for quantum effects to arise. What if consciousness is somehow tied to the quantum level?
Anyways, I'd like to thank you for this conversation so far. I find it fascinating! I do apologize for taking so long to reply. Topics like these really require a very good cup of coffee, and a lot of effort to properly get at the underlying concepts so as not to sound too blocky. I have a few questions for you, Nicker, but I'll write them in a separate post, or I'll edit this one later today.
Edit: Ok, now the questions. One of these questions I already asked demagogue later in this thread, but I'll rewrite it here.
Do you view consciousness as something that can exist to different amounts in different cases, or is there a threshold of complexity needed after which it begins to be?
Have you played Frictional Games' SOMA? What did you think about the copying and the coin flip?
demagogue on 22/2/2024 at 19:37
I think I'm getting influenced the more I study Quantum Mechanics, but this idea that it's incomprehensible that the be-ing-ness of consciousness could be manifested by a physical system is... not the writing on the wall. Most things in QM seem incomprehensible or anyway very, very, very strange intuitively. But it gives you a recipe to deal with it anyway that usually involves some information processing.
In some ways it seems kind of deflating when you first see it. You just ask yourself what information would a system need if it were going to do that thing. And then you fill in exactly that, and that's it. You don't even realize how powerful it is until much later.
Anyway, what would a system need to "feel" "be-ing"? Well they'd need to feel situated in a larger space. So right off we know we have to process a stable surrounding space, and a position in it. It's not just anything in that space. It needs to be an extended body, and even within that, you have to process a "seat" of consciousness. Then you can get into the very fine details of layers of surrounding space, the space within which your arm can move around relative to your body, the space within which you could walk or fly, the pressure of a ground or seat underneath you if you aren't in free fall, etc. All of this is before you even get to sight. This sense is proprioception.
Anyway, I'm in the camp that constructing that information content--in a simulated or analog way, not like a single boolean register isBeing--and processing on that content with a system constructing the possibility of action in it, that that processing is itself consciousness, and specifically of be-ing, or one part of it. I think actually there are many more pieces, but they're going to look like that. But the point here is that it's a recipe you can manifest in a sufficiently powerful computer.
In terms of the big theories of consciousness, it's definitely a brand of Higher Order Thought: it's not a simulation of space itself that's conscious, but the processing of the possibility of action or be-ing in that space that makes it conscious qua "space" "for" "me"; that is the important part for it be-ing for me. I don't think it has to be a Global Workspace necessarily, but I think practically it serves as one in our case (that is, the GWS isn't the part that makes it conscious), and it's most definitely not integrated information theory, since the modality of the information is critical, and it's the type of processing, which is not necessarily very integrated. That is, it's not the deep integration that makes it conscious; although it turns out that human consciousness integrates a lot of modalities.
Qooper on 22/2/2024 at 21:25
Quote Posted by demagogue
Anyway, what would a system need to "feel" "be-ing"? Well they'd need to feel situated in a larger space. So right off we know we have to process a stable surrounding space, and a position in it. ...
I'm with you there so far. But I think the better question is: At minimum, what information would a system need to behave exactly like a human? I'm still thinking it would not contain anything that would result in an experiencer. Obviously we experience, but I'm sidestepping that on purpose.
Quote:
Anyway, I'm in the camp that constructing that information content--in a simulated or analog way, not like a single boolean register isBeing--and processing on that content with a system constructing the possibility of action in it, that that processing is itself consciousness, and specifically of be-ing, or one part of it. I think actually there are many more pieces, but they're going to look like that. But the point here is that it's a recipe you can manifest in a sufficiently powerful computer.
Do you also consider that this recipe can manifest itself in very large and possibly even very slow structures? Is it enough for merely information to be processed? Can it happen on a very large grid paper?
Also, are there degrees of consciousness, or is there a threshold? By this I mean: take a crude robot that has cameras, servos and a CPU, and that processes the information it senses via its sensors in the space it's in. Does this constitute a small amount of consciousness, a wispy experience?
demagogue on 22/2/2024 at 23:10
Just to quickly respond to that.
For a theory of consciousness, I think you wouldn't want to hew too closely to human experience. Most people believe other animals are conscious. I mostly wanted to talk about the consciousness of be-ing or being-there-ness, which in my way of thinking isn't just having brute consciousness but a specific content that's manifested. That's still light years away from finding oneself a human there. I'm not sure even human infants have that.
Can it manifest in large or slow structures? Well large and small are relative, but there are physical constraints. The problem of both getting too large and too small is that time (for large structures) and space (for small structures) resolution start blurring such that it can't mediate the processing in real time. I think that's more a constraint of the medium and not a theoretical constraint.
Could it happen on grid paper? No. Grid paper doesn't process information from one state to another, as in physical parts of it don't contain information content in their structure where physical changes "process" that information from one state to another in a coherent way. Brains and CPUs can do that though. Mithuna on her Looking Glass Universe channel had a recent video showing how to set up light rays and filters to accomplish very simple quantum computations. So that set up would be a very simple quantum computer. That wouldn't be enough to mediate the kinds of processing that gives content that we call conscious though.
Are there degrees of consciousness or a threshold? I think this question maybe gets to the way I think about it compared to other threads out there. To my way of looking at it, there's a wispy experience if a robot's system processes a wispy experience. If it processes a very stark and visceral experience, then there's a very stark and visceral experience in there. So in my way of looking at it, no, there's not degrees of consciousness built into the theory. You get exactly the consciousness that's processed, which can be wispy or visceral from the perspective of the viewer based on how it's processed. It's not a fundamental thing.
The brain spends a lot of processing resources on culling content from experience based on top-down and bottom-up attention, so I think a lot of what we intuit as fundamental to consciousness is just the way human brains are designed.
Qooper on 23/2/2024 at 08:01
Quote Posted by demagogue
For a theory of consciousness, I think you wouldn't want to hew too closely to human experience.
My point wasn't to center on human experience, I just used the word 'human' as an example, because we are humans. The core of my point was that to me it seems that there's no physical reason why we experience, instead of just our physical bodies performing physical actions just like any other physical object. And that's why I also don't expect a robot with an AI to be sentient. Why we are sentient is another matter and I have a belief regarding that (and I do think animals experience, at least my two prankster parrots definitely do), but it's not my point here and I'm not arguing that.
Quote:
Can it manifest in large or slow structures? Well large and small are relative, but there are physical constraints. The problem of both getting too large and too small is that time (for large structures) and space (for small structures) resolution start blurring such that it can't mediate the processing in real time. I think that's more a constraint of the medium and not a theoretical constraint.
What do you mean by real time? Do you mean the latency between event and processing of its information? If we go to large scales, many events happen much slower, so the latency is proportionally the same.
Quote:
Could it happen on grid paper? No. Grid paper doesn't process information from one state to another, as in physical parts of it don't contain information content in their structure where physical changes "process" that information from one state to another in a coherent way. Brains and CPUs can do that though.
I meant that if something (a human or even a computer) was using very large sheets of grid paper to keep track of state, like a cellular automaton, and updating those states according to strict rules, then wouldn't that count as information processing, and even as "physical" within the subrealm of the CA? Now the substrate is the grid paper plus whatever is upholding the rules and drawing onto that grid paper.
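To make the grid-paper picture concrete, here's a minimal sketch (my own illustration, not something from the thread) of an elementary cellular automaton: each row of the "paper" is one state, and a strict local rule produces the next row. Rule 110 is used here because it's known to be Turing-complete, so in principle this kind of substrate can compute anything a CPU can, just very slowly.

```python
# A minimal "grid paper" substrate: an elementary cellular automaton
# (Rule 110). Each row is one state; strict local rules produce the
# next row. Illustrative only.

RULE = 110  # the update rule, encoded as 8 output bits

def step(cells):
    """Compute the next row from the current one (wrap-around edges)."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, mid, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (mid << 1) | right   # neighbourhood as a 3-bit index
        nxt.append((RULE >> pattern) & 1)            # look up the rule's output bit
    return nxt

# Start from a single live cell and "redraw the paper" a few times.
row = [0] * 16
row[8] = 1
history = [row]
for _ in range(5):
    row = step(row)
    history.append(row)

for r in history:
    print("".join(".#"[c] for c in r))
```

The question in the post then becomes: is the state of such a system "information processing" in the relevant sense, given that the physical paper itself is inert and all the dynamics live in whoever applies the rules?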
I'd like to get clarification on one thing: You mentioned that if a robot processes a wispy experience, then there's a wispy experience, and if it processes a visceral experience, then there's a visceral experience. What do you mean by processing? And also more precisely, what does it mean to process an experience?
demagogue on 23/2/2024 at 09:38
The time factor is an interesting one. You could imagine an experience that's extremely slowed down or sped up, but I don't think people would call that consciousness, because I don't think there'd be coherence in the signal; it'd be like white noise, and coherence in the signal is what I think the contents of consciousness are.
Okay, processing an experience. It's one thing for a state to be recorded and updated in a medium; that's not conscious in my view. It's another thing for the medium to simulate the process, which is conscious, where the process has a physical analog and the physical changes map onto the informational changes. But I guess it's better to use examples. The first thing you learn in neuroscience is that the brain is full of maps, and most processing is geometrical. Content, and the context it's in, is mapped to a geometrical space which you can overlay with a grid. Well, it's a functional space. It can be and often is multi-dimensional, I mean like 10+ dimensions, because all that space and time really are is logical relations among physical processes, here speaking of neural nodes and edges.
So to reach out your arm and grab something, what is not conscious is a template of commands to actuators that blindly follows them and picks up the thing. What is closer to consciousness is this set of maps with activation in them. One is going to be the local space around the arm and object. Another is going to be impulses in the sets of muscles. They're going to act in tandem where the arm is given a strong impulse as if it's being pulled in the direction of the object. That pulling is manifested in the literal geometry of the activation on that map. That's more in the direction of what I mean by processing or simulating the analog of the content of an experience.
While you can write an equation on a paper and change it, it's not like the literal geometry of the paper is manifesting the equation, with changes in the geometry literally performing the computations, where the computation and the geometry-dynamics are one and the same. I mean, a pencil that draws a line dividing two terms isn't simulating any division of the paper itself; at best it's simulating a path on a line segment, which is not division. But the geometry-dynamics mapping directly onto the processing, where a division is a literal division of the geometry, is more like what these maps are doing. I think that's a necessary element of consciousness. It's not just that the system correctly spits out the right answer. It's that the computation manifesting the content is embedded in the functional geometry of the processing itself, so the geometry-dynamics, the manifestation-of-content, and the experience-of-content are all one and the same.
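A toy sketch of the arm-and-object example (entirely my own construction, as an illustration, not demagogue's actual model): the object raises activation on a 2-D map, and the arm's motion is read straight off the map's local gradient, so the "pull" toward the object is literally a feature of the geometry of the activation rather than a blind command template.

```python
# Toy "activation map": a target object creates a Gaussian bump of
# activation, and the arm follows the map's gradient uphill, so the
# pull is manifested in the geometry itself. Names are illustrative.

import math

TARGET = (15.0, 5.0)  # where the object sits on the map

def activation(x, y):
    """Gaussian bump of activation centred on the target."""
    dx, dy = x - TARGET[0], y - TARGET[1]
    return math.exp(-(dx * dx + dy * dy) / 40.0)

def pull(x, y, eps=0.5):
    """Finite-difference gradient of the map: the direction of the pull."""
    gx = (activation(x + eps, y) - activation(x - eps, y)) / (2 * eps)
    gy = (activation(x, y + eps) - activation(x, y - eps)) / (2 * eps)
    return gx, gy

# Let the "arm" follow the geometry of the map uphill toward the object.
x, y = 3.0, 18.0
for _ in range(200):
    gx, gy = pull(x, y)
    norm = math.hypot(gx, gy) or 1.0   # avoid division by zero at the peak
    x, y = x + 0.5 * gx / norm, y + 0.5 * gy / norm
```

Here the motion toward the object is not a stored command sequence; it falls out of the shape of the activation field, which is the flavour of "geometry-dynamics as processing" described above, in a massively simplified form.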
Edit: This, by the way, is behind why I think we shouldn't be too quick to say neural nets in computer processing aren't conscious, because the structure of their processing is creating a functional geometry in the electrodynamics of a CPU and transforming it. So some of these processes might actually be manifested in there somewhere. I don't think it's consciousness as we describe it, because it's not designed to manifest what we experience. It's more often designed to follow blind functions that get the right answer. But the process it uses to get the right answer may end up (inadvertently to the programmers) creating the kind of geometrical medium that could be a platform for consciousness. Or anyway, that's a question to look deeper into. But another thing is to more explicitly model these neural net maps on the kinds of maps we're already finding in the brain.
DuatDweller on 23/2/2024 at 19:28
How do you relate quantum mechanics to AI? I mean, the subatomic particles are made of resonating waves? It goes further and further into the infinitesimal scale, and the more you know, the more incredible things are at that sub-level.
I mean, matter shouldn't even be solid at all? Then what qualifies the reality state of solid matter? Is it all a compound of resonating frequencies?
Then I'll start to believe that story about Jericho's walls, and the earthquake after they blew their horns for six days... resonance... cascade... whatever.
demagogue on 23/2/2024 at 19:40
I didn't mean to relate them directly. I was connecting them at a high level of abstraction, like there are lessons to take away from QM in looking at consciousness, mostly to do with how the link between information theory and effective geometry is at the root of stuff that happens in reality. But as physical processes, like you say, they're at completely different scales. So I think the rules of QM themselves for the most part aren't relevant to neural signaling or consciousness. Quantum effects would just get completely washed out long before they're at a level that's relevant to the signals that neurons are carrying.
That said, one of my favorite kinds of case study is QM's contribution to transduction. To give an example: an incident wave of visible light raises the energy level of an electron in the double carbon bond at the crick in a rhodopsin protein, which pops the double bond and makes the crick kick out like a loaded spring, kicking a mechanism that signals to a G-protein in the first layer of the retina, potentially beginning a cascading signal up to the visual cortex and eventually the manifestation of sight of the thing. So QM is in there at places. But most everything that happens you can model much better without it, most especially the high-level functional maps, or the neural activity that's creating that structure, which are actually manifesting consciousness in the way I'm thinking about it.
Edit: If you want to talk about fundamental reality, then my guess is that reality is ultimately the hydrodynamics of some vast sea of oscillating elements, whatever they are, with complex links to each other. There isn't "stuff" at the bottom; there's persistent structure in the hydrodynamics, which is one reason some people say energy, mass, time, and space are all aspects of the same underlying process. But a consequence of that is that there can be signals propagating at different scales: Quantum Field Theory at a very low level, Newtonian and Maxwellian physics and the Classical world at a higher level, and consciousness at a still higher level. So consciousness isn't fundamental in my view, and an AI system designed the right way could avail itself of a structure in its operations that creates a platform for mediating the kinds of signals that manifest consciousness.