Pyrian on 17/6/2022 at 00:26
So, if I'm understanding that correctly (very much in doubt), an otherwise very smart AI that uses an external text-to-digital converter isn't sentient, but the same one with an integrated text-to-digital mechanism is sentient?
Phatose on 17/6/2022 at 02:59
That really seems wrong, and anthropocentric. While it's reasonable to say any experience it has is unlike a typical human experience due to lacking typical senses, I can't agree that makes it necessarily non-sentient. That train of thought would seem to also apply to a number of human handicaps. I have no doubt that someone born completely deaf does not experience reading the way I do - "hearing" the words as an internal monologue. I don't doubt they're sentient though. And I imagine to them, the idea of hearing a voice in my thoughts is as alien as direct cognition of language is to me.
demagogue on 17/6/2022 at 04:54
It's anthropocentric because what we mean by the word "sentient" is anthropocentric, or at least agent-centric; that's the meaning I'm going with ex hypothesi.
Robots could perfectly well be designed to do something else and still be useful to humans (some people argue differently-conscious'd bots wouldn't be that useful; I think they can be differently conscious'd and useful to us). I just don't think applying the term "sentience" to them is fair, not because it couldn't be some kind of sentience, but because humans won't mean that other, alien kind when they use the term, precisely because it's so alien.
Anyway, there are two issues: (1) what is the alien kind of "sentience" (?) bots may have under the current design trend, and (2) what would it take to make bots have human-like sentience (or something closer to it)? The post above was only dealing with issue #2 and not even touching #1. So yes, it's an anthropocentric answer to an anthropocentric question. (I could respond to issue #1, but maybe there's less to say that isn't useless speculation.)
----------
Okay, all that said, I gave a thumbnail version to get the gist across, so you're rightfully poking holes in the short version. Backing up for a second: even with the "agent view", there's nothing in that way of thinking that stops you from affectifying very alien experience that is nothing like human experience.
The deaf example is a great example of this that I've read about; it's good you brought it up. Of course they're not experiencing an auditory recitation of the sentence, but they are experiencing a recitation of the sentence when they read it, at least going by the reports and studies. In that case it's actually easy for non-deaf people to understand, because they can experience it as well. I experienced it when I was learning Japanese, before I had learned the readings of the kanji: I could look at a sentence and know exactly what it said word for word, but I couldn't pronounce a single phoneme of it. It's that.
I'm running out of time to post here. There are so many loose threads, but I wanted to at least respond a little.
----------
Quote:
an otherwise very smart AI that uses an external text-to-digital converter isn't sentient, but the same one with an integrated text-to-digital mechanism is sentient?
No, I'm focusing on the other side of the equation. Both of them may be sentient or not sentient (in this framing). I think maybe the key property is presentation.
(Edit: if you're talking about presenting the text, then if it's presented visually, you're feeding pixel info into a reformed neural net. It's not going to make any difference where the pixel data came from; there just has to be transduction to pixel data somewhere. But like I said before, reading takes something like the embedded intelligence of a 5-year-old; way too much goes into that. Listening is light years closer to being accomplished, so in that case you're transducing acoustic plane waves into frequency and amplitude data and feeding that into the reformed neural net.)
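To make that transduction step concrete, here's a rough sketch of the kind of thing I mean (my own toy example in Python with numpy; the "net" at the end is just a stand-in, not any real system):

```python
import numpy as np

# Hypothetical toy: transduce an acoustic waveform into frequency/amplitude
# frames (a crude spectrogram) before handing anything to a net.
def transduce(waveform, frame_len=400, hop=160):
    """Short-time Fourier transform: one row of amplitudes per frequency bin per frame."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(waveform) - frame_len, hop):
        frame = waveform[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))   # amplitude per frequency bin
    return np.stack(frames)                         # shape: (num_frames, num_bins)

rng = np.random.default_rng(0)
waveform = rng.standard_normal(16_000)              # one second of fake 16 kHz audio
features = transduce(waveform)

# Stand-in "neural net": a single random projection, just to mark where the
# transduced frequency/amplitude data would be fed in; the net never sees raw pressure waves.
weights = rng.standard_normal((features.shape[1], 64))
hidden = np.tanh(features @ weights)
print(features.shape, hidden.shape)
```

The point is only that the net downstream works on transduced content, not on whatever physical signal the content originally came from.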
The point is there should be an "agent level" built in, operating through the neural net, and everything should be put in an analog-content form and "presented" to that agent level, which handles it in terms of the content presented to it. (Plus there are tons of internal impulses, reflexes, instincts, etc.) You could put raw text itself in an affective form (a "texteme"), or the visual content of the pixel arrangement (visual letters); actually, you have to construct most affectations out of multiple overlapping layers anyway if the agent is going to recognize all the levels.
I guess the part I have to explain is how you create "agent level" processing through a neural net. The first thing you have to do is switch to two-level thinking. You're not making algorithms to process input. You're making an agent (out of algorithms) that receives inputs not only of the perceptual content but of itself (as if it itself were) operating directly on the input, and it factors that into its output. You have to "hand over" perception of self-control to the agent level in the net each step of the way.
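A toy illustration of the two-level idea (again, just my own hypothetical sketch, not anything from an existing system): at every step the agent's input bundles the percept together with a representation of its own last act of control, so part of what it processes is itself operating on the input.

```python
import numpy as np

# Toy two-level loop (hypothetical): at each step the agent is fed both the
# percept and a copy of its own previous control output, so part of what it
# processes is "itself operating on the input".
rng = np.random.default_rng(1)
percept_dim, control_dim = 8, 4
W = rng.standard_normal((percept_dim + control_dim, control_dim)) * 0.1

control = np.zeros(control_dim)                      # the agent's sense of its own last act
for step in range(5):
    percept = rng.standard_normal(percept_dim)       # stand-in for transduced sensory content
    agent_input = np.concatenate([percept, control]) # percept plus self-perception
    control = np.tanh(agent_input @ W)               # next act of control, handed back in at the next step
    print(step, control.round(3))
```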
None of what I'm talking about is very well suited to this format of explaining it, aside from the fact that I'm probably not explaining it well. Peter Carruthers wrote a book, "The Centered Mind", which was a good stab at it, so it's not just my personal take, though I'd frame it a bit differently.
faetal on 17/6/2022 at 10:24
Quote Posted by Phatose
Does that actually differ though? "Assessed by humans who go update learning models until the behavior more closely adheres to X" sounds like a description of society.
It means that the feedback loop belongs to the creator, not the AI.
Our consciousness develops in response to interactions between our biology and our environment and how that affects certain feedback loops.
What are an AI's feedback loops triggered by?
Even if the AI's flow perfectly mimics a human to the point where no one can tell the difference, it's still essentially a Chinese room (https://en.wikipedia.org/wiki/Chinese_room).
demagogue on 17/6/2022 at 14:27
The algorithm behind these AI art apps that are so ridiculously well developed is an adversarial feedback loop. You train two systems, one to make "art that resembles X" and another to discriminate "art that isn't X", and you pit them against each other in an arms race of ever more accurate creation and discrimination. That works really well when you're working within one modality or domain, but not so well across multiple modalities or domains.
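A bare-bones sketch of that adversarial loop (a toy numpy version of my own, not how the real apps are implemented): real data come from one distribution, a generator learns to shift noise toward it, a discriminator learns to tell the two apart, and each update pushes against the other.

```python
import numpy as np

# Toy adversarial loop: real data ~ N(3, 1); the generator shifts noise by a
# learned offset theta, the discriminator is a one-feature logistic classifier,
# and the two are trained against each other.
rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

theta = 0.0        # generator: G(z) = z + theta
w, b = 0.1, 0.0    # discriminator: D(x) = sigmoid(w * x + b)
lr = 0.05

for step in range(2000):
    real = rng.normal(3.0, 1.0, 64)
    fake = rng.normal(0.0, 1.0, 64) + theta

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((1 - d_real) * real) + np.mean(-d_fake * fake)
    grad_b = np.mean(1 - d_real) + np.mean(-d_fake)
    w, b = w + lr * grad_w, b + lr * grad_b

    # Generator step: push D(fake) toward 1 by shifting theta.
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - d_fake) * w)

print(round(theta, 2))   # typically ends up near 3, the mean of the "real" data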
The Chinese Room argument is the textbook kind of skepticism you can throw at any emergent property ("it's not really a fluid, or water; it's just H2O molecules" - the helpfulness of that depends on what you want to do with the stuff). So it's always been kind of an empty argument to me.
faetal on 17/6/2022 at 18:24
It's not so much an empty argument as the basic hurdle to get past when calling something conscious rather than just resembling consciousness.
Especially if you have an idea of the development and the components of consciousness (as we understand them currently).
It's absurd to imagine that an algorithm lacking the complex suite of environmental interaction a human has could arrive at the same place as a human without just an insane amount of coaching, black-boxing, and guiding.
So the Chinese Room argument is really useful in this context. Does the software care what it's saying beyond just satisfying the "human was impressed, so fewer code changes" outcome (which I'm not sure it even has the capacity to learn from)?
I'm not saying true AI isn't possible, just that if it happened to come about with today's tech, I'd be astounded if a true AI were human-like (or even comprehensible), given how vastly different the methodologies in place are from how the human brain developed (think of how much evolutionary baggage there is, plus how hooked up it is to our sensory biology and other grey areas such as gut neurology, the impact of gut flora on mood and urges, etc.).
That's also not to say that what is happening at present isn't fascinating. It's just that all of the impressive stuff is so much more high-level than I imagine you'd need for something as emergent as true consciousness to come out, let alone intelligence. I think as the universe's most advanced sentient beings (as far as we know), we need to be less easily impressed.
Just my opinion of course.
demagogue on 17/6/2022 at 22:27
The Chinese Room argument is saying no algorithm, full stop, could ever embody consciousness. It's saying that whatever is behind human (or I guess any) consciousness isn't algorithmic. It's saying (I think) that the entire field of computational neuroscience, which models neuroanatomy and function algorithmically in the spirit of David Marr's work (and Grossberg), is a category error. (I'm on the side of computational neuroscience, so there's my objection to it right there.)
I don't know if it's making a positive argument per se, but Searle, who developed it, was using it to argue that consciousness is a biological or chemical property, which I think is a minority position among neuroscience types. I think they don't like it especially because of comparative neuroscience; there are some animals with very different wiring that we still think should at least be sentient.
I'm with you that the actual algorithms good-old-fashioned AI has been using are vastly not up to the task, so it can't be sentient on those grounds. But the debate is whether any algorithm you can manifest in a Turing-complete machine, if it's sufficiently advanced (you know, hundreds of trillions of operations), could be sentient. I think at some point you cross a line and it is. The Chinese Room argument is that it'll never get there, even if it's a 1-to-1 map of a human brain down to the molecular level. I think it's a good first question (can any algorithm ever manifest consciousness or not?) to clarify people's beliefs on the matter, though.
Phatose on 18/6/2022 at 00:28
I always thought the Chinese Room was just a dodge, dependent on a marked disconnect between the systems people imagine and the systems that would actually be required to behave as the thought experiment requires.
Pyrian on 18/6/2022 at 00:44
(https://en.wikipedia.org/wiki/Chinese_room#Chinese_room_thought_experiment) Does this guy - and keep in mind this was published in 1980 - seriously not understand the distinction between the CPU and the program it's running? :confused: Like, a neuron doesn't understand the inputs passing through it, so does that mean that we're not intelligent, on the exact same reasoning?
faetal on 18/6/2022 at 12:58
Quote Posted by demagogue
The Chinese Room argument is saying no algorithm, full stop, could ever embody consciousness.
Technically, I'm not sure an algorithm could embody consciousness, as algorithms are just methods; they don't have storage.
An algorithm which is self-refining and stable enough over time to build itself a model from curated information might be capable of being part of a consciousness, with much more advanced tech than we have now.
It's a bit like saying the amygdala is itself conscious because it contributes a large amount of the information processing behind what we think consciousness is. For a human to be conscious, it has to experience the passing of time, a vast blizzard of sensory data, hormonal feedback loops that tell it what to focus on in the moment, etc., tying everything together into a sense of self (and most of this is far from logical or perfect - another reason why a computer AI happening to arrive at anything human-like would be indicative of programming for the outcome, not intelligence per se).
For me, the real test would be stability over time. I would happily bet my mortgage that if you took Google's AI into isolation now and just left it to its own devices, it probably wouldn't be impressing anyone within a few years - it would either be much the same as it is now, just with more online prose to have learned from, or it'd be spouting conspiracy nonsense and descending into white supremacy or some shit (https://en.wikipedia.org/wiki/Tay_(bot)), or it'd be just unintelligible.
If I came back a year later and found it wiser and more profound and more human-like, then that would suggest that it had pretty strict feedback loops to refine its models to mimic a lot of training data of human conversation, rather than to think for itself.
If it was truly an AI, I might expect it to be able to communicate with humans (assuming it was interested in us at all), but most of the ideas being communicated might seem absurd, or like attempts to manipulate us into doing things that benefit it (assuming it even wants anything), or just really boring shit about computing, etc.
For real growth, you'd want risk/reward patterns in there to give it motivation to expend processing cycles on, well, anything, as well as some kind of longer-term selection pressure (especially since it isn't reproducing - any progressive adaptation is going to have to be at the software level) which rewards "desirable" improvement and punishes "undesirable" improvement (some sort of resource gate, maybe - restricting progressive virtual machine power or something?).
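Just to illustrate what I mean by a resource gate (an entirely hypothetical toy, not a proposal for a real system): the compute budget available in the next round grows or shrinks depending on whether the behaviour in this round is judged desirable.

```python
import numpy as np

# Entirely hypothetical sketch of a "resource gate": the compute budget for the
# next round grows when behaviour is judged "desirable" and shrinks when it
# isn't, acting as a crude longer-term selection pressure.
rng = np.random.default_rng(2)
budget = 100                                   # processing cycles available this round

for generation in range(10):
    behaviour = rng.random()                   # stand-in for whatever the agent spends its cycles on
    desirable = behaviour > 0.5                # stand-in for whatever external judgement gets applied
    if desirable:
        budget = int(budget * 1.2)             # reward: more virtual-machine power next round
    else:
        budget = max(10, int(budget * 0.8))    # punishment: throttle it, but never to zero
    print(generation, round(float(behaviour), 2), budget)
```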
Then my next question is: how does such a thing become self-sustaining long term? As tech moves on, how does a self-sufficient AI get access to better hardware? What would be our motivation to let it do that? Is AI limited to ephemeral software generations created by increasingly ingenious developers? What is even the goal? A convincing human mimic? A cold, software-like entity which lets us glimpse the properties of emergent consciousness (albeit still defined by the foundations we gave it)? A true intelligence emerging from code which is completely incomprehensible to us, such that we'd need to somehow program it to translate its thought processes for our edification (assuming that's even possible)? If we never understand anything it says or does, it may still be conscious; we just couldn't prove it.
If the whole point is just really cool chat-bots which astound everyone, I'm cool with that. It's a really interesting discipline and the philosophical implications it has for sentience and how we define it are fascinating.
If the point is that we need to create an entity which is self-aware and around which we would have ethical quandaries, then I'm unsure what the motivation would be to create such a thing (wouldn't put it past humanity, mind).
It also depends on what you want from sentience. Human-like (or somehow more advanced) sentience would have to be mostly emergent, I think, since I'm not sure we would be capable of modelling it even if we could find the hardware to run it. Would we be happy enough with shrimp-level sentience? If we get there, how do we prove it? Chat windows? Interfacing with game environments? Remote control of real-space drones? Solving complex problems designed for certain types of thinking? The big problem being that if its consciousness were different enough, or more advanced, we wouldn't be equipped to judge.
All very interesting, but I'm not sure how to be at all confident about the possibility of any of it as things stand.
I won't claim to be an expert on any of this, but these are my thoughts on it. Happy to have my mind changed.