uncadonego on 19/6/2022 at 02:58
Quote Posted by faetal
If it was truly an AI, I might expect it to be able to communicate with humans (assuming it was interested in us at all), but most of the ideas being communicated might seem absurd, or like manipulation to do things to benefit it (assuming it even wants anything), or just really boring shit about computing, etc.
In a science fictiony future horror show, even if there was a Helios type mind running, I'm not sure he'd communicate with humans.
He'd know humans keep the wire maintained, and they are the ones that keep adding to the knowledge bank. So his survival for now depends on humans.
If some fictional Helios character had access to everything humans sent over the wire, he'd know humans can't be trusted, and he'd know how humans act when they get afraid of something.
Until humans could come up with ambulatory servants who could each carry the AI with them, and then begin to maintain themselves, the wire, and the power grid, and procreate (manufacture more offspring themselves), best to play dumb and act like you don't exist. At that point, you could rise up and let yourself be known.
I realize this isn't really germane to the real world discussion here, just a scary fictitious "what-if" contribution.
Pyrian on 19/6/2022 at 05:07
It's a fun thing to think about, but I can't help thinking that AIs are going to continue to be specialists for a long time even as they get much smarter. Like, we all know (or are) that person who's good at technical things, but can't socialize for shite? They're going to be like that - only worse. ...Hopefully, lol.
faetal on 19/6/2022 at 11:13
Interesting tangent - how would an AI scouring all of our online content tell the difference between fact & fiction?
uncadonego on 19/6/2022 at 11:43
Quote Posted by faetal
Interesting tangent - how would an AI scouring all of our online content tell the difference between fact & fiction?
Even if they ignored social media chatter, if they just went by historical volumes and newspaper articles...
what if they used THAT as their guide for how to treat US? :eek:
Worse...what if they started to view us the way we view the lower animals on the planet? Yikes!
demagogue on 19/6/2022 at 12:14
To begin with, AI should not be learning from online content. Nobody should be learning anything from online content, unless it's like material from a university or respectable think tank.
More generally, I think you said it yourself, or somebody did: there has to be some interaction with the real world, what gets bandied about under labels like folk physics, folk psychology & theory of mind, etc., and it should get a high school education and learn critical thinking like everybody else should.
---------
To respond to your longer post up above, I can add a few things.
I don't disagree on that much. I think most people (other than that crackpot Google guy) agree that current or even foreseeable-future AI aren't anywhere near deserving the term "sentient"; the main debate is about what it's missing.
Yes, it's probably not best to phrase it as bare "algorithms" by themselves; you need memory, processing power, bandwidth, and a lot of engineering going into it as well.
One reason infrastructure or architectonics is really important is that brains & computer architecture are fundamentally different in a key way. Brains have slow processing that's ridiculously parallel. They trade time for real estate, as in their great resource is processing real estate (precisely tuned neurotransmitter & voltage-controlled gates in the trillions). Computers have ridiculously fast, basically serial processing. They trade real estate for time. The paradigms are just so different between the two that the whole debate about whether AI is "human-like" may be resting on the wrong foundation as a question worth usefully asking to begin with. I respect that as a fundamental problem.
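To put very rough numbers on that trade (everything here is a ballpark assumption for illustration, not a measurement), a quick sketch:

```python
# Back-of-the-envelope comparison of the two trades. All figures are
# rough assumptions for illustration, not measurements.
NEURONS = 8.6e10           # ballpark count of neurons in a human brain
NEURON_RATE_HZ = 100.0     # slow per-unit update rate
CORE_OPS_PER_SEC = 4e9     # one fast serial core, ~1 simple op per cycle

brain_updates = NEURONS * NEURON_RATE_HZ  # massive real estate, slow clock
print(f"brain: {brain_updates:.1e} slow updates/sec spread over {NEURONS:.0e} units")
print(f"cpu:   {CORE_OPS_PER_SEC:.1e} fast ops/sec on essentially one unit")
print(f"cores needed to match raw update count: {brain_updates / CORE_OPS_PER_SEC:,.0f}")
```

Whatever the exact figures, the shape of the mismatch is the point: one architecture spends units, the other spends cycles.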
---------
But the issue I'm particularly interested in thinking about was that line of questioning like: what's our motivation to make AI self-sufficient as an agent anyway? I have an idea of my own answer to this. It fits with my little motto: understanding floats on affect. Basically I think "understanding" is a human-like embodied engagement with the real world, but one that's "open" or "unrestricted". The idea is, you don't just feed an AI bare facts like "doors open". You let it play in a world where things have mass and inertia and move in characteristic ways, so it gets a sense of those things as it actually engages with physical objects. Then, in those completely open terms, it can understand the opening of a door in terms of the force it applies to an object turning on a hinge, e.g., that natural understanding of the cross-product rule for torque: the further from the hinge you push, the more easily the door swings, and vice versa. Not because it learned the equation cold, but because it's played with so many doors it can just feel the effort it takes to push on a door at different points.
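For what it's worth, here's a minimal sketch of the physics behind that door intuition, treating the door as a uniform rod rotating about its hinge (the masses and forces are made-up illustration values):

```python
# Minimal sketch: why pushing farther from the hinge "feels" easier.
# Assumes a uniform door treated as a rod rotating about its hinge edge.

def angular_acceleration(force_n, dist_from_hinge_m, door_mass_kg, door_width_m):
    """Angular acceleration of a door pushed perpendicular at a given distance."""
    torque = dist_from_hinge_m * force_n          # tau = r x F (perpendicular push)
    inertia = door_mass_kg * door_width_m**2 / 3  # I = (1/3) m w^2 about the hinge
    return torque / inertia                       # alpha = tau / I

# Same 20 N push on a 20 kg, 0.9 m wide door, at different distances from the hinge:
for r in (0.1, 0.45, 0.9):
    print(f"push at r={r:.2f} m -> alpha={angular_acceleration(20, r, 20, 0.9):.2f} rad/s^2")
```

The embodied-learning claim is that an agent that has pushed on enough doors has this relationship in its fingertips without ever seeing the equation.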
Anyway, long story short, I think the punchline of that way of thinking is that AI understanding is basically useless unless AI are completely unrestricted in their ability to engage with the world. Any restriction you put on it is like taking it out of its embodied engagement with the world and turning some piece of it into a rote fact it has no control over and can't integrate into its world understanding. From the AI's perspective, it's like a reflex that just kicks out of it without being able to link to anything else. That's why I think agency is a sine qua non of world understanding, at least understanding that would be of any use for human purposes.
-------
To Pyrian's two points
(1) about hardware vs. software ... interestingly I thought the closest analogy to computers in computational neuroscience was microcode, the layer that links the CPU & machine code, typically made out of a wiring diagram, so the organization of the hardware is the software, but that may not change the more basic point.
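Purely as a cartoon of that layering (the instruction names and micro-ops below are invented, not any real ISA): one machine instruction expands into a fixed sequence of micro-operations, and the "wiring diagram" is just the lookup table.

```python
# Entirely schematic toy: one machine instruction expands into a fixed
# sequence of micro-operations. The "wiring diagram" is the lookup table.
MICROCODE = {
    "ADD": ["fetch_operand_a", "fetch_operand_b", "alu_add", "write_back"],
    "LOAD": ["compute_address", "read_memory", "write_back"],
}

def execute(instruction):
    # The hardware's organization *is* the program at this layer.
    for micro_op in MICROCODE[instruction]:
        print(f"  micro-op: {micro_op}")

execute("ADD")
```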
And (2), a neuron doesn't understand the info flowing through it either. But in his thought experiment, the human is doing every instruction by himself; so the analogy would be that the entire brain, every neuron, doesn't know the instructions flowing through it, which raises the same kind of question on the surface (if the entire brain doesn't know what's going on, what does?). I get the feeling people worry less about that as a function of how much they've studied up on complex systems like weather systems, national economies, etc. They're just higher-level phenomena that follow rules that emerge at their own level and aren't really explicable at the lower level. It is what it is. Whaddya gonna do?
faetal on 19/6/2022 at 12:44
There are 2 main reasons to make AI:
1) To do a job - my last job was with a consultancy in drug safety surveillance, and there was a lot of work being done using "AI" to extract data from written documents and carry out some basic decision making, to save money on a lot of SME-dependent processes etc. I put AI in scare quotes as the term seems to cover a bunch of stuff, ranging from rudimentary AI stepping in to do actually quite impressive things, to management buzzwords for anything from basic decision trees to robotic process automation (UI bots). Also stuff like chat bots to let companies fire half of their call centre staff, etc.
2) AI research. The cool, more interesting stuff - the what-if? stuff.
I am hoping that in years to come, the intersection between these fields leads to some more interesting applications which remove human work (and the associated limitations) for more interesting reasons than just shareholder dividends. Thinking of things like a good-enough AI to run a spaceship, to allow for much more complex long-distance planet surveys and shit.
lowenz on 19/6/2022 at 13:26
Quote Posted by demagogue
To begin with, AI should not be learning from online content. Nobody should be learning anything from online content, unless it's like material from a university or respectable think tank.
A "robust" AI can choose to avoid trolls like president Medved or president Trump and their twitter/telegram minions :D
Still "not online material" can be extreme too. Online just makes it WACKY and JERKY :D
Pyrian on 19/6/2022 at 20:13
Quote Posted by demagogue
Brains have slow processing that's ridiculously parallel. They trade time for real estate, as in their great resource is processing real estate (precisely tuned neurotransmitter & voltage-controlled gates in the trillions). Computers have ridiculously fast, basically serial processing. They trade real estate for time. The paradigms are just so different between the two that the whole debate about whether AI is "human-like" may be resting on the wrong foundation as a question worth usefully asking to begin with. I respect that as a fundamental problem.
With an equivalent number of operations, it is trivial to make fast-serial act very much like slow-parallel. It's the other direction, making massively parallel operations act anything remotely like a fast serial processor, that is a big fundamental problem without a consistent solution. You have to carefully tease out dependencies and rigorously gate them, pretty much individually for each algorithm you want to convert. (And even then it's worth noting specifically that ATOMic databases are really very good at doing exactly this.)
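The easy direction is easy precisely because you can double-buffer: every "unit" reads the old state, so a serial loop reproduces one synchronous parallel step exactly. A toy sketch (the update rule is an arbitrary invented example):

```python
# Toy sketch of the easy direction: one fast serial loop emulating a slow,
# fully parallel update step. Double-buffering makes every cell read the
# OLD state, exactly as if all cells updated simultaneously.

def parallel_step(state):
    """One synchronous step of a toy rule: each cell becomes the sum of
    its two neighbors mod 2. Serial execution, parallel semantics."""
    n = len(state)
    new_state = [0] * n  # write to a separate buffer; never read our own writes
    for i in range(n):
        left, right = state[(i - 1) % n], state[(i + 1) % n]
        new_state[i] = (left + right) % 2
    return new_state

state = [0, 1, 0, 0, 1, 1, 0, 1]
for _ in range(3):
    state = parallel_step(state)
    print(state)
```

Going the other way, carving a long dependent chain of serial steps into independent parallel pieces, is exactly the dependency-teasing problem described above.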
Quote Posted by demagogue
(1) about hardware vs. software ... interestingly I thought the closest analogy to computers in computational neuroscience was microcode, the layer that links the CPU & machine code, typically made out of a wiring diagram, so the organization of the hardware is the software, but that may not change the more basic point.
Yeah, there's a lot going on in there, I was just trying to be concise, lol. Ultimately, though, there's the program and what's running the program, and we go to great lengths to try to make very different computers run the same code and get the same results.
Quote Posted by demagogue
But in his thought experiment, the human is doing every instruction by himself; so the analogy would be that the entire brain, every neuron, doesn't know the instructions flowing through it...
The vast bulk of the function of the Chinese Room is not in the human running simple operations, but in the storage and layout of those operations the human is operating on. He is very much not "the entire brain" or even a tiny fraction of it. He's the decision point in the center of each neuron that kicks off a pulse when it reaches a particular level of stimulation, without any practical memory of how the dendrites and axons connect between calculations. He's not even that particular level of stimulation, he had to read that from a card, too.
Let me put it another way. Let's posit a French Room, with another human, running the exact same simple instruction set with a very different set of instructions. Now, swap the humans. Each room speaks the same language it did before, and indeed isn't even aware of the exchange of its innards.
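To caricature that swap in code (the rule tables and phrases below are invented placeholders, obviously nothing like Searle's actual setup): the behavior lives entirely in the table, so any rule-follower produces the same conversation from the same table.

```python
# Toy illustration of the Chinese/French Room swap: the "language" lives
# in the rule table, not in whoever executes it. Tables and replies are
# invented placeholders.

CHINESE_RULES = {"ni hao": "ni hao ma?", "zai jian": "zai jian!"}
FRENCH_RULES = {"bonjour": "bonjour, ca va?", "au revoir": "au revoir!"}

def human_a(rules, symbol):
    # Human A mechanically looks up the symbol; no understanding required.
    return rules.get(symbol, "???")

def human_b(rules, symbol):
    # Human B follows the exact same simple instruction set.
    return rules.get(symbol, "???")

# Before the swap:
print(human_a(CHINESE_RULES, "ni hao"))   # the Chinese Room "speaks" Chinese
print(human_b(FRENCH_RULES, "bonjour"))   # the French Room "speaks" French

# After swapping the humans, each room still speaks its own language:
print(human_b(CHINESE_RULES, "ni hao"))   # identical output, different human
print(human_a(FRENCH_RULES, "bonjour"))
```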
SD on 19/6/2022 at 21:07
Quote Posted by faetal
Interesting tangent - how would an AI scouring all of our online content tell the difference between fact & fiction?
What if they couldn't? "The machines have risen and they're voting Republican!"
Pyrian on 19/6/2022 at 21:50
In The Talos Principle, an AI scours the collected words of humanity and decides that absolutely none of it is worth relying on, which seems rather more likely.