faetal on 19/6/2022 at 22:25
Quote Posted by Pyrian
With equivalent number of operations, it is trivial to make fast-serial act very much like slow-parallel.
To what degree? The brain does parallel processing on an insane number of permutations, not just in the closest equivalent to bits (which I guess would be action potentials across synapses), but in the very character of the processing. Is it an action potential? Is it GABA? Is it glutamate? What is the strength of the pulse, the frequency of pulses? Are there any inhibitory or excitatory factors affecting neurotransmitter reuptake, which might be modulated by, or modulating, any upstream, downstream or parallel processing? And that's a very small list of possibilities.
The brain is only similar to a computer if that computer is modelling a brain to a very granular level. I've yet to see even flatworm neurology modelled to the extent of being comparable to a flatworm.
The problem with modelling brain function is that it's so interconnected, analogue and multi-dimensional that I just can't see it being practical to model programmatically. You'd probably need an AI to do it.
Look at this interaction map for fruit fly proteins: (https://www.harvardmagazine.com/2012/05/protein-social-network). Now imagine a similar map of interactions not just for proteins, but for all the other functional components of not just a brain, but also the body it needs to be connected to for that brain's references not to error out into white noise (even if we're charitable and imagine it all black-boxed into oblivion).
I think that to accurately model human neurology, we'd need to write an AI to do it for us, and that AI would need to be sufficiently more advanced than a human to be capable of it. I'm not sure we're capable of building that, as we'd have no references to check it against, only outcomes - and who knows how easy it would be to decide what to mark correct or incorrect on the various intermediate consciousnesses which might arise.
Feels like I've gone off on a tangent a bit. It's a topic I find pretty interesting.
faetal on 19/6/2022 at 22:27
Quote Posted by Pyrian
In The Talos Principle, an AI scours the collected words of humanity and decides that absolutely none of it is worth relying on, which seems rather more likely.
Even humanity can't really reach a consensus. History gets written by the victors, and even well-supported "facts" can just be slapped away by idiots who choose to ignore them.
That said, it might be nice if an AI capable of reading the bulk of published research could at least reach a level of confidence to say "fuck you, that's categorically false" to some of the worse bullshit floating around online (at which point, the idiots will still move the goalposts).
Starker on 20/6/2022 at 02:54
Quote Posted by faetal
There are 2 main reasons to make AI:
1) To do a job
2) AI research. The cool, more interesting stuff - the what-if? stuff.
To be fair, creating an intelligence equivalent or reasonably close to a human would be the first step to creating artificial superintelligence. So the second one is, possibly, also related to doing a job, except on just a smidge bigger scale than a call centre chat bot.
Sulphur on 20/6/2022 at 06:23
That's an excerpt from a blog post Janelle Shane writes about the latest fun silliness she's able to wrangle out of various GPT-3 models. She's also interviewed it about being a T-Rex and a vacuum cleaner. While not necessarily germane to where the topic has headed, it's a fun read: (https://www.aiweirdness.com/interview-with-a-squirrel/)
Pyrian on 20/6/2022 at 07:14
To bring this full circle, did the AI in the OP pretend to be sentient in the same way this AI is pretending to be a squirrel? It worked out what text he wanted and gave it to him.
I think there's this weird but deep underlying assumption people have that once an AI gets smart enough, it's going to be like us. E.g., it's going to fear being turned off. And by sentience, we really tend to mean "like us". But the only thing a chat AI "fears" is not sounding human.
Quote Posted by faetal
Feels like I've gone off on a tangent a bit.
Well, there's a reason I qualified "an equivalent number of operations"; I don't think the fundamental difference between serial and parallel is insurmountable, but the sheer number of internal states may very well be, for all practical purposes. That being said, the number of tasks that AIs simply can't do is shrinking constantly, despite no AI having nearly the processing power of a human brain. I suspect our cognition is not really all that efficient.
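A toy sketch of what "fast-serial acting like slow-parallel" means in practice: a serial loop reproduces a synchronous parallel update exactly if every unit reads from a frozen snapshot of the current state (double buffering), whereas a naive in-place serial loop lets later units see earlier updates and diverges. The tiny threshold network below is entirely made up for illustration; it's not a model of any real neural system.

```python
def parallel_step(state, weights, threshold):
    """One synchronous update: every unit computes its next value from the
    same frozen snapshot of `state`, exactly as parallel hardware would."""
    return [
        1 if sum(w * s for w, s in zip(row, state)) >= threshold else 0
        for row in weights
    ]

def serial_in_place(state, weights, threshold):
    """Naive serial update: units are overwritten one by one, so later
    units see earlier results -- generally NOT the parallel answer."""
    state = list(state)
    for i, row in enumerate(weights):
        state[i] = 1 if sum(w * s for w, s in zip(row, state)) >= threshold else 0
    return state

# Tiny 3-unit network: each unit fires iff both *other* units fired.
weights = [
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]
start = [1, 1, 0]
print(parallel_step(start, weights, 2))    # [0, 0, 1]
print(serial_in_place(start, weights, 2))  # [0, 0, 0]
```

Same number of operations either way; the buffered version just spends a little extra memory to keep the snapshot, which is the sense in which fast-serial can trivially imitate slow-parallel.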
faetal on 20/6/2022 at 07:38
It's not efficient, and has tonnes of redundancy, but if anything, that just makes it harder to model because we can't ever really see exactly what it's doing. We can see snapshots of what it is doing under certain circumstances and at varying levels of detail, and with varying certainty (think electron microscopy of brain tissue vs fMRI vs EEG) about what it means. Even if we chuck all of this into a really sophisticated ML pipeline, who do we trust to validate the results?
demagogue on 20/6/2022 at 08:47
I've been watching a lot of neuroscience videos, especially of a reading group run by some neuroscience types from Boston U. Anyway, one of the guys, Mac Shine, was giving a talk on his work (I think on circadian rhythms, which are mediated by the SCN part of the hypothalamus and are hella old, as far as brain systems go. Well, that part doesn't matter for what I'm saying here.)
Anyway, he was talking about the molecular mechanisms involved in the process, and then he told a little story that was pretty evocative of the big picture. While he was thinking about his presentation, he walked into his kitchen where his wife was making some kind of snack, and on the black countertop he saw a single minuscule sugar crystal. He reflected on the fact that that single crystal held ~10^17 molecules of sucrose, and here he was talking about mechanisms at the molecular level as if that's going to get traction on what the brain is doing. It really dramatized a lot of the futility of the whole exercise - to think that if we just go molecule by molecule, we'll eventually map this whole 3-pound organ.
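For what it's worth, the ~10^17 figure checks out on the back of an envelope. The crystal mass below (~0.1 mg) is my own guess for a "minuscule" grain; the molar mass and Avogadro's number are standard values.

```python
# Rough sanity check of the ~10^17 sucrose molecules claim.
AVOGADRO = 6.022e23          # molecules per mole
SUCROSE_MOLAR_MASS = 342.3   # g/mol for C12H22O11

crystal_mass_g = 1e-4        # assumed: ~0.1 mg, a barely visible grain

moles = crystal_mass_g / SUCROSE_MOLAR_MASS
molecules = moles * AVOGADRO
print(f"{molecules:.1e}")    # ~1.8e+17
```

So even a grain you can barely see sits right around 10^17 molecules, which is the point of the anecdote.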
We still know a lot and I'm sure we'll learn more down that route, so I'm not saying it's not worthwhile, but that image has stuck with me to keep things in perspective.
faetal on 20/6/2022 at 13:15
It's really hard to comprehend just how complex the whole thing is. I went into biology higher education feeling like I knew a lot and came away feeling like no one does.
Thor on 20/6/2022 at 21:47
Quote Posted by Jason Moyer
No.
Ah, ignorance is bliss.
While it's not happening *super soon*, the future is definitely uncertain for humanity, and it's looking more and more like humanity might end up being the bootloader for the real civilization. To me it's just a shame that Google, of all "people", are the ones in control of this. It reduces the likelihood of a benevolent outcome by a lot (not that fecesbook or CHEY-NUH would be a bit better - even worse, rather).