Cipheron on 20/6/2023 at 06:49
Had some fun recently testing out ChatGPT on logic puzzles. Notably, it can produce correct solutions (most of the time) for river-crossing puzzles of the Fox, Geese, Grain type.
However, some people will claim that ChatGPT is "solving" the problem through logical deduction, while others will argue that it has merely read a lot of solutions to similar problems and is fitting these to a pattern, even if you substitute different entities for the fox, geese, and grain.
So I decided to look for less common variants to see if it could logically deduce the constraints. Here's the one I decided on:
Quote:
A man and a woman of equal weight, together with two children, each of half their weight, wish to cross a river using a boat which can only carry the weight of one adult.
... This particular problem is really no harder than the "Fox, Geese, Grain" puzzle, but ChatGPT consistently fails to come up with anything logically sound, repeatedly overloading the boat (either putting both adults in there, or a child and an adult at the same time).
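For reference, the correct plan is easy to verify mechanically: treat an adult as two weight units and a child as one, so the only legal boat loads are one adult, one child, or two children, and a breadth-first search over bank states finds the shortest crossing sequence. A quick sketch of my own (an illustration, not anything ChatGPT produced):

```python
from collections import deque

def solve(adults=2, children=2):
    """BFS over states (adults on start bank, children on start bank, boat on start bank)."""
    start, goal = (adults, children, True), (0, 0, False)
    # Legal loads within one adult's weight (adult = 2 units, child = 1):
    # one adult, one child, or two children.
    loads = [(1, 0), (0, 1), (0, 2)]
    prev = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = prev[state]
            return path[::-1]
        a, c, boat = state
        for da, dc in loads:
            # Boat on start bank: people leave it; boat on far bank: people return.
            na, nc = (a - da, c - dc) if boat else (a + da, c + dc)
            if 0 <= na <= adults and 0 <= nc <= children:
                nxt = (na, nc, not boat)
                if nxt not in prev:
                    prev[nxt] = state
                    queue.append(nxt)

path = solve()
# Shortest plan is 9 crossings: the children shuttle the boat back for each adult.
```

The 9-step answer it finds is the classic one: two children cross, one returns, an adult crosses, the other child returns, and repeat.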
Here are a couple of attempts to get something out of it. On the first one it doesn't get the idea at all:
(https://chat.openai.com/share/2637d6e4-56d3-4805-8934-1b3e1d6d36ac)
Quote:
To solve this logic puzzle, where a man and a woman of equal weight, along with two children, each half the weight of an adult, need to cross a river using a boat that can only carry the weight of one adult, follow these steps:
Step 1: The man and woman cross together
The man and woman get on the boat and cross to the other side of the river.
On the second one I really hammered home the issue of the weight limit on the boat, and while some steps were better, it got other things wrong and completely hallucinated a requirement that neither adult be left alone with the children. Yet it violated that made-up rule anyway.
(https://chat.openai.com/share/06d0b27c-519d-4e0b-8ef8-eafca4842cd3)
heywood on 20/6/2023 at 20:05
Quote Posted by Starker
Funny you should say that. I just today listened to a podcast where they explored this idea at some length, more specifically the motivation to kill all humans. It's an episode of Decoding the Gurus, where one of the hosts happens to be a former AI researcher and psychologist, and this time they were talking about Eliezer Yudkowsky, a somewhat prominent Chicken Little of AI development who has suddenly found a larger audience with the emergence of ChatGPT: (https://decoding-the-gurus.captivate.fm/episode/74-eliezer-yudkowksy-ai-is-going-to-kill-us-all)
I saw it was 3 hours long and just tried to sample here and there to get a feel for where they were going, which I'm not sure I got. It seemed like another philosophical conversation rooted in anthropomorphizing AI, which everybody seems to be doing these days.
I think it's all hot air essentially. Machine learning programs are not people or an animal species. ML programs don't have our programming (DNA). They don't have our parents, upbringing, schooling, friends, and other things that fill our early, empty heads with how to be a human. We have millions of years of accumulated instincts and innate responses, all developed to enhance our survival and reproduction in a competitive natural environment in which we evolved from prey to apex predator. We carry that baggage, but programs don't.
I'm a lot more concerned about nefarious human motivations in applying the technology. AI viruses, autonomous weapons, propaganda and disinformation campaigns, etc.
Cipheron on 21/6/2023 at 04:31
I listened to the first 30 minutes to get an idea, and this podcast is not about that. Decoding the Gurus is critiquing the same thing you're critiquing, they're poking fun at doomsayers such as Eliezer Yudkowsky, especially his more outlandish claims.
See around 30 mins: they listen to an Eliezer Yudkowsky clip saying that transformer architecture is so dangerous, and so poised to "exceed human intelligence" any day now, that we basically need to ban it and have WWIII if necessary, because WWIII would be less damaging than allowing AI to exist.
Sulphur on 21/6/2023 at 04:42
Wait, what's so specifically dangerous about the transformer architecture? From what I understand, it's the bit that gives an LLM the ability to infer context and have a 'memory' by categorising the relevance of each word to each other. Giving a neural network semantic processing is a heck of a thing to argue for WW3 over.
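For what it's worth, that "relevance scoring" step is just scaled dot-product attention, which is short enough to sketch in plain numpy. This is a toy single-head version; real transformers add learned projections, multiple heads, and positional encodings, so take it as an illustration rather than the full architecture:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query scores every key,
    and the softmaxed scores weight the values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # relevance of each word to each other
    scores -= scores.max(axis=-1, keepdims=True)    # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V                              # mix value vectors by relevance

# 4 "words", each represented by an 8-dimensional vector
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)  # shape (4, 8): one context-mixed vector per word
```

Each output row is a weighted blend of all the value vectors, which is the "memory" part: every word's representation gets updated with information from every other word it found relevant.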
Cipheron on 21/6/2023 at 05:12
Well, I'm still listening to it. Eliezer Yudkowsky is saying AI will decide to do some real Philip K. Dick level shit, like working out how to set the atmosphere on fire or inventing synthetic viruses that reprogram humans to be drones, among other less-believable stuff like inventing entirely new synthetic biology that trumps real biology. So yeah, if you believe that stuff is just 100% gonna happen "unless we stop it" within ~2 generations of architecture tech (i.e. GPT-6), then of course you'd say that another world war would be worth it to shut that shit down.
Though I feel like the link to transformer tech has less to do with any specifics of that tech than with the fact that this guy has been preaching this stuff for 20 years, and it's just the latest wagon to hitch his doomsday horses to. He appears to have previously ridiculed NN technology, and is now backtracking while trying to twist it around to "well, I didn't completely rule it out".
EDIT: what I think is going on is that Yudkowsky always predicts the same outcomes, but for wildly different AI architectures.
So back when he dismissed NNs as a sham, it would have been about AI using rigorous logic to decide to kill all humans. That then seems to have shifted to evolutionary algorithms (AI would naturally 'evolve' to kill all humans), and now the rhetoric has shifted to NNs/transformers: through the training process, AI will, for some unspecified reason, just hit on the 'kill all humans' thing as a byproduct.
Starker on 21/6/2023 at 08:27
Quote Posted by Sulphur
Wait, what's so specifically dangerous about the transformer architecture? From what I understand, it's the bit that gives an LLM the ability to infer context and have a 'memory' by categorising the relevance of each word to each other. Giving a neural network semantic processing is a heck of a thing to argue for WW3 over.
Basically, the idea is that because nobody understands what a transformer does, and we just keep tinkering with them and stuff like deep learning neural networks, we will one day, somehow, end up with something vastly more powerful than us that will kill us before we even have the chance to say, "No disassemble!" Therefore, we should start bombing data centers now before it ever becomes a problem.
The podcast is essentially taking the everloving piss out of some of the ideas of AI doomerism by means of some gentle mockery, a heaping of sarcasm, and actually having some idea how science/society/concepts etc work.
Sulphur on 21/6/2023 at 09:04
Ah, so a standard luddite in the form of an AI researcher. Interesting combination, and while there are good reasons to be concerned about AI, it seems he's got a talent for sensationalising, eh. Colour me intrigued, I don't usually do podcasts, but this sounds like a decent listen.
Cipheron on 21/6/2023 at 11:47
As a note, I ran it through Audacity's Truncate Silence filter and removed any silence longer than 0.5 seconds. That knocked a full 25 minutes off the total mp3. I ended up playing it back at 150% speed too (re-encoded with FFmpeg, because it's faster than Audacity for that). But for people who can't stand fast playback, removing the silence alone is worthwhile.
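For anyone wanting to script the speed-up step, FFmpeg's atempo audio filter does it in one pass without changing pitch. A minimal Python sketch (assumes ffmpeg is on your PATH; the filenames are placeholders, and atempo only accepts factors between 0.5 and 2.0 per filter instance, so you'd chain two for anything beyond that):

```python
import subprocess

def speedup_cmd(src, dst, factor=1.5):
    """Build an ffmpeg command that re-encodes src at `factor` playback speed.
    -y overwrites the output file if it already exists."""
    return ["ffmpeg", "-y", "-i", src, "-filter:a", f"atempo={factor}", dst]

# To actually run it:
# subprocess.run(speedup_cmd("podcast.mp3", "podcast_fast.mp3"), check=True)
```

Building the argument list in Python just makes it easy to batch over a folder of episodes; the equivalent one-liner on the command line works fine too.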
heywood on 21/6/2023 at 14:45
I appreciate the correction. I guess I skimmed too lightly. If this is just people debunking an obvious crank, I'll skip it, because what's the point?
Sulphur on 21/6/2023 at 17:42
Well-articulated critique's always worth some time. 3 hours and change is a bit much for me personally, but I'll give it a fair shake.
Barely related, but I can't think of anywhere else to stick this: a (https://if50.substack.com/p/2020-scents-and-semiosis) lovely article on a piece of procedural text-generating interactive fiction about scents. You'll be well-served if you like words from people who can arrange them well. There's also a bit in there about Chomsky's context-free grammars, so it's not so dangerously tenuous a link for this topic. (And the rest of Reed's articles are great reads too, if you were into IF or text adventures or whatever you like to call them. There's a book, even.)