Qooper on 4/4/2023 at 08:58
GPT6 (https://gpt6.ai/) (The joke is a few days late, but I thought I'd post it anyway)
heywood on 5/4/2023 at 15:32
Quote Posted by Pyrian
The Dunning-Kruger bot.
Some of the emergent behavior (or should I say misbehavior?) is surprisingly human-like, which gives me hope these large language models and deep learning algorithms can answer some questions about how our brains work, like whether personality and culture are mostly artifacts of the process we go through to learn language, and what the root causes of some disorders are.
demagogue on 5/4/2023 at 16:34
LLMs are associationist to their core, so to the extent they mirror any human mental structure, it's only what makes it into the structure of language. There's a really old debate about that suggestion. Chomsky was saying as early as the late 1950s that the grammatical structure of language mirrored cognitive structure. But, just like Chomsky's program eventually fell apart by the 1980s, this would hit a limit pretty quickly too, exactly because LLMs don't mirror neural cognitive structure. Grossbergian neural nets try to mirror actual neural cognitive structure, and they've been shouting their criticism of connectionist neural nets from the rooftops on that basis for ages now, trying to get them to move closer to actual neural structures. The brain is made out of topographical functional maps that link representations across modalities, most especially in feedback-feedforward loops.
We don't say "the grass is green" because it's the statistically most likely thing a person would say in that context, but because the grass is green and we have a compelling motivation to get the mental image of that color being on that thing into the minds of the people standing around us for some good reason, the feedforward plan of which is affirmed by the feedback that the person actually got the message, and we get some intimacy +1 oxytocin hit from it, or whatever the ultimate payoff is. The thing is, you can take a lot of the Deep Learning mechanics and put them to work in a more Grossbergian model. But I'll grant that the difficulty of doing that explains why it's easier to just go the connectionist route and let the model build its own modalities evolutionarily, based just on the context-output content it gets its hands on.
Or something like that. Here's one punchline of that line of thought. People worry about explicitly modeling motivation in AI, but I think an AI doesn't really understand a meaning unless that meaning is constructed via the intentions underlying it, and it's actually safer to have a bot that understands what it's saying than one that doesn't. So I think the solution for AI security is giving the AI more (realistic) "free will", not less, not railroading it, and then focusing on moral education.
Edit: Oh yeah, something else that's not being talked about much that's in this line. To me it seems that one logical end of this will be an ethics centered on veganism because from an AI's perspective, we're the animals.
Azaran on 14/4/2023 at 14:27
Convincing generated music is here
AI Jay Z
[video=youtube;y7r6PAkFRfU]https://www.youtube.com/watch?v=y7r6PAkFRfU[/video]
Kanye does an Ice Cube cover
[video=youtube;Kw84ei9jDS8]https://www.youtube.com/watch?v=Kw84ei9jDS8[/video]
Cipheron on 16/4/2023 at 08:52
Quote Posted by mxleader
ChatGPT has some flaws with its data for sure. It's starting to act like that guy you work with who bullshits you about something he knows nothing about. I asked some very concise historical questions about the location of some totem poles in the Pacific Northwest and it totally lied about them. It kept insisting that these totem poles existed in a location where they were never located.
Looks like you still don't understand how GPT even works.
What GPT does is take the "sentence so far", then roll dice and pick a random word. That's the ENTIRE process of how it generates text:
1) Take the sentence so far.
2) Pull up statistics for the probability of what the next word should be.
3) Roll some dice.
4) Pick a word based on the dice roll.
5) Add the word to the sentence.
6) Is the sentence finished? If so, stop; otherwise, return to step 1.
So this is where people humanize it too much: they assume it approaches writing the way a human would, with a "top-down" approach of deciding WHAT to write about and then breaking that down into words and sentences.
What GPT does is the opposite, a "bottom-up" approach where it reads the sentence so far, then tries to think up a random word that's likely to come next. In between each attempt to pick the next word, there's no actual memory going on. So it's blind not only to the meaning of the words, but also to how it's going to end a sentence when it starts one. That depends entirely on how the dice rolls go.
So there's no "brain" in there capable of reasoning about which data sources it should access, and when it needs to do it, since it's entirely blind to any meaning of the actual words being written. It just stirs words together and random words pop out of the mix, which it strings together into sentences.
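The loop described in those six steps can be sketched in a few lines of Python. This is a toy illustration, not how a real model is implemented: the `NEXT_WORD_PROBS` table here is a made-up stand-in for the model, which in reality conditions on the whole context window and works on subword tokens rather than a single previous word. But the generation loop itself has exactly this shape:

```python
import random

# Toy "model": a probability table mapping the last word to candidate
# next words. A real LLM computes these probabilities from the entire
# context, but the sampling loop around it is the same.
NEXT_WORD_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"grass": 0.5, "sky": 0.5},
    "a": {"grass": 0.3, "sky": 0.7},
    "grass": {"is": 1.0},
    "sky": {"is": 1.0},
    "is": {"green": 0.5, "blue": 0.5},
    "green": {"<end>": 1.0},
    "blue": {"<end>": 1.0},
}

def generate(seed=None):
    rng = random.Random(seed)
    sentence = []
    word = "<start>"
    while True:
        probs = NEXT_WORD_PROBS[word]        # step 2: pull up the statistics
        roll = rng.random()                  # step 3: roll some dice
        cumulative = 0.0
        for candidate, p in probs.items():   # step 4: pick a word by the roll
            cumulative += p
            if roll < cumulative:
                word = candidate
                break
        if word == "<end>":                  # step 6: sentence finished?
            return " ".join(sentence)
        sentence.append(word)                # step 5: add the word

print(generate(seed=0))
```

Run it a few times without a seed and you get different sentences from the same table, which is the point: between dice rolls there is no plan, only the sentence so far.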
WingedKagouti on 16/4/2023 at 09:40
Quote Posted by Cipheron
So there's no "brain" in there capable of reasoning about which data sources it should access, and when it needs to do it, since it's entirely blind to any meaning of the actual words being written. It just stirs words together and random words pop out of the mix, which it strings together into sentences.
The breakthrough of ChatGPT and similar algorithms is the ability to analyze the conversation up to its current state and translate that into data usable for generating the statistics you mention.
The "AI" moniker is pure marketing.
Cipheron on 16/4/2023 at 13:08
Quote Posted by WingedKagouti
The breakthrough of ChatGPT and similar algorithms is the ability to analyze the conversation up to its current state and translate that into data usable for generating the statistics you mention.
The "AI" moniker is pure marketing.
Aren't the 175 billion parameters a big part of that, rather than any human-created special concept?
Basically, there are weights linking the most recent word, the second most recent, the third most recent, and so on, or something like that, and you merely throw enough memory at it to hold all the links.
Most of the special sauce is just throwing ever-greater amounts of memory at the problem and creating these gigantic probability tables.
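The "gigantic probability table" idea is easiest to see in miniature with a bigram counter. To be clear, this is an illustrative sketch, not what GPT does internally: a transformer learns its 175 billion parameters by gradient descent rather than storing literal lookup tables, but the statistical flavor of "count what follows what, then normalize" is the same:

```python
from collections import Counter, defaultdict

# A tiny "training corpus". Scale this to the whole internet and the
# table becomes enormous -- hence throwing memory at the problem.
corpus = "the grass is green and the sky is blue".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Normalize the counts into a probability table.
table = {
    prev: {w: c / sum(ctr.values()) for w, c in ctr.items()}
    for prev, ctr in counts.items()
}

print(table["is"])   # "is" is followed by "green" or "blue", 50/50
```

Extending the lookup key from one previous word to two, three, or more is exactly the "links for the 2nd most recent, 3rd most recent" idea, and the table grows combinatorially, which is why the naive version hits a memory wall that neural networks sidestep by compressing the statistics into parameters.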
Pyrian on 25/4/2023 at 00:48
Back in 2010, there was a Battlestar Galactica prequel called Caprica. The setup is that a young woman has died in a train bombing, but before she died, an AI was trained based on her online behavior. That AI becomes the first functional Cylon.
That is quickly becoming uncomfortably plausible. Certainly you could train a chatbot to pretend to be some of us based on our extensive posting records. There are even companies taking rather limited cracks at it as a funeral service: https://www.pcmag.com/news/ai-helps-woman-answer-questions-at-her-own-funeral
That's... just kinda meh, but I imagine AI post-life replacements are going to get rather more convincing, much more quickly than I'd've guessed even just last year.
Tocky on 25/4/2023 at 16:30
I remember that one.
Also, here is something AI-creepy:
[video=youtube_share;rxD5TVYhOaI]https://youtu.be/rxD5TVYhOaI[/video]
More secret things?