Cipheron on 4/8/2023 at 08:24
Pretty good ABC Australia article about the problem of ChatGPT "making things up". This did a good job of breaking it down for the layman but also confirming how I understand the problem:
(https://www.abc.net.au/news/2023-08-02/chatbots-sometimes-make-things-up-is-ai-hallucination-problem/102678968)
Quote:
"I don't think that there's any model today that doesn't suffer from some hallucination," said co-founder and president of Anthropic, Daniela Amodei.
"They're really just sort of designed to predict the next word, and so there will be some rate at which the model does that inaccurately."
...
"This isn't fixable," said Emily Bender, a linguistics professor and director of the University of Washington's Computational Linguistics Laboratory.
"It's inherent in the mismatch between the technology and the proposed use cases."
...
When used to generate text, language models "are designed to make things up. That's all they do," Ms Bender said. They are good at mimicking forms of writing, such as legal contracts, television scripts or sonnets.
"But since they only ever make things up, when the text they have extruded happens to be interpretable as something we deem correct, that is by chance," Ms Bender said.
So that's one issue. "rightness" and "wrongness" are values that we ourselves assign to the output, after the fact. These are not actually values that are inherent in the process itself.
Basically, when the algorithm is "right" it was "right for the wrong reasons". So the underlying reasons that it does what it does don't actually align with our idea of what it means to be "right".
When the linguistics professor says "this isn't fixable", she's right.
Things like ChatGPT use "bottom up" text creation - just churning out new words one at a time in the blind hope that at some point in the future, a coherent sentence will emerge.
That is not the same as "top down" text creation, which would be to decide what to write about, then break it up into logical sections, then work out what you're going to say in each section, then finally, choosing the specific words you're going to use.
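The "bottom up" generation described above can be illustrated with a toy sketch. This is not how any real model is implemented (an LLM works over tokens with a neural network, at vastly larger scale); the bigram table and word lists below are invented for illustration. The point is only that each word is chosen from local probabilities, with no plan for the sentence as a whole:

```python
import random

# Invented toy "model": next-word probabilities, as if learned from a corpus.
# A real LLM does the same one-step-at-a-time prediction, just over tokens
# and with a neural network instead of a lookup table.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.5, "ran": 0.5},
    "moon": {"sat": 0.1, "rose": 0.9},
    "sat": {"quietly": 1.0},
    "ran": {"quickly": 1.0},
}

def generate(start, n_words, seed=None):
    """Emit words one at a time; there is no plan, only local probability."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n_words):
        choices = BIGRAMS.get(words[-1])
        if not choices:
            break  # no known continuation for this word
        next_word = rng.choices(list(choices), weights=list(choices.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the", 3, seed=1))
```

Every output is locally plausible (each word follows its predecessor with nonzero probability), but nothing in the loop checks whether the result is true or even coherent. That's the gap between this and "top down" writing.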
mxleader on 13/8/2023 at 10:25
Despite ChatGPT's shortcomings, I just had it create a Latin phrase for "spaghetti monster": Monstrum Vermiculorum.
Also, wasn't there an AI art thread in Com-chat at one point that this thread spun off from?
demagogue on 13/8/2023 at 22:32
While this thread is up, this demo is unsettling even by the standards of everything else we've seen lately. It's a (https://www.replicastudios.com/blog/smart-npc-plugin-release) free demo you can download and play for yourself too.
I mean, I think most people's first reaction would be to think about the possibilities for NPCs in games, especially if devs figure out how to build gameplay usefulness into their speech (like interrogating NPCs to make progress). But it doesn't take long before the deeper implications start sinking in, and the gameplay side kind of falls flat by comparison.
[video=youtube;4sCWf2VGdfc]https://www.youtube.com/watch?v=4sCWf2VGdfc[/video]
[video=youtube;aihq6jhdW-Q]https://www.youtube.com/watch?v=aihq6jhdW-Q[/video]
mxleader on 14/8/2023 at 03:33
@Azaran - Thanks! :thumb:
Cipheron on 14/8/2023 at 03:57
It's neato, but I'm really waiting until they link up speech and actions, so you can basically convince NPCs to do things.
Pyrian on 14/8/2023 at 08:06
...Feels like Tron:Legacy is working its way towards becoming reality, lol.
Quote Posted by demagogue
...this demo is unsettling even by the standards of everything else we've seen lately.
Hmm. I mean, it's neat that they can answer more-or-less in context, but... It's really bad.
Cipheron on 14/8/2023 at 12:35
One point however is that any AI bot needs to be trained on real conversations, so if you don't do YOUR part and speak like the people in the training conversations, the AI doesn't have much context on how to answer properly. So for roleplaying you might already get better results if you actually roleplay properly too and don't try and "break" them or say things which aren't appropriate for the setting.
So how much of our effort should be aimed at training the bots in scenario-specific information, vs how much effort to put into anti-troll systems? Clearly, they need both, so that e.g. a medieval peasant has setting-appropriate knowledge, but also responds realistically if you suddenly start claiming that they're fake people in a simulation or that you're from the future.
What's in the video is probably the only sensible approach. The bots say "that's none of your business" or similar if you go off topic to stuff they don't actually know. That's the only way to do it, since the devs cannot possibly predict every topic or tactic the human will try on them, and even if they did it would drown out the training of actual topics they're meant to talk about.
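That fallback pattern is easy to sketch. Everything below is invented for illustration (the topic list, the fallback line, the `answer_fn` stub); a real system would use something smarter than keyword matching, but the design idea is the same: gate the dialogue model on whether the player's line touches topics the character was actually given knowledge about, and deflect everything else.

```python
# Hypothetical off-topic gate for an NPC, per the design discussed above.
# Topics the medieval-peasant character is assumed to know about:
ALLOWED_TOPICS = {"harvest", "village", "tax", "lord", "weather", "market"}

FALLBACK = "That's none of your business, stranger."

def npc_reply(player_line: str, answer_fn) -> str:
    """Answer in-scope questions via the dialogue model; deflect the rest."""
    words = {w.strip(".,?!'\"").lower() for w in player_line.split()}
    if words & ALLOWED_TOPICS:
        return answer_fn(player_line)  # defer to the actual dialogue model
    return FALLBACK  # troll-proofing: simulation talk, time travel, etc.

# Usage, with a stub standing in for the real model:
print(npc_reply("How was the harvest this year?", lambda q: "Poor, alas."))
print(npc_reply("You're a fake person in a simulation!", lambda q: "..."))
```

The appeal of this design is exactly what the post says: the deflection covers the unbounded space of things the devs can't predict, so the training budget can go entirely to the topics the character is meant to talk about.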
Cathedral Haunt on 15/8/2023 at 12:01
Quote Posted by Cipheron
One point however is that any AI bot needs to be trained on real conversations, so if you don't do YOUR part and speak like the people in the training conversations, the AI doesn't have much context on how to answer properly. So for roleplaying you might already get better results if you actually roleplay properly too and don't try and "break" them or say things which aren't appropriate for the setting.
So how much of our effort should be aimed at training the bots in scenario-specific information, vs how much effort to put into anti-troll systems? Clearly, they need both, so that e.g. a medieval peasant has setting-appropriate knowledge, but also responds realistically if you suddenly start claiming that they're fake people in a simulation or that you're from the future.
What's in the video is probably the only sensible approach. The bots say "that's none of your business" or similar if you go off topic to stuff they don't actually know. That's the only way to do it, since the devs cannot possibly predict every topic or tactic the human will try on them, and even if they did it would drown out the training of actual topics they're meant to talk about.
To be honest, not really surprised by it, but maybe it's because I've been playing with AI for some time now. I think that once the novelty wears off, it's pointless to try to have a proper conversation with an AI, at least with today's technology.
Starker on 15/8/2023 at 16:16
I mean, it's still impressive from a pure NLP point of view. But, as discussions in the interactive fiction scene have pointed out before, it's kind of a dead end from the game design perspective. The issue is not the parser, it's the prompt -- dropping the player in without a clear guided framework and letting them do just about anything rarely results in riveting gameplay, as evidenced by the few interactive fiction games that do make extensive use of NLP but remain curiosities at best.
Not to mention that when the player can do anything, they also expect the world to react to anything, so it gets limited very quickly by the world modelling you are able to do. You can mitigate some of it by having the NPCs act irrationally, like in the prototype Little Pink Best Buds that the Pendleton Ward team created for a Double Fine game pitching contest, but even in that case the limits quickly become obvious.