Starker on 18/8/2023 at 17:58
Not quite sure how these hidden prompts are supposed to work in the context of player-initiated dialogue, but even if someone is able to somehow coax a piece of amazing writing out of an LLM-based chatbot, the usual result is still remarkably boring and riddled with mistakes and inconsistencies. If you want a well-written character, you kind of need it to have a unique voice, something that's memorable and stands out -- consistently so. And something tells me you will not get HK-47 out of a chatbot. Not because chatbots wouldn't be able to handle the idea of a human-loathing assassin robot, but because there's a whole lot of unique personality behind it that current chatbots just aren't able to generate.
Also, with writing, sometimes it's more important to know when to stop and cut back. Sometimes it's the mystery behind the character, and players being able to fill in the details with their own imaginations, that does the job. For example, having a chatbot generate additional information about HK-47, like what its favourite colour is or what it thinks of whales, would detract from the character rather than add to it.
Also also, having a chatbot generate a whole bunch of dialogue on the fly, with the player able to ask any kind of questions, would make it kind of hard to distinguish what's actually relevant and what's meaningless filler.
What could very well happen, though, is companies having a bot write dialogue and then hiring someone to proofread it and fix any inconsistencies with the help of a game bible, because it would be cheaper than hiring a writer.
Azaran on 25/8/2023 at 14:47
Those of you who want to use Google Bard, but are in a banned country, can use Opera's free VPN (https://www.opera.com/) to circumvent it.
I just tried it, and I swear, it's indistinguishable from the current version of ChatGPT. Down to the generic outputs and style
mxleader on 25/8/2023 at 17:31
Quote Posted by Azaran
Those of you who want to use Google Bard, but are in a banned country, can use Opera's free VPN (https://www.opera.com/) to circumvent it.
I just tried it, and I swear, it's indistinguishable from the current version of ChatGPT. Down to the generic outputs and style
I was just trying Google Bard and it's really awkward, but I need to mess around some more before I can say what I really think. My first reaction is that I prefer ChatGPT.
Cipheron on 28/8/2023 at 02:26
That's because it's not actually "checking" your answers against anything, it's just doing text prediction, and one of the predicted outcomes is to say "that's correct".
There was basically a probabilistic coin-flip after "that's": it can say "correct" or "incorrect". Once it has said "correct", however, that very fact basically forces it to complete the rest of its response as if you were correct.
The text prediction only goes one word ahead. So it's not actually going away and "thinking" about what you said before it starts the response. What it's doing is going 'hmm, what word should I use? "That's" seems like a good choice, yeah, I'll write the word "that's" next.'
The key issue to understand is that it puts the same amount of "brain power" into every next word. So it spent the same amount of processing to come up with "that's" as it did to choose the word "correct" vs "incorrect". To ChatGPT every word is just as important as any other in terms of meaning, but it's clear that determining whether you were correct or incorrect *should* be more important. It's just that the design of ChatGPT doesn't see things that way.
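The coin-flip idea above can be sketched as a toy sampling loop (a hypothetical illustration with made-up probabilities, not the actual model -- real LLMs sample from a learned distribution over thousands of tokens):

```python
import random

def next_token(context):
    """Pick ONE token at a time from a (toy) probability distribution.
    The same amount of compute goes into every choice, whether it's a
    throwaway word or the pivotal 'correct'/'incorrect' decision."""
    if not context:
        return "that's"
    if context[-1] == "that's":
        # The "coin-flip" moment: correct vs incorrect.
        return random.choices(["correct", "incorrect"], weights=[0.6, 0.4])[0]
    # Once a verdict is emitted, the continuation is forced to agree with it.
    return "well" if context[-1] == "correct" else "wrong"

tokens = []
for _ in range(3):
    tokens.append(next_token(tokens))
print(" ".join(tokens))  # either "that's correct well" or "that's incorrect wrong"
```

The point of the sketch: nothing ever "checks" the answer -- the verdict falls out of a sample, and every later token just has to stay consistent with it.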
demagogue on 28/8/2023 at 03:12
I think they thumbed the scale towards certain kinds of responses based on some policy decision, and I think that may be one of them.
mxleader on 28/8/2023 at 03:32
Quote Posted by Cipheron
That's because it's not actually "checking" your answers against anything, it's just doing text prediction, and one of the predicted outcomes is to say "that's correct".
There was basically a probabilistic coin-flip after "that's": it can say "correct" or "incorrect". Once it has said "correct", however, that very fact basically forces it to complete the rest of its response as if you were correct.
The text prediction only goes one word ahead. So it's not actually going away and "thinking" about what you said before it starts the response. What it's doing is going 'hmm, what word should I use? "That's" seems like a good choice, yeah, I'll write the word "that's" next.'
The key issue to understand is that it puts the same amount of "brain power" into every next word. So it spent the same amount of processing to come up with "that's" as it did to choose the word "correct" vs "incorrect". To ChatGPT every word is just as important as any other in terms of meaning, but it's clear that determining whether you were correct or incorrect *should* be more important. It's just that the design of ChatGPT doesn't see things that way.
That would explain it. One of the weirder things too is that it got worse the longer I played trivia. Narrowing the trivia down to a certain subject from random questions made it worse still.
Also, in the second question it asked which US Navy admiral led the Pacific fleet in WWII but then quoted Yamamoto instead of Nimitz (Yamamoto probably only said that in the movie Tora! Tora! Tora!).
Azaran on 1/9/2023 at 18:55
So I now suspect the reason ChatGPT has been dumbed down is to push people to get the paid version. I imagine the good features it used to have were transferred over to the premium tier
mxleader on 2/9/2023 at 00:05
Quote Posted by Azaran
So I now suspect the reason ChatGPT has been dumbed down is to push people to get the paid version. I imagine the good features it used to have were transferred over to the premium tier
That doesn't surprise me if they did something like that. To be fair though, I was finding errors like that very early on.
Cipheron on 2/9/2023 at 05:57
Quote Posted by mxleader
That would explain it. One of the weirder things too is that it got worse the longer I played trivia. Narrowing the trivia down to a certain subject from random questions made it worse still.
Also, in the second question it asked which US Navy admiral led the Pacific fleet in WWII but then quoted Yamamoto instead of Nimitz (Yamamoto probably only said that in the movie Tora! Tora! Tora!).
That's easily explained.
It's mimicking human text instead of thinking up questions and then formatting them. So if you ask for general trivia questions, it's just sampling from texts it was already fed, thus allowing it to leverage the work humans already did in coming up with decent and sensible trivia questions. But if you ask for specific quizzes, then it has limited or no sample data for that, so it has few examples of good curated questions to draw on.
Normally, a human would come up with some interesting fact first, then turn it into a question/answer pair. But ChatGPT just blindly writes a question to start with; coming up with an answer is an afterthought. It's not even thinking about the answer while writing the question. So the questions might superficially look well formatted and grammatical, but no thought has gone into whether an answer even exists, is interesting, or is unambiguous.
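The fact-first approach a human would take can be sketched like this (a toy illustration; the facts and question wordings are just examples I made up):

```python
# Start from a verified fact, then derive the question/answer pair.
# This guarantees an answer exists and is unambiguous -- the opposite
# of writing the question first and hoping an answer falls out.
facts = [
    ("Chester Nimitz", "Which US Navy admiral commanded the Pacific Fleet in WWII?"),
    ("HK-47", "Which assassin droid from KOTOR refers to humans as 'meatbags'?"),
]

def fact_first_question(fact):
    answer, question = fact
    return {"question": question, "answer": answer}

quiz = [fact_first_question(f) for f in facts]
for item in quiz:
    print(item["question"], "->", item["answer"])
```

A question-first generator has no such guarantee: the "answer" step happens after the question is already locked in.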