Starker on 22/1/2023 at 15:14
With jokes, I think the biggest problem is that an AI right now, no matter how well trained, cannot consider context and cannot do anything beyond what it already knows. It doesn't so much create as copy and recombine existing knowledge, kind of like a very sophisticated autocomplete feature. So when it comes to jokes, it can mimic a style by looking at existing material on Steve Jobs and use that to make fun of him the same way other people have, but it cannot really use a technoshamanistic figure like Jobs or Musk as a metaphor alluding to societal problems, because it doesn't have an "understanding" of either the person or the problems. It can only use what other people have written about them, because it cannot infer anything.
I recently happened to watch a conversation with Jon Favreau, one of Obama's speechwriters, and one of the points he made was that he didn't really know how to write speeches for Obama until he got to know him as a person. He could mimic the style, sure, but it would have been a ChatGPT level of failure without actually getting to know Obama and without Obama's involvement, collaboration, and Obama actually writing parts of the speech himself. Because good writing is about more than a particular style or structure or expressing an idea -- it's also about emotion and having a unique point of view and a set of experiences that nobody in the world but the writer can access.
WingedKagouti on 22/1/2023 at 15:27
For comedy to work, you need to understand the culture you're presenting the intended comedy to. And the technology behind current "AI" is extremely far from understanding cultural context; it's much closer to a search engine with a limited selection of context-sensitive pointers.
Cipheron on 23/1/2023 at 01:56
While some people are saying that ChatGPT will level out skills in essay writing and the like, with weak students able to push a button and get an elite-level essay, from what I'm seeing it's anything but that. There's going to be a digital divide based on people's ability to use this technology.
To put it bluntly, some people are thick as shit, type stupid stuff into it, and aren't very good at layering logic and reasoning in ways that get around its limitations.
People are actually complaining that if they ask ChatGPT about fictional things, ChatGPT responds by pointing out that the thing they asked about is fictional so the question cannot be answered. At this point, the moron typing this shit in gives up. "ChatGPT is broken, and you can't get it to output the thing you want! give me the magic word i can add to my prompt that'll make ChatGPT answer the question!1!"
However, here's an example of the workaround, which took me like 5 seconds to think up, and shows how this is 110% purely a context and framing issue, and not about the topic you asked about: (https://pastebin.com/pXa9hpi2)
tl;dr
"what would happen if a zombie attacked a minotaur" <= gives shitty answers, even with cajoling ChatGPT with follow up prompts.
"write an account, in the style of a non-fiction documentary, about the relative battle strength of a zombie and a minotaur." gives an amazing answer, first attempt.
Starker on 23/1/2023 at 03:18
Someone already wrote a program that can pretty accurately identify texts written by ChatGPT, so I don't think it's going to be that big of a problem: (https://www.npr.org/2023/01/09/1147549845/gptzero-ai-chatgpt-edward-tian-plagiarism)
Also, I personally don't think English-class essays, and American-style classroom essays in particular, are that amazing to begin with. They tend to have the sort of formulaic boilerplate structure that invites the most boring kind of narrow, simple argument. I mean, the fact that you can automate this is in and of itself kind of an indictment.
Cipheron on 23/1/2023 at 03:58
Actually, I wasn't concerned with any cheating aspect: I was focusing on the idea that there will be AI haves and have-nots, and that mere "access" to the technology only tells half the story, since a lot of people just plain suck at using it.
I'm noticing that more and more in the commentary surrounding the thing. Some people call it a "glorified Google" and sum up talking to it as "asking it questions". These are the same people who keep complaining they can't get the content they're after -- mostly because they suck at how they word it and can't think beyond the most straightforward ways to request stuff. All of this shows they have no idea what they have or what it can do. Some people literally cannot think outside the box of talking to the machine in a question-and-answer format, then they go online and write Reddit posts complaining that that's all it can do, and that it sucks.
That's where my example of framing a request comes in: asking it to write fiction, but in a non-fiction style about a fictional thing, gets it to write about something fictional as if it were real. A lot of people will get what I did there, but I've met enough people to know there are a lot of morons out there who still wouldn't get how that works even if you explained it to them.
Sulphur on 23/1/2023 at 05:04
I don't think that even takes any particular amount of smarts. I asked ChatGPT to pretend it was Hesiod and it demurred, so I merely rephrased the request in a way it didn't have a problem with, and it took on the affect of the ancient Greek poet; that session was very entertaining, if imperfect. Essentially, it's about how much bare-minimum effort you're willing to put into working with the program at this point.
Cipheron on 23/1/2023 at 06:15
Quote Posted by Sulphur
I don't think that even takes any particular amount of smarts. I asked ChatGPT to pretend it was Hesiod and it demurred, so I merely rephrased the request in a way it didn't have a problem with, and it took on the affect of the ancient Greek poet; that session was very entertaining, if imperfect. Essentially, it's about how much bare-minimum effort you're willing to put into working with the program at this point.
You're coming at that from the high end of the Dunning-Kruger scale: you're actually overestimating how competent the average person is at this stuff. Additional effort doesn't make up for having no clue to start with. This stuff requires you to be able to process the logical structure of language and understand things like nesting concepts.
Think about what you just wrote. You asked ChatGPT to pretend to be Hesiod, that didn't work, so you modified the context and framing of your prompt based on the feedback you got from ChatGPT. Then you said that this didn't need smarts, just a bit of effort. Well, you're wrong there. A large chunk of society wouldn't even have gotten as far as realizing this was a thing they could do.
I realized this from seeing some of the requests. These people weren't asking for advice on how to *restructure* their prompts in the way that you and I are discussing. Instead, they're asking for random keywords to add to their existing prompts, in the mistaken belief that sprinkling some magical keywords in there will somehow cure whatever defects their prompts have. This is Dunning-Kruger stuff, definitely.
Then you have the people who claim that ChatGPT only tells them redundant, useless or dumb shit. Well, guess how much imagination these same people have? It's Garbage In, Garbage Out, but they're too dumb and self-centered to get that. Basically, they only have the brains to ask the types of questions a dumb person can ask, and then get bored with ChatGPT because its responses are bad.
As for the AI digital divide: have you ever tried to help out someone who was really, really thick? Notice how nothing gets through to them, how they latch onto the wrong thing and have inappropriate reactions, failing to see the bigger picture or to generalize concepts. Now picture those people trying to get along in the AI-driven world. They're going to be just as fucking impossible to help even with a super-intelligent version of ChatGPT, let alone the existing one.
Starker on 23/1/2023 at 12:18
I think if ChatGPT ends up being usable only by a handful of r/iamverysmart users, it's never going to find widespread adoption, because it takes quite a bit of money to train it.
Cipheron on 23/1/2023 at 14:09
Quote Posted by Starker
I think if ChatGPT ends up being usable only by a handful of r/iamverysmart users, it's never going to find widespread adoption, because it takes quite a bit of money to train it.
That's not what I'm saying. Saying "usable" makes it sound like there's some binary cut-off. I'm saying there will be grades of competency. Using this is going to be a skill, which should be clear: since the input requires you to manipulate real English, some people will obviously get along with it much more quickly than others.
Also, if there isn't widespread adoption, that'll be because most people can't adapt and work out how to give it good input to get good results. The majority of people haven't even heard of ChatGPT yet. If you sat most of them down with it, they would indeed flounder, not knowing where to start or what it's capable of.
heywood on 23/1/2023 at 14:10
Quote Posted by Cipheron
You're coming at that from the high end of the Dunning-Kruger scale: you're actually overestimating how competent the average person is at this stuff. Additional effort doesn't make up for having no clue to start with. This stuff requires you to be able to process the logical structure of language and understand things like nesting concepts.
Think about what you just wrote. You asked ChatGPT to pretend to be Hesiod, that didn't work, so you modified the context and framing of your prompt based on the feedback you got from ChatGPT. Then you said that this didn't need smarts, just a bit of effort. Well, you're wrong there. A large chunk of society wouldn't even have gotten as far as realizing this was a thing they could do.
I realized this from seeing some of the requests. These people weren't asking for advice on how to *restructure* their prompts in the way that you and I are discussing. Instead, they're asking for random keywords to add to their existing prompts, in the mistaken belief that sprinkling some magical keywords in there will somehow cure whatever defects their prompts have. This is Dunning-Kruger stuff, definitely.
People are just trying a strategy that works for them with search engines, where they start out with a relatively broad query, see what they get back, and then specify additional nouns and adjectives to narrow the results. I don't see anything Dunning-Kruger about that.
Quote:
Then you have the people who claim that ChatGPT only tells them redundant, useless or dumb shit. Well, guess how much imagination these same people have? It's Garbage In, Garbage Out, but they're too dumb and self-centered to get that. Basically, they only have the brains to ask the types of questions a dumb person can ask, and then get bored with ChatGPT because its responses are bad.
As for the AI digital divide: have you ever tried to help out someone who was really, really thick? Notice how nothing gets through to them, how they latch onto the wrong thing and have inappropriate reactions, failing to see the bigger picture or to generalize concepts. Now picture those people trying to get along in the AI-driven world. They're going to be just as fucking impossible to help even with a super-intelligent version of ChatGPT, let alone the existing one.
OK Mr 31337
The whole point of using machine learning to allow people to communicate with computers in natural languages is to get the machine to meet people on their terms, training the machine rather than the masses. Anybody should be able to get good results from it. If you have to trick it to get the most straightforward result, it hasn't met its goal yet. In order for this technology to have value beyond novelty, you have to be able to get something useful done with it in less time than it would have taken you without it.