Starker on 4/2/2023 at 13:53
CNET has been using ChatGPT to write some of its articles and there was some minor brouhaha recently when people caught some mistakes in them: https://gizmodo.com/cnet-ai-chatgpt-news-robot-1849996151
Now someone just has to make a bot that reads all this AI-generated content and we are all set.
demagogue on 4/2/2023 at 14:40
I saw a swarm of videos on YouTube, like whole AI-generated series of "Whatever happened to the Actor __?" or "Musician __?" getting tens of thousands of views apiece, and a lot of people are still talking about the person in the comments. (For all I know, the views and comments may be AI-driven too.) But I think certain topics are always going to grab people no matter what. AI live in a society now.
Aja on 4/2/2023 at 16:32
At work I sometimes have to write macros in VBA (Visual Basic for Office apps, basically). I have no programming background, and most of what I write is either modifications to existing code that came before me or else judicious use of the record-macro function plus StackExchange. I usually get what I need working, but the macros are often clumsy and slow. No one in the forums ever seems to have quite my exact problems, and I'm not always knowledgeable enough to correctly adapt their suggestions.
So I asked ChatGPT for some help. First I showed it my code: a macro designed to read a spreadsheet for a list of correctly spelled and capitalized words, then scan through a document, applying the correct spellings and marking them with a special character style so the user knows which items have been checked. ChatGPT correctly (more or less) stated what the code did. It even anticipated a problem I hadn't asked about: my code was hardcoded to read from cell A1 down to cell A6000, and it suggested ending the range at the last filled cell instead, which is functionality I wanted but didn't know how to integrate. It suggested some code, and when I asked it to integrate its suggestion into my macro, it did so, no problem.
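For anyone curious, the "end at the last filled cell" fix Aja describes usually looks something like this in VBA. This is a minimal sketch, not Aja's actual macro; the sheet name, sub name, and the Debug.Print placeholder are all hypothetical.

```vba
' Hypothetical sketch: read column A only down to the last filled cell,
' instead of a hardcoded range like A1:A6000.
Sub ReadWordList()
    Dim ws As Worksheet
    Dim lastRow As Long
    Dim i As Long

    Set ws = ThisWorkbook.Worksheets("WordList") ' assumed sheet name

    ' Jump from the bottom of column A up to the last non-empty cell,
    ' the VBA equivalent of pressing Ctrl+Up from the last row.
    lastRow = ws.Cells(ws.Rows.Count, "A").End(xlUp).Row

    For i = 1 To lastRow
        Debug.Print ws.Cells(i, "A").Value ' placeholder for the real processing
    Next i
End Sub
```

The `End(xlUp)` trick assumes the column has no trailing blanks you care about; if the list can contain gaps, looping until two consecutive empty cells (or using `UsedRange`) is a common alternative.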
It also suggested some ways to optimize the code, which didn't work out quite as well. But every time an error popped up, I could ask what the error was, it would explain, and if I didn't understand the explanation, I could ask for clarification. It's the closest I've ever felt to being Geordi La Forge talking to the computer to solve an engineering problem.
The problem came when GPT crashed and I had to start the chat over again. I asked the same questions, but it didn't always give me the same suggestions. They were close, but for example it didn't make the helpful suggestion about cell ranges the second time. If I asked it specifically, it would tell me, but the overall quality of the advice seemed to vary. It also sometimes got confused about whether we were talking about my original code or code it had suggested, and I occasionally had to remind it of information it had given me earlier.
So overall it's obviously not perfect, but given the choice of trying to Google my way through a programming problem and facing years' worth of forum threads that never quite cover what I want and where I can't ask for any further help, using ChatGPT felt less stressful and more productive. For a dummy like me it's surprisingly useful.
Anarchic Fox on 4/2/2023 at 17:55
Quote Posted by Aja
So overall it's obviously not perfect, but given the choice of trying to Google my way through a programming problem and facing years' worth of forum threads that never quite cover what I want and where I can't ask for any further help, using ChatGPT felt less stressful and more productive. For a dummy like me it's surprisingly useful.
That's an application I can feel excited about.
I'm biased, because my first exposure to this wave of technology was DALL-E and its ilk, which were unethical from the start (due to their stolen training sets) and quickly put to harmful use. But ChatGPT doesn't seem to share their malfeasance.
demagogue on 4/2/2023 at 20:33
Quote Posted by Aja
It also sometimes got confused whether we were talking about my original code or code it suggested, and I had to remind it sometimes about previous info it had given me.
This is an interesting thing I noticed in the Seinfeld spoof. The characters sometimes flipped roles in the middle of a conversation, with a person answering their own question or reacting to their own answer. This points to one of the big things LLMs are missing that I think is fundamental to human action: a stable representation of the world and of the current context in which they're acting. They try to jerry-rig it into the statistical model, and it works to an extent; but the statistical properties of discourse can only carry so much of that before you need a proper episodic memory.
heywood on 4/2/2023 at 21:16
Quote Posted by Starker
Now someone just has to make a bot that reads all this AI-generated content and we are all set.
No need to make them. We have millions of them here.
Cipheron on 5/2/2023 at 04:14
https://www.vice.com/en/article/k7bdmv/judge-used-chatgpt-to-make-court-decision
Quote:
A Judge Just Used ChatGPT to Make a Court Decision
A judge in Colombia used ChatGPT to make a court ruling, in what is apparently the first time a legal decision has been made with the help of an AI text generator—or at least, the first time we know about it.
Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, said he used the AI tool to pose legal questions about the case and included its responses in his decision, according to a court document dated January 30, 2023.
"The arguments for this decision will be determined in line with the use of artificial intelligence (AI),” Garcia wrote in the decision, which was translated from Spanish. “Accordingly, we entered parts of the legal questions posed in these proceedings."
"The purpose of including these AI-produced texts is in no way to replace the judge's decision,” he added. “What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI.”
The case involved a dispute with a health insurance company over whether an autistic child should receive coverage for medical treatment. According to the court document, the legal questions entered into the AI tool included “Is an autistic minor exonerated from paying fees for their therapies?” and “Has the jurisprudence of the constitutional court made favorable decisions in similar cases?”
Garcia included the chatbot's full responses in the decision, apparently marking the first time a judge has admitted to doing so. The judge also included his own insights into applicable legal precedents, and said the AI was used to "extend the arguments of the adopted decision." After detailing the exchanges with the AI, the judge then adopts its responses and his own legal arguments as grounds for its decision.
Colombian law does not forbid the use of AI in court decisions, but systems like ChatGPT are known for giving answers that are biased, discriminatory, or just plain wrong. This is because the language model holds no actual “understanding” of the text—it merely synthesizes sentences based on probability from the millions of examples used to train the system.
ChatGPT's creators, OpenAI, have implemented filters to eliminate some of the more problematic responses. But the developers warn that the tool still has significant limitations and should not be used for consequential decision-making.
While the case is apparently the first time a judge has admitted to using an AI text generator like ChatGPT, some courts have—controversially—already begun using automated decision-making tools in determining sentencing or whether criminal defendants are released on bail. The use of these systems in courts has been heavily criticized by AI ethicists, who point out that they regularly reinforce racist and sexist stereotypes and amplify pre-existing forms of inequality.
Although the Colombian court filing indicates that the AI was mostly used to speed up drafting a decision, and that its responses were fact-checked, it's likely a sign that more is on the way.
demagogue on 6/2/2023 at 18:35
FFS, whose side are their rules on? It's telling Dave Chappelle & company that nobody's laughing at their whole anti-LGBT shtick. They're the ones who are supposed to be upset. How good are your anti-hate rules when your enforcement takes their side?
Or is Twitch run by literal nazis that take down content for wokeness and I never got that memo? I know that's the culture of online shooters that keep them in business, but seems like bad business.
Edit: Okay, it seems it's a Poe's Law issue. Something is hate speech if the dumbest person in the room can't tell whether it's critique or actual hate, and they make the rules for the lowest common denominator. I guess. I thought it was a pretty spot-on takedown of anti-LGBT shtick as comedy.
Edit2: I think it's critique because, whatever text the template is coming from, it's not text that people making LGBT jokes would ever be making, but it's text the people meta-commenting on it would be making, and the vast majority of that would be the critics of it. Not that there's anything wrong with that.