mxleader on 8/1/2023 at 05:09
Quote Posted by Tocky
I spent the day drawing. With a pencil. Soooooo sort of the opposite of this thread. I might even write someone a letter. With a pen. On paper. From thoughts in my own brain.
Does anyone even remember the joy of getting a hand written letter in the mail? You are looking through all the bills when out of nowhere there is a letter from someone you know. Whoa. Crazy shit. Crazy delightful shit.
It's just another tool in the toolbox, not a replacement for human creativity. I find it useful for bouncing ideas off of and for making lists that help me break through writer's block. It definitely can't replace human creativity at this point, but I can see people getting lazy and trying to use it to replace their own work and creativity.
I should mention that I sent out actual Christmas cards this year and I think that it surprised a lot of people.
Tocky on 8/1/2023 at 06:05
I used to string Christmas cards together in a long line as a decoration during the holidays. I also used to go out and chop down a tree. Now I just pull the little fiber optic tree from a box and plug it in. It's not the same as when there were kids at home. Then you were making memories. Now it's hoping nobody dies and fucks up the holiday. If I die at Christmas then just freeze dry my ass and bury me come April.
But yeah, Kevin is going to get a surprise. Not freeze dried me but a letter like I used to do. Sure we can talk any time we like. It used to cost dearly and now it's just another call with no long distance bill. But back in the day I wrote the hell out of some letters. And one out of nowhere? GTFO. I might even send a homemade CD like I used to. Sentimental shit is great. If you still have a grandparent I bet they would love this stuff. I know I would.
WingedKagouti on 8/1/2023 at 11:55
Quote Posted by Cipheron
The thing about student assessment is a valid concern. But I think the idea that "someone will pass their history exam without studying!" is verging on the moral panic side of the debate, similar to how people were concerned that just being able to "Google the answer" would do the same thing. There are good students and bad students, and the bad students funnily enough are also bad at using tools like Google effectively. The same will probably be true in the AI era.
The main problem I see here is that ChatGPT is reasonably adept at figuring out what a multi-stage task is and providing a reasonably close answer to it, whereas with Google you need to run several searches and then filter the various possible answers yourself.
How each method will turn out will obviously depend on how the original question is posed and what is expected as demonstrated by your earlier game example in this thread.
Cipheron on 9/1/2023 at 10:27
Quote Posted by mxleader
When you chat about a subject like coal mining, which is one of my favorite subjects, it presents a lot of data in a conversational way that makes it more easily digestible. It does sometimes make incorrect statements and you can refute the data. I was chatting about the Peanuts comic strips and talking about Snoopy's siblings in the strips and it claimed that his siblings weren't in the strip but I argued my point and it admitted being wrong.
Great 15 min video for an inkling into how this stuff works:
(https://www.youtube.com/watch?v=gQddtTdmG_8)
As for being wrong, that's necessary for it to have creative capacity at all. An AI built to never be wrong would also be incapable of generating original statements, since for any original statement there's always a non-zero chance that it's actually wrong.
Basically it's using inference and interpolation to create every statement it gives you, and the process by which GPT creates novel correct statements is in fact identical to the process by which it creates novel incorrect statements; you just only notice the process when it gives a "wrong" answer. So there's no easy fix for something like that.
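To make that concrete, here's a toy sketch (nothing like GPT's actual architecture or scale, just the shape of the idea): a model that only knows next-word probabilities will happily emit both true and false sentences by exactly the same sampling step, because "true" isn't a quantity it represents anywhere.

```python
import random

# Toy next-word distributions, as if learned from hypothetical training
# text about Snoopy. Both completions below are fluent word-sequences;
# whether the resulting sentence is TRUE is external to the model.
next_word = {
    "Snoopy's": {"siblings": 0.6, "owner": 0.4},
    "siblings": {"appeared": 1.0},
    "appeared": {"in": 1.0},
    "in": {"the": 1.0},
    "the": {"strip.": 0.5, "cartoons.": 0.5},  # one may be false in reality
}

def sample(start, rng, max_len=8):
    """Generate words by repeatedly sampling from the learned distribution."""
    words = [start]
    while words[-1] in next_word and len(words) < max_len:
        dist = next_word[words[-1]]
        r, acc = rng.random(), 0.0
        for w, p in dist.items():
            acc += p
            if r <= acc:
                words.append(w)
                break
    return " ".join(words)

print(sample("Snoopy's", random.Random(0)))
```

The same loop that produces a correct sentence produces an incorrect one; nothing in the sampling code distinguishes them, which is why the "wrongness" can't be patched out of the generation step itself.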
--
It's the same technology used to create "new" faces. When a face-generator interpolates between "real" faces to make a new face, that is in fact an "untrue" face, since it doesn't correlate to any person that actually exists.
So the sentences about Snoopy are like those fake faces. They're sentences that *could* be true in some reality, just as the faces it generated are people who *could* exist.
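The interpolation idea itself is simple enough to sketch. This is a toy illustration with made-up four-dimensional vectors; a real face generator's latent vectors are learned and much larger, but the blending step has the same form.

```python
# A face generator maps latent vectors to images. Blending the latents
# of two real faces gives a new latent whose decoded face belongs to
# no real person -- an "untrue" face, analogous to a fluent sentence
# that happens to be false.

def interpolate(a, b, t):
    """Linear blend of two latent vectors; t=0 gives a, t=1 gives b."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

latent_a = [0.2, -1.3, 0.7, 0.0]   # stands in for "real face A"
latent_b = [-0.5, 0.9, 0.1, 1.4]   # stands in for "real face B"

# A perfectly valid point in latent space that corresponds to no face
# the model was ever trained on:
new_face_latent = interpolate(latent_a, latent_b, 0.5)
print(new_face_latent)
```

Every point along the blend is equally "valid" to the model, which is exactly why it can't tell a real face (or fact) from a plausible invented one.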
---
One problem is that if anything is "too obvious" we don't generally write that down. So something like GPT merely never gets those facts as input.
One possibility is that there were in fact some cartoon-only characters, and because that was a *notable fact*, it got written down somewhere and ended up in GPT's data. GPT has then inferred that it can combine the tokens "The siblings were in Snoopy" with statements it's seen for other characters along the lines of "Character X was in Snoopy but not in the comic book". It's not dealing with facts, but with abstract word-tokens, and in a lot of similar cases the same inference rules will generate CORRECT inferences. Truth isn't encoded in the system; that's an external quality you assess by comparing the output to the "real world".
So nowhere did anyone expressly tell GPT that, unless specified otherwise, Snoopy characters are by default in the comic books. This may seem obvious if you know the context of Snoopy, but to a machine whose sum total of context is the statements we give it about Snoopy, it's not at all obvious, and if the only time we reference appearances in the comic books is to note the EXCEPTIONS, then we're in fact giving it bad data.
heywood on 9/1/2023 at 15:40
Another thing to keep in mind is that GIGO applies. It's pretty evident that one thing setting ChatGPT apart from previous efforts is the colossal size & breadth of its training data. It looks like they threw everything they could get their hands on into it. Regardless of how good the algorithms are, its answers will only be as correct as the sources it's pulling from.
It's the same risk with web search. But at least with search, the results are displayed in context, which helps us figure out what weight to give to the information. Before ChatGPT can be used for academic or research purposes, it will need to be able to cite the sources it used when compiling its answer. I expect there will be copyright challenges to overcome as well.
Azaran on 9/1/2023 at 18:05
I don't get why people are saying this will kill Google. ChatGPT is not a search engine. It can't pull up images, you can't enter quoted text and ask it to give you a link to the source, it can't pull up websites, etc. Sure, Google will pick answers from random websites about general questions and put them at the top, but otherwise apples and oranges
demagogue on 9/1/2023 at 19:12
There's already a parse engine underneath Google, and ChatGPT's is better, so I agree it would make for a better search-engine parse engine once the rest of the infrastructure is built around it. But I don't think Google is going to be overthrown by it, simply because I can foresee Google building the equivalent tech in-house.
But I think the writing on the wall is you're going to have one master chat engine that handles everything, search engine, running your apps & games, browsing videos and shows, linked up with your house and cars, taking care of little handyman tasks and chores, everything. Whoever gets on the ground level of that tech is going to be the tech behemoth for the next generation.
Pyrian on 9/1/2023 at 20:25
We have much of that already, and people don't want it. They use it to play songs and not much else, and the systems are losing money.
Azaran on 9/1/2023 at 20:53
This might offend some people, but the bottom line is we have too much, and with time we'll have less and less appreciation for simpler things. AI just took that to a new level.
Too much info, too much art, too much music, too many different movies, games and shows.
I remember when innovative things had a huge impact on the world. Now it's news for a day, and the next we move on to the next trend.
We invented a fount of knowledge (composed of the internet, portable devices, social media, and now AI), and instead of sipping from it, we dove in and willingly drowned.
heywood on 9/1/2023 at 22:06
Quote Posted by Azaran
I don't get why people are saying this will kill Google. ChatGPT is not a search engine. It can't pull up images, you can't enter quoted text and ask it to give you a link to the source, it can't pull up websites, etc. Sure, Google will pick answers from random websites about general questions and put them at the top, but otherwise apples and oranges
(https://www.geekwire.com/2023/microsoft-openai-chatgpt-and-bing-the-surprising-way-the-integration-could-work/)
When Microsoft was working on Cortana, they were also working on something they called the Bing Concierge Bot. It was supposed to be a productivity agent that would communicate with the user via text over a conversation/messaging platform, and try to do what a human assistant would do, using the web of course. It went dormant, then in 2019 Microsoft quietly made a $1B investment in OpenAI. The leak last week said that Microsoft has already been working on integrating ChatGPT into the Bing platform.
ChatGPT is a search engine, and a natural language processor, but also a lot more. Google search can give you answers to basic questions, especially ones that are commonly asked. But anything more than a quick question generally requires a human to review the resulting hits for relevance, and compile information from multiple sources to complete a task. The big deal with ChatGPT is not just the breadth of its training data set, but the way it compiles, merges, and adapts the information it was trained on to provide relatively complete answers.
I'll give you an example: recipes. I like to cook and I like variety. I don't like to make the same things the same way over and over. I often find myself with a rough idea of what I want and then hit Google search. I get back pages and pages of recipes, most of them redundant. I have to spend time skimming each recipe looking for what's unique about it (if anything) and how well it suits my intentions. While looking them over, I'll rough out a recipe with pen & paper, taking the relevant bits I like from recipes I find interesting. For a complicated or large meal, I could be at it for an hour. With ChatGPT, I can explain my recipe goals and constraints, and skip all that scrolling and skimming and note taking. It doesn't always give me a usable answer, because it doesn't understand flavor balance, but I've cooked two of its answers so far with only minor modifications.
Keep in mind too that ChatGPT is just a public test, trained on a static text corpus; it hasn't been hooked up to the live web. I assume that's happening with the Microsoft project.
Their intention is to go beyond just providing information, including taking action for the user. Suppose I need to make a business trip to the LA area. Right now, I'd start by pulling up Google Maps to see where the work location is and which airport is going to get me there the quickest, avoiding traffic jams if possible. Then I'd hit the travel search engines looking for flights, a hotel, a car. Before picking a hotel, I'd go back to Google to see what restaurants are in and/or around it. I'm always looking for good SE Asian (especially Thai) and Southern Italian restaurants. And then I'd book it all, and monitor my email for confirmations. Where ChatGPT is going, all of that will be done for me by my agent in a few minutes of messaging back and forth while I'm multitasking.
I'd love to have my virtual agent for organizing big meetings too. The reason why major companies across all industries are making investments in machine learning is not to produce individualized creative works for consumers. It's for the productivity improvement. The last time we saw a rapid exponential increase in non-farm labor productivity was during the first 10 years of the web. Integrating AI/ML into ordinary business is going to be the next big one, and each employee having their own virtual admin is just one of the ways.