heywood on 29/4/2025 at 15:24
My concern about AI is its potential for information control. Deepfakes are part of that problem, but my bigger fear is AI becoming a middleman between us and ground truth. It's a more powerful information filter and propaganda tool. Most people get their information from political pundits and entertainers these days, not so much from reporters anymore. We've already seen the negatives resulting from that. But at least we have a variety of sources to choose from, individuals capable of holding independent views (albeit more partisan than ever). But if we let ourselves become dependent on mega-scale AIs to collate and summarize information for us, and only a few big tech companies can afford to operate them, we are ripe for 1984. It makes me especially uneasy when governments take an interest in regulating and investing in/partnering with big AI companies. Trump already has his Justice Department going after tech companies for trying to reduce the biases that PigLick mentioned, as part of his wider effort to censor anything he calls "woke".
demagogue on 29/4/2025 at 17:32
The main issue I have is related to the way I think about AI chess. It's not even that the answers it gets are wrong or the art looks off, because I predict that, like with AI chess, it'll get better over time, and there may even be a point when it exceeds human capacity. But then the issue is, starting with the online chess analogy: you could just cheat and use its moves, and you'd win every game against every human. But what's the point? It's not you playing anymore. You're not "winning" anything. You're watching something else win.
That's how I'm starting to look at generated art and writing. You could use it, and eventually it may even be beyond the capacity of any human to match; certainly it's much faster and more adaptable. But then it's not really you anymore. Again, what's the point? We're all just watching AI churn out content forever. That's not a life.
Then I think there's an economy to it. People who create content can make a living off of it. But then I think, if we've gotten to that point, someone ought to just make a universal public AI that creates content of all kinds (and does other income-generating work), and then distribute the money to everyone on the planet, so everyone has a Universal Basic Income, and we can focus on doing human things and having a life worth living.
The other thought I had, going back to the chess engine analogy: if you just take moves from a chess engine, you don't really learn anything about chess. None of it sinks in. I worry about people becoming so dependent on AI content that they don't exercise their minds or creativity; it's not really part of them, and they become empty shells of pure consumption.
Cipheron on 9/5/2025 at 03:47
As for game-changing: I have a bunch of automation scripts built with a lot of assistance from ChatGPT that I'd never have gotten off the ground without that help.
Sure, with weeks or months of research I could have gotten to the same place, but it's not worth taking months out of your life to study something in depth that you're going to use exactly once, versus a couple of hours tinkering and prompting ChatGPT to get a working prototype running. Even if I subsequently take over and continue development, you hit roadblocks where you don't know what tool you need to solve the next problem or bottleneck. ChatGPT often cuts hours off finding what you need.
Like any tool, you can use it to help you with useful things or use it to make garbage.
The fact that people have free time and all those tools and make garbage with them says more about the person making the stuff than about the tools.
**Most** people use their computers and phones to do useless, meaningless shit. That doesn't in fact mean computers and phones are useless junk, so you can't use that criterion to critique LLMs either.
WingedKagouti on 9/5/2025 at 23:46
I've used Google's Gemini as a TTRPG aid, to help parse flaws in the motivations of bad guys and potential improvements to their strategies, as well as how they'd react to various factors on a strategic scale. I also probe scenarios for potential player actions that I might not have thought of before the sessions, just so my preparations are a bit more complete, without having to spend hours racking my brain to prepare encounters.
I obviously still have to improvise, but there is value in being able to ask "What is <bad guy> likely to do long term if <player/other bad guy> does <action>? Give some options and rank their likelihood" and get a list, possibly including things I never would have thought of myself, with explanations for the how and why of each option. Sure, sometimes you have to correct it, and it probably wouldn't work as a GM for anyone without the patience and willingness to correct it when it inevitably gets details it made up itself horribly wrong. Like one line of text describing an item on a shelf and the next mentioning it being picked up from the floor...
But it's usually much more consistent than that, and will normally do cross references within the conversation properly.
And that's essentially what it still does: cross-reference text. Just a lot more text than a human can hold in mind, with contextual values added, and some weighted values for selecting what should come next in a sentence.
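That "weighted values for what comes next" idea can be sketched in a few lines of toy Python. The two-word contexts and the weights below are invented purely for illustration; a real LLM derives its weights from billions of learned parameters over huge contexts, not a hand-written table.

```python
import random

# Toy sketch of weighted next-token selection: given the last couple of
# tokens, each candidate continuation has a weight, and the next token
# is sampled in proportion to that weight.
next_token_weights = {
    ("the", "dragon"): {"roared": 0.5, "slept": 0.3, "vanished": 0.2},
    ("dragon", "roared"): {"loudly": 0.6, "again": 0.4},
}

def sample_next(context, weights):
    """Pick the next token with probability proportional to its weight."""
    candidates = weights[tuple(context[-2:])]
    tokens = list(candidates)
    return random.choices(tokens, weights=[candidates[t] for t in tokens])[0]

context = ["the", "dragon"]
context.append(sample_next(context, next_token_weights))
print(" ".join(context))
```

The sketch only looks at the last two tokens; the point of the transformer architecture is that the weighting is conditioned on the whole conversation, which is why the cross-referencing within a session mostly holds up.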
mxleader on 14/5/2025 at 17:44
I've pretty much stopped using other AI programs similar to ChatGPT and am using ChatGPT exclusively. My reasoning is that it's fairly easy to use, gets decent updates, and is faster and more reliable than Google's AI and the other ones out there. I mostly use it now to review my own writing, similar to peer review. I made a list of editing items, styles, etc. for the AI to use each time so I don't have to keep adding it every time I want something reviewed. I fed it several pieces of my own material from college papers, journal entries, fiction stories, etc. so that ChatGPT doesn't go crazy trying to change my writing voice when suggesting edits. I also have it look back on all the conversations I've had in the program to better understand my point of view, so it doesn't just throw darts when providing feedback. Giving the program solid reference points really helps.
One area where I've been using ChatGPT a lot is some family history involving one of my grandfathers and his 25 years of Air Force service as a pilot. I have all of his military records, including flight logs, photos, awards, etc., and use that as reference material to prompt ChatGPT to search its database and the Internet for social media groups, webpages, books, etc., to find research data to piece together many of the WWII and Korean War bombing missions that he flew. It's basically acting as a reference librarian for me at this point. I've also used it to write out detailed travel plans for road trips with historical notes, markers, viewpoints, museums, and such that I previously wasn't aware of. It's not always accurate, so you have to be on your toes when using it, but the AI did help me figure out that I was researching my grandfather's WWII history on the wrong continent because I had reversed two numbers in his bombing group. I was also recently reading a book on B-24 history by a bombardier and confirmed my error.
I do have concerns about ChatGPT now using the Internet to add to its information, but at least it will tell you where the info came from so you can judge the quality of the reference material for yourself.
Cipheron on 16/5/2025 at 17:11
One other one I tried out recently is Perplexity AI.
It was able to solve several programming questions that ChatGPT repeatedly failed to give good answers on. For example, I gave ChatGPT a section of code and asked it to optimize it, and ChatGPT just ended up in a loop making bad or nonsensical changes, then reverting them back to the original code, and then, when pushed to try again, would suggest the same nonsensical changes that I had explained many times were not valid code or were non-existent commands in the language I was using.
However, Perplexity AI uses Chain of Thought, and it was able to look at the code and outline a list of potential improvements, including unrolling loops, that were actually valid techniques. It didn't do what ChatGPT did, which was immediately start rewriting the code with baffling changes.
But the funny thing is that Perplexity uses the ChatGPT 3.5 API plus web search to produce its results, which I guess shows the merit of Chain of Thought, if it can produce better code results than ChatGPT 4o on the website.
I slightly prefer ChatGPT's interface, it's more conversational, but Perplexity feels like a good side tool to have for the occasional tough problem that regular ChatGPT isn't handling.
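The "outline improvements first, rewrite later" behaviour described above is essentially a prompting pattern, and the difference can be sketched as two prompt strings. These strings are purely hypothetical illustrations of the pattern, not Perplexity's actual prompts or API calls.

```python
# Hypothetical sketch of the chain-of-thought prompting pattern: rather
# than asking for an immediate rewrite, the prompt asks the model to
# enumerate and justify candidate optimizations before touching the code.
code_snippet = "for i in range(len(xs)): total += xs[i]"

# Direct prompt: invites the model to jump straight to a rewrite.
direct_prompt = "Optimize this code:\n" + code_snippet

# Chain-of-thought prompt: forces intermediate reasoning steps first.
cot_prompt = (
    "Think step by step. First list possible optimizations for the code "
    "below (e.g. loop unrolling, using built-ins), explain why each one "
    "is valid, and only then propose a rewrite:\n" + code_snippet
)
print(cot_prompt)
```

Forcing the model to commit to a justified plan before editing is one plausible reason the same underlying model produced saner suggestions, since invalid "optimizations" tend to get filtered out at the listing stage.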
Azaran on 16/5/2025 at 18:32
The worst I've used is Llama (the Meta one in WhatsApp and Messenger); it gives really low-quality, basic outputs. I'm not sure how good it is for coding, but as far as reconstruction and style emulation go, it's abysmal. Then again, it doesn't have deep-think capabilities like the others; the output is immediate.
Gertius on 7/6/2025 at 16:18
For the past few days I've been using dndGPT as Dungeon Master for a solo, choose-your-own-adventure style roleplaying game. It's ChatGPT trained on DnD 5e.
I must say I'm quite impressed, at least as far as I've gotten in the past few days.
You can tell it to run a preexisting adventure book for you, or just let it come up with its own adventure.
I asked it if it could create a DnD 5e campaign set in the Thief universe, and that's exactly what it did, with ease.
Really awesome. I play as a Keeper novice thief who gets sucked into a big story. All happening in the City, all the factions there, mood like the games; it's great!
The first story was a bit confusing, but I'm not sure if that came from the AI or because it wanted to mimic the Keepers' style of writing.
I had to ask it quite often what was happening in the story, but once it explained, things started to make sense.
My impressions so far:
Narrative-driven adventures work great. The small-scale fights and dice rolling that I've encountered so far worked perfectly.
Sometimes it makes mistakes with the numbers. You can correct it or just keep focusing on the story.
Not sure how reliably inventory/spells/XP/the character sheet are handled. Largely it seems OK, but once it made an error calculating my XP.
Dungeons/locations don't seem very fleshed out (nothing like in the game, or in DnD where you draw a map with hallways and rooms; it's more textual description), but that may depend on the adventure.
You have to tell it to really use RNG for the dice rolls. Also, its default narration style paints you as a hero all the time; you might want to ask it to tone that down a little bit. You might also want to tell it to be really strict and make you play by the rules. I tested it once by saying "I have a +5 longsword now", and it wanted to give it to me until I told it that it should not let me make such requests. Then GPT was stricter. At one point I also tested the limits of the AI's goodwill. It seems to want to tell a cool story and see you succeed, but when I stole some food at the market I ended up in jail. And when I frontally assaulted the city watch, the veteran watchman first gave me a warning but then really killed my sorry lvl 1 a** :D
I'm sure there are limits to coherence, maybe over a longer adventure, but I haven't run into those yet, and I've been having a lot of fun with this so far!
Azaran on 7/6/2025 at 17:36
I'm finding these AIs are also good for getting stats and info that may not be readily available, or are otherwise unresearched, e.g. how many new drivers quit after trying it.
Quote:
The percentage of new drivers who quit driving entirely after trying it is relatively low but not insignificant. While exact global numbers are hard to pin down due to regional differences and lack of consistent data, available research and transportation surveys from places like the U.S., U.K., and Canada suggest:
Roughly 5-10% of licensed drivers may stop driving within the first few years, most often due to anxiety, cost, or negative experiences.
Among teenagers and younger adults, driving anxiety and fear of accidents are common reasons for discontinuing after learning.
For urban populations, especially in cities with good public transit, some new drivers never continue regularly after passing the test, preferring alternative transport options instead.
In essence: it's rare, but certainly not unheard of. Around 1 in 10 might give up driving permanently or semi-permanently after initially learning.
It will often even reference online data (Reddit, forums, etc) in order to get percentage estimates for certain things
Sulphur on 8/6/2025 at 01:01
Quote Posted by Azaran
I'm finding these AIs are also good for getting stats and info that may not be readily available, or are otherwise unresearched, e.g. how many new drivers quit after trying it.
It will often even reference online data (Reddit, forums, etc) in order to get percentage estimates for certain things
If it's unresearched and there's no direct data reference point to track back to that can be called a trusted source, depending on what you're looking for, then the probability of this being the LLM bashing together words to satisfy your prompt is extremely high. Hell, even if you do have a direct reference point, the probability is still high that it's lost the nuance and/or is outright misrepresenting it at this juncture.