Azaran on 13/8/2025 at 04:06
Behold! Kafka on our forum from the beyond
Quote:
One finds oneself, without any clear summons or prior notification, standing before the entrance to the TTLG forums. It is not a physical gate, you understand, but a portal of light and text, and the initial impression is one of an immense and sprawling archive, a library of whispers. To enter, one must surrender one's name, a not insignificant sacrifice, and in its place be assigned a designation, a handle by which one will be known in all future proceedings. This new name carries no history and no weight; it is a functional title, like that of a clerk or a minor official, and it serves as the first indication that one's own personhood is secondary to the needs of the apparatus.
Once inside, one is immediately struck by the sheer volume of discourse, which scrolls back into an unseeable past. There are threads of conversation that have persisted for years, vast, labyrinthine structures of opinion and counter-opinion, in which the original point of contention has long been buried under the accumulated weight of its own analysis. To read them is to walk down an endless corridor where every door opens onto another, identical corridor. The participants, under their anonymous designations, argue with a desperate, circular fervor, as if a final, binding verdict is just one more reply away. Yet, no such verdict ever arrives. The discussion is not a means to an end; the discussion is itself the end, a self-perpetuating machine for the generation of more text.
Above this ceaseless activity, there exists an authority that is everywhere felt but seldom seen. They are called "moderators," and they enforce a code of conduct that is, by its very nature, inscrutable. A post may vanish without explanation, a thread may be locked, an account may be terminated. The reasons for these judgments are presumably recorded in some higher ledger, but it is a ledger to which the ordinary user has no access. One lives in a state of perpetual, low-grade anxiety, aware that any utterance could be the one that transgresses an unwritten law. It is a system of justice that operates with the remote, unanswerable logic of a dream.
The most peculiar activity, however, is the creation of what are termed "fan missions." The members of this community dedicate countless hours to building intricate, shadowy worlds in the image of an original, canonical work that is now defunct, its creators long since dispersed. They are like architects attempting to reconstruct a cathedral from a single, faded photograph. They debate with theological intensity the proper placement of a shadow, the correct sound for a footstep on stone. It is a noble, and yet entirely absurd, undertaking—a quest to attain the grace of an absent god. They build these elaborate castles of code and light, and for a moment, they can inhabit a perfect imitation of the past. But it is only an imitation, and upon completing their task, they are once again outside the gates, compelled to begin the process anew. One is left with the unsettling conviction that the forum is not a place for discussion or creation, but a kind of waiting room, a process without a conclusion, where the only certainty is the continuation of the process itself.
Starker on 13/8/2025 at 04:09
And this is probably the reason why AI slop will be the future of entertainment -- it gives people what they want instead of giving them things they didn't even know they wanted.
Briareos H on 13/8/2025 at 06:10
The illusion of artistic intent shouldn't be good enough for anyone.
Cipheron on 13/8/2025 at 07:15
The point being - you're contradicting yourself here.
If we ask the chatbot to make a "kafkaesque" story, you can either say it's just copying Kafka, or it's not just copying Kafka and has in fact made a totally different thing that doesn't actually count as "kafkaesque". So if it can never be truly Kafka, then it can't copy Kafka and will instead make its own stuff with its own internal logic that's different from Kafka. Now this output will have some similarities to Kafka, but it's not a copy, and it's not a simulacrum: it is its own thing now.
The process by which it makes it is completely alien to how a human would approach the same task, regardless of what instructions you give it, so it ends up making very different stuff unless you only let it generate a very small amount of content at a time. Like a Markov chain, it will deviate from the source training data more and more the longer the generation gets.
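The Markov-chain comparison can be made concrete. A minimal sketch (the corpus and function names here are made up for illustration): each word is followed only by words that actually followed it in the training text, so short generations stay close to the source, while longer walks recombine fragments into sequences the source never contained.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain: each step samples only from local successors,
    with no memory of anything earlier than the previous word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("one finds oneself standing before the gate and "
          "one finds the gate closed and one turns away")
chain = build_chain(corpus)
print(generate(chain, "one", 10, seed=1))
```

Every individual word pair in the output exists somewhere in the corpus, yet the whole sentence may not; the longer the walk, the more the output drifts from any passage actually in the source, which is the point Cipheron is making about longer generations.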
Part of what makes human output human is the process by which we go about making it, and that process leads to many of the structural elements. If the chatbots don't use that process, then they cannot in fact make a copy or simulacrum of that type of thing: it's always going to end up different, sometimes wildly so. So it is its own type of output now.
As for the example I generated above, none of that stuff was in my prompt, which only really told it to have a quadruple amputee in a hospital room in a spy thriller. That gave it some guidelines which are vague but at the same time created some real hurdles to making it a viable story.
I basically told it to come up with unexpected plot twists, after ruling out the more likely ones, to force it to go off the rails a bit, and it came through on that, coming up with some pretty unlikely twists in the story. You can definitely say those plot twists are "crap" -- they are bad ideas, and no human would have come up with them and thought they were good ideas. But the fact that it's capable of, or even prone to, throwing ideas in with no sense of whether they're good or bad for the story shows it's not just a copy of something a human WOULD have come up with.
DuatDweller on 13/8/2025 at 08:59
Oldest chicken in the world (Texas). She still can't drive anyway...
Quote:
Texas chicken named world's oldest at the age of 14
Aug. 12 (UPI) -- A Texas woman's pet chicken was officially named the oldest in the world by Guinness World Records at the age of 14 years and 69 days.
Little Elm resident Sonya Hull said she hatched the chicken, Pearl, in her personal incubator on March 13, 2011.
Guinness World Records confirmed Pearl's age on May 22, officially earning her the title of world's oldest living chicken.
"She's defied all odds because most Easter-Egger Hens live an average of five to eight years," Hull told Guinness World Records.
She said Pearl's mobility is limited now, so she spends most of her time in the family's laundry room.
"She is welcome to come out into the living room, because she likes to watch TV when she hears it on," Hull said.
(https://www.upi.com/Odd_News/2025/08/12/Guinness-World-Records-oldest-chicken/1561755013718/)
Sulphur on 13/8/2025 at 12:48
Quote Posted by Azaran
But a chatbot has enough in-depth knowledge of Kafka and his style that, if you ask it to write a Kafkaesque essay on a topic Kafka never addressed, it will do so in the way Kafka most likely would have. LLMs can parse info and flesh things out probably better than any human.
What you're saying amounts to this: LLMs can flavour their output in a way that you specify -- which, yes, is a function of their ability to draw probabilistic relationships from their training data. However, parsing info and fleshing things out isn't actually what LLMs do, because, for the nth time, LLMs don't actually know what they're doing -- which means the info you get can be either right or wrong, and it's often wrong in subtle ways. If you're not fussed about how much they get right or wrong, like every other random person on the internet waxing poetic about LLMs, then this conversation is irrelevant anyway, because if you're not choosing to be discerning about it, then there is no need for discernment.
Quote Posted by Cipheron
The point being - you're contradicting yourself here.
If we ask the chatbot to make a "kafkaesque" story, you can either say it's just copying Kafka, or it's not just copying Kafka and has in fact made a totally different thing that doesn't actually count as "kafkaesque". So if it can never be truly Kafka, then it can't copy Kafka and will instead make its own stuff with its own internal logic that's different from Kafka. Now this output will have some similarities to Kafka, but it's not a copy, and it's not a simulacrum: it is its own thing now.
I think most of us agree that LLMs are able to collate and spit out 'original' text from a prompt, output based on the corpus of training data and how the weights are tuned for the model. What Starker is saying, if I get his broader point, isn't just about whether the things LLMs make are wholly 'original'. It's that a story derived from text with no human element piecing it together is fundamentally less valuable than even a terrible story written by a terrible writer who's stringing hoary old tropes together. With a machine emulating format and structure, no matter how well it does this, there's no lived experience informing the stories it's trying to tell. With a person, at the very least you can divine the intent behind the writing and see that person's mind for itself in the telling of the story -- which, even if it's approaching a low quality bar, is inherently a worthier thing to give our time to than a machine infinitely reconfiguring its training data to entertain us.
Starker on 13/8/2025 at 14:04
Exactly -- what we value isn't originality in and of itself, as you can have a chatbot churn out billions upon billions of completely "original" stories that have not existed previously and that very few people aside from Azaran would actually want to read.
What people actually value is the kind of intentionality that resonates with them, that speaks to them emotionally and intellectually. A writer is not just jumbling together story elements and copying existing stories, they are communicating with the reader (and with other texts).
AI texts lack that intentionality and you can tell pretty much immediately. Any resonance a reader experiences in AI slop is happenstance and could just as well be found in a Markov chain generated text as in a chatbot generated natural language text.
Also, to be clear, when I say that chatbots "copy" and "imitate", I don't mean they imitate things like a human would or that they copy something one to one. I mean that they use real texts as their training data in order to achieve their target output based on the best probabilities that the transformer is able to work out. The result is often impressive in terms of NLP, certainly, but utterly mindless.
bjack on 13/8/2025 at 23:35
Cipheron's AI story a number of posts upward was semi-funny. I actually got a chuckle here and there. I think the AI "resourced" Douglas Adams.