demagogue on 11/1/2023 at 22:07
Quote Posted by Anarchic Fox
Okay, I've been redirected here. What do people think about the fact that recent AI art programs (DALL-E most notably) used thousands of artists' work as training data without consent?
Well I petitioned this, so I ought to say something.
So I noticed recently that for the latest version of Stable Diffusion, I think v2.0 after v1.5, they completely gutted any images that weren't clearly public domain or otherwise legally available. So if you run the 2.0 and 1.5 models side by side, you can see pretty clearly what a difference it makes to the output. 2.0 is noticeably worse, not awful, but like what we were seeing with Dall-E 1, still pretty borked and disappointing by comparison. You wouldn't really want to use it like the other high end models.
So my thinking was that of course it's a necessary thing. Rights are rights, and they have to be enforced. But I was also thinking the major value of these systems isn't to copy artists' style per se (except for like the classics, where we're talking about works already in the public domain anyway). It's to be an image generator that can translate concepts to images, where the style should come from the prompts, and taking artists' style would even be counterproductive.
And I believe that it's possible for these systems to do that job with high quality using a data set that's clear in terms of the IP. It'd be a big challenge and probably expense to produce such a data set. But I think in the long term it's worth doing, and I think some group is going to get around to it sooner or later.
Now all that said, I think another big challenge on the horizon is that the idea of IP ownership itself (along with the idea of privacy, and the public-private distinction generally for that matter) is going to be attacked and increasingly unappreciated in the culture over the next few decades ... as in there will be an assumption that any works are in the public domain, with the creators themselves also drinking that kool-aid and taking it for granted. I'm in favor of having legal rights respected, but that's a different problem and a bigger can of worms, and I'm not sure how it will play out. I'm unsettled by the idea, and I don't know that it will happen, but it looks like the writing on the wall to me right now. I'll be really interested to see what the culture makes of "content creation", "ownership", and "private vs. public" in the coming years.
Edit:
Quote Posted by Azaran
Well that was expected. You can already search countless copyrighted images on any search engine and use them as you wish, so it's pretty much fair game
Legally no, you can't, but you're proving my point in that last bit. The vast, vast majority of uses have gone on unrestricted by the IP owners, and when you have that kind of rampant non-enforcement, it has the effect of eroding respect for, or even recognition of, the law. AI art has just boosted that trend to the next level. It's something you can expect that's hard to deal with from the artists' perspective, or I guess we're calling all of them content creators now, which is another kind of troubling sign.
Quote Posted by heywood
EDIT: Upon second thought, maybe we shouldn't worry about stunts like that. The blame rests with the AI artist who tried to sell the work, not with AI. The artist could just as easily paint an image of Mickey Mouse and try to sell that.
Also this, yes, I was going to add this above. If the data set itself is clear in terms of IP rights, the model can still output images copying the style of others, but then it's not a violation by the model; it's a violation by the user who enters a prompt that makes it create a copying image, in the same way they'd violate the rights by drawing it themselves. So it could still be a violation, but by the user, not the model.
Anarchic Fox on 12/1/2023 at 05:22
Quote Posted by demagogue
So my thinking was that of course it's a necessary thing. Rights are rights, and they have to be enforced.
*nods* And this new kind of exploitation makes them even harder to enforce. Train your AI on copyrighted material, then lie and say you used only public domain stuff. How can such a lie be exposed?
I have found myself in the strange position of wanting stronger copyright law.
Quote:
But I was also thinking the major value of these systems isn't to copy artists' style per se (except for like the classics, where we're talking about works already in the public domain anyway). It's to be an image generator that can translate concepts to images, where the style should come from the prompts, and taking artists' style would even be counterproductive.
You speak as though it's intentional, but perhaps it can be inadvertent. If one artist produces an outsized amount of a niche subject -- say, if they were the lead character artist on a new game -- then generating an image of that subject might also end up copying the style.
Quote:
Now all that said, I think another big challenge on the horizon is that the idea of IP ownership itself (along with the idea of privacy, and the public-private distinction generally for that matter) is going to be attacked and increasingly unappreciated in the culture over the next few decades...
This has already started. Someone has called AI art "the democratization of art," as though artists were aristocrats. Granted, this person was an idiot amplified by the social dynamics of the hellsite Twitter, but the words are out there.
Quote:
as in there will be an assumption that any works are in the public domain, with the creators themselves also drinking that kool-aid and taking it for granted.
There's also a change in the implications of "public domain." Previously this meant the art was free to copy, repost and reprint. Nobody anticipated AI training being one of the permissions granted. I expect to see some public licenses appearing that explicitly forbid AI training.
Let me relate three things that have happened that have contributed to my anger. (1) Kotaku published an article about Twitter burning, using a DALL-E image of its mascot burning. In previous eras that header would have been commissioned or licensed artwork. (2) A major figure in the Magic: The Gathering fan community launched a Kickstarter for his own card game, which will use only AI-generated images for the cards. (3) An artist was streaming herself drawing a commission. As a prank, a viewer took an in-progress screenshot, fed it into some AI art program, posted the image before the original artist, and then pretended the artist had copied them.
Azaran on 13/1/2023 at 20:27
(https://futurism.com/the-byte/italy-robot-carves-sculptures-marble?fbclid=IwAR0wD5fdvn36fF5yd48_0Mxxt6H1CXh3z25eNnmqqwqbRYA7eggrZ9zv02k) Not exactly AI, but related
Quote:
An Italian startup called Robotor has invented a machine that's nearly as good at carving marble masterpieces out of Carrara marble as its Renaissance-era predecessors.
As CBS News reports, Robotor founder Giacomo Massari is convinced his robot-machined marble statues are nearly as good as those made by humans. Almost.
"I think, let's say we are in 99 percent," he told CBS. "But it's still the human touch [that] makes the difference. That one percent is so important."
Massari even went a step further arguing that "robot technology doesn't steal the job of the humans, but just improves it" — a bold statement, considering the mastership that went into a form of art that has been around for thousands of years.
Robotor's latest robot sculptor, dubbed "1L," stands at 13 feet tall, a zinc alloy behemoth capable of carefully chipping away at a slab of marble day and night.
The company claims the technology is nothing short of revolutionary.
"The quarried material can now be transformed, even in extreme conditions, into complex works in a way that was once considered unimaginable," the company boasts on its website. "We are entering a new era of sculpture, which no longer consists of broken stones, chisels and dust, but of scanning, point clouds and design."
Unsurprisingly, not everybody is happy with robots taking over the craft, arguing that something important could be lost in the process of modernizing processes with new technologies.
"We risk forgetting how to work with our hands," Florence Cathedral sculptor Lorenzo Calcinai told CBS. "I hope that a certain knowhow and knowledge will always remain, although the more we go forward, the harder it will be to preserve it."
Inline Image:
https://assets3.cbsnewsstatic.com/hub/i/r/2023/01/03/33f72651-1236-45e7-a88e-4fb42cef1c80/thumbnail/620x364/017d90dd3a430acee7f3266ed6f1a2d5/robot-sculpture-venus.jpg
Another article (https://www.cbsnews.com/news/robots-marble-sculpture-carrara-italy-robotics-art/)
lowenz on 16/1/2023 at 08:53
Yeah, we're getting good at robotics (and we have high school classes about robotics in the Milan and Bergamo area too :D )
But the problem is the same as with handwriting (and its brain-level implications), just as Calcinai says.
Azaran on 18/1/2023 at 19:13
And so begin the lawsuits
(https://www.polygon.com/23558946/ai-art-lawsuit-stability-stable-diffusion-deviantart-midjourney)
(https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit)
Quote:
Getty Images is suing Stability AI, creators of popular AI art tool Stable Diffusion, over alleged copyright violation.
In a press statement shared with The Verge, the stock photo company said it believes that Stability AI “unlawfully copied and processed millions of images protected by copyright” to train its software and that Getty Images has “commenced legal proceedings in the High Court of Justice in London” against the firm.
Getty Images CEO Craig Peters told The Verge in an interview that the company has issued Stability AI with a “letter before action” — a formal notification of impending litigation in the UK. (The company did not say whether legal proceedings would take place in the US, too.)
“The driver of that [letter] is Stability AI’s use of intellectual property of others — absent permission or consideration — to build a commercial offering of their own financial benefit,” said Peters. “We don’t believe this specific deployment of Stability’s commercial offering is covered by fair dealing in the UK or fair use in the US. The company made no outreach to Getty Images to utilize our or our contributors’ material so we’re taking an action to protect our and our contributors’ intellectual property rights.”
When contacted by The Verge, a press representative for Stability AI, Angela Pontarolo, said the “Stability AI team has not received information about this lawsuit, so we cannot comment.”
The lawsuit marks an escalation in the developing legal battle between AI firms and content creators for credit, profit, and the future direction of the creative industries. AI art tools like Stable Diffusion rely on human-created images for training data, which companies scrape from the web, often without their creators’ knowledge or consent. AI firms claim this practice is covered by laws like the US fair use doctrine, but many rights holders disagree and say it constitutes copyright violation. Legal experts are divided on the issue but agree that such questions will have to be decided for certain in the courts. (This past weekend, a trio of artists launched the first major lawsuit against AI firms, including Stability AI itself.)
Getty Images CEO Peters compares the current legal landscape in the generative AI scene to the early days of digital music, where companies like Napster offered popular but illegal services before new deals were struck with license holders like music labels.
“We think similarly these generative models need to address the intellectual property rights of others, that’s the crux of it,” said Peters. “And we’re taking this action to get clarity.”
mxleader on 10/2/2023 at 12:33
I've been playing around in NightCafe AI and it's pretty interesting, but sometimes I think that the art it produces is like when you make a collage out of old magazine and newspaper clippings. Sometimes the AI does such a great job that I'm amazed, but at times it goes badly wrong with strange results. It's almost like watching someone with a mental illness creating art.
demagogue on 18/2/2023 at 11:11
(https://www.biorxiv.org/content/10.1101/2023.02.13.528288v1.full) Here's an article where some neuroscience types try to reproduce different aetiologies of visual hallucinations (aetiology = the neurological profile underlying each category: neurological conditions, visual loss, and psychedelics) through the parameters of a deep learning art/visualization model, and it apparently accurately reproduces visual content with the different features of (what people report as) the different kinds of hallucinations, depending on the type.
They call their method 'computational (neuro)phenomenology'. I don't know; maybe one shouldn't make too much of it. But there's something kind of cool and unsettling about being able to dig into some of the traditionally more hidden realms of consciousness in this kind of way.