Topic: AI sites are using e621 to steal art and generate images.

Posted under e621 Tools and Applications

This topic has been locked.

I found one guy, "icze4r", on Pixiv using Stable Diffusion to steal images from e621 and from different artists to generate "art". e621 should ban the use of the API, or block those sites' access to e621; it is so painful to see the amount of work stolen by this guy. I suggest every artist block him on Twitter too.
https://www.pixiv.net/en/artworks/106394156

Updated by Cinder

johnwolfshepard said:
I found one guy, "icze4r", on Pixiv using Stable Diffusion to steal images from e621 and from different artists to generate "art". e621 should ban the use of the API, or block those sites' access to e621; it is so painful to see the amount of work stolen by this guy. I suggest every artist block him on Twitter too.
https://www.pixiv.net/en/artworks/106394156

The description looks similar to 4chan's style. Some people on 4chan made a model from a subset of e621 a while ago. Realistically, all they need to do is download all the images and tags once; a set of only 100-300k images can then serve as the basis for making new models. They likely don't even need the API, and banning them only blocks one or two accounts, if you can even identify who or what it is. Japan has a LOT of acceptance of AI art from what I have noticed; even some actual artists I follow have messed around with it on Twitter.

e6ai is also a thing. Rather than trying to isolate it, accept and regulate it the best you can, because it isn't going away any time soon. Them posting their prompt, as well as being up front about its nature, should be commended rather than condemned.

As far as I can tell, this person doesn’t seem to be monetizing the AI-generated stuff in any way, or lying about how it was made. That is where I personally draw the line on this issue, and they don’t appear to be crossing it. I’m not telling you or anyone else what to support or not, but please consider that this is not a black-and-white issue, and the tech can actually be used in an honest way.

There is no realistic way of preventing this. All you need is for someone to copy-paste a post and its tags to train a model, and that goes for any publicly-accessible artwork posted on any art gallery.
Even before all of this, there are bots that scrape content from e621 and repost them onto other imageboards, see topic #26990.

Hell, the image files and the site aren't the real issue. The TAGs are. You could use the tagging system to train reasonably accurate filters, for example. And yeah, trying to circle all the wagons while one's on fire seems like a counterproductive use of our time. I'm actually wanting to try working with the image set one day, for reasons other than just imitating original drawings in a low-effort way. There are already projects where you can do this.

https://github.com/DominikDoom/a1111-sd-webui-tagcomplete A tool for boorus.
https://e621.net/forum_topics/13526 The original topic for this, I believe. "Someday", haha. It came pretty fast!
https://e621.net/forum_topics/17083 Low-effort tagging and its unintended consequences.
https://e621.net/forum_topics/30098 "Topic: e6tag: A deep image classifier trained on e621's crowdsourced annotations"

I had a link to a demo of an autotagger for e621, but I can't remember it...
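
For what it's worth, the "train filters from the tags" idea above is basically a multi-label classification problem. A minimal sketch of what that could look like (hypothetical; projects like the e6tag one linked above use larger pretrained backbones and the full tag vocabulary):

import torch
import torch.nn as nn
from torchvision import models

NUM_TAGS = 1000  # hypothetical tag vocabulary size

# Reuse a pretrained image backbone and swap its head for one output per tag.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = nn.Linear(net.fc.in_features, NUM_TAGS)

loss_fn = nn.BCEWithLogitsLoss()           # an independent yes/no decision per tag
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

def train_step(images, tag_targets):
    # images: (N, 3, 224, 224) float tensors; tag_targets: (N, NUM_TAGS) 0/1 floats.
    loss = loss_fn(net(images), tag_targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def predict_tags(image, threshold=0.5):
    # Returns indices of tags whose predicted probability clears the threshold.
    with torch.no_grad():
        probs = torch.sigmoid(net(image.unsqueeze(0)))[0]
    return (probs > threshold).nonzero().flatten().tolist()

Thresholding the per-tag probabilities is exactly the kind of "reasonably accurate filter" mentioned above.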

Updated

I mean, isn't the big hubbub around the available AI models that they've been taking art from all over the internet? It's not just e6 that's being used as a source. If you think other places like FA, DA, Twitter, etc., aren't being scraped for training data too, you're sorely mistaken.

Based on papers on how it plagiarises images (e.g. https://arxiv.org/pdf/2212.03860.pdf),
I'd say the best method right now is to use something like Glaze, which builds on the findings in that research, since most data-laundering pirates who grab images in bulk are generally too lazy to fix anti-AI plagiarism artifacts. See here: https://glaze.cs.uchicago.edu/

Another method is to only post simpler, lower-res versions of your art online and paywall the rest, but that's quite drastic, isn't it?
You could also avoid over-tagging your art, but e621 is browsed through tags, so in that case you'd have to pursue other sites.

Also, certain art styles just break the AI's fitting. Unique compositions tend to get overfitted or completely discarded by the feature-map tree for your image.
AI users tend to pursue more common artist styles, as there are more samples to interpolate over; it creates a very generic appearance for the model.

inafox said:
Based on papers on how it plagiarises images (e.g. https://arxiv.org/pdf/2212.03860.pdf),
I'd say the best method right now is to use something like Glaze, which builds on the findings in that research, since most data-laundering pirates who grab images in bulk are generally too lazy to fix anti-AI plagiarism artifacts. See here: https://glaze.cs.uchicago.edu/

Another method is to only post simpler, lower-res versions of your art online and paywall the rest, but that's quite drastic, isn't it?
You could also avoid over-tagging your art, but e621 is browsed through tags, so in that case you'd have to pursue other sites.

Also, certain art styles just break the AI's fitting. Unique compositions tend to get overfitted or completely discarded by the feature-map tree for your image.
AI users tend to pursue more common artist styles, as there are more samples to interpolate over; it creates a very generic appearance for the model.

Glaze is a method of preventing style replication and limited plagiarism. It does nothing about people actually using your art as feedstock to create an AI that generates results with similar concepts. It also distorts the image visibly if you use an "animation" art style, as they mention themselves, so it's kind of just bad for the majority of images on this site.

Also, it is unproven, and only backed up by their own word and non-peer-reviewed papers.

In the end it is little more than a watermarking system that creates patterned, visible watermarks that mess with current training systems by forcing a degree of overtraining, but it is unlikely to work for long, by their own admission. So your currently "protected" images might just end up as distorted images in the future, once tools to clean them up are made.

Edit: looking into this a bit more, it is little more than a temporary solution. It uses a tailored attack vector to trick the AI into misclassifying it. This trick has been known for years and has to be re-done for each new iterative AI model made. https://www.theverge.com/2018/1/3/16844842/ai-computer-vision-trick-adversarial-patches-google

Basically a watermark version of the old adversarial patches thing.
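
For anyone curious, the underlying "adversarial perturbation" trick looks roughly like this toy FGSM sketch (hypothetical stand-in classifier and data; it illustrates the general idea from the linked article, not Glaze's actual method):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(image, true_label, epsilon=0.03):
    # Nudge every pixel in the direction that increases the classifier's loss,
    # so the model mislabels the image while it still looks unchanged to a human.
    image = image.clone().requires_grad_(True)
    loss = loss_fn(model(image), true_label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)   # placeholder "image"
y = torch.tensor([3])          # placeholder label
x_adv = fgsm_perturb(x, y)

The catch, as noted above, is that the perturbation is tuned against particular models, so it has to be redone as new models appear.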

Updated

Low resolution is bad because people will just figure out a way to upscale it, as well. Also, the 512x512 image size now... haha, it gets expensive to go past 1MP.
Glaze sounds a bit like some 'voodoo math' papers where they tried to convince juries and judges of someone's guilt using incorrect assumptions about certain networks.

inafox said:
Based on papers on how it plagiarises images, e.g. https://arxiv.org/pdf/2212.03860.pdf
I'd say the best method right now is to use stuff like Glaze which uses an advantage in the above research. Since most images data-laundering pirates take in bulk are generally too lazy to fix anti-AI plagiarism artifacts. See here: https://glaze.cs.uchicago.edu/

Another method is to only show very simpler lower res versions of your art online and to paywall it, but that's quite drastic, isn't it?
You could also avoid over-tagging your art, but in this case, e621 is browsed through tagging so you could pursue other sites.

Also, certain art styles just break the AI's fitting. Unique compositions tend to get overfitted or completely discarded by the feature-map tree for your image.
AI users tend to pursue more commoner artist styles as there's more samples to interpolate over, it creates a very generic appearance for the model.

It's already been bypassed anyway, in 16 lines of Python. It runs fully on CPU; if a laptop CPU takes less than 4 seconds for a 1024px image, then a very high-end CPU would take almost no time at all.
https://github.com/lllyasviel/AdverseCleaner
There is also a plugin for the Automatic1111 webui linked at the bottom of the repo.
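
For reference, that kind of cleanup is essentially just aggressive edge-preserving smoothing. Something along these lines (a sketch of that style of filter chain, assuming opencv-contrib-python is installed; not necessarily the exact code from the repo above):

import cv2
import numpy as np
from cv2.ximgproc import guidedFilter

img = cv2.imread("glazed.png").astype(np.float32)
y = img.copy()

# Repeated bilateral filtering smears out high-frequency adversarial noise
# while keeping strong edges intact.
for _ in range(64):
    y = cv2.bilateralFilter(y, 5, 8, 8)

# Guided filtering against the original image restores some of the lost detail.
for _ in range(4):
    y = guidedFilter(img, y, 4, 16)

cv2.imwrite("cleaned.png", y.clip(0, 255).astype(np.uint8))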

Glaze has also already had its source code leaked.
Not the frontend UI source, but the actual tool itself.

nuclear_furry said:
As far as I can tell, this person doesn’t seem to be monetizing the AI-generated stuff in any way, or lying about how it was made. That is where I personally draw the line on this issue, and they don’t appear to be crossing it. I’m not telling you or anyone else what to support or not, but please consider that this is not a black-and-white issue, and the tech can actually be used in an honest way.

I mean, AI images are automatically public domain anyway, but there's the gray area of where the images they learned from came from.

I'll be honest, I don't care about this one bit.

If I could download all of e621 with its tags embedded for sorting, to train an AI, I totally would. It sounds like a load of fun and a fantastic offline content-generation method for people to make whatever they want.

sweetaleena said:
I'll be honest, I don't care about this one bit.

If I could download all of e621 with its tags embedded for sorting, to train an AI, I totally would. It sounds like a load of fun and a fantastic offline content-generation method for people to make whatever they want.

There is a JSON file with the tags and other details about each post, keyed by the post ID, which makes that process extremely easy to set up.
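
For the curious, pulling that metadata is only a few lines against the public API. A minimal sketch (assumptions: the standard /posts.json endpoint and its current response layout; the API rules also ask for a descriptive User-Agent and sane request rates):

import requests

HEADERS = {"User-Agent": "tag-dump-example/1.0 (by your_username_here)"}

def fetch_posts(tags, limit=100):
    r = requests.get(
        "https://e621.net/posts.json",
        params={"tags": tags, "limit": limit},
        headers=HEADERS,
        timeout=30,
    )
    r.raise_for_status()
    return r.json().get("posts", [])

for post in fetch_posts("canine order:score", limit=5):
    # Each post carries its ID, file URL, and tags grouped by category.
    flat_tags = [t for group in post["tags"].values() for t in group]
    print(post["id"], post["file"]["url"], flat_tags[:10])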

alexyorim said:
I mean, AI images are automatically public domain anyway, but there's the gray area of where the images they learned from came from.

AI images are no more public domain than any other functionally generated information. Now that might change in the future, but as of now, there's no reason to think they aren't copyrightable. With that said, releasing AI-generated work into the public domain should release the creator from most legal ramifications should the law change.

The sourcing for the back propagation (learning) is somewhat gray, in the sense that there are multiple lawsuits about it. Though I'm of the opinion that in most cases AI generation falls under the same concepts as collages, which are protected in at least some instances.

All that being said, the fact is the cat is out of the bag. There's no real way to put it back in. AI-generated works will continue to be made, even if they became fully illegal in the US (which isn't possible); at the very least other countries won't abide, and they will end up surpassing us in art and media output.

In the US (and EU) you might see AI generators requiring licensing for any artworks they're trained against. But even that has issues. Like, in theory, you could create a generator that has never seen artwork before. It would be difficult, but you could pit a series of generators against a "tagger", a discriminator, and a human. Properly done, a generator could learn to draw off the output from both the tagger and the discriminator, and a human would come in from time to time to select which models are producing viable results, thus giving power to the discriminator and generators. It'd be like a convoluted GAN.

Even without the tagger, you could add a human into that part as well, but it would be much more difficult and take longer.
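
To make the "convoluted GAN" idea above a bit more concrete, here is a toy sketch of the loop shape (hypothetical and deliberately tiny; here the discriminator's "real" signal comes from human-approved generations rather than scraped artwork, and the tagger is assumed to be trained separately):

import torch
import torch.nn as nn

LATENT, PIXELS, TAGS = 64, 64 * 64, 128  # toy sizes

generator = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, PIXELS), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(PIXELS, 256), nn.ReLU(), nn.Linear(256, 1))
tagger = nn.Sequential(nn.Linear(PIXELS, 256), nn.ReLU(), nn.Linear(256, TAGS))  # frozen, trained elsewhere

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def step(approved, rejected, requested_tags):
    # approved/rejected: batches of generated images a human accepted or threw out.
    d_loss = bce(discriminator(approved), torch.ones(len(approved), 1)) + \
             bce(discriminator(rejected), torch.zeros(len(rejected), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # The generator chases both the discriminator's approval and the tagger's labels.
    fake = generator(torch.randn(len(requested_tags), LATENT))
    g_loss = bce(discriminator(fake), torch.ones(len(requested_tags), 1)) + \
             bce(tagger(fake), requested_tags)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

Whether that would ever converge to anything useful is another question entirely; it's just to show where the tagger, the discriminator, and the human would slot into the loop.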

This also ignores the fact that there is public domain artwork which could be used anyway.

Also, none of this will stop big companies like Adobe from training their own AIs on artwork made with their software. If you read your EULAs, every artist who has ever used Adobe in the past 10 years has already given them the rights to use their artwork in AI generation.

reallyjustwantpr0n said:
...If you read your EULAs, every artist who has ever used Adobe in the past 10 years has already given them the rights to use their artwork in AI generation.

That includes the artists who "high-risk traded" the software?

alexyorim said:
That includes the artists who "high-risk traded" the software?

No, you don't agree to the EULA when you download those programs. I'm not exactly sure how Adobe gets access to the content made by registered users, but whatever it is wouldn't apply to "high-risk" ones, since they don't connect to Adobe's servers to use the software.

reallyjustwantpr0n said:
AI images are no more public domain then any functional generated information. Now that might change in the future, but as of now, there's no reason to think they aren't copyrightable.

Yes, they specifically aren't copyrightable (in the US): https://www.youtube.com/watch?v=QtxW39OQbbc (summary video)

It's essentially the same rule as if you grab an image off the net and paint over: the parts that you changed are copyrightable, the rest isn't. Since this is a logical extension of existing law, I think some other countries will follow suit.

(I note that you shouldn't assume from this that the AI generated images are public domain; it's probably more accurate to say that their most defined legal status is that they aren't copyrightable. )

savageorange said:
Yes, they specifically aren't copyrightable (in the US): https://www.youtube.com/watch?v=QtxW39OQbbc (summary video)

It's essentially the same rule as if you grab an image off the net and paint over: the parts that you changed are copyrightable, the rest isn't. Since this is a logical extension of existing law, I think some other countries will follow suit.

(I note that you shouldn't assume from this that the AI generated images are public domain; it's probably more accurate to say that their most defined legal status is that they aren't copyrightable. )

That's not true, though. If I wrote a math function (or manipulated one) to generate a visual image or audio clip, that is copyrightable, as it was created through "human authorship". If it weren't, then nothing created digitally would be protected by copyright, because it's all done through manipulations of algorithms.

Looking deeper into this, the copyright office released the following update a few weeks ago: https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence

I'd argue that what I've said is mostly in agreement with it. Let me quote this one part:

In other cases, however, a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that "the resulting work as a whole constitutes an original work of authorship." [33] Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection.[34] In these cases, copyright will only protect the human-authored aspects of the work, which are "independent of" and do "not affect" the copyright status of the AI-generated material itself.[35]

This policy does not mean that technological tools cannot be part of the creative process. Authors have long used such tools to create their works or to recast, transform, or adapt their expressive authorship. For example, a visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image,[36] and a musical artist may use effects such as guitar pedals when creating a sound recording. In each case, what matters is the extent to which the human had creative control over the work's expression and "actually formed" the traditional elements of authorship.[37]

The question arises: what level of human involvement is required for "human authorship"? At a fundamental level, there is some inconsistency in how the copyright office is judging copyrightable works in relation to mechanical means. I mean, John Cage's works like 4'33" and Music of Changes are copyrighted. The one is literally nothing, and the other was randomly generated with dice. Pretty much all aleatoric music can be copyrighted, and again, it's purely random.

Anyway, my point is that I think what I said for the most part agrees with the current view of the copyright office. Long term, though, what I don't agree with appears to be "deep" inconsistencies which will likely need to go to court, or be changed by Congress.

Also, I disagree with some of what the video author said. Like: AI generation will, eventually, be indistinguishable from stuff generated by hand. There just won't be any way to detect it. Really long term, I think copyright as a whole isn't viable.

Edit: Just to reiterate my point, which I am shifting a little: you cannot assume AI-generated content is not protected by copyright without a deeper understanding of what went into generating it.

Updated

reallyjustwantpr0n said:
That's not true, though. If I wrote a math function (or manipulated one) to generate a visual image or audio clip, that is copyrightable, as it was created through "human authorship". If it weren't, then nothing created digitally would be protected by copyright, because it's all done through manipulations of algorithms.

Math is specifically and categorically not copyrightable. This is part of why some people think software patents shouldn't be allowed to exist: programming is completely isomorphic to math, and math is definitionally public domain (this doesn't mean compiled software is non-copyrightable, though).
Now, taking the case of a person painting in a paint program -- the program is just the brush rendering engine; the creative choice of what brushstrokes to make is the user's, and in fact very little of interest can be made without a lot of user input. The advent of image-generating AI is a problem for this way of judging things because it legitimately can produce things that are as sophisticated as what a user might do, with relatively little user input, so seeing only the result is inadequate to judge copyrightability.

Also, I disagree with some of what the video author said. Like: AI generation will, eventually, be indistinguishable from stuff generated by hand. There just won't be any way to detect it. Really long term, I think copyright as a whole isn't viable.

If I only cited things I agreed on every point with, I'd never cite anything ;) I suspect you're right, with the caveat that people will still work crazy hard on the detection problem because successfully falsifying certain things (eg. ID documents) can cause huge problems, and they don't want to go away from the convenience of electronically submitting digital photos/scans.

Edit: Just to reiterate my point, which I am shifting a little: you cannot assume AI-generated content is not protected by copyright without a deeper understanding of what went into generating it.

Of course you never know for sure until you know the details. But also, the better AI gets, the more likely that 'this random image I see which I know used AI generation' will be simply uploaded/printed/etc with a minimum of intervention to what the AI spat out.

I personally would like to see more, what's the word, "experience" or something get put into Furry AI Art.
I mean seriously, AI Art ain't gonna take away artist's jobs; humans have always had a "robots taking our jobs" scare.

IDC, It's bullshit, Artists need to calm down.

Anime AI Art is already getting a head start, and what it's producing is quite good, I must say. Some blunders here and there. I recommend you guys play around with: https://waifus.nemusona.com/
It was made by a 4channer who was bored, and it's come a long way now.

I personally would like to see the GROWTH of AI Furry Art now, so I can play around with Furry AI Art generators.

Anyone who sees this as a minus or as art theft isn't entirely wrong, but they're not entirely right either.

Innovation brings more happiness; those who stick too close to tradition get left behind by progress.

One day I'll be able to make all the furries I want.
I'll still commission, you can never truly get rid of Artists.
post #2783062

Copyright infringement isn't theft, thus your argument is invalid? :P <-- Jack Valenti is infamous for calling it that

So do we tag "adversarial_noise" now? There's no tag usages yet but I did see it the other day.

ijerk said:
So do we tag "adversarial_noise" now? There's no tag usages yet but I did see it the other day.

That seems too specific. Like, maybe we care that there are anti-AI-training measures in an image, but it seems likely that adversarial noise itself is going to be one in a line of technologies. Assuming that AI doesn't reach a point at which anti-AI-training measures are simply not effective enough to bother with.

regardless of the legality of training and/or using AI models (i suspect the latter at the very least is not fair use, but we'll see what the courts say), i am shocked at the number of furries--usually a relatively pro-artist lot--who think that it is OK to use AI models built from datasets taken from artists who were not consulted, credited, or compensated. these AI models have already had a noticeable and damaging effect on the (furry, and broader) art community.

there is a pinned post floating around a pro-AI furry subreddit that is literally a guide to "upgrading" a sketch commission from an artist so you can get a nicer commission "without having to pay for it". you can find an angry and very long twitter thread of artists who stopped offering sketch comms and updated their TOS in response. i'm observing a discord server with models designed to target specific artists' styles (and coincidentally these are artists who usually cost quite a lot of money--how strange). and furries still defend using these tools and datasets, because apparently it's OK to get free stuff built off the works of thousands of uncompensated artists if you really, really want it.🙄

yes, i know how the models are constructed and how stable-diffusion tools work. yes, i've tried them myself on my own GPUs. your defenses involving democratizing art or the invention of the automobile or the inevitability of technological progress or uwu im poor and i want free art--these are not persuasive to me. i don't care what you have to say. i don't abhor you, fyi. i just don't care enough about you to waste my time listening.

fwiw, this is literally why i requested DNP status, stopped posting new projects to e6 last year, and reduced my posting frequency (and quality) in general. yeah, my art isn't particularly good, or likely to be desirable as a trained style (a few people might get their jollies but barely anyone is going to look at it--someone might do it to be a troll, but it's their electricity and time to waste, not mine). i don't see the point in giving free, high-resolution, meticulously tagged and organized art to a community with a startling number of AI bros who are just going to ignore my TOS and my general wishes.

i'm considering abandoning tags going forward on my other galleries because i can afford the discoverability hit as a hobbyist. hell, most of my art doesn't even get posted publicly any more. i share it privately with friends or post on a few discord servers. so great job guys. this is definitely a positive use of an amazing technology that has no impacts whatsoever in your community. haven't you heard the expression, don't shit in your own bed?

especially the guy that was like, "AI Art ain't gonna take away artist's jobs". sure, totally not happening, nor is it discouraging people from pursuing art-related careers.

https://restofworld.org/2023/ai-image-china-video-game-layoffs/
https://www.reddit.com/r/blender/comments/121lhfq/i_lost_everything_that_made_me_love_my_job/
https://gameworldobserver.com/2023/04/12/game-artist-jobs-china-down-70-percent-gen-ai-adoption

anyway, that's the end of my self-allotted time to shout into the void. don't bother responding. or do, idk. your choice, your time to waste.

edit: grammar

To anyone saying "no one" is monetising AI images, you should see the DeviantArt and Unstable Diffusion Patreons. There are literally hundreds, even just img2img'ing people's furry art (mostly because interpolative furry AI models themselves are too poorly tuned to create decent results yet). e6ai is full of that too, e.g. https://e6ai.net/posts/1628 where an image is just outright plagiarised and "adjusted" with the rest of a network that also contains stolen feature-map layers from various interpolated artworks. And when individuals aren't potentially profiting from it (socially or financially), sites sure as hell are. I'm sure many sites would love to "data launder" art in general (e.g. DeviantArt using an AI model trained on its own users' and competitors' artworks).

And algorithmic collage plagiarists aren't anything new, and hardly "progressive"; there have been pretty good photo distortion and paint filters out there for a while. It was just a bit "harder" before, though never as hard as making an original painting/sculpture. "Prompts" just give a bit more control over remixing people's artwork. Technological improvement is not the same as the abuse of said technologies, otherwise we might as well say that nukes were "innovative" and "for the good of all" (which is defined by who-the-hell exactly, anyway?).
People who feel self-entitled to the years of sleepless nights, the financial losses, and the university-educated skills of artists or any labourer are just an echo of the parasitical infestation of consumerism bestowed upon this planet by capitalism. You're not entitled to someone else's work, period, even if you use an estimation algorithm to interpolate bundles of work.

How combing data for data laundering is even allowed is beyond me; it's frowned upon by anyone who is pro-privacy (which is basically the entire left wing and centre-right, by the way). So I don't see who but the far right would find stealing work and not compensating for it remotely a good thing, since they don't care for labour rights. Data laundering is the hallmark of a surveillance society; for companies and other people to use others' data as a form of social and financial laundering is very shitty. Just like calling something "your painting" without ever having painted the image. Because, you know, painting is DEFINITIVELY an art form, a process, so don't call an AI image "your painting"; you likely don't even know "how" it was painted.

And to those who say "the algorithm" made it: not only is that complete bullshit (e.g. https://arxiv.org/pdf/2212.03860.pdf and https://techcrunch.com/2022/12/13/image-generating-ai-can-copy-and-paste-from-training-data-raising-ip-concerns/ — scientists can literally reconstruct the original images that a network used without permission), but just because an image is on the internet doesn't mean it's public domain. In fact, prompters often "intentionally" enter artist names and supply direct images in prompts to try to rip off that artist as much as possible.

The misinformation about GANs and SD is also proof that they don't know the developed purpose of NNs and RNN encoders or how they work. The simplest unit of a NN is the artificial neuron, which "shapes" an output based on the intent to recreate a shape function. For example, if I wanted a network to learn the feature "sine wave", I'd feed in values of x to predict y, and the fitting algorithm would fit the estimation curve to best "emulate" a sine-wave function from samples (this requires "knowing" what a sine wave looks like before creating it; there is no "generation", only "estimation"). It's like interpolating sin(x) and triangle(x) waves and saying you invented a new waveform by blending the two a priori wave shapes together. AI is not one singular algorithm or some sentient magical SkyNet genie; to think so is woo and fictively ideological. AI algorithms f(x) => y follow GIGO (garbage in, garbage out), so if you put quality art in, you rip off the quality of the input and the hard work that went into it, in order to produce an interpolated output. Stable Diffusion does this with feature maps and uses "diffusion" to smoothly glue shapes together through "image reconstruction" and "blur-like denoising" hacks, and the AI doesn't know fuck all about how the art was made or the chronological way the brush strokes are layered. It uses shape data stolen from the compositions of various artists, from major shapes to minor shapes, iterating by kernel.
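
To make the "estimation, not generation" point concrete, a toy sketch (hypothetical; a tiny network fit to samples of sin(x) can only reproduce the shape it was shown, it invents nothing):

import math
import torch
import torch.nn as nn

x = torch.linspace(-math.pi, math.pi, 256).unsqueeze(1)
y = torch.sin(x)                                   # the a priori target shape

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(2000):
    loss = nn.functional.mse_loss(net(x), y)       # fit the curve to the samples
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))  # small error means a good estimate of sin(x), nothing more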

Calling an interposing restorative algorithm, originally made to reconstruct satellite and streamed images via GPU denoising procedures, a "generative technology", while dismissing that it's a superposing derivation hack, is entirely absurd. You might as well tune in between two different analogue TV/radio stations, denoise that, and call it an "original piece". I've never heard so much unscientific nonsense in my life; that machines are seen as quasi-god-like generative beings by some people is starting to resemble something out of the film Idiocracy, where a consumer "orders" from a computer without any regard for the source or how it works. If you sent such an AI back in time, people would have worshipped it out of lack of knowledge, but modern people have no excuse not to be educated on this. It also kind of makes me concerned for the future of humanity and the decline of intellect through the lack of reward for actual effort and learning. Let alone how people dismiss effort as if they're indirectly enslaving people. I wouldn't be surprised if some AI models were made from the work of sweatshop artists you'd never even know existed.

The input to an encoder or active algorithm should never be decoupled from the output when it comes to the law and general morality. Same with child pornography AI ( https://petapixel.com/2023/02/21/pedophiles-are-using-ai-generators-to-create-child-abuse-images/ ): if someone makes an AI model estimating images from real abused children, and people are like "well, it has nothing to do with the input!", we're heading for a deeply concerning future that has no respect for fellow humans and only focuses on the parasitical consumption of everyone else's data. At least bans are being considered on AI models made from real CP, and in turn on ones made from copyrighted images. So plans for circumventing the legality of theft and source content don't wash. The law is slow, but it is showing hints of catching up. We all waited five years for the DMCA, so sure, AI is law-circumventing at the moment, but that doesn't mean there won't be laws; there are too many issues surrounding unethical AI models to begin with. Every pirate said "you can't copyright a digital image, it's cloneable", yet we did. So anyone using AI is playing with fire when it comes to the impending future lawsuits, just as with the infringement boom before the DMCA hit. SD only really hit court awareness this last year with real lawsuits, and at some point frustration leads to law. Give it time.

Hmm, if I don't publish anything, pirates can't copy it, right? I guess that reasoning works for some definition of 'works'. Well, unless you're Jerome David Salinger.

I probably qualify as a "furry artist" and am pretty fine with all the AI stuff happening. I think any laws passed will only affect models created by large corps, in the same way the DMCA affected youtube or Megaupload but didn't do crap to stop piracy. Meanwhile the individual will continue to train their own models from whatever they scrape up.

If people actually value a soulless AI image over yours, well that probably means they didn't value it much to begin with. Maybe you should still be drawing because it's fun rather than expecting to make a living on a niche of a niche community? Or at least find something that the model draws badly and focus on that (I actually look forward to the day when AI has completely taken over pin-up generation and the Popular tab isn't spammed with them anymore).

ijerk said:
If people actually value a soulless AI image over yours, well that probably means they didn't value it much to begin with. Maybe you should still be drawing because it's fun rather than expecting to make a living on a niche of a niche community? Or at least find something that the model draws badly and focus on that (I actually look forward to the day when AI has completely taken over pin-up generation and the Popular tab isn't spammed with them anymore).

Most untrained people I know of can't even tell an AI-plagiarised image from an actual direct artwork, though evaluating the art fundamentals can be semi-diagnostic. And a drawing that's "just for fun" won't have the quality of an AI image, since such networks pursue instantaneous higher quality by interpolating more heavily detailed/composed works. So maybe MS Paint doodles are safe for carefree people, but just because "some" people have such an attitude doesn't mean everyone has to suffer for it. The narcissism of certain self-entitled pro-AI people shouldn't dictate what others have to go through, though indeed narcissism is the opposite of empathy. Just because you feel it "doesn't affect you" doesn't mean it doesn't affect others, so you can lose the "I'm alright, Jack" attitude when considering what the majority feel. I know only a small minority of vocal self-entitled art consumers who want this AI; the rest are sympathetic to artists.

And people are going to notice those who have more output than someone who spends weeks to months on difficult works. This is just how exposure and marketing work. It is very unrewarding for artists to have their art stolen and "bred" by interpolation engines like SD, especially for others to profit from.

Updated

alphamule said:
Hmm, if I don't publish anything, pirates can't copy it, right? I guess that reasoning works for some definition of 'works'. Well, unless you're Jerome David Salinger.

that's why i only share within closed circles for most things (WIPs especially). like sure if it leaks NBD, but so far none of them have. by and large it does what i want. i make my pics because i like to and i share them with who i want to. it's very freeing in a way. and it's a middle finger to the people who want art like mine but aren't allowed to have it :>

but that only solves my personal problem. the bigger problem is that the behavior of the unrepentant AI bros (e.g., the furry AI subreddit that literally publishes directions on how to get a high-quality commission without paying for most of it, or the people who say that crediting artists is "too hard" and that we're "outlawing creativity" and other nonsense) drives other artists to make similar evaluations. not everyone will go to my extremes ofc. not everyone cares. but those that do--a decent majority of artists--have to make ugly choices, and the outcome of those choices tends to hurt the community as a whole. but yay free art i guess.

foxaro said:
I know only a small minority of vocal self-entitled art consumers who want this AI; the rest are sympathetic to artists.

Sounds like you're saying it's not really an issue then. Most people are going to continue supporting artists rather than going with whatever the ai churns out.

sentharn said:
regardless of the legality of training and/or using AI models (i suspect the latter at the very least is not fair use, but we'll see what the courts say), i am shocked at the number of furries--usually a relatively pro-artist lot--who think that it is OK to use AI models built from datasets taken from artists who were not consulted, credited, or compensated. these AI models have already had a noticeable and damaging effect on the (furry, and broader) art community.

A lot of people, especially in this fandom, aren't OK with it, but they definitely aren't commenting in this thread — keep in mind that only a small segment of this site even uses these forums, and given the number of dismissive or pro-AI art comments, they may not want to engage in a debate, especially when the most opinionated have already seen the arguments before and know how these conversations can go. I know that I didn't comment here before for that reason, but I'm not happy with the situation, how the technology is being used, and the attitudes around that usage — as your second post mentions.

Here's my two (or ten) cents on several different angles, and this first part will probably come off as snobby:

While technologically impressive, after seeing hundreds upon hundreds of AI images, even ones that are supposedly "good", I find AI images have a very samey quality about them when it comes to generating images intended to replicate art (say, using them to make endless anime elf bikini pinups, and not done experimentally, or as intentionally bizarre or created for comedic/meme purposes). They're inherently substandard, they're bland images and much easier to spot than many people think (although, unfortunately, there are artists with styles that will lead to "false positives" without very close inspection or evidence of a process) — it's not ever just the fingers, the "tells" vary by the image and I could be here all day making an essay about the various recurring, seemingly inherent visual and compositional quirks of algorithmic generation, and finding supporting examples, but I don't feel like doing that, and I don't feel like sparking, or engaging in a debate that'll only frustrate me, so I'll move on.

The training data sets are gathered and used unethically, and profiting off of AI-generated images is especially egregious*; that's using someone's labour, without their consent and without compensation, to essentially make free money. Laundering the data doesn't change that fact to me, especially when some AI images I've run across are very obviously fed art from a specific artist, or clearly using data based on a few similar artists' work. I reported an AI image probably about a month ago (it was deleted) that was clearly based off of fluff-kevlar's artwork.

Aside from the ethics, I don't fear "AI" being "better" than artists or replacing them. For one, the vast majority of people online seem to want to continue buying art from humans, and seem to see AI art, if they see it positively, as a curiosity or a way to make funny memes, and they deride those who buy AI images or those who unironically compare their prompt entering to being an actual artist. From my own experience, AI-generated content on Twitter and DeviantART seems to get far less engagement than similar human-made artwork. Generally, my impression of the people seeing AI as a way to "get free art" is that they're not people who would be buying commissions in the first place.

A specific problem, which doesn't seem to get mentioned much, is with AI "junk" or "spam" cluttering art sites, which is already happening — AI prompters posting dozens of iterations of the same concept over and over, flooding places like Pixiv and DeviantART is making both sites far less appealing and usable as a viewer.

* tangentially, I really, really don't understand paying for AI images in the first place. Why would you ever buy something that is both substandard and only a single tutorial and some software downloads away, instead of just commissioning an artist who will do better and understand what you want the first time? Why buy muffins that taste like cardboard instead of making the same muffins at home, or paying a baker to make them?

Updated

maplebytes said:
While technologically impressive, after seeing hundreds upon hundreds of AI images, even ones that are supposedly "good", I find AI images have a very samey quality about them when it comes to generating images intended to replicate art (say, using them to make endless anime elf bikini pinups, and not done experimentally, or as intentionally bizarre or created for comedic/meme purposes). They're inherently substandard, they're bland images and much easier to spot than many people think (although, unfortunately, there are artists with styles that will lead to "false positives" without very close inspection or evidence of a process) — it's not ever just the fingers, the "tells" vary by the image and I could be here all day making an essay about the various recurring, seemingly inherent visual and compositional quirks of algorithmic generation, and finding supporting examples, but I don't feel like doing that, and I don't feel like sparking, or engaging in a debate that'll only frustrate me, so I'll move on.

To me this is exactly the same underlying issue as with all other filters, no matter how 'interesting' or 'complex' the effect they may create: It's extremely unlikely to create a really interesting work by primarily throwing filters at the problem.
It's not that, say, "Gaussian blur is a bad filter that produces shitty-looking results"; it's that bad or good results are determined by something like art-direction decisiveness * familiarity with tools. You can't get away from the need to know your tools intimately; that's still an illusion, and AI generation compounding effects reasonably well doesn't protect you from it. And it's really hard to have a strong opinion on what 'good results' are when you lack familiarity with art fundamentals.
When I watch people talking about using AI generation, there is so much 'well, this is kind of what I wanted, it's good I guess', or 'it's good except for the hands (but I'd rather fuck with more generations than figure out how to fix the relatively small part of the image which is the hands)'. This seems similar to a common novice-artist experience: 'well, I'm pretty sure it's wrong but I don't know how, so I guess I'll just change this, this, and this, and pray; wash, rinse, repeat.'

Not that there is intrinsically something wrong about just exploring what kind of results you can get, but you need a lot more than that to arrive at a 'good' result. AI makes it easy to fuck around, which means a lot of people will just fuck around.

savageorange said:
Math is specifically and uncategorically not copyrightable. This is part of why some people think software patents shouldn't be allowed to exist: programming is completely isomorphic to math, and math is definitionally public domain (this doesn't mean compiled software is non-copyrightable, though).
Now, taking the case of a person painting in a paint program -- the program is just the brush rendering engine, the creative choice of what brushstrokes to make is the users and in fact very little of interest can be made without a lot of user input. The advent of image generating AI is a problem for this way of judging things because it legitimately can produce things that are as sophisticated as what a user might do, with relatively little user input, so seeing only the result is inadequate to judge copyrightability.

So, circling back on this. Math itself is not copyrightable, but the outputs, if selected properly, can be. Computer code is a great example of this: code is just an application of algorithms and math, and compiled code is really nothing more than a number, and yet our legal system says someone can own that number. It is a logical contradiction, but it also doesn't matter. Some math can, in fact, be copyrighted, though it requires a "fuzzy" amount of human mangling.

I fully agree with your point that judging output alone is not enough to judge copyrightability. Interestingly, AI could "learn" to make art using brushes; it doesn't need to operate on a pixel-by-pixel basis. There's an entire field of AI devoted to this idea: robotic process automation (RPA). Though it deals far more with document input than drawing, it is applicable.

I get why people don't make RPA algorithms for image generation; it would just take forever to train. But it should be possible... lol, kind of makes me want to try.

If I only cited things I agreed on every point with, I'd never cite anything ;) I suspect you're right, with the caveat that people will still work crazy hard on the detection problem because successfully falsifying certain things (eg. ID documents) can cause huge problems, and they don't want to go away from the convenience of electronically submitting digital photos/scans.

There's no way for an "AI detector" to solve that problem. What you need are secured and digitally signed records. There's no other way to make that work. Even that has some issues, as quantum algorithms may be able to just break any signing system we come up with. Honestly though, that's the only answer. Digital images are just really large numbers. Detectors work on seeing patterns in those numbers, but it's a finite field, even if large. Generators will eventually be able to create numbers that have no easily distinguishable patterns to them, at least when compared between generated and authentic items.
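
For what it's worth, the "digitally signed records" half is already easy; here's a minimal sketch using Ed25519 over an image file (a hypothetical example with the Python cryptography package; the hard part is the surrounding provenance infrastructure, not the math):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A camera, scanner, or registrar would hold the private key; verifiers get the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open("photo.png", "rb") as f:
    data = f.read()

signature = private_key.sign(data)   # distribute this alongside the file

try:
    public_key.verify(signature, data)   # raises if the bytes were altered
    print("record verifies")
except InvalidSignature:
    print("record was modified or forged")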

Of course you never know for sure until you know the details. But also, the better AI gets, the more likely that 'this random image I see which I know used AI generation' will be simply uploaded/printed/etc with a minimum of intervention to what the AI spat out.

Yeah. Which is why I kind of see copyright just failing in the not too distant future.

Thanks for the chat. Meant to get back to you earlier just... busy. Working with some BERT models actually!

sentharn said:
it's very freeing in a way. and it's a middle finger to the people who want art like mine but aren't allowed to have it

This is kind of the point of copyrights and patents. If you publish it, you have to deal with the whole mess that copyright implies (including mandatory public domain release, the DMCA, DMCA trolling (like backdated sites plus fake claims), fights over plagiarism, parodies and fair use, and so on). It's why the J.D.S. thing was so controversial. It was never intended to be released until it was basically beyond sane copyright terms anyway. XD

I was actually not just joking with that comment, yes. It's an effective option if you absolutely don't want to have to deal with that: don't publish. But that takes a lot of the fun out of it! And the stress, haha. Not telling people how your invention works, or not selling it at all, is much the same. Someone else publishes how it works, from studying it or just reinventing the wheel because they had the same problem you wanted to solve, and you have no real recourse. No patents to enforce. It's one of the big reasons that modern copyrights are very different: they're automatic (blame photography for them adding this?).

maplebytes said:
I know that I didn't comment here before for that reason, but I'm not happy with the situation, how the technology is being used, and the attitudes around that usage — as your second post mentions.

They're inherently substandard, they're bland images and much easier to spot than many people think (although, unfortunately, there are artists with styles that will lead to "false positives" without very close inspection or evidence of a process) — it's not ever just the fingers, the "tells" vary by the image and I could be here all day making an essay about the various recurring, seemingly inherent visual and compositional quirks of algorithmic generation, and finding supporting examples, but I don't feel like doing that, and I don't feel like sparking, or engaging in a debate that'll only frustrate me, so I'll move on.

Generally, my impression of the people seeing AI as a way to "get free art" is that they're not people who would be buying commissions in the first place.

Oh lordy, LOL, the hands! Oh, and text. Remember Flash animation when it was done really, really badly? First-episode-of-South-Park bad? :-D

There's something I noticed with an example on YT. They effectively blur an image, like blanking out part of an FFT and reversing it would do, add noise, etc. Then they try to go back the other way in a feedback loop. It seems that you could detect whether two images are based on the same source image in an img2img setup, but I think it would be intractable to do it algorithmically.
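
The crude version of that comparison is a perceptual hash. A sketch with the imagehash package (an assumption about how one might try it; as you say, heavy img2img restyling will defeat it, which is rather the point):

from PIL import Image
import imagehash

def likely_same_source(path_a, path_b, threshold=10):
    # Perceptual hashes barely change under mild edits (crop, noise, recompression),
    # so a small Hamming distance hints at a shared source image.
    h_a = imagehash.phash(Image.open(path_a))
    h_b = imagehash.phash(Image.open(path_b))
    return (h_a - h_b) <= threshold

print(likely_same_source("original.png", "suspect_img2img.png"))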

Yeah, I'm not really worried, as the types that want low-effort art are the type to just use something like Bing/Google/Yandex to get their images, and oh look, now they get a bigger selection of 'free' art. Never mind that it took a long time and lots of effort to build this amazing library of data from many millions of people. XD

savageorange said:
Not that there is intrinsically something wrong about just exploring what kind of results you can get, but you need a lot more than that to arrive at a 'good' result. AI makes it easy to fuck around, which means a lot of people will just fuck around.

This is why I want to revisit algorithmic art like I used to post to DeviantArt years ago. I suspect there are entire areas of art that haven't been visited that this makes approachable. Not everything is realism or impressionism or the like. I want to make abstract art just for my own entertainment. ;)

My real issue isn't the algorithmic tools, but like many said, the attitudes of a lot of those that use and promote them. It's already reaching blockchain meme levels of con artists and grifters. We'll eventually get to some stability but this disruptive technology will take time to adjust to. :shrugs:

reallyjustwantpr0n said:
It is a logical contradiction, but it also doesn't matter. Some math can, in fact, be copyrighted, though it requires a "fuzzy" amount of human mangling.

In this case, yeah, it doesn't matter, in terms of the practical result making sense. Maybe there is an alternate universe in which compiled software is not copyrightable, but it seems like it would mainly just... make reverse engineering easier and more legal, which certainly doesn't seem aligned with the overall goal of either patent or copyright law.

There's no way for an "AI detector" to solve that problem.

You don't need to explain that to me. It's just that these huge logistical obstacles (institutional inertia, individual lack of technical literacy, individual lack of security literacy) make it kind of irrelevant. We, even just by thinking we should discuss this subject, are way above the water line here.
The technical part of an actual proper solution to the problem is not that complex but it's also, like, in general not gonna be implemented until way after it actually should.

Thanks for the chat. Meant to get back to you earlier just... busy. Working with some BERT models actually!

Huh... There is too much stuff happening with AI for me to even keep track of, honestly, but that sounds pretty interesting.

sentharn said:
regardless of the legality of training and/or using AI models (i suspect the latter at the very least is not fair use, but we'll see what the courts say), i am shocked at the number of furries--usually a relatively pro-artist lot--who think that it is OK to use AI models built from datasets taken from artists who were not consulted, credited, or compensated. these AI models have already had a noticeable and damaging effect on the (furry, and broader) art community.

there is a pinned post floating around a pro-AI furry subreddit that is literally a guide to "upgrading" a sketch commission from an artist so you can get a nicer commission "without having to pay for it". you can find an angry and very long twitter thread of artists who stopped offering sketch comms and updated their TOS in response. i'm observing a discord server with models designed to target specific artists' styles (and coincidentally these are artists who usually cost quite a lot of money--how strange). and furries still defend using these tools and datasets, because apparently it's OK to get free stuff built off the works of thousands of uncompensated artists if you really, really want it.🙄

yes, i know how the models are constructed and how stable-diffusion tools work. yes, i've tried them myself on my own GPUs. your defenses involving democratizing art or the invention of the automobile or the inevitability of technological progress or uwu im poor and i want free art--these are not persuasive to me. i don't care what you have to say. i don't abhor you, fyi. i just don't care enough about you to waste my time listening.

fwiw, this is literally why i requested DNP status, stopped posting new projects to e6 last year, and reduced my posting frequency (and quality) in general. yeah, my art isn't particularly good, or likely to be desirable as a trained style (a few people might get their jollies but barely anyone is going to look at it--someone might do it to be a troll, but it's their electricity and time to waste, not mine). i don't see the point in giving free, high-resolution, meticulously tagged and organized art to a community with a startling number of AI bros who are just going to ignore my TOS and my general wishes.

i'm considering abandoning tags going forward on my other galleries because i can afford the discoverability hit as a hobbyist. hell, most of my art doesn't even get posted publicly any more. i share it privately with friends or post on a few discord servers. so great job guys. this is definitely a positive use of an amazing technology that has no impacts whatsoever in your community. haven't you heard the expression, don't shit in your own bed?

especially the guy that was like, "AI Art ain't gonna take away artist's jobs". sure, totally not happening, nor is it discouraging people from pursuing art-related careers.

https://restofworld.org/2023/ai-image-china-video-game-layoffs/
https://www.reddit.com/r/blender/comments/121lhfq/i_lost_everything_that_made_me_love_my_job/
https://gameworldobserver.com/2023/04/12/game-artist-jobs-china-down-70-percent-gen-ai-adoption

anyway, that's the end of my self-allotted time to shout into the void. don't bother responding. or do, idk. your choice, your time to waste.

edit: grammar

Most people are rightfully anti-AI "art". There are just a minority of edgy contrarians here who are OK with it, and their opinion thankfully doesn't matter. AI "art" is rightfully hated by the majority of the community.
Fortunately laws are being enacted against it, and AI "art" is not subject to copyright.

Which is good, as AI "artists" don't deserve to make money off what is quite literally someone else's work. Especially when that "art" is just soulless rehashes over and over.

Updated

somelatinphrase said:
Most people are rightfully anti-AI "art". There are just a minority of edgy contrarians here who are OK with it, and their opinion thankfully doesn't matter. AI "art" is rightfully hated by the majority of the community.
Fortunately laws are being enacted against it, and AI "art" is not subject to copyright.

Which is good, as AI "artists" don't deserve to make money off what is quite literally someone else's work. Especially when that "art" is just soulless rehashes over and over.

That seems to be saying what was already said by some? It's just like the similar situation with the sites that put people's stuff up with watermarks without permission. Sooner or later, the law catches up with them.
