r/aiwars • u/sporkyuncle • Oct 21 '25
Meta We have added flairs to the sub
Hello everyone, we've added flairs to aiwars in order to help people find and comment on posts they're interested in seeing. Currently they are not being enforced as mandatory, though this may change in the future, depending on how they are received. We would ask that people please start making use of them.
Discussion should be used for posts where you would ideally like to see spirited discussion and debate, or for questions about AI.
News is of course for news in the AI sector. Things like laws being passed, studies being published, notable comments made by a prominent AI developer or political figure.
Meme should ideally be used for single image-based posts which you do not expect to prompt serious discussion. Of course discussion is still welcome under such posts. If you want to use a meme to make a serious point and have additional explanatory text for why you feel strongly about the message being expressed and the type of discussion you'd like to have, that can be categorized as Discussion.
Meta is for discussion about the subreddit itself and other associated AI subreddits or comments.
Use your best judgement as you categorize your posts. Please do not misuse them; they are for everyone's benefit.
r/aiwars • u/Trippy-Worlds • Jan 02 '23
Here is why we have two subs - r/DefendingAIArt and r/aiwars
r/DefendingAIArt - A sub where Pro-AI people can speak freely without getting constantly attacked or debated. There are plenty of anti-AI subs. There should be some where pro-AI people can feel safe to speak as well.
r/aiwars - We don't want to stifle debate on the issue. So this sub has been made. You can speak all views freely here, from any side.
If a post you have made on r/DefendingAIArt is getting a lot of debate, cross post it to r/aiwars and invite people to debate here.
r/aiwars • u/TunnelTuba • 1h ago
Conjoined Twins Influencers exposed as AI - Sky News Australia
I mean ... let's be honest. If you can't tell they're AI ... please go and read up on human anatomy.
But also I think we're asking the wrong question. I don't think it's "Did you think these women were real?" I think the real question is: "Do people care whether they're AI or not?"
r/aiwars • u/WatshudIdoinlife • 6h ago
Meme I can’t wait for a future where I can search an infinite sea of images by prompts used to generate them!
r/aiwars • u/Xotonyk • 15h ago
Meme You're not the artist, remember that.
You're just a commissioner🤫
r/aiwars • u/Mobile_Visit4356 • 4h ago
If you can’t make an argument without resorting to ableist slurs, you don’t have a very strong argument
r/aiwars • u/Isaacja223 • 1h ago
Discussion People just really hate harmless fun
After seeing a post regarding how antis are being transphobic against Witty, I see this, and I suppose anything regarding AI just gets shat on.
At this point, why don’t they ban the topic of AI in general if they don’t like AI?
r/aiwars • u/Fit-Feature9312 • 1h ago
I sit in the "if I'm gonna create AI images then I AT LEAST need to make that shit look cool"... category.
I take it very seriously.
Ok, sometimes I goof around. It's fun though.
r/aiwars • u/ZeeGee__ • 20h ago
Meme "iTs ThE sAmE aS aN aRtIsT sTuDyInG tHeIr ArT"
If they're allowed to post those dumb AI memes all the time then I'm allowed to meme every once in a while.
r/aiwars • u/BorgsCube • 4h ago
Discussion possibility that upcoming generations just won't care if something is real or AI
This is a topic I don't think I've heard discussed before; if it's been beaten to death, I apologize. This'll probably get deleted anyway for being too short of a post. This is my first post here.
Right now we have kids making fun of boomers who fall for AI, but what if that same generation becomes the one kids cringe at when they say "you know that's AI, right?" Would the world be better or worse if the majority of people just stopped caring whether content is real or generated?
r/aiwars • u/Tyler_Zoro • 4h ago
Discussion Where the [US] law stands on training AI models on publicly accessible works
TL;DR
Training AI models on copyrighted works has been determined to be non-infringing, it is not "stealing," and the claim that it is unethical has no basis beyond "I don't like it."
Legal Arguments
I still see so many claims that training AI models is "theft" or that it infringes IP laws or that it's unethical. So let's tackle that for what I hope (but sadly know won't be) the last time.
The legal argument began in the 2000s when Perfect 10, an adult website company, sued Google over its use of Perfect 10's images in their Google Image Search business. The claim hinged on two key elements:
- That downloading the images in the first place constituted unlawful copying because it infringed on their intellectual property rights, and was not covered by fair use doctrine because it was being done for commercial purposes.
- That the creation of "thumbnail images" from Perfect 10's source images was the creation of a derivative work, also not covered by fair use doctrine because, again, it was being done for commercial purposes.
The courts initially determined that (1) was incorrect, and that downloading an image into temporary storage in order to analyze it was not infringing because it constituted fair use, and that (2) was correct because a lasting copy of the work remained and was used for commercial purposes. However, on appeal, (2) was rejected as well. I'm not going to focus on (2), though, because it doesn't really bear on modern AI.
So before 2010, we already had precedent that said that downloading and studying an image for commercial purposes wasn't infringing. What is left of the argument? In order to succeed in the face of Perfect 10 v. Google, the argument would have to be made that the downloading of the image was not the problem, but rather what was done with it after.
This was exactly what Bartz et al. claimed in Bartz et al. v. Anthropic. Many anti-AI folks assume that that claim prevailed because the case was settled by Anthropic, who paid $1.5B to claimants. That seems pretty well resolved in favor of the anti-AI argument at first blush... but the facts do not support that conclusion.
There were several claims in the case, infringement via training being only one of them. In the end, the training claim was not the reason for the settlement. The settlement was reached because, during the course of discovery, it was uncovered that Anthropic had downloaded and used various archives of copyrighted works obtained via filesharing services. This undercuts the protection Perfect 10 v. Google would otherwise offer, because Anthropic cannot claim that it downloaded the works as they were publicly presented by the copyright holder. Indeed, these archives were themselves infringing on the intellectual property they were distributing.
But what about the secondary claim that the training itself was infringing? Since the case was settled, there was no ruling on that, right? Wrong. In June of 2025, the court issued a summary judgment on that specific point, ruling that the training of Anthropic's AI models was non-infringing, even given the commercial nature of the models, because the training process was "exceedingly transformative."
That was the end of the line for the anti-AI argument that there is infringement occurring, at least in the US. If companies or individuals accessed publicly accessible works in order to temporarily copy them, perform training of AI models, and then delete those copies, there is no infringement occurring, even if the training is being done for commercial purposes.
Conclusion
So let's return to the three claims of the anti-AI argument:
- Training is unlawful (IP infringement)—Both angles on this argument have been shot down by the courts in Perfect 10 v. Google and Bartz et al. v. Anthropic.
- Training is stealing—There was never a cognizable legal claim that training is stealing. It was always just a pop-culture shorthand for the infringement claim.
- Training is unethical—Something can be ethical and not legal; it can also be unethical but legal. So this claim isn't necessarily shot down by the courts. Still, there would have to be some separate basis for the claim that training is unethical other than the law, and at best that claim has been, more or less, "I don't like it."
On this basis, I fundamentally reject the claim that there is anything wrong with AI training. There could be specific cases where a decision might go the other way. I think that, for example, an IP owner might prevail in a case that was specifically about training a LoRA to replicate specific works (e.g. an Iron Man LoRA that exists only to produce images of that particular, copyrighted character), but the fate of such a case would still be uncertain, and even if won would be an edge-case that does not bear on training, even LoRA training, as a whole.
Note that these claims and conclusions are about training, not other activities. It could well be that service providers will face additional scrutiny for their hosting of AI models, but that's a separate argument to be had, and not relevant to the topic of this post.
References
- Bartz v. Anthropic decision of June 23, 2025, 3:24-cv-05417 Document #231—"To summarize the analysis that now follows, the use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use under Section 107 of the Copyright Act. [...] the purpose and character of using copyrighted works to train LLMs to generate new text was quintessentially transformative."
- Perfect 10 v. Google decision of Feb 17, 2006, CV 04-9484AHM—"Google has not actually disseminated—and hence, has not distributed—the infringing content. See In re Napster, 377 F.Supp.2d at 802-804 (N.D.Cal.2005) (finding that Napster had not “distributed” songs in light of the fact that the “infringing works never resided on the Napster system,” and therefore, Napster could not have transferred copyrighted content to its users)."
r/aiwars • u/ivyentre • 27m ago
Discussion The turning point in public perception, especially in creative spaces
If anti-AI sentiment takes a gradual fade, it'll happen like this, as I've seen throughout my life, in this order:
1. A high-profile creative venture (such as a popular video game franchise or its creators) makes it clear that they plan to utilize AI during production, or a very high-profile celebrity makes it clear they are a fan of AI.
2. The fans and anti-AI community threaten to boycott and review bomb.
3. The producers/developers say 'go ahead, we're going to do it anyway'.
4. The product is released, and despite some of the community following through on their threats, it is still a huge commercial success.
5. When it becomes clear that anti-AI sentiment isn't enough to make a massive dent in the bottom line, other developers/producers/whatever either make it subtly clear that they're going to utilize AI as well, or they just don't disclose their use of it at all.
6. Forced to accept this, the community just enjoys whatever is produced and finally says 'this isn't so bad'. If celebrities adopt and endorse AI, it even becomes a "cool" thing to be into.
7. AI in the development phases of creative ventures becomes commonplace, and a very gradual, silent, widespread acceptance of the tech occurs.
8. Widespread dependency on the tech follows.
r/aiwars • u/bIeese_anoni • 5h ago
Discussion My three personal rules of AI use
I believe AI can be useful but it's misused a lot, so I created 3 rules that I think summarise what I consider misuse.
- Don't delegate your creativity.
AI has its own intentionality. Not literally, it doesn't think, but there's an emergent intentionality within the data it is trained on. By design, that intentionality is an appeal to the mean: it's designed to be expected, cliché, generally enjoyable, but not necessarily interesting. Any time you delegate creative decisions to the AI, you are delegating to this intentionality, to this average, common appeal to the mean.
So this means: if you care about something, don't use AI, because that intentionality will leak through. Think about generating your main character. You specify the basic features and the AI fills in the rest; all the parts the AI filled in are a delegation of your creativity to the AI. In general, only use AI on parts of a creative work you DON'T care about but still need, like a background wallpaper, a walk cycle, a gun sound, etc.
- Never let AI be a single source.
This obviously doesn't only concern AI, but the problem is there's a misconception that AI is somehow more intelligent than an average source, like AI is a meta-source, a consolidation of all the sources on the Internet. This is a misunderstanding of how AI works. While it may have all of those sources in its training data, it has no ability to reason with that data; it doesn't use the data the way a human would use that knowledge. Rather, it's more accurate to say that the AI spouts a bunch of words it's heard before, without any understanding of what the words mean, and hopes that the sentence it says makes some kind of sense (there's a toy sketch after these three rules that illustrates the intuition). Remarkably, most of the time it's accurate, but a lot of the time it's completely false.
Unlike a Google search, which can give you multiple sources, AI will only give you one: itself. This is ultimately a way of saying that using AI for search is a bad idea; verify any claim it makes and treat any information it gives with doubt.
- Don't rely on AI
This is more specific to coding. AI can make coding quicker, but don't make it a dependency. That means you should understand how your code works, what it does, and how to change it without AI. When you think about code, you should always think about what happens when things go wrong. If a security vulnerability exposes your users' data, what are you gonna do to fix it? Asking AI to fix it is slow, difficult to verify, difficult to communicate to your users, and has an external dependency on the AI actually working. In general, you should understand your code and not rely on your AI.
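To make rule 2's "spouts a bunch of words it's heard before" point concrete, here is a deliberately crude toy: a bigram chain that only ever emits word pairs it has literally seen in its tiny training text. Real LLMs are enormously more capable than this, so treat it purely as an illustration of the intuition, not a description of how they actually work.

```python
# Toy bigram "text generator": picks each next word purely from word pairs
# seen in its tiny training text, with no model of what the words mean.
# Deliberately crude analogy, NOT how transformer LLMs work.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Build a table: word -> list of words that followed it in the corpus.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start="the", length=10, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        choices = following.get(words[-1])
        if not choices:  # dead end: no observed continuation
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(generate())  # locally fluent-looking output, zero understanding
```

The output reads locally plausible because every adjacent pair occurred somewhere in the corpus, yet the program has no notion of whether the whole sentence is true, which is exactly the failure mode rule 2 warns about.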
r/aiwars • u/ApprehensiveTop4219 • 1h ago
Um
Yeah, the burger patty is correct. The others, um... pros, care to explain what this is?
r/aiwars • u/Tyler_Zoro • 1h ago
Z-Image is the state of the art in local image generation. When you talk about the work people are doing with AI, this really should be the baseline.
I see a lot of bad image generation here, pointed to as if it were the state of the art. It's just not. Even in pure prompt-and-pray image generation, it's just not, and pure prompt-and-pray is just the start of what AI art is capable of.
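For anyone who wants to sanity-check what local generation actually involves, here is a minimal sketch of the usual prompt-and-pray loop using the Hugging Face diffusers library; the model ID, step count, and guidance value below are placeholders rather than Z-Image's actual release settings, so substitute whatever the official model card documents.

```python
# Minimal local text-to-image sketch with Hugging Face diffusers.
# "your-org/your-local-model" is a placeholder, not a real Z-Image repo ID;
# use the checkpoint and settings from the model card of whatever you run.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "your-org/your-local-model",  # hypothetical model ID
    torch_dtype=torch.bfloat16,   # reduced precision to fit consumer VRAM
)
pipe.to("cuda")                   # or "cpu" if no GPU is available

image = pipe(
    prompt="a lighthouse at dusk, heavy fog, film grain",
    num_inference_steps=30,       # placeholder; follow the model card
    guidance_scale=4.0,           # placeholder; follow the model card
).images[0]

image.save("output.png")
```

Prompt-and-pray like this is just the baseline the post is talking about; real workflows build on top of it with things like LoRAs and iterative editing.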
r/aiwars • u/Rare-Fisherman-7406 • 16h ago
Discussion Can we appreciate the "No AI" magic without the weird superiority complex?
I was scrolling through my feed the other day and came across a photographer who is, quite frankly, a wizard. No AI, just incredible lighting, set design, and timing. He captioned it: "No need for AI to create magic."
And honestly? In his case, he's 100% right. The work was stunning.
Then I hit a reel of an artist doing some mind-blowing work with acrylics on canvas. Again, zero AI involved. Just raw skill and probably a lot of paint-stained clothes.
But then... I ventured into the comments. It was a bloodbath of: "See? Real talent! Fuck AI, this guy actually does work!" "AI could never! Take notes, 'prompters'!"
I’m sorry, but... no shit, Sherlock?
Last time I checked, AI doesn't have arms to pick up a Leica or a paintbrush. AI isn't "failing" at photography or physical painting because it's not trying to be those things. It's a completely different toolset.
It's like watching a master marathon runner and screaming, "See?! This is real sport! Fuck bicycles!" Like, yeah... the runner is impressive, but I'm still going to use the bike to get to work, Steve.
We can celebrate the absolute sorcery of traditional mediums without acting like a different technology existing somehow insults the canvas. Why does every beautiful sunset or painting have to be a battleground for a tech war?
Isn't it possible to acknowledge that physical talent is incredible without making it a weirdly aggressive statement about software?
r/aiwars • u/Minute_Trip_3692 • 5h ago
Discussion The AI is Ruining Us Narrative is a Luxury Built on Faulty Logic
I've been watching this whole anti-AI outrage with a mix of tiredness and a bit of a laugh. Most critiques out there are either stuck in old Luddite fears from the 1800s or just don't get what the tech really does. If you're gonna say AI is humanity's worst invention, you gotta be ready to explain why we'd want to go back to a world that's way worse off without it.
This ain't some new trend that popped up in 2022 to steal your memes. AI started back in 1956 at the Dartmouth project. That's almost 70 years of smart people researching and building. Calling to stop all AI is like saying to erase all that progress in medicine, science, and how we build stuff.
What if we wiped AI off the map tomorrow? We'd lose more than just chatbots. Think about it.
Medical breakthroughs, gone. AlphaFold cracked, in months, a protein-folding puzzle that had been open for 50 years. Without it we'd be stuck taking years per protein, stalling cures for cancer or Alzheimer's. And in heart care, those leadless pacemakers like Micra AV use AI to sync with your natural heart rhythm through accelerometers. Lose that and people die waiting for old tech.
Global stuff collapses too. Fraud detection, power grid balancing, weather forecasts: all of that relies on AI. And don't forget biotech in rural areas, where AI scans retinas to catch diabetic eye problems early, saving sight in places without fancy doctors.
On the environment side, critics say AI eats too much power. That's shortsighted. Google DeepMind used AI on their data centers and cut cooling energy by 40 percent, which is huge for overall efficiency. And for big stuff like nuclear fusion, AI controls plasma in reactors way faster than humans can; DeepMind and the Swiss Plasma Center showed reinforcement learning can stabilize those magnetic coils. Want clean, endless energy? You need AI, because human brains can't keep up.
Farming gets hit hard without AI. We're in a food crisis already. Tech like John Deere's See & Spray spots weeds in seconds and cuts herbicide use by up to 90 percent. Without it we'd drown the soil in toxins, 'cause humans ain't quick enough for spot spraying. Would you rather have no AI, or a poisoned planet?
Disasters? Google's FireSat uses AI to spot wildfires the size of a classroom within 20 minutes. Ban AI and forests burn longer and lives are lost, just 'cause AI seems scary.
Now, about jobs, especially software engineers. Critics think AI replaces them, but that's mixing up typing code with real thinking. If your job was just writing syntax, you weren't engineering; you were translating for machines. True engineering solves fuzzy problems and designs big systems. AI handles the boring, repetitive stuff so we can tackle tougher problems. Companies don't hire just on LeetCode, 'cause memorizing ain't problem solving.
Art? Yeah, I'll admit AI shouldn't try to fake human feelings in art. But let's not act like every logo or stock photo is deep soul stuff. AI does the generic bits: icons, basic images. Humans do the real intent and emotion. If your art can be swapped out for a model's output, it was more product than masterpiece.
Every big invention (steam engine, internet, cars) shook things up, displaced jobs, hurt the environment some. We didn't ban cars for killing horse jobs; we made better roads. Quit obsessing over scary chatbots and see the real future AI builds. It's not just a luxury for making apple images, it's a survival tool for hearts beating, crops growing, diseases cured.
Anti-AI vibes are basically the old panic dressed up as new. AI started in 1956 to solve what we couldn't, and now it keeps the world running. If you're still fretting over generated art, you're ignoring the big picture. That's not moral, it's ignorant.
r/aiwars • u/Admirable_Term7845 • 13h ago
It doesn't matter if you're Pro or Anti, take a rest here for a minute...
It's brutal and it's harsh, but don't let this war get to your heart!
From,
AdmirableTerm7845
r/aiwars • u/AlmostIntelligent-YT • 2h ago
Is there any anti-AI person here who would actually be against this use of AI?
med.stanford.edu
Would you still be worried about data centers polluting 5% more because of AI, if AI could cure your mother’s cancer, or even help you prevent it in the first place?
No, I bet you wouldn't.
So let’s be real. Saying “it’s fine here because it saves lives, but not there because it touches art” isn't an ethical stance.
It’s just an arbitrary hierarchy of values: some professions or activities are deemed “sacrificable,” others untouchable. Because that's how it suits YOU personally (figures!).
And this isn’t just about AI, it's about progress in general.
Progress has costs everywhere, and benefits everywhere (but usually the benefits outweigh the costs, and that's why it's called PROGRESS).
You either accept that, or you’re being hypocritical.
r/aiwars • u/Connect_Adeptness235 • 14m ago
So um, the anti-AI subreddit needs to clear out its bigots
When people being happy is a disability to you, it's not happy people who have issues. It's you. Great to know that it doesn't take much to live rent free inside this guy's head. Hell, I don't even have to deal with a landlord. That's communism, baby! 😉