Generative AI in Wargames
Oh boy. This topic’s been brewing for the past year, and it’s really started to heat up in the board game world and now, as expected, in wargaming: the ethics of wargame publishers using generative AI in their games.
George Pearson, who’s been wargaming since 1968, recently posted in The Board Wargamer that he’s enjoyed watching graphics evolve in the hobby over the years and that he’s concerned about AI replacing the human artists whose work he appreciates. I can’t applaud and thank George enough for his excellent post, because until he did that, no one (to my knowledge) had addressed the issue head-on like he did. His fantastic post inspired me to write one of my own in Wargames, a group that I admin. Many of the replies to both of these posts from wargamers were surprising and utterly disappointing. Those in favor of generative AI took to ridiculing, making false accusations, and offering weak justifications for their support of programs that use copyright-protected work, and I’ll attempt to address those here.
If you’re not familiar with the term, generative AI (Artificial Intelligence) differs from other types of AI in that it generates something new. You might be familiar with predictive AI we all use, like Amazon suggesting things you “might also” want to buy or Gmail predicting how you want to finish a sentence. This type of AI is aimed at helping you complete small tasks, whereas generative AI’s purpose is to create something completely new: a photo, an illustration, music, or text.
All AI works by identifying patterns in a dataset. Amazon can suggest what you might want to buy next based on what thousands of other customers who bought the same things as you bought next. Facebook can suggest tagging a photo of someone based on the billions of photos we’ve all uploaded and who was tagged in them.
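For the curious, that pattern-matching idea can be sketched in a few lines of Python. This is just a toy illustration of the principle, not how Amazon actually does it, and all the product names and purchase histories below are made up:

```python
# Toy sketch of pattern-based recommendation: find customers whose purchase
# history contains everything you bought, then suggest the item they most
# often bought in addition. All data here is invented for illustration.
from collections import Counter

# Hypothetical purchase histories, one list per customer
histories = [
    ["dice", "hex map", "counter tray"],
    ["dice", "hex map", "rulebook"],
    ["dice", "hex map", "counter tray"],
    ["paint set", "brushes"],
]

def suggest(your_items, histories):
    """Suggest the item most often bought by customers who share your purchases."""
    tally = Counter()
    for h in histories:
        if set(your_items) <= set(h):               # they bought everything you did
            tally.update(set(h) - set(your_items))  # count what else they bought
    return tally.most_common(1)[0][0] if tally else None

print(suggest(["dice", "hex map"], histories))  # -> counter tray
```

Real recommendation systems are vastly more sophisticated, but the core is the same: your behavior is compared against patterns mined from everyone else’s.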
Generative AI works in a similar way. If you ask a generative AI program to create a photo of a cat, it’ll use the knowledge from its dataset of cat photos to make a new one. This is no easy feat, as simply averaging images will produce some scary-looking monstrosities with cat features, and we saw some of this in the early versions of these programs. The most recent versions of these applications, however, are producing some startling, if not disconcerting, results, where it’s getting almost impossible to discern real photos from fake ones (humans only get 10 total fingers now).
The ethical issue comes from the fact that many of these programs have used, and continue to use, copyright-protected work in their datasets and consequently produce work that looks like that of specific creators. It’s OK to tell an AI program to make an image that looks like a Rembrandt painting, because Rembrandt’s body of work is in the public domain. You can even put it on the cover of your board game. But in theory you shouldn’t be able to tell it to create work in the style of a living artist, because that would mean the artist’s copyright-protected work was used in the dataset. Yet that’s exactly what’s been happening. In one of the most infuriating examples, a photographer tried to have his images removed from a dataset and in response got a $979 invoice claiming that he had violated their copyright-protected work.
Lawsuits against AI and dataset companies are everywhere and show no signs of slowing down until regulation arrives. But companies are also finding ways to skirt the system. Adobe released its AI, called Firefly, last summer and said it would pay the legal fees of any copyright infringement case brought over its use, because it was only using images from its own stock library. What Adobe didn’t say was that those images were used without the consent of contributors. Contributing photographers and artists, who make their livings on royalties from their images, found out that their work had been used to seed AI datasets. Their work was used without their permission and without the option of opting out, for the purpose of creating images similar to those contributors’ own…which, you guessed it, takes future work away from those people.
This all matters to us as wargamers because publishers are now starting to use AI images in their games. This practice at best takes work away from the artists who have made our games great for years, and at worst does so by emulating their copyright-protected work. To be clear, considering how much artwork matters to how well a board game sells, artists don’t make much. For mainstream games, multiple sites list custom illustration and design for a game box at as little as $2,000. I imagine it’s even less for wargames, which often use classical paintings from the public domain and just require design without the custom illustration.
The ethics of this disturb me deeply, and I personally choose to stand with the artists. I’ve been doubly disappointed with the wargame community as of late because I naïvely thought they’d do the same. Not only has that not been the case, but I’ve seen many comments justifying the practice and even attacking the artists. I’m not picking on anyone in particular, as these are all arguments I’ve seen multiple times, but here are some of those gems:
“It’s not copying or stealing. AI is creating new things.”
While it’s technically true that the output isn’t an exact copy, it’s still deriving styles, colors, and line work from the art it’s been fed, kind of like those “continue this pattern” questions on high school tests. An AI program can’t create work like someone else’s unless it’s been seeded with work by that creator, in the same way that asking you to write a sonnet like Shakespeare would make no sense if you hadn’t read Shakespeare. Making art too similar to an active artist’s work is a copyright violation, but proving that AI is doing so is tough for artists, since the output is derivative rather than identical. Most of the lawsuits against AI companies revolve around this.
I should also mention that there are use restrictions on images, as you may have seen in your Google image searches. An image might be OK to use in a personal project but require purchase if you plan to use it commercially. Even then you’re not allowed to distribute the image as your own. For example, you can’t upload someone else’s photo to a stock photo website as your own and collect royalties. These AI programs, obviously, are being sold commercially, and they distribute what they create to customers who often go on to distribute those images commercially themselves, so if courts rule in favor of human creators, the penalties for AI companies could be doubly heavy.
“Artists steal; why shouldn’t I?”
An actual comment to me from a gentleman who created an imaginary scenario where unscrupulous artists are stealing each other’s work, so why shouldn’t he join the fun? Well, for one, because you just made that up. Isn’t this how criminals justify their own crimes? In all seriousness, I’m sure there are artists out there who copy work and call it their own…just like there are people in every profession who take credit for others’ work (I can show you a few at my old job). It should go without saying that that doesn’t give the rest of us a green light to punish all artists, most of whom are honest people trying to earn a living.
“Artists are influenced by other artists. What’s the difference if AI does the same?”
The obvious rebuttal to this is that a human artist takes longer to create their work. There’s blood, sweat, and tears involved. But the other side is that a human copy, even a tracing (which is plagiarism), is imperfect. You can try to copy an artist’s work, but it won’t be the same. An AI, using mathematically precise capturing methods, can get it exactly right and often produces “pieces” in a generated work that look identical to their source image. So in addition to being able to mass-produce these images, the AI can do it with accuracy aeons ahead of Xerox copies, and “chunks” of the original can remain unchanged. As I said above, there’s also the fact that you can prompt for work in the style of a specific artist, which proves that artist’s work was used to train the AI. Using someone else’s work, or even parts of it, is a universally agreed-upon copyright violation, and creators are arguing in court as we speak that it should be no different for companies that used their work to train AI and distribute derivative work commercially.
“You’re just afraid of progress.”
The straw man argument. The worst and most common excuse, which accuses artists, many of whom create digitally, of being afraid of technology, calling them “Luddites,” etc. The irony is that I see this most often from people I know to be technologically illiterate themselves. All the artists I know happily embrace technology. Most of my LinkedIn contacts are people who illustrate in Procreate, Photoshop, and other digital programs. This lame excuse knowingly deflects attention from the real issue: artists’ work being used without their permission. Artists love technology. They don’t love having their work used without consent (stolen) to create other work like theirs.
“It’s already done, so deal with it.”
This is like saying, “I overcooked my meatloaf, so just let the house burn down.” Yes, many of these applications have already “let the genie out of the bottle,” as one commenter put it, by scanning what’s already out there, but since when does that mean we give up trying to mitigate the problem? What problem-solvers these folks must be in their daily lives. What bothers me most about this comment, however, is how quick people are to turn on the artists who have made their games great for so long. The same gamers I see telling others not to worry about shipping costs, or who generously donate to content creators, are quick to pull the purse strings tight when it comes to artists. Yes, some of the damage has been done. Is it ethically right to use it to create work similar to existing artists’? To me that’s a clear no. I’ve been disappointed to see it’s not the same for others in the wargaming community.
“Small publishers can’t afford to pay artists.”
The irony of this popular comment is truly something to behold. The logic, as I follow it, is that small publishers can’t stay afloat unless they skip paying artists and instead use AI-generated derivatives of their work for free. I’m sorry, wargames have been around for what, 70 years? Generative AI is brand new to the masses. How did all those wargame companies survive? By this logic there should be none, since “sMaLl pUbLiShErS cAn’T pAy aRtiSts!” What a crock of [REDACTED].
I want to be clear for the wargamers in the back:
You having a game idea and no artistic ability does not entitle you to use artists’ copyright-protected work.
That’s your problem, not theirs. They don’t owe you their work any more than any other business owes you free stuff. That attitude is pure entitlement.
To sum up, I just want to say that I personally stand with the artists who have made board gaming and wargaming better hobbies. I will not buy any game that uses generative AI art until the technology is well regulated, artists are treated fairly, and their work is protected. I encourage you to do the same.
I’m thankful to all the artists who sweated over the beautiful cards, rules, boards, and boxes I use, and I don’t want to live in a world where they stop showing their work to us lucky viewers out of fear that it’ll be harvested by some AI program that spits out poor, artless imitations. Our world is better because of the creators. Please stand up for them and support them.