Why AI Avatar Apps Are Unethical (And What We Can Do About It)
2022 will be remembered as the year of AI.
It’s the year that The Economist ran a cover image generated by an AI image tool. The year that Jason Allen’s AI artwork won his category at the Colorado State Fair. And it’s the year that ChatGPT lit up the internet with a strange mixture of astonishment, fear, and wonder.
This blog post will consider one of the AI phenomena to sweep the globe: AI-generated avatars. AI apps like Lensa give users a unique artistic portrait, based on their input of selfies, with results so startling the images have gone viral.
AI-generated avatars are remarkable, and show one more way that AI tools can help anyone in need of creative work. But they come with serious ethical and legal problems that have real consequences for artists. The success of these apps relies on artists’ work training them - without the artists ever being asked. So how can the situation be put right?
The companies offering AI applications
By the end of 2022, there was a whole range of AI applications for image creation.
Here are some of the key players:
- Stable Diffusion provides the technological building blocks behind many of the other apps. On its website, users input textual prompts (both positive and negative) to generate images.
- To access the Midjourney bot, users submit text prompts to the app’s Discord server. This is likely to become more accessible in the future.
- AI Picasso Dream Art Studio is based on Stable Diffusion. The app can take a text prompt, adapt a user-submitted drawing, or even fill in missing areas of an image.
- Lensa AI’s “magic avatar” add-on is one of the most prominent avatar-specific tools. It is free to trial, with monthly or yearly paid plans. Users upload multiple selfies and then receive a selection of avatars.
- The full Dall-E service offers a host of image services. Dall-E can take text prompts, intelligently manipulate generated images, and respond to user-submitted images.
- The Craiyon website (previously Dall-E mini) is another free online generation tool with a very simple interface. It can handle different styles, but lacks the sophistication and variation options of the other services.
Out of these options, Lensa stands out by allowing user images as prompts: this is a sophisticated innovation that Lensa does especially well.
All of these apps offer amazing opportunities - but also reveal their limitations quite quickly. Stylistically, they might impersonate Van Gogh or Kahlo quite well, but struggle with Egon Schiele, Cy Twombly, or Sofonisba Anguissola.
Other web-based applications showcase the sheer power of AI image generation in slightly different ways. For example, This Person Does Not Exist randomly produces fictional photo-realistic portraits, while Artflow helps people animate avatars.
Why would someone want to use AI Avatars, anyway?
There’s always a buzz around new technology. But generating AI avatars has many potential use-cases.
Profile pictures are a crucial element of anyone’s online identity.
Every freelancer knows that their profile picture on LinkedIn, Upwork, or Twitter is as important as the skills they bring to the table. That’s why plenty of workers pay for professional headshots. A stand-out profile pic can pay for itself if it helps to bring in clients.
Could AI avatar apps do something similar?
We tried one app out to see what it could do. We gave it 10 photo headshots - and you can see the results below. Let’s face it - they’re amazing. A little creepy, perhaps, but incredible impersonations of painted portraiture.
One AI app calls image generation “the designer's new best friend”. And you can see how AI could be a good friend to anyone looking for an impressive social media profile.
For a very small price (AUD$9.95 in our case), we got good-enough results that might have cost hundreds of dollars from a human artist. In marketing, especially, users aren’t looking for highly innovative work.
Sadly, this isn’t a win-win situation. To understand why, we need to look at how these apps actually work.
How do AI avatar art platforms work?
For users, AI image generation seems like some serious form of witchcraft.
But the process behind them is simple enough.
Before reaching users, AI apps “learn” all about artistic styles. They use a process called deep learning to absorb the painting and drawing styles of every artist you’ve heard of (and many you haven’t).
The Deep Learning process is effective because it takes in vast amounts of data.
The publicly available LAION image datasets are some of the most important drivers. LAION-5B, for example, includes nearly 6 billion image-text pairs - a number so huge it’s hard to comprehend.
To get some sense of the scale, Andy Baio and Simon Willison built a data browser covering some 12 million of these images. That sounds huge, but it’s still only a fraction of a percent of the images used to train the major AI models.
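A quick back-of-the-envelope calculation, using the figures quoted above, makes that proportion concrete:

```python
# Rough scale comparison, using the figures cited in the text:
# LAION-5B (~5.85 billion image-text pairs) vs. the ~12 million
# images indexed by Baio and Willison's data browser.
laion_5b_pairs = 5_850_000_000
browser_sample = 12_000_000

fraction = browser_sample / laion_5b_pairs
print(f"{fraction:.2%} of LAION-5B")  # → 0.21% of LAION-5B
```

Even a browsable sample of 12 million images covers barely a fifth of one percent of the full dataset.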
But AI image generation doesn’t work for everyone
It’s no mystery, then: just some incredible leaps forward in computing capacity. But these innovations leave an important group of people behind: all the artists who helped train the AI.
Many images on the internet are in the public domain. And the pioneers of deep learning made sure that they only used legally legitimate images. When the Obvious collective created their “Edmond de Belamy” portrait in 2018, they used just 15,000 images hosted by WikiArt.
By contrast, when software “learns” from billions of images, the developers just aren’t thinking about the implications for the image creators. Although the LAION datasets (and others) are publicly available, they effectively launder artists’ original work for money-making companies.
To put it simply, AI apps steal copyrighted images to turn a profit - and that’s causing a lot of pain for the artists who feed the machines. The Australian artist Kim Leutwyler explains - “They are calling it a new original work but some artists are having their exact style replicated exactly in brush strokes, colour, composition – techniques that take years and years to refine.”
There are other problems with AI images. Users report that some apps produce highly sexualized outputs, lightened skin tones, and pictures that don’t even look like them. But if artists aren’t properly compensated for their contributions, AI image generation simply doesn’t have a sustainable future.
How to make AI work for artists
After all the fuss of AI avatar apps, it’s hard to imagine Stable Diffusion or Lensa rolling back their services. But there are many ways that profitable companies could recompense artists. Before the industry moves forward, we need to have a serious conversation about how it’s going to work for everyone.
Here are four ways we might improve things for artists: improved copyright, the choice of opting out, royalty payments, and different approaches to AI training.
AI-produced images can’t be copyrighted in the way that human-made art can. And commercial users will be excited to have a type of image available without the complications of intellectual property rights.
But as one artist puts it, AI products are “falsely copyright-free solutions”. These images only appear to be “free”. As such, copyright laws need more precision and nuance for modern digital usage.
These new images raise many tough questions. Who is responsible for the copyright - the “AI artist” inputting the prompts, the developer of the paid-for app, the bots trawling the internet for content, or the end user who receives the product?
Until these issues are decided, it’s just not fair to use an AI app that hasn’t paid the vast squad of artists who helped do the training.
Help artists to opt-out
The current range of popular apps has already done its “learning” on existing image datasets. But the data they use can change in the future.
The HaveIBeenTrained database, for example, enables artists to see whether their images were used for AI training. The website was created by the Spawning collective, which works closely with LAION to remove artists’ images from future AI training.
The scope is large. Spawning plans to partner with further organizations to make HaveIBeenTrained a “once only opt-out tool that applies to every dataset used to train generative AI Art tools.”
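At the dataset level, an opt-out scheme could be as simple as filtering training records against a registry of opted-out works before training begins. Here is a minimal sketch of the idea - the record format and registry are entirely hypothetical, not Spawning’s or LAION’s actual data structures:

```python
# Hypothetical sketch: filter a training dataset against an opt-out registry.
# The URLs and record format below are illustrative only.

opt_out_registry = {
    "https://example.com/artist-a/portrait.png",
}

training_records = [
    {"url": "https://example.com/artist-a/portrait.png", "caption": "portrait"},
    {"url": "https://example.com/museum/public-domain.jpg", "caption": "oil painting"},
]

def filter_opted_out(records, registry):
    """Drop any record whose source URL appears in the opt-out registry."""
    return [r for r in records if r["url"] not in registry]

cleaned = filter_opted_out(training_records, opt_out_registry)
print(len(cleaned))  # → 1 (only the non-opted-out record survives)
```

The hard part isn’t the filtering itself - it’s making sure every dataset maintainer actually consults the registry before each training run.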
Opting out is hardly ideal. But it provides some protections for creatives who want to safeguard their work.
Return royalties to artists
In the world of Web 2.0, social media users have gotten used to handing over their words, images, and data in exchange for free access to online services. But as interactive services have become embedded in the digital landscape, platforms have recognized the importance of paying creators.
Algorithm developers have created sophisticated methods for enabling AI to make new images out of existing pictures. Their next step should be working out how to attribute - and monetize - each image’s contribution.
For every generation, the AI app would pay into a fund for the creators associated with the source images. In the current model, artists would need to “opt in” to make their claim. And artists would undoubtedly be keen to receive some compensation for their work.
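In practice, such a scheme might log which opted-in artists’ works influenced each generated image and credit a small fee to each. A toy sketch - the flat fee and the attribution mechanism are entirely hypothetical, since reliable per-image attribution remains an open research problem:

```python
# Hypothetical royalty-fund sketch. The fee amount and the idea that
# each generation reports its influencing artists are assumptions,
# not any real platform's mechanism.
from collections import defaultdict

FEE_PER_USE = 0.002  # hypothetical flat royalty per attributed artist, in dollars

def credit_royalties(fund, attributed_artists, fee=FEE_PER_USE):
    """Credit a flat fee to every artist attributed to one generated image."""
    for artist in attributed_artists:
        fund[artist] += fee
    return fund

fund = defaultdict(float)
credit_royalties(fund, ["artist_a", "artist_b"])  # one generation, two artists
credit_royalties(fund, ["artist_a"])              # a second generation
# artist_a has now accrued two credits, artist_b one
```

Even a tiny per-use fee adds up at the scale these apps operate - which is precisely why the platforms, not the artists, should bear the bookkeeping.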
AI image generation is unlikely to prove a major money-spinner for the majority of working artists. But that’s beside the point - a 2022 CNN report suggested that some artists would be happy just to get a token acknowledgment for their input.
Only train the AI generators from public domain content
The final option is more drastic. AI avatar apps could take a step back and completely reconsider their approach.
As we’ve described above, amazing tools can still be created with small datasets. Those tools just won’t have the comprehensive coverage of more than a billion images. AI image generators are unlikely to independently choose this course of action. But really, it’s not so ridiculous.
Developers limiting the scope of their AI training could actually produce more effective tools - with comprehensive coverage of a time period, style of drawing, or choice of subject. Would developers choose this route after the viral performance of Lensa? It’s unlikely, but limiting their horizons could still lead to innovative tools.
AI tools are always controversial. But avatar AI applications demonstrate the legal, ethical, and artistic questions that still don’t have good answers.
This post hasn’t been another hand-wringing piece about AI tools taking over jobs. We’ll always need artists. And if AI tools can lend a hand to their processes, that’s no bad thing.
Yet as the dust settles on the latest viral trend, we need to think about the damage these applications may be causing. Until those problems are solved, there might not be any ethical way of using AI avatar applications.