Lensa’s Viral AI Art Creations Were Bound To Hypersexualize Users

This year, it feels like artificial intelligence-generated art has been everywhere.
In the summer, many of us entered goofy prompts into DALL-E Mini (now called Craiyon), yielding a series of nine comedically janky AI-generated images. But more recently, there’s been a boom of AI-powered apps that can create cool avatars. MyHeritage AI Time Machine generates images of users in historical styles and settings, and AI TikTok filters have become popular for creating anime versions of people. This past week, “magic avatars” from Lensa AI flooded social media platforms like Twitter with illustrative and painterly renderings of people’s headshots, as if truly made by magic.
These avatars, created using Stable Diffusion — which allows the AI to “learn” someone’s features based on submitted images — also opened an ethical can of worms about how AI is applied. People discovered that the “magic avatars” tended to sexualize women and appeared to have fake artist signatures in the bottom corner, prompting questions about the images used to train the AI and where they came from. Here’s what you need to know.
WHAT IS LENSA AI?
It’s an app created by Prisma Labs that recently topped the iOS App Store’s free chart. Though the app launched in 2018, it became popular after introducing a “magic avatar” feature earlier this month. Users can submit 10 to 20 selfies, pay a fee ($3.99 for 50 images, $5.99 for 100, and $7.99 for 200), and then receive a bundle of AI-generated images in a range of styles like “kawaii” or “fantasy.”
The app’s “magic avatars” are somewhat uncanny in style, refracting likenesses as if through a funhouse mirror. In a packet of 100, at least a few of the results will likely capture the user’s likeness well enough in the style of a painting or an anime character. These images have flooded Twitter and TikTok. (Polygon asked Prisma Labs for an estimate of how many avatars were produced, and the company declined to answer.) Celebrities like Megan Fox, Sam Asghari, and Chance the Rapper have even shared their Lensa-created likenesses.
HOW DOES LENSA CREATE THESE MAGIC AVATARS?
Lensa uses Stable Diffusion, an open-source deep learning model that draws from a database of images scraped from the internet. That database, called LAION-5B, includes 5.85 billion image-text pairs, filtered by a neural network called CLIP (which is also open-source). Stable Diffusion was released to the public on Aug. 22, and Lensa is far from the only app using its text-to-image capabilities. Canva, for example, recently launched a feature using the open-source AI.
An independent analysis of 12 million images from the data set — a small percentage, even though it sounds massive — traced images’ origins to platforms like Blogspot, Flickr, DeviantArt, Wikimedia, and Pinterest, the last of which is the source of roughly half of the collection.
More concerningly, this “large-scale dataset is uncurated,” says the disclaimer section of the LAION-5B FAQ blog page. Or, in plain terms: this AI has been trained on a firehose of pure, unadulterated internet images. Stability AI removed only “illegal content,” including child sexual abuse material, from Stable Diffusion’s training data, The Verge reported. In November, Stability AI made changes that made it harder to generate NSFW images. This week, Prisma Labs told Polygon it too “launched a new safety layer” that’s “aimed at tackling unwanted NSFW content.”
Stable Diffusion’s license says users can’t use it to violate the law, for “exploiting, harming or attempting to exploit or harm minors,” or to generate false information or disparage and harass others (among other restrictions). But the technology itself can still generate images that violate those terms. As The Verge put it, “once someone has downloaded Stable Diffusion to their computer, there are no technical constraints to what they can use the software for.”
WHY DID AI ART GENERATORS BECOME SO POPULAR THIS YEAR?
Though this technology has been in development for years, a few AI art generators entered public beta or became publicly available this year, like Midjourney, DALL-E (technically DALL-E 2, but people just call it DALL-E), and Stable Diffusion.
These forms of generative AI allow users to type in a string of terms to create impressive images. Some of these are delightful and whimsical, like putting a Shiba Inu in a beret. But you can probably also imagine how easily this technology could be used to create deepfakes or pornography.