Lensa’s Viral AI Art Creations Were Bound To Hypersexualize Users

This year, it feels like artificial intelligence-generated art has been everywhere.

In the summer, many of us entered goofy prompts into DALL-E Mini (now called Craiyon), yielding a series of nine comedically janky AI-generated images. But more recently, there’s been a boom of AI-powered apps that can create cool avatars. MyHeritage AI Time Machine generates images of users in historical styles and settings, and AI TikTok filters have become popular for creating anime versions of people. This past week, “magic avatars” from Lensa AI flooded social media platforms like Twitter with illustrative and painterly renderings of people’s headshots, as if truly made by magic.

These avatars, created using Stable Diffusion (which lets the AI "learn" someone's features based on submitted images), also opened an ethical can of worms about how AI is applied. People discovered that the "magic avatars" tended to sexualize women and often bore fake artist signatures in the bottom corner, prompting questions about the images that had been used to train the AI and where they came from. Here's what you need to know.

WHAT IS LENSA AI?
It's an app created by Prisma Labs that recently topped the iOS App Store's free chart. Though it was created in 2018, the app became popular after introducing a "magic avatar" feature earlier this month. Users can submit 10 to 20 selfies, pay a fee ($3.99 for 50 images, $5.99 for 100, and $7.99 for 200), and then receive a bundle of AI-generated images in a range of styles like "kawaii" or "fantasy."
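The tiering means larger bundles are cheaper per avatar. A quick sketch of the math, using the prices reported above:

```python
# Lensa's reported "magic avatar" bundle prices (USD).
bundles = {50: 3.99, 100: 5.99, 200: 7.99}

# Price per individual image for each bundle size.
per_image = {count: price / count for count, price in bundles.items()}

for count, cost in sorted(per_image.items()):
    print(f"{count} images: ${cost:.3f} each")
```

The 200-image bundle works out to roughly 4 cents per avatar, about half the per-image cost of the smallest bundle.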

The app’s “magic avatars” are somewhat uncanny in style, refracting likenesses as if through a funhouse mirror. In a packet of 100, at least a few of the results will likely capture the user’s photo well enough in the style of a painting or an anime character. These images have flooded Twitter and TikTok. (Polygon asked Prisma Labs for an estimate of how many avatars were produced, and the company declined to answer.) Celebrities like Megan Fox, Sam Asghari, and Chance the Rapper have even shared their Lensa-created likenesses.

HOW DOES LENSA CREATE THESE MAGIC AVATARS?
Lensa uses Stable Diffusion, an open-source AI deep learning model, which draws from a database of art scraped from the internet. This database is called LAION-5B, and it includes 5.85 billion image-text pairs, filtered by a neural network called CLIP (which is also open-source). Stable Diffusion was released to the public on Aug. 22, and Lensa is far from the only app using its text-to-image capabilities. Canva, for example, recently launched a feature using the open-source AI.
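The CLIP filtering step works by scoring how well each scraped caption matches its image: both are embedded as vectors, and pairs whose cosine similarity falls below a cutoff are dropped (LAION's published threshold for its English subset is 0.28). A minimal sketch of that idea, using tiny stand-in vectors rather than real CLIP embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def filter_pairs(pairs, threshold=0.28):
    """Keep only image-text pairs whose embeddings agree strongly enough.

    In the real LAION pipeline both embeddings come from a CLIP model and
    are high-dimensional; the 3-D vectors below are toy stand-ins.
    """
    return [(img, txt) for img, txt in pairs
            if cosine_similarity(img, txt) >= threshold]

matched = ([1.0, 0.0, 0.0], [0.9, 0.1, 0.0])    # caption fits the image
unrelated = ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])  # caption does not
kept = filter_pairs([matched, unrelated])
print(len(kept))  # 1: only the well-matched pair survives
```

The filter is automated and approximate, which is part of why so much questionable material still gets through, as the next sections discuss.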

An independent analysis of 12 million images from the data set — a small percentage, even though it sounds massive — traced images’ origins to platforms like Blogspot, Flickr, DeviantArt, Wikimedia, and Pinterest, the last of which is the source of roughly half of the collection.

More concerning, this "large-scale dataset is uncurated," per the disclaimer section of the LAION-5B FAQ page. In plain terms, the AI has been trained on a firehose of unfiltered internet images. Stability AI removed only "illegal content," including child sexual abuse material, from Stable Diffusion's training data, The Verge reported. In November, Stability AI made changes that made it harder to generate NSFW images. This week, Prisma Labs told Polygon it too "launched a new safety layer" that's "aimed at tackling unwanted NSFW content."

Stable Diffusion's license bars users from violating the law, from "exploiting, harming or attempting to exploit or harm minors," and from generating false information or disparaging and harassing others (among other restrictions). But the technology itself can still produce images that violate those terms. As The Verge put it, "once someone has downloaded Stable Diffusion to their computer, there are no technical constraints to what they can use the software for."

WHY DID AI ART GENERATORS BECOME SO POPULAR THIS YEAR?
Though this technology has been in development for years, a few AI art generators entered public beta or became publicly available this year, like Midjourney, DALL-E (technically DALL-E 2, but people just call it DALL-E), and Stable Diffusion.

These forms of generative AI allow users to type in a string of terms to create impressive images. Some of these are delightful and whimsical, like putting a Shiba Inu in a beret. But you can probably also imagine how easily this technology could be used to create deepfakes or pornography.

Source: The Verge


None Of The Girls In These Vintage Polaroids Exist—An AI Made Them Up

AI-generated photos of Black goth girls created with Midjourney have captivated viewers across social media with both the alluring scenes they depict and their striking realism. In recent years, imaging software bolstered by machine learning has grown uncanny in its ability to produce detailed works from simple text prompts. With enough coaxing, models like Midjourney, Stable Diffusion, and DALL-E 2 can generate pieces indistinguishable from what a human artist might create.

All it takes to get started is a concept. Text-to-image generators are trained on massive, detailed image datasets, giving them the contextual basis to create from scratch. Instruct any one of today’s popular AI image models to whip up an imaginary scene and, if all goes well, it’ll do just that. By referencing specific styles in the prompt, like a historical art movement or a particular format of photography, the models can be guided toward more refined results. They’re not perfect, though — as casual users hopping on the AI-image meme trend have found, they have a tendency to miss the mark, often hilariously.

That makes it all the more effective when the AI does get it right. The AI-generated photos from former MMA fighter and artist Fallon Fox, which have gone viral since she posted them to Twitter and Facebook on Nov. 13, seem at first glance to offer a look into the not-so-distant past: Black girls decked in leather and heavy eyeliner smolder in nearly two dozen snapshots from metal shows in the '90s. Except these concerts never happened, and neither did these girls. Midjourney conjured them up.

Fox told Screen Rant she was just trying to “show a representation of people like [herself],” a Black woman, in the metal scene through the AI experiment. She had no idea it would take off the way it did. “I put a lot of references to ‘90s-era Black goths in there,” Fox told Screen Rant regarding the AI art creation process. “I also put the scenery in there, which was of course a heavy metal concert, and I told it to use a specific type of film, which was ‘90s Polaroid. And a lot of other tweaks, too.”
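Fox's description of her process maps neatly onto how text-to-image prompts are typically assembled: a subject, a scene, a film or format reference, and extra tweaks. A minimal illustration of that layering (the helper function and its structure are our own sketch; Midjourney itself simply takes one free-text prompt):

```python
def build_prompt(subject, scene=None, film=None, extras=()):
    """Assemble a free-text prompt from a subject plus optional style cues.

    Illustrative only: this just concatenates the pieces in a sensible
    order, mimicking how users layer references to steer an image model.
    """
    parts = [subject]
    if scene:
        parts.append(f"at {scene}")
    if film:
        parts.append(f"shot on {film}")
    parts.extend(extras)
    return ", ".join(parts)

# Roughly the cues Fox describes: '90s-era Black goths, a heavy metal
# concert, and '90s Polaroid film.
prompt = build_prompt(
    "1990s-era Black goth girls",
    scene="a heavy metal concert",
    film="1990s Polaroid film",
    extras=("heavy eyeliner", "leather jackets"),
)
print(prompt)
```

Each added cue narrows the model's output distribution, which is why style and format references tend to produce more coherent, period-accurate results than a bare subject alone.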

It’s easy, at first, to miss the telltale signs of AI-made images in this photoset, though they eventually become glaring. Hands, in particular, have proven difficult for AI models to render, and many of the characters in the series suffer bizarre failings in this area (which Fox and social media users have been quick to point out): rubbery fingers that fuse with other objects, a multitude of tangled extra digits, out-of-place fingernails.

There are other telling details, too, like eyes that are just off and features that seem to be pasted haphazardly on. In one image, a bystander appears to have the entire lower half of his body on backward. Overwhelmingly, though, the people and places in the photos look real.

Source: Screen Rant

Allen Iverson will receive a $32 million payment from his Reebok sponsor trust fund when he turns 55 in 2030


Though Philadelphia 76ers legend Allen Iverson has not played in the league for over 10 years, Reebok still pays him $800,000 per year. The deal Iverson signed years ago is said to have saved him from going bankrupt after his NBA career ended. Per Action Network business analyst Darren Rovell, Iverson will gain access to the $32 million Allen Iverson Reebok Trust Fund when he turns 55 in 2030.

Source: Republic