This year, it feels like AI-generated art has been everywhere.
In the summer, many of us entered goofy prompts into DALL-E Mini (now called Craiyon), yielding a series of nine comedically janky AI-generated images. But more recently, there’s been a boom of AI-powered apps that can create cool avatars. MyHeritage AI Time Machine generates images of users in historical styles and settings, and AI TikTok filters have become popular for creating anime versions of people. This past week, “magic avatars” from Lensa AI flooded social media platforms like Twitter with illustrative and painterly renderings of people’s headshots, as if truly made by magic.
These avatars, created using Stable Diffusion — which allows the AI to “learn” someone’s features from submitted images — also opened an ethical can of worms about how AI gets applied. People discovered that the “magic avatars” tended to sexualize women and appeared to have fake artist signatures in the bottom corner, prompting questions about which images had been used to train the AI and where they came from. Here’s what you need to know.
WHAT IS LENSA AI?
It’s an app created by Prisma Labs that recently topped the iOS App Store’s free chart. Though it was created in 2018, the app became popular after introducing a “magic avatar” feature earlier this month. Users can submit 10 to 20 selfies, pay a fee ($3.99 for 50 images, $5.99 for 100, and $7.99 for 200), and then receive a bundle of AI-generated images in a range of styles like “kawaii” or “fantasy.”
The app’s “magic avatars” are somewhat uncanny in style, refracting likenesses as if through a funhouse mirror. In a packet of 100, at least a few of the results will likely capture the user’s face well enough in the style of a painting or an anime character. These images have flooded Twitter and TikTok. (Polygon asked Prisma Labs for an estimate of how many avatars were produced, and the company declined to answer.) Celebrities like Megan Fox, Sam Asghari, and Chance the Rapper have even shared their Lensa-created likenesses.
HOW DOES LENSA CREATE THESE MAGIC AVATARS?
Lensa uses Stable Diffusion, an open-source AI deep learning model, which draws from a database of art scraped from the internet. This database is called LAION-5B, and it includes 5.85 billion image-text pairs, filtered by a neural network called CLIP (which is also open-source). Stable Diffusion was released to the public on Aug. 22, and Lensa is far from the only app using its text-to-image capabilities. Canva, for example, recently launched a feature using the open-source AI.
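For a sense of how the underlying technology works, here is a minimal sketch of generating a single image from a text prompt with Stable Diffusion, using Hugging Face’s open-source diffusers library. The checkpoint name and prompt are illustrative, and Lensa’s actual pipeline (including the step that adapts the model to a user’s selfies) is proprietary and not shown here.

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a publicly released Stable Diffusion checkpoint (illustrative model name).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # The text prompt steers the denoising process toward a styled portrait.
    prompt = "portrait of a person in a painterly fantasy style, detailed, soft lighting"
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("avatar.png")

This open-source pipeline also ships with a safety checker that, by default, blacks out images it flags as NSFW, which is relevant to the filtering questions discussed below.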
An independent analysis of 12 million images from the data set — a small fraction of the whole, even though it sounds massive — traced the images’ origins to platforms like Blogspot, Flickr, DeviantArt, Wikimedia, and Pinterest, the last of which is the source of roughly half of the collection.
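The CLIP filtering mentioned above amounts to scoring how well a caption matches its image and keeping only pairs that score above a similarity threshold. A rough sketch of that kind of scoring, using the open-source CLIP model through the transformers library (the image file and caption here are made up for illustration):

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # A hypothetical scraped image and the caption that accompanied it.
    image = Image.open("scraped_example.jpg")
    caption = "a dog wearing a beret, oil painting"

    # Embed both and compute their similarity; dataset builders drop pairs that score too low.
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    similarity = outputs.logits_per_image.item()  # scaled cosine similarity
    print(f"image-text similarity: {similarity:.2f}")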
More concerningly, this “large-scale dataset is uncurated,” says the disclaimer section of the LAION-5B FAQ blog page. Or, in plain terms, this AI has been trained on a firehose of pure, unadulterated internet images. Stability AI only removed “illegal content” from Stable Diffusion’s training data, including child sexual abuse material, The Verge reported. In November, Stability AI made changes that make it harder to generate NSFW images. This week, Prisma Labs told Polygon it too “launched a new safety layer” that’s “aimed at tackling unwanted NSFW content.”
Stable Diffusion’s license says users can’t use it for violating the law, “exploiting, harming or attempting to exploit or harm minors,” or for generating false information or disparaging and harassing others (among other restrictions). But the technology itself can still generate images in violation of those terms. As The Verge put it, “once someone has downloaded Stable Diffusion to their computer, there are no technical constraints to what they can use the software for.”
WHY DID AI ART GENERATORS BECOME SO POPULAR THIS YEAR?
Though this technology has been in development for years, a few AI art generators entered public beta or became publicly available this year, like Midjourney, DALL-E (technically DALL-E 2, but people just call it DALL-E), and Stable Diffusion.
These forms of generative AI allow users to type in a string of terms to create impressive images. Some of these are delightful and whimsical, like putting a Shiba Inu in a beret. But you can probably also imagine how easily this technology could be used to create deepfakes or pornography.
AI-generated photos of Black goth girls created with Midjourney have captivated viewers across social media with both the alluring scenes they depict and their striking realness. In recent years, imaging software bolstered by machine learning has grown uncanny in its ability to produce detailed works based on simple text prompts. With enough coaxing, models like Midjourney, Stable Diffusion, and DALL-E 2 can generate pieces indistinguishable from what a human artist might create.
All it takes to get started is a concept. Text-to-image generators are trained on massive, detailed image datasets, giving them the contextual basis to create from scratch. Instruct any one of today’s popular AI image models to whip up an imaginary scene and, if all goes well, it’ll do just that. By referencing specific styles in the prompt, like a historical art movement or a particular format of photography, the models can be guided toward more refined results. They’re not perfect, though — as casual users hopping on the AI-image meme trend have found, they have a tendency to miss the mark, often hilariously.
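As a concrete, hypothetical illustration of that style steering, here is the same subject rendered with and without explicit style cues, again using a publicly available Stable Diffusion checkpoint (the prompts, model name, and file names are made up for this sketch):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # The same subject, steered toward different looks by style cues in the prompt.
    prompts = {
        "plain": "a shiba inu wearing a beret",
        "impressionist": "a shiba inu wearing a beret, impressionist oil painting, soft brushstrokes",
        "polaroid": "a shiba inu wearing a beret, 1990s Polaroid photo, film grain, harsh flash",
    }

    for name, prompt in prompts.items():
        image = pipe(prompt, num_inference_steps=30).images[0]
        image.save(f"shiba_{name}.png")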
Those misses make it all the more striking when the AI does get it right. Former MMA fighter and artist Fallon Fox’s AI-generated photos, which have gone viral since she posted them on Twitter and Facebook on Nov. 13, at first glance seem like a glimpse into the not-so-distant past. Black girls decked in leather and heavy eyeliner smolder in nearly two dozen snapshots from metal shows in the ‘90s. Except these concerts never existed, and neither did these girls. Midjourney conjured them up.
Fox told Screen Rant she was just trying to “show a representation of people like [herself],” a Black woman, in the metal scene through the AI experiment. She had no idea it would take off the way it did. “I put a lot of references to ‘90s-era Black goths in there,” Fox told Screen Rant regarding the AI art creation process. “I also put the scenery in there, which was of course a heavy metal concert, and I told it to use a specific type of film, which was ‘90s Polaroid. And a lot of other tweaks, too.”
It’s easy, at first, to miss the telltale signs of AI-made images in this photoset, though they eventually become glaring. Hands, in particular, have proven difficult for AI models to render, and many of the characters in the series suffer bizarre failings in this area (which Fox and social media users have been quick to point out): rubbery fingers that fuse with other objects, a multitude of tangled extra digits, out-of-place fingernails.
There are other telling details, too, like eyes that are just off and features that seem to be pasted haphazardly on. In one image, a bystander appears to have the entire lower half of his body on backward. Overwhelmingly, though, the people and places in the photos look real.
Had some time on my hands. So, I created these 90’s female heavy metal/goth girls that never existed, attending a 90’s heavy metal concert that never existed with an AI app. Cause you need that in your life. 🤘🏽🖤 pic.twitter.com/ElQiFwcR1O
WOOOWWW 90’s polaroids of late 90’s black heavy metal/goth girls that never existed, hanging at a heavy metal concert that never existed – completely created by an AI art app 🖤 by fallon fox … hard to believe these aren’t real people pic.twitter.com/fXvjAVpgQi
let it really register that NONE OF THESE PEOPLE EXIST. these photos were created by AI… how can we tell what images on the news or social media are real people? how do we know victims of certain tragedies really existed? how do we know which celebrities you love actually exist? pic.twitter.com/orBk9dtogG
Artificial intelligence has proven time and again that creativity can be taught, having been the brains behind some headline-making artworks, and even a magazine cover, of late.
Now, it’s making its way into adland. But don’t worry—instead of stealing jobs, it’s being used as a resource for an experimental project by advertising agency 10 Days. Here, the studio still assumed the role of a creative director of sorts while Midjourney, an invite-only AI platform, followed the instructions of its human coworkers.
The tool was guided by just six genre-based cue words, including “sci-fi,” “noir,” and “cinematic,” to produce spec work for companies like Nespresso, KFC, Gucci, British Airways, and Ray-Ban. Projects that would each have taken human creators months to finalize were completed by the AI in minutes—with 24 wholly unique designs per brand.
These tools, of course, aren’t for everyone. We can name a few minimalist brands that would turn their noses up at the idea of launching advertisements in the form of surreal, Salvador Dalí-esque nightmares.
That said, the experiment is a teaser of the implications AI might have for the industry. It envisions a future where less time is spent on ideation, leaving more space for execution and delivery. Picture relying on one of these tools to dream up virtually countless storyboards, or even packaging designs.
“It’s staggering what AI can achieve given the right set of prompts and keywords,” says Jolyon White, co-founder and creative director of 10 Days. “We’re now able to create 24 layouts in the time it takes our Art Director to take their first sip of coffee.”