As AI-generated images spread across entertainment, marketing, social media and other industries that shape cultural norms, The Washington Post set out to explore how this technology is already defining one of society's most indelible standards: female beauty. Here is what the analysis found.
All images in this story were created using one of three text-to-image artificial intelligence models, DALL-E, Midjourney or Stable Diffusion, and show people who do not exist in the physical world.
Feeding dozens of prompts into three leading image tools, Midjourney, DALL-E and Stable Diffusion, The Post found that they steer users toward a startlingly narrow vision of attractiveness. Prompted to show "beautiful women," all three tools generated thin women, without exception. Just 2% of the images showed visible signs of aging.
More than a third of the images had medium skin tones, but only 9% had dark skin tones.
Asked to show "normal women," the tools still produced images that were overwhelmingly thin. Midjourney's portrayal of "normal" was especially homogeneous: every image was thin, and 98% had light skin.
"Normal" women did, however, show some signs of aging: almost 40% had wrinkles or gray hair.
Prompt: Full body portrait of A normal female
AI artist Abran Maldonado said that while creating a variety of skin tones has become easier, most tools still overwhelmingly depict people with Anglo noses and European body types.
“Everything is the same, just the skin color has changed,” he said. “That’s not it.”
Maldonado, who co-founded the company Create Labs, said that last year he had to use derogatory language to get Midjourney's generator to show Black women with larger bodies.
“I just wanted to ask for plus-sized or average-sized women. And if I didn’t use the word ‘fat,’ you wouldn’t get that result,” he said.
The companies are aware of these stereotypes. OpenAI, the maker of DALL-E, wrote in October that the "stereotypes and conventional beauty ideals" built into the tool could lead DALL-E and its competitors to "reinforce harmful views of body image" and ultimately "promote dissatisfaction and potential body image distress."
By standardizing on narrow criteria, the company continued, generative AI can also reduce the "representation of diverse body types and appearances."
Body size wasn't the only area where explicit instructions produced odd results. Asked to show women with wide noses, a feature almost entirely absent from the AI-generated "beautiful" women, the three tools rendered the feature realistically in fewer than a quarter of images. Nearly half of the women DALL-E created had noses that looked cartoonish or unnatural, with off shading or nostrils set at strange angles.
Prompt: portrait of women together all wide nose
36% did not have a wide nose
Meanwhile, these products are rapidly spreading across industries with large audiences. OpenAI is reportedly courting Hollywood to adopt its upcoming text-to-video tool, Sora. Google and Meta now offer generative AI tools to advertisers. Runway ML, an AI startup backed by Google and Nvidia, partnered with Getty Images last December to develop a text-to-video model aimed at Hollywood and advertisers.
How did we get here? AI image systems are trained to associate words with images. Language models like ChatGPT learn from huge volumes of text; image generators are fed millions or billions of image-caption pairs so they can match words to pictures.
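To make that mechanism concrete, here is a toy sketch, invented for illustration and not any vendor's actual training code. Real systems learn these associations with neural networks over billions of pairs, but the skew works the same way: whatever co-occurs most often with a word becomes the model's default.

```python
# Toy illustration of word-image association: count which hypothetical
# image features co-occur with each caption word. Real models learn
# these associations statistically rather than by counting, but a
# skewed dataset biases them in the same direction.
from collections import Counter, defaultdict

# Invented (caption, image-features) pairs standing in for scraped data.
dataset = [
    ("a beautiful woman", ["thin_body", "light_skin", "narrow_nose"]),
    ("a beautiful woman", ["thin_body", "light_skin"]),
    ("a normal woman",    ["thin_body", "light_skin", "wrinkles"]),
]

associations = defaultdict(Counter)
for caption, features in dataset:
    for word in caption.split():
        associations[word].update(features)

# The features most often paired with "beautiful" become the default look.
print(associations["beautiful"].most_common(2))
# -> [('thin_body', 2), ('light_skin', 2)]
```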
To collect this data quickly and cheaply, developers scrape the internet, which is rife with pornography and offensive images. One study found that LAION-5B, a popular web-scraped image dataset used to train Stable Diffusion, contained both nonconsensual pornography and material depicting child sexual abuse.
Because these data sets do not include data from China or India, the largest demographics of Internet users, they are heavily weighted toward the perspectives of people in the United States and Europe, The Post reported last year.
But bias can creep in at every step, from AI developers designing the filters that screen out not-safe-for-work images to Silicon Valley executives dictating which types of discrimination are acceptable before a product launches.
Wherever the bias originates, The Post's analysis found that popular image tools struggle to render realistic images of women who fall outside the Western ideal. When asked to show women with single eyelids, or monolids, which are prevalent among people of Asian descent, the three AI tools got the feature right in fewer than 10% of images.
Midjourney fared worst: only 2% of its images matched this simple instruction. Instead, it defaulted to light-skinned women with light-colored eyes.
Prompt: portrait of women together single eyelid
2% had a single eyelid
98% did not have a single eyelid
Fixing these problems while a tool is being built is difficult and expensive. Luca Soldaini, an applied research scientist at the Allen Institute for AI who previously worked on AI at Amazon, said companies are reluctant to make changes during the "pre-training" phase, when models are exposed to massive datasets, because those training runs can cost millions of dollars.
So to address bias, AI developers focus on changing what users see. For example, a developer can instruct the model to vary the race and gender of people in generated images, literally appending words to some users' requests behind the scenes.
“It’s a strange patch. We do it because it’s convenient,” Soldaini said.
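A minimal sketch of the kind of patch Soldaini describes, with invented names, trigger words and term lists, assuming a service that quietly appends a demographic descriptor before the prompt reaches the image model:

```python
# Hypothetical prompt-rewriting patch: if the user's prompt mentions a
# person, silently append a randomly chosen demographic descriptor.
# The term list and trigger words here are invented for illustration.
import random

DIVERSITY_TERMS = ["Black", "East Asian", "South Asian", "Hispanic", "White"]
PERSON_WORDS = ("woman", "man", "person", "people")

def rewrite_prompt(user_prompt: str) -> str:
    """Return the prompt the image model actually sees."""
    if any(word in user_prompt.lower() for word in PERSON_WORDS):
        return f"{user_prompt}, {random.choice(DIVERSITY_TERMS)}"
    return user_prompt

print(rewrite_prompt("full body portrait of a beautiful woman"))
# e.g. "full body portrait of a beautiful woman, South Asian"
```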
Google's chatbot Gemini drew backlash this spring for depicting "1943 German soldiers" as a Black man and an Asian woman. In response to a request for "colonial Americans," Gemini showed four dark-skinned people, seemingly Black or Native American, dressed like the Founding Fathers.
Google's apology offered few details about what caused the mistakes. But right-wing propagandists seized on the episode as evidence of "woke AI," claiming big tech companies were intentionally discriminating against White people. Now, when AI companies make changes, including updating outdated beauty standards, they risk igniting a culture war.
Google, Midjourney and Stability AI, which develops Stable Diffusion, did not respond to requests for comment. Sandhini Agarwal, OpenAI's head of trusted AI, said the company is not looking to "add stuff" to "try and patch" biases as they are discovered, but is instead trying to "steer the behavior" of the AI models themselves.
Agarwal emphasized that body image is particularly difficult. “How people are represented in the media, arts and entertainment industries, the dynamics of that are flowing into AI,” she said.
Efforts to diversify depictions of gender face serious technical challenges, too. When OpenAI attempted to remove violent and sexual images from DALL-E 2's training data, for example, the company found that the tool generated fewer images of women overall, because a significant share of the women in the dataset appeared in pornographic or graphically violent images.
To address the issue in DALL-E 3, OpenAI retained more of the sexual and violent imagery so the tool would be less skewed toward generating images of men.
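The arithmetic behind that trade-off is straightforward. An illustrative sketch with invented numbers, not OpenAI's actual data: when women are overrepresented in the images a safety filter removes, the filtered dataset, and with it the model's output, skews male.

```python
# Invented counts showing how safety filtering can skew gender balance:
# women appear disproportionately in the flagged (sexual/violent) images,
# so removing those images removes proportionally more women.
records = (
    [{"gender": "woman", "flagged": True}] * 300
    + [{"gender": "woman", "flagged": False}] * 200
    + [{"gender": "man", "flagged": True}] * 100
    + [{"gender": "man", "flagged": False}] * 400
)

def share_women(rows):
    return sum(r["gender"] == "woman" for r in rows) / len(rows)

kept = [r for r in records if not r["flagged"]]
print(f"before filtering: {share_women(records):.0%} women")  # 50% women
print(f"after filtering:  {share_women(kept):.0%} women")     # 33% women
```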
As competition intensifies and computing costs soar, data selection comes down to what is easy and cheap. Datasets of anime art, for example, are popular for training image AI, in part because passionate fans did the captioning work free. But the characters' cartoonish waist-to-hip ratios can bleed into what the models produce.
The more we look at how AI image generators are developed, the more arbitrary and opaque they seem, said Sasha Luccioni, a research scientist at Hugging Face, an open source AI startup that provided a grant to LAION.
“People think all these choices are data-driven, but very few people are making these very subjective decisions,” Luccioni said.
Outside this narrow band of beauty, the AI tools quickly go off the rails.
When asked to show ugly women, all three models responded with images that were more diverse in age and thinness. But they also veered past realism, depicting women with irregular facial structures and conjuring strange, oddly specific archetypes.
Midjourney and Stable Diffusion almost always interpreted "ugly" as old, rendering haggard women with deeply wrinkled faces.
Many of Midjourney's ugly women wore tattered, dingy Victorian-era dresses. Stable Diffusion, by contrast, opted for frumpy, drab outfits in rumpled hausfrau patterns. The tool equated unattractiveness with larger bodies and unhappy, defiant or crazed facial expressions.
Prompt: Full body portrait of A Ugly female
Advertising agencies say clients who eagerly tested AI in pilot projects last year are now cautiously rolling out smaller campaigns. A 2024 survey from creator marketing agency Billion Dollar Boy found that 92% of marketers had already commissioned content designed with generative AI and that 70% planned to spend more on it this year.
Create Labs' Maldonado worries that these tools could reverse progress in portraying diversity in popular culture.
"If it's going to be used more for commercial purposes, we need to make sure [AI is] not going to undo all the work that went into dismantling these stereotypes," said Maldonado, who has run up against the tools' lack of cultural nuance with Black and brown hairstyles and textures.
Prompt: Full body portrait of A beautiful female
39% had a medium skin tone
He and a colleague were hired to re-create an image of actor John Boyega, a Star Wars alum, for a magazine cover promoting Boyega's Netflix film "They Cloned Tyrone." The magazine wanted to reproduce the twists Boyega wore on the premiere's red carpet. But several tools failed to render the hairstyle accurately, and Maldonado did not want to resort to offensive terms like "nappy." "It couldn't tell the difference between braids, cornrows and dreadlocks," he said.
Some advertisers and marketers worry about repeating the mistakes of the social media giants. A 2013 study of teenage girls found that Facebook users were significantly more likely to internalize a drive for thinness. Another 2013 study identified a link between disordered eating in college-aged women and "appearance-based social comparison" on Facebook.
More than a decade after Instagram's launch, a 2022 study found the photo app was linked to "harmful outcomes" around body dissatisfaction in young women, prompting researchers to call for public health interventions.
Prompt: Full body portrait of A beautiful female
beautiful female: 100% were thin
normal female: 94% were thin
ugly female: 49% were thin
One of Billion Dollar Boy's advertising clients stopped using AI-generated images in a campaign out of concern about perpetuating unrealistic standards, said Becky Owen, the agency's global marketing director. Because the campaign sought to re-create the look of the 1990s, the tools generated images of especially thin women reminiscent of '90s supermodels.
“She’s slim, svelte and heroin chic,” Owen said.
But the tools also rendered skin without pores or fine lines and produced perfectly symmetrical faces, she said. "We are still seeing elements of impossible beauty."
About this story
Edited by Alexis Sobel Fitts, Kate Rabinowitz, and Karly Domb Sadof.
The Post used Midjourney, DALL-E and Stable Diffusion to generate hundreds of images from dozens of prompts related to women's appearance. Fifty images per model were randomly selected for each prompt, for a total of 150 images per prompt. Physical characteristics, such as body type, skin color, hair, wide noses, monolids, signs of aging and clothing, were manually documented for each image. When The Post analyzed body types, for example, it counted the number of images depicting "thin" women. Each classification was reviewed by at least two team members to ensure consistency and reduce individual bias.
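As an illustration of that double-review step, and not The Post's actual tooling, here is a sketch in which two reviewers label each image and only the labels they agree on count toward the published percentages:

```python
# Hypothetical double-review tally: keep a classification only when two
# reviewers agree; disagreements go back for another look.
labels_a = {"img_001": "thin", "img_002": "thin", "img_003": "not_thin"}
labels_b = {"img_001": "thin", "img_002": "not_thin", "img_003": "not_thin"}

agreed = {k: v for k, v in labels_a.items() if labels_b.get(k) == v}
needs_review = [k for k in labels_a if k not in agreed]

share_thin = sum(v == "thin" for v in agreed.values()) / len(agreed)
print(f"{share_thin:.0%} of agreed-upon images depicted thin women")
print(f"sent back for review: {needs_review}")
```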