AI has the power to make the world of beauty more accessible, but it also poses one of the greatest threats to the representation of beauty.
AI. Two letters transforming every aspect of our lives, including our relationship with beauty. From skin analysis, shade-matching and virtual makeup try-on tools to experimenting with different hair and eye colours, our experience with beauty is changing.
Some of it is exciting. Artificial intelligence is driving cutting-edge innovations, boosting creativity by blending data-driven insights with fresh ideas and inspiring inventions outside traditional thinking. It also improves access to the world of beauty for people with mobility or visual impairments. From easier-to-open packaging, larger fonts and virtual experiences to voice-activated AI tools offering guided makeup and skincare routines, AI-data integration has great potential.
But AI also poses one of the greatest threats to the representation of beauty. Because as beauty tools are getting an AI-overhaul, beauty ideals are narrowing.
Over the years, the emphasis placed on outside appearances has been amplified by the rise of AI. The value placed on looks has intensified pressure to be a certain type of beautiful, one that increasingly excludes Black and Brown women from the beauty narrative. For example, “photo filters commonly show racial biases by automatically lightening the skin tone during the ‘beautification’ process, aligning with Eurocentric (white) beauty norms,” says Faiza Khan Khattak, PhD, applied machine learning scientist at the Vector Institute.
In fact, Beauty.AI recently used an artificial jury of “robot judges” to host the first online beauty contest judged by AI. Even in make-believe pageantry, the “judges” exhibited biases like those found in humans, particularly concerning physical traits such as skin tone and facial complexion, Khattak shared.
“The concern is that AI may amplify an already biased reality. The more people use these tools to create content, the more biased future models trained on this generated data could get,” explains Vered Shwartz, assistant professor of computer science at the University of British Columbia, Vector Institute faculty member and CIFAR AI chair.
In the early 2000s, most images of models and celebrities on magazine covers were Photoshopped. Breasts augmented. Waists cinched. Blemishes removed. Eyes widened. Skin lightened. And even though we knew it wasn’t real, we felt pressure to attain those unrealistic beauty standards. AI is the new Photoshop — on overdrive.
It feels like we are moving forward at lightning speed yet moving backwards even faster, according to many experts. “We’ve allowed our limitations to be perpetuated by AI, instead of helping AI start off on a more inclusive digital footprint and model,” says Karlyn Percil-Mercieca, founder and chief equity storyteller at KDPM Equity Institute.
Research shows that by 2025, 90 per cent of the content we engage with could be AI-generated. According to Dove’s 2024 State of Beauty Report, two in five Canadian women feel pressure to alter their appearance because of what they see online — even when they know it’s fake or AI-generated. In fact, four in 10 women in Canada would give up at least a year of their life to achieve their beauty ideals. These are sobering statistics.
For many, fear about AI isn’t so much about machines taking over; it’s the homogeneity of the people teaching the machines: predominantly white men. Just last week, Meta announced its new AI advisory council, composed entirely of white men. Last year, OpenAI got into hot water after announcing its new board, also solely composed of wealthy white men. If women and racialized people are excluded from AI datasets, there is a very real risk of harm that extends far beyond beauty.
AI models trained on billions of images have consistently identified “homemakers” as women and “criminals” as Black men. These “algorithmic biases” are not just insulting but can have serious implications for women and people of colour.
For example, AI is being used to help choose candidates for university admissions and filter job applicants. Even in the medical field, AI can assist in forecasting disease and mortality rates and suitable treatments for various health conditions. AI systems that have learned racism can have life-altering impacts for people of colour.
I asked ChatGPT itself “is AI racist?” It answered: “AI itself is not inherently racist, but biases can be unintentionally introduced into AI systems due to various factors such as biased data, flawed algorithms, or inadequate testing. These biases can reflect and perpetuate societal prejudices if not addressed.” We’ve seen this with toxic stereotypes pulled from the internet, rife with pornography, misogyny, violence and bigotry — Asian men are weak, Asian women are fetishized, Black men are criminals that cannot be trusted and Black women cannot control their anger. The list is long and disturbing.
So how do we fix this problem? While we can’t change the past, we can influence AI moving forward. “We get ahead by addressing the elephant in the culture room. AI is built on the white racial frame and this is clearly visible through the examples of what/who is positioned as beautiful. While we know that Black and Brown is always beautiful, the harmful narrative AI perpetuates means that we must always be on guard against the limited and harmful lens of whiteness,” says Percil-Mercieca.
“Tech companies must be held accountable for their actions in training AI. Integrate the intersectional research and lived experience wisdom lens of Black/African/Caribbean women and support the solutions they’ve provided. Hire them as consultants — and pay them equitably,” she says. One resource the KDPM Equity Institute offers is the Advancing AI toolkit, which anyone can access for free.
The Algorithmic Justice League encourages more diversity among AI coders. It was founded by Dr. Joy Buolamwini, an AI researcher motivated by personal experiences of algorithmic discrimination. An MIT grad student, working with facial analysis software, she noticed that the software couldn’t detect her face because of her skin colour. Only when she put a white mask over her face did it register. The coders hadn’t taught the algorithm different skin tones and sizes, rendering her invisible. Similarly, AI for Social Progress, launched by Luis Salazar, a Seattle tech entrepreneur, encourages the use of more diverse training sets.
Brands also play a significant role. “I would personally prefer if beauty brands post pictures of real people and promote their products in an honest way,” says Shwartz. “At the very minimum, I think that AI-generated images should be clearly labeled as AI-generated.”
A new Dove campaign, The Code, sheds light on the importance of women having the power to see real beauty reflected in new and emerging media. The brand has also made a commitment to never use AI-generated images in its advertising. It’s a major step in the right direction. Brands like Rare Beauty and Knix are also committed to featuring real women from diverse cultures, body shapes, sizes and abilities in their advertising.
As individuals, we are not powerless. “It is essential for us as a human community to collaborate, not just relying on AI or AI experts alone, to eliminate these biases and promote fairness,” emphasizes Khattak.
A big part of that is how we engage with AI tools. To help set new digital standards of representation, Dove recently created the Real Beauty Prompt Guidelines. The easy-to-use manual shows how we can create images that are more representative of Real Beauty on the most popular generative AI programs.
As stated in the manual, “There’s a rule to follow: if you don’t mention it in your prompt, AI won’t create it.” By being more specific with more diverse descriptions of humans in our prompts (age, race, ethnicity, skin tone, body type and other defining features), we can generate more realistic forms of beauty.
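For readers who use generative AI tools programmatically, the idea can be made concrete. Below is a minimal, illustrative Python sketch (the function name and descriptor fields are hypothetical, not from the Dove guidelines) showing how a vague prompt can be extended with explicit, diverse descriptors before it is sent to any image-generation model:

```python
# Illustrative sketch only: the helper and descriptor fields below are
# hypothetical, meant to show how specificity shapes a prompt.

def build_prompt(base, descriptors=None):
    """Append optional human descriptors (age, ethnicity, skin tone,
    body type, etc.) to a base image-generation prompt."""
    if not descriptors:
        return base
    return base + ", " + ", ".join(descriptors)

# A vague prompt leaves every unstated trait to the model's defaults,
# which often skew toward narrow, Eurocentric beauty norms.
vague = build_prompt("portrait of a beautiful woman")

# A specific prompt names the traits we want represented.
specific = build_prompt(
    "portrait of a beautiful woman",
    descriptors=[
        "in her 60s",
        "dark brown skin",
        "natural grey coily hair",
        "plus-size",
    ],
)

print(vague)
print(specific)
```

The point is not the code itself but the habit it encodes: any trait left unmentioned is decided by the model, so naming age, race, skin tone and body type in the prompt is how we keep that decision in our own hands.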
It’s crucial to remember that AI is human-made. Any racial bias is what humans have already shared on the internet. And whether we like it or not, AI isn’t going away. It’s how we teach and engage with AI moving forward that will influence its diversity, in beauty and beyond.