Meta's AI-powered image generator consistently fails to create accurate images of interracial couples and friendships, according to a report by The Verge. Given simple prompts such as "Asian man and Caucasian friend" or "Asian man and white wife", the tool instead produces pictures of people of the same race, even when explicitly instructed otherwise.
While Meta's tool consistently fails to create images of interracial couples and friendships, Google's Gemini has come under fire for depicting historical figures and groups, such as the US Founding Fathers and Nazi-era German soldiers, as people of colour.
The Verge’s Mia Sato tested Meta’s image generator extensively, finding that it struggled to generate images for simple prompts such as “Asian man and Caucasian friend” or “Asian man and white wife.” Instead, it produced pictures of people of the same race, even when explicitly instructed otherwise.
In addition to the tool’s inability to conceive of Asian people standing next to white people, The Verge also noted subtle indications of bias in the generated images. For example, the system consistently represented “Asian women” as being East Asian-looking with light complexions. It also added culturally specific attire even when unprompted and generated several older Asian men while the Asian women were always depicted as young.
Similarly, Google's Gemini AI tool has faced criticism for its attempts to create a "wide range" of results, leading to historically inaccurate depictions. Google apologised for these "inaccuracies in some historical image generation depictions," acknowledging that its efforts to promote diversity had "missed the mark."
A former Google employee posted on X, formerly Twitter, claiming that it’s “embarrassingly hard to get Google Gemini to acknowledge that white people exist.” This criticism was echoed by internet users requesting images of historical groups or figures and receiving overwhelmingly non-white AI-generated results.
Google has since limited Gemini’s ability to generate images for specific historical prompts.
Meta did not immediately respond to requests for comment on these findings. The company has previously described Meta AI as being in "beta" and thus prone to making mistakes.
The challenges faced by Meta and Google in accurately representing racial diversity in their AI image generators underscore the ongoing struggle to address bias in AI systems. As these tools continue to develop, companies must carefully consider how they incorporate diversity into AI-generated images, ensuring their efforts are nuanced and historically accurate while avoiding the perpetuation of stereotypes or the misrepresentation of historical facts.