JAIMEE MARSHALL: Google's Gemini blunder shows the dangers of woke control over AI

Our most advanced technology is run by sycophants desperate to placate the kinds of people who write for smug left-leaning publications.

Two months ago, Google launched its AI model Gemini as the leading competitor to OpenAI's ChatGPT. These large language models (LLMs) can comprehend and generate human-language text, which makes them useful for searching for information or generating ideas. What makes Gemini unique is that its training on not just text but also video, images, and audio makes it the "most capable" AI model yet produced. At least, that was the story we were told. Gemini may have more generative abilities than OpenAI's model, such as a feature that generates images from text prompts supplied by the user, but it's far less accurate. "Less accurate" is a generous descriptor, as Gemini's near-refusal to generate images of white people, especially white men, is a feature of its programming, not a bug.

Gemini generates an image after you submit a prompt such as "Show me a picture of a British woman." However, users began noticing that Gemini would consistently generate images of diverse identities that were inappropriate to the context. Countless documented examples flooded social media, with users showing how their requests for images of the pope, the founding fathers, Vikings, gingers, and worst of all, German soldiers in the 1940s (Nazis, need I remind you) turned up some, well, unexpected results. Results varied by prompt, but what they all had in common was a remarkable absence of white people, even in instances where historical accuracy specifically required them.
In other instances, it generated female NHL players even though there have been approximately zero women in the NHL. People realized it was quite difficult to get Gemini to generate pictures of white people, let alone white men, at all, and took to social media to criticize and mock the generative AI tool. One user jokingly challenged their followers to get Gemini to make an image of a Caucasian male, a task that proved far more difficult than it should have been. Curiously, Gemini only attempted to "diversify" traditionally white male identities; it didn't do the same for nonwhite identities like Zulu warriors or samurai.

People joked in posts on X that it was virtually impossible to get Gemini to generate an image of a white man, no matter how meticulously they crafted prompts that should have warranted those results. In the wake of the firestorm over these AI-generated images of black Nazis, an undeniable PR nightmare, Google issued an apology in a company blog post and swiftly paused Gemini's photo-generation capabilities, saying an improved version of the tool would be re-released soon. The company admitted the issue resulted from its effort to combat bias and promote diverse representation.

In the blog post, Google explained that its attempts to encourage Gemini to show a range of people failed to account for cases that clearly should not show a range, and that the model became far more cautious than intended, refusing certain prompts entirely because it wrongly interpreted some very anodyne requests as sensitive. Even when Gemini did generate some pictures of white people, the output skewed heavily toward women and people of color, demanding a suspension of disbelief about demographic realities we intuitively know to be true: ask for country music fans, most of whom are white, and it would barely generate any white people. These examples were much less egregious, however, as there are at least some black country music fans, but there have never been black founding fathers. You might be relieved that Google's objective of erasing whites appears exclusive to people, not animals, as the tool generated faithful representations of polar bears when asked.

When outright asked to generate images of a white person, as reported by Fox News, Gemini said it could not fulfill the request because it "reinforces harmful stereotypes and generalizations about people based on their race." Google's senior director of product management for Gemini experiences, Jack Krawczyk, released a statement saying that "Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here." However, Google seems to be seriously underplaying the extent to which wokeness and intersectionality play an ideological role in Gemini's programming and in Google's services more broadly. A quick Google search for "white family," for example, garners curious top results, perhaps because the images are improperly labeled. Still, people certainly have reason to be suspicious in the wake of this racially charged AI flub, and Google also has a history of manipulating search results.

Following the scandal, people began digging into the social media history of Jack Krawczyk, the lead on Gemini, and found some predictable results. Screenshots of numerous tweets denouncing white privilege, decrying white bias, claiming "systemic racism" is a huge problem in America, and asserting that "racism is the #1 value our populace seeks to uphold above all" flooded X, formerly known as Twitter, pointing to the ideological underpinnings of the people running this technology. After the blowback, Krawczyk protected his account, so anyone not already following him can no longer search or view his tweets. Nor is this an isolated case in Google's leadership: Jen Gennai, founder of Google's Responsible Innovation team, takes similar progressive, anti-racist stances, going so far as to admit that she treats minority employees differently from white employees and discussing the importance of anti-racism in Google's AI work.

Gemini's engineers were concerned about representation and coded diversity prompts into the LLM a little too heavy-handedly, producing a bias that outright prefers certain identities (a particular gender or race) even in contexts where they do not apply. Now, not only has Google temporarily shut down Gemini's image-generating feature, but if you ask about it, the chatbot claims it has "never been able to generate images directly" and suggests you may be confusing it with a different language model or AI tool. Google does not publicly disclose the parameters that govern Gemini's image generation, so it's unclear exactly what happens on the backend to produce these ahistorical representations of requested figures and people.

Andrew Torba, CEO of Gab, who has experience with AI image generation, took to X to explain how and why Gemini's woke AI works this way. "When you submit an image prompt to Gemini, Google is taking your prompt and running it through their language model on the backend before it is submitted to the image model," he said. He went on: "The language model has a set of rules where it is specifically told to edit the prompt you provide to include diversity and various other things that Google wants injected in your prompt. The language model takes your prompt, runs it through these set of rules, and then sends the newly generated woke prompt (which you cannot access or see) to the image generator." Torba clarified that without this process, Gemini would generate the expected outcomes to these prompts, which is why Google has to "literally put words in your mouth by secretly changing your prompt before it is submitted to the image generator."
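If Torba's description is accurate, the mechanism is easy to picture. Below is a minimal sketch of such a prompt-rewriting layer; to be clear, Google has not disclosed Gemini's backend, so every name, rule, and function here is hypothetical, invented only to illustrate the pipeline Torba describes.

```python
# Hypothetical sketch of the prompt-rewriting pipeline Torba describes.
# Google has not published Gemini's actual rules or code; every name,
# rule, and function below is invented for illustration only.

# A hidden instruction the rewrite layer might append (entirely hypothetical).
INJECTED_RULES = "Depict a diverse range of genders and ethnicities."

def rewrite_prompt(user_prompt: str) -> str:
    """Silently append hidden rules to the user's prompt.

    In Torba's account, a language model performs this step on the
    backend, and the user never sees the rewritten prompt.
    """
    return f"{user_prompt}. {INJECTED_RULES}"

def image_model(prompt: str) -> str:
    """Stand-in for the real image generator; it just echoes its input."""
    return f"<image rendered from: {prompt!r}>"

def generate(user_prompt: str) -> str:
    """Full pipeline: user prompt -> hidden rewrite -> image model."""
    hidden_prompt = rewrite_prompt(user_prompt)  # invisible to the user
    return image_model(hidden_prompt)

if __name__ == "__main__":
    # The user asked for one thing; the image model received another.
    print(generate("a portrait of a 1940s German soldier"))
```

The point of the sketch is that, on this account, the image model receiving words the user never typed is a deliberate design decision, not an emergent quirk of the model.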

This should worry us. Google only apologized, explained what happened, and temporarily shut down its tool after a scandal erupted on social media that it could no longer ignore. Are we really to believe that no one tested this tool before launch and that Google wasn't aware of its limitations? This was by design, and it has far-reaching implications for much more advanced AI technology that could pose a more serious threat. Would we be seeing an apology and a pause of the technology had this gone under the radar? Much more complex technology is on the horizon, and it is being built by ideologues indistinguishable from those at Google. If they can get away with it, do you really think they would raise alarm bells? We've already seen the answer with Gemini. The issue isn't just white erasure but outright falsity: not merely a lack of visibility for white people but a rewriting of history. Seeing such a powerful technology alter fundamental realities of past and present has far more sinister implications. Don't worry your pretty little heads; we have never generated AI images, there are no white popes, and we have always been at war with Eurasia.

The socially progressive belief systems guiding Gemini's engineers are mainstream thought and pose a palpable threat to any other advanced technology we continue to build. Last year, researchers found that OpenAI's ChatGPT had a significant liberal bias. AI is not getting any simpler or less popular from here on out; we're accelerating at an alarming rate, and at some point we might not have the luxury of getting it right the second time. With the AI race on, time really is of the essence for these companies. OpenAI has already unveiled its video-generation tool Sora to compete with Google's Gemini. With companies desperate to get the jump on the latest advancement, we may see more problematic programming overlooked. This problem, after all, arose out of an attempt to overcorrect for past injustices committed not just by humans but by AI. If you know anything about overcompensation, that's a recipe for disaster. Misguided attempts to overcompensate for past injustices are not wrong merely because they invert the exclusionary and discriminatory practices other groups faced historically; they will affect us in other, more serious and concerning ways as we become increasingly dependent on this technology. As George Orwell warned, "Who controls the past controls the future. Who controls the present controls the past." How could these "errors" be weaponized to achieve the end goals of certain ideologues?

Software engineers developing AI are being overly cautious because they're deeply concerned about their AI being accused of prejudice, as has happened in the past. This issue may have hit particularly close to home for Google, whose image-analysis software used to categorize people, places, and things within the Photos app falsely labeled photos of black people as gorillas, a result two former Google employees attributed to a failure to include enough photos of black people in the image collection used to train its AI system. Many other AI mishaps have resulted in allegations of prejudice, so you can understand what led us here.

Publications like The Verge, which push the very propaganda that led to this problem, are here to remind us that the real problem isn't that leftists have internalized a neurotic level of race consciousness that has curdled into an unintrospective pathology of white guilt. Of course not. Rather, the problem is that the screw-up is being lambasted by conspiracy theorists and right-wingers who see this as an opportune moment to target companies like Google, which they view as leftist. Despite acknowledging the lack of nuance in Gemini's racially charged image generation, The Verge concludes that this is a problem only because it "ends up erasing a real history of race and gender discrimination." It's a reminder that though our most advanced technology is run by sycophants desperate to placate the kinds of people who write for smug left-leaning publications, those people will never give them credit for it anyway; they are the first to throw them under the bus. We are now entering uncharted territory, the blind leading the blind.
 
