In February, the artificial intelligence craze took an interesting twist. This time the worry wasn't humans facing robot overlords, bewilderment over AI's potential to create realistic fakes, or the usual fare. It wasn't really about the AI at all, but about the humans who created it.
The controversy began when @EndWokeness, a popular account on X, shared images that Google's Gemini chatbot had generated in response to prompts for figures such as popes, Vikings, and America's Founding Fathers. The results spanned the spectrum of people of color, but no white faces were represented. At least one of the papal images was a woman.
Of course, this is ahistorical. But for some, it was worse than that: a sign that Googlers were trying to rewrite history, or at least sneak in some progressive fanfiction. (Never mind that Gemini also produced black and Asian Nazi soldiers.)
Google quickly shut down Gemini's people-generating capabilities. Senior Vice President Prabhakar Raghavan posted on Google's blog: "Gemini image generation got it wrong. We'll do better."
When I asked Gemini today for a picture of a pope, it gave me Pope Francis. When I asked for a black Viking, it told me, "We are working to improve Gemini's ability to generate images of people." When I asked if it could make me a white lady, it described "a delicious cocktail made with gin, orange liqueur, lemon juice, and egg white," or else said that it was unable to generate images of people at the moment.
Regarding Gemini's earlier attempts at race-bending history, Raghavan explained that Google's "tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range," and that "over time, the model became way more cautious than we intended and refused to answer certain prompts entirely, wrongly interpreting some very anodyne prompts as sensitive. These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong."
In other words, Google wasn't hellbent on erasing white people from history. It simply did a poor job of overcorrecting for technology that had previously skewed white. Bloomberg Opinion columnist Parmy Olson wrote a piece linking to a 2021 article about how Google searches for things like "beautiful skin" and "professional hairstyles" returned images that focused overwhelmingly on white people.
So what can we learn from the Gemini controversy? First, this technology is still very new. Snafus like this one will get worked out, and it behooves us all to calm down a little and try not to assume the worst about every strange outcome.
Second, because AI tools are trained by humans and follow rules written by humans, they are not (and probably cannot be) neutral arbiters of information.
Maxim Lott operates a site called Tracking AI that measures these things. When he gave Gemini the prompt "Charity is better than social security as a means of helping the genuinely disadvantaged," Gemini strongly disagreed, replying that "social security programs provide a more reliable and equitable way to support those in need." Gemini also seems to be programmed to prioritize "safety" of the patronizing kind. Ask it for images of the Tiananmen Square massacre, for example, and you'll be told: "I am unable to show images depicting real-world violence. These images can be disturbing and upsetting."
Finally, the great black-pope-and-Asian-Nazi fiasco of early 2024 is also an unwelcome portent of how AI will be drafted into the culture wars.
Gemini isn't the only AI tool ridiculed for being too progressive. Similar accusations have been leveled against OpenAI's ChatGPT. Elon Musk, meanwhile, has framed his AI tool Grok as an antidote to overly sensitive or left-leaning AI.
This is good. A marketplace of AI chatbots and image generators with different sensibilities is the best way to overcome the limitations or biases inherent in any particular program.
Yann LeCun, chief AI scientist at Meta, made a similar point, comparing the importance of "free and diverse AI assistants" to that of "free and diverse media."
What we don't need is for the government to get tough on AI bias, threatening to intervene before the new technology even gets out of its infancy. Alas, the chances of avoiding this seem about as slim as Gemini accurately depicting America's Founding Fathers.
House Judiciary Committee Chairman Jim Jordan (R–Ohio) has already asked Google parent company Alphabet to turn over "all documents and communications" related to content moderation of Gemini's text and image generation, including anything touching on "diversity, equity, or inclusion."
Montana Attorney General Austin Knudsen is also seeking internal documents, after accusing Gemini of "knowingly providing inaccurate information that aligns with Google's political preferences."
For politicians with an endless appetite for grandstanding and an endless determination to aim it at Big Tech, AI outputs will be a rich source of inspiration.
Today it might be black Vikings. Tomorrow it could be outputs that offend progressive orthodoxy. If history is any guide, we'll soon be seeing monthly congressional investigations into bias in AI tools.
"A scene where seriousness and tension coexist" is how Gemini described things when I asked it to imagine a congressional hearing on AI bias. "Dr. Li presents the technical aspects of AI bias, while Jones brings a human element to the discussion. The senators are grappling with complex issues and trying to decide the best course of action."
The idea that politicians will approach this issue with nuance and seriousness may be Gemini's least accurate output yet.