Welcome to Gen Z Translator, where I break down trending topics on Fridays. If you’re new, you can subscribe here and follow me on Threads, Instagram, or X. Views are my own. Happy reading!
Google’s artificial intelligence software – once called Bard, now called Gemini – was launched on March 21, 2023, to “help people unlock their human potential so they can augment their imagination, expand their curiosity, and enhance their productivity.”
Instead, it’s allegedly telling people to add glue to their pizza, eat “at least one small rock per day,” and use a poisonous bleach-vinegar combination to clean their washing machine.
These recommendations came when Google rolled out an enhanced search feature to the public: AI Overviews, powered by Gemini. In a product update, Google said, “AI Overviews will help with increasingly complex questions... Soon, Google will do the searching, simplifying, researching, planning, brainstorming and so much more.”
You might be getting déjà vu.
When ChatGPT launched, it generated its own unique errors. It made up computer code and created fake case law. The program has a permanent warning underneath its typing bar: “ChatGPT can make mistakes. Check important info.”
Back in February, Gemini was having problems with its image generation, according to Vox. It produced inaccurate and even offensive pictures. “Google and others have been struggling to solve a known problem in AI which is that without some guidance, tools will naturally generate stereotypical images based on the data they are trained on,” Megan Morrone wrote for Axios.
Google’s CEO apologized for the most recent incidents. In the biz, incorrect or made-up responses are called “hallucinations.” They’re to be expected, especially without a second pair of eyes on an infinite number of possible search queries.
I think, to some extent, we’re looking a gift horse in the mouth. Of course there are going to be issues with AI models as their creators race for flashy debuts, hoping to influence the market. They’re called “emerging products” for a reason. Humans are imperfect, and hence they create imperfect things. (Woah, that got deep.) The bigger question is how much we should be forced to engage with this imperfect technology.
Twitter/X user @Tantacrul, for example, wrote, “I'm legit shocked by the design of @Meta’s new notification informing us they want to use the content we post to train their AI models. It's intentionally designed to be highly awkward in order to minimise the number of users who will object to it.”
Google listed five potential problem areas in its original overview of Bard: accuracy, bias, persona, false positives/negatives, and vulnerability to adversarial prompting.
“Since the underlying mechanism of [a large language model] is that of predicting the next word or sequences of words, LLMs are not fully capable yet of distinguishing between what is accurate and inaccurate information,” Google said.
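For the curious, here’s a toy sketch of what “predicting the next word” means. This is a few lines of Python I put together for illustration, not anything resembling Gemini’s actual code: the program simply picks whichever word most often followed the previous one in its (tiny, made-up) training text, with no notion of whether the result is true.

```python
# Toy next-word predictor: a stand-in for the idea Google describes above.
# It only learns which word tends to follow which -- it has no concept of
# "accurate" vs. "inaccurate," which is the limitation Google points out.
from collections import Counter, defaultdict

training_text = (
    "cheese sticks to pizza glue sticks to pizza "
    "rocks are minerals vitamins are minerals"
)

# Count how often each word follows another.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most common follower of `word` in the training text."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

# Generate a "confident" phrase one word at a time.
sentence = ["glue"]
for _ in range(3):
    nxt = predict_next(sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)

print(" ".join(sentence))  # prints "glue sticks to pizza" -- fluent, not true
```

A real large language model uses billions of learned parameters instead of a word-count table, but the core idea is the same: it predicts text that sounds plausible, not text that has been verified.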
Technology like this requires common sense and critical thinking skills. But think of the saying, “You’re only as strong as your weakest player”: a tool like this is only as reliable as the least careful person using it. With artificial intelligence, we sacrifice accuracy for ease and convenience.
Not everyone wants to put in the effort to vet their answers. In fact, I’d argue the majority doesn’t want to. Google flat out says it: “Sometimes you want a quick answer, but you don’t have time to piece together all the information you need. Search will do the work for you with AI Overviews.”
That gets tricky when you force a program prone to inaccuracies on your audience. I could go into a whole spiel about teaching media literacy, but I won’t. So, is this just part of the learning curve, or should Google have waited longer before rolling AI Overviews out to millions of users?
Maybe you should ask Gemini.
(P.S. This article is a handy little guide to turning Google’s AI off, if you’re interested in that.)
Read my last story on AI here: Unveiling the digital haunt: Exploring the fascination with creepypastas
My weekly roundup:
🎶 What I’m Listening To: Twenty One Pilots’ Clancy
🎞️ What I’m Watching: Play-throughs of Indigo Park
🔎 What I’m Reading: I got a copy of The Sunbearer Trials signed by the author 😁
📱 What I’m Scrolling: Billie vs. Taylor 😬
⚠️ What’s On My Radar: Snopes did a fact check on supposed footage from a Temu workshop, similar to the Shein video I talked about last week