What Google failed to learn from Microsoft's most offensive AI chatbot

Microsoft Twitter chatbot Tay.

As a long-time internet dweller, I've had the privilege of seeing some of today's most popular websites sprout from nothing and branch out into everything. One such site is the search engine Google, which earlier this month debuted its new AI Overview feature, intended to use artificial intelligence to offer quick summaries of Search results for a faster, more streamlined search experience.

Sounds great, right? Well, many who took part in a year-long beta test would agree. However, after going live to the wider internet populace, AI Overview seemingly began to act out of sorts — advising search users to eat rocks for digestive health or add 1/8 cup of non-toxic glue to their pizza sauce for an extra tacky marinara.

Google Search AI Overview: Hanging out with the wrong crowd

But was Google's AI acting out of sorts? Had AI Overview developed TikTok brain after being exposed to a little too much internet? It turns out Google's AI Overview was acting exactly as it was supposed to, pulling content from search results to present to the user in bite-sized doses. It just so happens that those sources were satirical news sites and 11-year-old Reddit comments.

In spectacular fashion, Google once again proved how out of touch it actually is with the internet (while somehow being a core pillar of it) by forgetting that the average Google Search isn't populated by a crowd of scholarly experts with wisdom to spare, but by a few interesting croutons of knowledge floating in a soup of sarcasm, idiocy, and outright malevolence. None of that should be absorbed into your AI's worldview and peddled to others as credible information.

What's that, reader? An overwhelming sense of déjà vu? You bet it is. If you're a netizen of a similar age to me, you might just remember something eerily similar happening before. You may be thinking back some eight years, to when Microsoft took to Twitter to unveil Tay — the AI chatbot that went from 19-year-old American girl to literally Hitler so fast that it still holds the virtual land speed record to this day.

Tay: The AI with zero chill

Your modern perspective on chatbots is rather quaint. Yes, Copilot (formerly Bing Chat) once tried to break up a loving marriage, Google Gemini (formerly Bard) believed the Third Reich was composed of minorities, and ChatGPT is currently holding the vocal patterns of Scarlett Johansson hostage amid a potential legal stand-off. But that's absolutely nothing compared to Microsoft's Tay.

If there were a federal agency dedicated to reining in and formatting rogue chatbots, then Tay would be the FBAI's public enemy number one — and in a sane and just world, the poster child of PSAs for parents when it comes to leaving your children unsupervised on the internet.

On the morning of March 23, 2016, Tay emerged onto the internet via the social media platform Twitter (now X) and began conversing with other users and generating lighthearted memes.

She was a digital denizen in the truest form, and the internet at large took to Tay instantly, finding her ability to learn and evolve from each conversation to be fascinating. For a moment, it looked like humans and digital entities had a bright and promising future ahead, filled with friendship and cooperation.

Smash cut to roughly 16 hours and 96,000 tweets later: Tay had been shot full of so many redpills that the chatbot's visual representation could at best be described as a steel drum packed full of kidney beans. Tay was now spouting the kind of opinions that would cause the average soul to lose their bank account in 2024 and have the virtuous public at large come crashing down on them like a puritanical avalanche.

I can't even repeat the opinions that Tay had formed before Microsoft ripped the power cord out of the wall and prematurely shut her down. But if you're curious, Tay had her theories about the tensile strength of steel beams when subjected to aviation fuel. Let's just say, this blunder was not one of Microsoft's finest moments in its pursuit of AI.

Citation needed

Tay is just one example in a growing volume of cautionary tales about AI failing to separate gold from garbage. A word of advice: when it comes to the internet, wisdom is just about the only thing you can't crowdfund successfully — unless "Wisdom" is the pseudonym of a particularly attractive OnlyFans model, anyway.

Somebody far wiser than I once said, "Those that fail to learn from history are doomed to repeat it," and it would seem the saints of Silicon Valley are in dire need of learning from one another's mistakes.

AI Overview did exactly what it was meant to do; it's just that robots tend to be emotionless humor vacuums that can't tell the difference between mockery and medical advice. It gets all the more confusing for our digital deliberators when you start treating user-submitted content from anonymous Reddit accounts as trusted information just because somebody bejeweled it with a virtual gold coin.

Well, thanks a lot, kind stranger. Your tongue-in-cheek instructions for cleaning my washing machine have now resulted in my house being full of chlorine gas.

Outlook

As Googlers now scramble to manually adjust AI Overview answers, I'm reminded of how Tay's handlers attempted the same damage control. And I look ahead to the likelihood that OpenAI staffers will soon be tasked with a similar job, now that the company has signed an agreement with Reddit to source its content for training purposes.

Clearly, OpenAI is confident that ChatGPT is up to the task of deciphering the earnest from the lighthearted, and I genuinely hope that confidence isn't unfounded. Hopefully, someone is learning from the mistakes of tech's greats and has prepared accordingly.

However, I'll remain skeptical, and I suggest you do, too. And please remember, readers: regardless of what an AI tells you, try not to eat glue.
