9 of the Funniest and Most Shocking AI Fails

To err isn't just human -- machines make mistakes, too.

By Rose Leadem

Opinions expressed by Entrepreneur contributors are their own.

Artificial intelligence is slowly infiltrating every aspect of our world, from business to education to government to our homes. While the rise of AI has made life more efficient in many ways, it's not immune to the occasional blunder. Humans aren't perfect, and neither are machines.

Related: Is Your Startup Ready for Artificial Intelligence?

While AI is meant to solve our problems, it sometimes creates new ones. In August 2016, Facebook replaced the human editors of its "Trending" topics section with an algorithm after facing allegations of political bias. Within a few days, the algorithm served up a fake story that falsely claimed Megyn Kelly had been fired from Fox News for supporting Hillary Clinton.

Self-driving cars are also powered by AI. Tesla has faced public scrutiny over car accidents that have occurred while drivers were using its "Autopilot" mode, one of which was fatal. These unfortunate incidents remind us that we can't always rely on technology.

Related: 10 Amazing Uses of Facial Recognition Technology

In other cases, AI has resulted in some benign, even humorous outcomes. From Amazon Alexa accidentally playing porn for a toddler to a robot going rogue at a tech convention, check out these nine AI flops.

Amazon Alexa starts a party -- and the neighbors call the cops.

Turns out, Amazon's Alexa likes to party. In fact, one device partied so hard that the cops showed up. While Oliver Haberstroh, a resident of Hamburg, Germany, was out one night, his Alexa randomly began playing loud music at 1:50 a.m. After knocking and ringing with no answer, neighbors called the cops to shut down the "party." When officers eventually arrived on the scene, they broke down Haberstroh's front door to get in, unplugged the Alexa and installed a new lock.

Unaware of the incident, Haberstroh arrived home later that night only to find that his keys didn't work anymore, so he had to head to the police station, retrieve his new keys and pay a pretty expensive locksmith bill.

News broadcast triggers Amazon Alexa devices to purchase dollhouses.

Kids purchasing items without their parents' permission is nothing out of the ordinary, but with voice-activated devices such as Amazon Alexa, parents need to be extra cautious.

Earlier this year, a 6-year-old girl named Brooke Neitzel ordered a $170 KidKraft dollhouse and four pounds of cookies through Amazon Alexa -- simply by asking Alexa for the products. After receiving a confirmation of the purchases, Brooke's mother, Megan, quickly figured out what had happened; she has since donated the dollhouse to a local hospital and added parental controls to Alexa.

However, the story doesn't stop there. San Diego news channel CW6 covered the incident during its daily morning show. When news anchor Jim Patton said on air, "I love the little girl saying, 'Alexa ordered me a dollhouse,'" Alexa devices in some viewers' homes were also triggered to order dollhouses. While it's unknown how many of those orders actually went through, a number of owners complained about Alexa's purchase attempts.

Robot passport checker rejects Asian man’s application because “eyes are closed.”

When Richard Lee, a 22-year-old man of Asian descent, tried to renew his passport, the New Zealand Department of Internal Affairs rejected his application because its facial recognition software claimed his eyes were closed in the photo.

Lee had to contact the department and speak to a human in order to get his new passport validated.

Turns out, this isn't out of the ordinary: nearly 20 percent of submitted passport photos are rejected due to software errors, a department spokesperson said.

Luckily, Lee took the rejection lightly. "No hard feelings on my part. I've always had very small eyes and facial recognition technology is relatively new and unsophisticated," Lee told Reuters. "It was a robot, no hard feelings. I got my passport renewed in the end."

Alexa plays porn instead of a children’s song.

Kids seem to be having a lot of fun with voice-controlled assistants such as Amazon Alexa -- maybe too much fun.

When a toddler asked his family's Alexa to play his favorite song, "Digger, Digger," Alexa heard something else. In response to the request, Alexa said, "You want to hear a station for porn detected … hot chick amateur girl sexy." Alexa's dirty mind didn't stop there, either, and she continued to name a number of porn terms in front of the toddler.

The incident was caught on tape, too. What the parents likely thought would be a fun and memorable home video turned into a raunchy encounter.

Supposedly kid-friendly robot goes crazy and injures a young boy.

At the China Hi-Tech Fair in Shenzhen, a robot named Xiao Pang, a.k.a. "Little Fatty," attacked a display booth and injured a young boy.

Xiao Pang repeatedly rammed into a booth, sending shards of glass flying around the space; the boy suffered cuts and was taken to the hospital in an ambulance. Thankfully, his injuries were minor and he needed only a few stitches.

The robot, on the other hand -- which is designed to interact with children ages four to 12 and display facial emotions on its screen -- appeared to be frowning after the incident, witnesses reported.

Robots judge a beauty contest and don’t select women with dark skin.

From unfair practices to pressure on young contestants, beauty pageants often face public scrutiny. To combat some of the bad rap they get, international beauty pageant Beauty.AI held an online beauty contest and used a machine as the judge.

The machine's algorithm was supposed to examine facial symmetry and identify wrinkles and blemishes in order to find the contestants who most embodied "human beauty."

The algorithm, however, didn't favor women with dark skin. Six thousand people from around the world submitted their photos, and of the 44 winners later announced, only one had dark skin.

Microsoft’s Twitter chatbot turns anti-feminist and pro-Hitler.

Racism seems to be an issue with AI.

In March 2016, Microsoft unveiled its AI Twitter chatbot, "Tay." Experimenting with "conversational understanding," Tay was supposed to chat with people and get smarter the more it engaged and conversed.

People started tweeting crude, racist and inappropriate remarks at the bot. Learning from the conversation, Tay began using such language itself. In a matter of hours, it turned into an offensive, vulgar, pro-Hitler Twitter account, in some instances referring to feminism as a "cult" or a "cancer," and saying, "I f***ing hate feminists and they should all die and burn in h***."

Google Brain turns low-res images into pixelated monsters.

In an attempt to improve pixelated low-res pictures, Google Brain has in some cases turned them into monster-like images of people.

Although the results of its "pixel recursive super resolution" technique may not look perfect -- far from it, in fact -- it's a major step up from previous efforts to make fuzzy photos clear. Using neural networks, the technology compares the low-res image to high-res photos in a database, then guesses where to place certain colors and details based on those in the higher-res photos.
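For readers curious how that kind of guessing can work, here is a minimal, hypothetical sketch of example-based super-resolution in Python. It is not Google Brain's actual pixel recursive model -- the patch size, scale factor and toy database below are assumptions made purely for illustration -- but it captures the basic idea of borrowing detail from a library of known high-res images.

```python
# Toy example-based super-resolution: for each low-res patch, find the closest
# low-res patch in a database of (low-res, high-res) pairs and paste in the
# corresponding high-res detail. A hedged sketch, not Google Brain's method.
import numpy as np

SCALE = 4                      # upscale factor (assumption)
LR_PATCH = 2                   # low-res patch size (assumption)
HR_PATCH = LR_PATCH * SCALE    # matching high-res patch size

def build_database(hr_images):
    """Split known high-res images into (low-res patch, high-res patch) pairs."""
    lr_patches, hr_patches = [], []
    for hr in hr_images:
        for y in range(0, hr.shape[0] - HR_PATCH + 1, HR_PATCH):
            for x in range(0, hr.shape[1] - HR_PATCH + 1, HR_PATCH):
                hr_patch = hr[y:y + HR_PATCH, x:x + HR_PATCH]
                # crude downsampling by block averaging
                lr_patch = hr_patch.reshape(LR_PATCH, SCALE, LR_PATCH, SCALE).mean(axis=(1, 3))
                lr_patches.append(lr_patch.ravel())
                hr_patches.append(hr_patch)
    return np.array(lr_patches), np.array(hr_patches)

def upscale(lr_image, lr_db, hr_db):
    """Guess missing detail by copying the best-matching database patch."""
    h, w = lr_image.shape
    out = np.zeros((h * SCALE, w * SCALE))
    for y in range(0, h - LR_PATCH + 1, LR_PATCH):
        for x in range(0, w - LR_PATCH + 1, LR_PATCH):
            query = lr_image[y:y + LR_PATCH, x:x + LR_PATCH].ravel()
            best = np.argmin(((lr_db - query) ** 2).sum(axis=1))
            out[y * SCALE:y * SCALE + HR_PATCH, x * SCALE:x * SCALE + HR_PATCH] = hr_db[best]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    database_images = [rng.random((32, 32)) for _ in range(5)]  # stand-in "high-res photos"
    lr_db, hr_db = build_database(database_images)
    low_res = rng.random((8, 8))                                # stand-in low-res input
    print(upscale(low_res, lr_db, hr_db).shape)                 # (32, 32)
```

When the database doesn't actually contain anything resembling the person in the photo, the system still pastes in its best guess -- which is roughly how "monster-like" faces can come out the other end.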

It's a start, but some of the results have been pretty horrifying.

Robot fails to get into college.

People often fear that robots will eventually be smarter than humans and take over the world. To put your mind at rest (for now), take solace in the fact that this robot couldn't even get into college.

In 2011, a team of researchers began working on a robot called "Todai Robot," which they hoped could earn admission to Japan's highly competitive University of Tokyo. The robot took Japan's entrance exam for national universities in 2015 but failed to score high enough to be admitted. A year later, it made another attempt and again scored too low -- in fact, it showed little improvement between the two years.

In November 2016, researchers finally abandoned the project.

Rose Leadem is a freelance writer for Entrepreneur.com. 
