
Why 'Fail Fast' Is a Disaster When It Comes to Artificial Intelligence

For typical products, going to market quickly and seeing what happens is fine. The implications of AI merit a much more considered approach.

By Matthew Baker | Edited by Dan Bova

Opinions expressed by Entrepreneur contributors are their own.


"Fail fast" is a well-known phrase in the startup scene. The spirit of failing fast is getting to market with a minimum viable product and then rapidly iterating toward success. Failing fast acknowledges that entrepreneurs are unlikely to design a successful end-state solution before testing it with real customers and real consequences. This is the "ready, fire, aim" approach. Or, if the blowback is big enough, it's the "ready, fire, pivot" approach.

Consider this quote from Reid Hoffman, co-founder of LinkedIn: "If you're not embarrassed by the first version of your product, you've launched too late."

Related: Ready or Not, It's Time to Embrace AI

The opposite of failing fast is a "waterfall" approach to software development, where a significant amount of time is invested upfront -- requirements analysis, design and scenario planning -- before the software is ever tested with real customers.

When it comes to the emerging potential of artificial intelligence, I believe failing fast is a recipe for disaster.

Artificial intelligence is here to stay.

Many different types of artificially intelligent software surround us. Most AI has minimal authority today. For instance, Amazon's software recommends things you might like to buy, but it doesn't actually purchase those things on your behalf -- yet. Spotify's software decides which songs to put in a playlist for you, but if a song doesn't suit your tastes, the consequences are benign. Google's software decides which websites are most relevant for your search terms but doesn't decide which website you will visit. In all of these cases, failing fast is okay. Usage leads to more data, which leads to improvements in the algorithms.
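It's worth being precise about why failing fast is safe here: each individual recommendation is low-stakes, and every user reaction becomes training data for the next round. Here is a minimal sketch of that feedback loop in Python; the items and the scoring rule are hypothetical illustrations, not any company's actual system:

```python
# Minimal sketch of the low-stakes feedback loop that makes
# "fail fast" safe for recommenders. Hypothetical example; not
# any company's actual system.
from collections import defaultdict

# Running score per item, learned from user feedback.
scores = defaultdict(float)

def recommend(items):
    # Suggest the item with the highest learned score.
    return max(items, key=lambda item: scores[item])

def record_feedback(item, liked):
    # A bad suggestion costs the user a click, nothing more,
    # and it still improves the model for the next round.
    scores[item] += 1.0 if liked else -1.0

items = ["song_a", "song_b", "song_c"]
record_feedback("song_b", liked=True)   # user kept the song
record_feedback("song_a", liked=False)  # user skipped it
print(recommend(items))                 # -> "song_b"
```

A wrong answer here costs the user a skipped song, and the system learns from the skip. That is the environment fail-fast was built for.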

But intelligent software is beginning to make independent decisions that represent much higher risk. The risk of failure is too great to take lightly, because the consequences can be irreversible or ubiquitous.

We wouldn't want NASA to fail fast. A single Space Shuttle launch cost $450 million and placed human lives in jeopardy.

The risks of AI are increasing.

Imagine this: What if we exposed 100+ million people to intelligent software that decided which news they read, and we later discovered the news may have been misleading or even fake, and that it influenced the election of the President of the United States? Who would be held responsible?

Related: 5 Ways in Which Digital and Artificial Intelligence are Changing Work Dynamics

It sounds far-fetched, but media reports indicate Russian influence reached 126 million people through Facebook alone. The stakes are getting higher, and we don't know whom to hold accountable. I fear that the companies spearheading advancements in AI aren't fully cognizant of this responsibility. Failing fast shouldn't be an acceptable excuse for unintended outcomes.

If you're not convinced, imagine these scenarios as by-products of a fail-fast mindset:

  1. What if your entire retirement savings evaporated overnight due to artificial intelligence? Here's how it could happen. In the near future, millions of Americans will use intelligent software to invest billions of dollars in retirement savings. The software will decide where to invest the money. When the market experiences a massive correction, as it does occasionally, the software will need to react quickly to redistribute your money. A cascade of automated sell-offs could bottom out an investment in minutes, and your funds could disappear. Is anyone responsible?
  2. What if your friend were killed in an automobile accident due to artificial intelligence? Here's how it could happen. In the near future, millions of Americans will purchase driverless automobiles controlled by intelligent software. The software will decide the fate of many Americans. Will the artificial intelligence choose to hit a pedestrian who accidentally steps into the street, or steer the vehicle off the road? These are split-second decisions with real-world consequences. If the decision is fatal, is anyone responsible?
  3. What if your daughter or son suffered from depression due to artificial intelligence? Here's how it could happen. In the near future, millions of kids will have an artificial best friend. It will be sort of like an invisible friend. It will be a companion named Siri or Alexa or something else that talks and behaves like a confidant. We'll introduce this friend to our children because it will be friendly, smart and caring. It might even replace a babysitter. However, if your daughter or son spends all their discretionary time with this artificial friend and years later can't sustain meaningful relationships in the real world, is anyone responsible?

In some cases, the consequences can't be undone.

A responsible approach to AI.

The counter-argument is that humans already cause these tragedies. Humans spread fake news. Humans lose money in the stock market. Humans kill one another with automobiles. Humans get depressed.

Related: Life Coaching Guru Tony Robbins Tells Us Why He's Investing in an AI Company

The difference is that human failures are individual cases. The risk with AI that replaces or competes with human intelligence is that it can be applied at scale, simultaneously. The scope and reach of AI is both massive and instantaneous, and that fundamentally introduces higher risk. While one driver who makes an error is truly unfortunate, one piece of software making the same error on behalf of millions of drivers should be unacceptable.

A more responsible approach to AI is needed. Our mindset should shift toward risk prevention, security planning and simulation testing. While this defies the modern ethos of the tech industry, we have a responsibility to prevent unwanted outcomes, however unlikely, before they occur. The good news is that with the right mindset, we can stop the scenarios above from coming true.
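What might simulation testing look like in practice? One approach is to stress-test an automated decision policy against thousands of simulated scenarios, including rare crashes like the one in the first scenario above, before it ever manages real money. Here is a minimal, hypothetical sketch in Python; the market model, the policy and every number in it are illustrative assumptions, not a real trading system:

```python
# Minimal sketch of simulation testing for an automated investment
# policy before it touches real money. Everything here is a toy,
# illustrative assumption: the market model, the policy, the numbers.
import random

def market_path(days=250):
    # Toy market: small positive daily drift plus rare crash days.
    return [-0.15 if random.random() < 0.002 else random.gauss(0.001, 0.01)
            for _ in range(days)]

def naive_policy(daily_return, invested):
    # A rule shipped "fail fast": dump everything after a sharp drop
    # and never re-enter the market.
    return invested and daily_return > -0.05

def buy_and_hold(daily_return, invested):
    # Baseline for comparison: stay invested no matter what.
    return True

def final_value(policy, path):
    # Replay one simulated year and return the ending portfolio value.
    value, invested = 1.0, True
    for r in path:
        if invested:
            value *= 1 + r
        invested = policy(r, invested)
    return value

# Stress-test both policies on the same simulated market paths.
paths = [market_path() for _ in range(5_000)]
naive = sum(final_value(naive_policy, p) for p in paths) / len(paths)
hold = sum(final_value(buy_and_hold, p) for p in paths) / len(paths)
print(f"avg final value -- naive: {naive:.3f}, buy-and-hold: {hold:.3f}")
```

The point isn't the toy numbers; it's that a harness like this surfaces the policy's failure mode (it sells at the bottom and never recovers) before a single real dollar is at risk. That is the opposite of learning it from customers after launch.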

Matthew Baker

VP of Strategy with FreshBooks

Matt Baker is a contributing writer who covers finances and growth for small businesses. His industry experience includes VP Strategy at FreshBooks, engagement manager at McKinsey & Company, and senior strategist at Google, Inc. He also wrote a children's book.

