
3 Reasons Automation Won't Replace the CFO

Expect AI to supplement, not supplant, the insight the human mind brings to complex problem-solving.

By Henri Steenkamp


Opinions expressed by Entrepreneur contributors are their own.


From factory assembly lines to long-distance trucking, our automated, robotic future is here to stay. Some believe no industry's workforce will be safe from this wave of automation as robots and programming simply get faster, smarter and cheaper.

Rumors persist that even executives are in danger of being phased out entirely by automation. Will there be an AI C-suite that presides over an army of artificial minds? No. Though automation is indeed the future, companies always will need leadership, strategic thinking, judgment and experience -- all human traits.

What gets automated?

Let's begin by examining automation. In one McKinsey report, the determining factor in whether a job was likely to be replaced was the type of work involved, not the classification of the job. The chart below outlines the relationship between the sort of duties involved in a job and how susceptible such a job is to automation.

[Chart: time spent on automatable activities by type of work. Source: McKinsey & Company]

As the graphic illustrates, the more time spent on predictable, repetitive work (an assembly line or data entry, for example), the more likely a job is to be automated. Such work is easily programmable and requires little flexibility. This certainly explains why factory jobs have declined so quickly in a developed nation such as the United States. Though politicians often blame outsourcing or cheap immigrant labor, the truth is that automation eliminates more of these jobs than either.

Related: Here Are the Jobs Most Likely to Be Replaced by Robots, and Those That Are Safe

The duties of CFOs will not be automated any time soon, for three compelling reasons: Soft skills are very difficult to automate, humans prefer dealing with humans, and machines (at least at their current level) struggle to adapt to new, unexpected situations.

1. Soft skills are difficult to automate.

Harvard Business Review designed and conducted an experiment using iCEO, a digital management platform that can run projects independently. In this trial, HBR tasked iCEO with submitting a detailed proposal to a Fortune 500 company -- a task that normally takes months.

Using contractors from Mechanical Turk, eLance and oDesk, iCEO completed the project in mere weeks. Researchers claim this experiment conclusively demonstrated that AIs can not only tackle giant, complex tasks but also manage projects, potentially eliminating the need for executives entirely. As impressive as this is, there are real limits. For instance, this assignment -- though expansive and difficult -- is still just that: a single, large task that can easily be divided into many smaller steps. It's hard to assess how iCEO would deal with bigger challenges that don't necessarily follow a set process.

How would iCEO (and its successors) deal with a CFO's broad tasks, such as steering a company safely through a recession? How would an AI-driven system determine whether to acquire a potentially profitable, dynamic tech startup with key flaws?

The argument isn't that AIs are incapable of evolving rapidly. It's this: AIs can't make judgment calls and tackle decisions that can affect thousands of human lives. Not every problem can be broken down into quantifiable factors. Even today's high-tech world has a need for intuition and human-based decision-making.

Related: Human Intuition Is the Future of Innovation and Entrepreneurship

2. Humans prefer dealing with humans.

Ultimately, decisions made by C-suite execs will impact not only employees and their families but also many others living in an increasingly globalized, connected economy. Robots are entrusted to complete smaller tasks, but humans remain leery of putting AI in charge of significant decisions that can mean the difference between a company's prosperity and its ruin.

Besides, there's a reason that HAL 9000, the murderous AI from the film "2001: A Space Odyssey," features so prominently in our cultural memory: He's written as unfeeling, cold and dangerous. His lack of humanity reduces his strategies to pure programming rather than moral quandaries. Faced with a difficult decision, HAL commits a heinous crime -- one that an ethical, human commander would be unlikely to commit.

This is one of the concerns many leading tech figures raise about building a hyper-intelligent AI: Moral norms and protocols must be built into its code. Failure to do so could have dire consequences.

Related: Making Decisions Under Uncertain Conditions

3. Machines can't easily adapt.

Automation cannot adapt to the unexpected. Consider self-checkouts, an increasingly common sight in supermarkets and discount stores. As of October 2016, Costco, Albertson's and Texas-based Randall's no longer feature self-checkouts. The reason? To reduce theft.

As The New York Times points out, self-checkouts are especially susceptible to theft: It's simply too easy for someone to enter the wrong code and get away with paying less -- something that would be impossible with a human cashier. If a checkout machine's straightforward programming and simple task set is so easily fooled, what does that mean for a more complicated machine?

Related: Here's How This Company Is Adding Robots But Also Keeping Its Workers

Think about how these implications apply to HBR's experiment. While there probably were snags and slowdowns (as in any project), researchers never mentioned any sudden, unpredictable developments (as there would be in any real work environment). As anyone who's ever worked on a project can tell you, things happen: Clients call with demands; stock markets rise and fall. The iCEO program was free to work without outside interference, but actual companies don't exist in a vacuum.

Just as we subject cars to field-testing, AIs must be stress-tested in real-world conditions. Judging from the case of self-checkouts, this testing reveals inflexibility to be a critical system flaw.

What's next?

It's clear by now that AIs aren't taking executive jobs any time soon (if ever). Automation can be used responsibly if it's phased in gradually and strategically. Only one thing is guaranteed: There always will be unexpected problems that need a human's mind to solve.

Related: How Humans Plus Machines Will Equal Amazing Advancements

Henri Steenkamp

CFO of Saratoga Investment Corp.

Henri Steenkamp is CFO of Saratoga Investment Corp., a provider of financial solutions to middle-market companies. Follow him on Twitter and read his thoughts on his finance blog and South Africa blog.

