5 Things Business Leaders Must Know About Adopting AI at Scale

Despite growing awareness of the importance and growth potential of AI, most AI implementations fail in production.

By Roey Mechrez

Opinions expressed by Entrepreneur contributors are their own.

As part of my job, I meet daily with enterprise leaders who are tackling the challenge of implementing AI in their business. These are typically executives in charge of their organization's AI transformation, or business managers who want to gain a competitive edge by improving quality, shortening delivery cycles and automating processes. These business leaders have a solid understanding of how AI can serve their business, how to start the AI-implementation process and which machine-learning application fits their specific business needs. Despite their understanding of AI and its potential, most managers lack understanding of key technical areas involved in adopting AI at scale.

Managers who strive to overcome these blind spots, which currently derail the successful implementation of AI projects in production, should address the following five questions.

What data goes into the model?

If you have a basic understanding of deep learning, you probably know that it's based on an algorithm that takes input data samples and produces an output in the form of a classification, prediction, detection and more. During the training phase, historical data (whether labeled or unlabeled) is used. Once trained, the model will be able to deal with data similar to the samples it was trained on. The model may keep running smoothly in a controlled lab environment, but it is locked within the "convex hull" of the training data. If, for some reason, the model is fed data that falls outside the training distribution, it will fail miserably. Unfortunately, this is what often happens in real-life production environments.

How robust and stable an AI system is determines its ability to process data that deviates from the boundaries of the sterile training environment. Enterprises that rely on systems with low robustness and stability will inevitably find themselves facing a "garbage in, garbage out" situation in how their data is analyzed and processed.
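To make this concrete, here is a minimal sketch of an out-of-distribution check, written in Python purely as an illustration. The data, feature values and z-score limit below are all hypothetical; a real system would rely on far more sophisticated robustness techniques.

```python
import numpy as np

def fit_distribution_bounds(training_features: np.ndarray):
    """Record the mean and standard deviation of each training feature."""
    return training_features.mean(axis=0), training_features.std(axis=0)

def is_out_of_distribution(sample: np.ndarray, mean, std, z_limit: float = 3.0) -> bool:
    """Treat a sample as out-of-distribution if any feature deviates more than
    z_limit standard deviations from what the model saw during training."""
    z_scores = np.abs((sample - mean) / (std + 1e-9))
    return bool((z_scores > z_limit).any())

# Usage: flag samples the model was never trained to handle (stand-in data).
mean, std = fit_distribution_bounds(np.random.rand(1000, 4))
print(is_out_of_distribution(np.array([0.5, 0.4, 0.6, 9.0]), mean, std))  # True
```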

Related: When Should You Not Invest in AI?

What are the model boundaries?

With the understanding that the model is tightly coupled with the training data that feeds it, we would like to know when the model is right and when it's wrong. Building a trustworthy human-machine collaboration is vital for success in AI adoption. The first step is to control the model's uncertainty for each given sample. Take an example in which the AI application is automating a mission-critical operation that requires very high accuracy (for example, claims processing for an insurance company, quality control on an airliner assembly line or fraud detection in a big bank). Considering how sensitive the output is in these use cases, the required accuracy cannot be achieved with AI automation alone. Complex, rare cases must be passed to a human expert for final judgment. That's the essence of setting a boundary for the AI system: The huge flow of data that comes into the model must be divided into two categories, a fully automated bucket and a semi-automated, human-in-the-loop bucket.

The ability to split the data into these two buckets is based on uncertainty estimation: For each data sample (case and input), the model needs to generate not just a prediction, but also a confidence score for that prediction. This score is compared against a pre-set threshold that governs how data is split between the fully automated path and the human-in-the-loop path.
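As a rough sketch of this mechanism (an illustration only, not a description of any particular product), the routing logic can be as simple as comparing each confidence score to a threshold. The threshold value and names below are hypothetical.

```python
CONFIDENCE_THRESHOLD = 0.95  # hypothetical pre-set threshold governing the split

def route_prediction(prediction, confidence: float) -> dict:
    """Return the prediction together with the path it should take."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"prediction": prediction, "path": "fully_automated"}
    # Complex or rare cases fall below the threshold and are escalated.
    return {"prediction": prediction, "path": "human_in_the_loop"}

# Usage with hypothetical model outputs:
print(route_prediction("approve_claim", confidence=0.98))  # handled automatically
print(route_prediction("approve_claim", confidence=0.71))  # sent to a human expert
```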

Related: Here's What AI Will Never Be Able to Do

When should the model be retrained?

The first day in production is the worst day: From that point on, the model needs to be constantly improved through ongoing feedback. How is that feedback loop provided? Following the example above, the data that is passed to a human expert for analysis, the data with low confidence scores and the data that falls outside the training distribution should all be used to improve the model.

There are three main scenarios in which AI models should be retrained with feedback mechanisms:

  1. Insufficient data. If the data used to train the system does not cover the distribution of the production data well, you will need to improve the model over time with additional data to achieve better generalization.

  2. Adversarial environments. Some models are prone to external hacks and attacks (such as in the case of fraud detection and anti-money laundering systems). In these cases, the model must be improved over time to ensure it's one step ahead of the fraudsters, who may invest plenty of resources to break into it.

  3. Dynamic environments. Data is constantly changing, even in seemingly stable and traditional businesses. In most cases, keeping a solution sustainable requires taking new data into consideration.

In simple terms, AI models are not evergreen by nature; they must be nurtured, improved and fine-tuned over time. Having these feedback mechanisms in production is essential to sustainable AI and to adopting AI at scale.
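One simple way to picture such a feedback mechanism is sketched below, with Python used purely for illustration: escalated cases that a human expert has reviewed are collected, and once enough fresh labels accumulate, a retraining job is triggered. The class, thresholds and data are all hypothetical, not a description of any specific platform.

```python
class FeedbackBuffer:
    """Collect human-verified cases and decide when to retrain."""

    def __init__(self, retrain_after: int = 500):
        self.samples = []                 # (features, human_verified_label) pairs
        self.retrain_after = retrain_after

    def add(self, features, corrected_label):
        """Store a case that a human expert reviewed and labeled."""
        self.samples.append((features, corrected_label))

    def should_retrain(self) -> bool:
        """Retrain once enough fresh, labeled feedback has accumulated."""
        return len(self.samples) >= self.retrain_after

# Usage with stand-in data and a small threshold:
buffer = FeedbackBuffer(retrain_after=2)
buffer.add([0.1, 0.7], "fraud")
buffer.add([0.9, 0.2], "legitimate")
if buffer.should_retrain():
    print(f"Kick off a retraining job with {len(buffer.samples)} new labeled samples")
```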

Related: How Entrepreneurs Can Use AI to Boost Their Business

How do you detect when the model goes off the rails?

By now, you understand the complexity of the different production and operational elements of AI, which are at the core of adopting AI at scale. In light of these complexities, it is crucial to be able to monitor the system, understand what goes on under the hood, get insights, detect data drift (a change in the distribution) and observe the system's overall health. A common industry rule of thumb is that for every $1 you spend developing an algorithm, you must spend $100 to deploy and support it. Given the amount of academic research and open-source frameworks (like PyTorch and TensorFlow), the process of building AI solutions is becoming democratized. Productizing AI at scale, on the other hand, is something only a few companies can achieve and master.
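As a minimal sketch of drift detection (again an illustration, not a full monitoring stack), the distribution of a single feature in recent production traffic can be compared against a reference window from training using a two-sample Kolmogorov-Smirnov test. The data and significance level below are hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(training_values, production_values, alpha: float = 0.01) -> bool:
    """Flag drift when the two samples are unlikely to share the same distribution."""
    result = ks_2samp(training_values, production_values)
    return result.pvalue < alpha

# Stand-in data: production values have shifted relative to training.
reference = np.random.normal(loc=0.0, scale=1.0, size=5000)
production = np.random.normal(loc=0.6, scale=1.0, size=5000)
print(feature_has_drifted(reference, production))  # True: the distribution has changed
```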

There's a common saying about deep learning: "When it fails, it fails silently." AI systems are fail-silent systems, for the most part. Advanced monitoring and observability mechanisms can shift them into fail-safe systems.

How do you build a responsible AI product?

The fifth element is the most complex to master. Given the latest advancements in AI regulation, particularly in the E.U., building responsible AI systems is becoming increasingly necessary, not just for the sake of regulation, but to ensure companies conduct themselves in an ethical and responsible way. Fairness, trust, mitigating bias, explainability (the ability to explain the rationale behind decisions made by AI), and result repeatability and traceability are all key components of a responsible, real-world AI system. Companies that adopt AI at scale should have an ethics committee that can gauge the ongoing usage of the AI system and make sure it's "doing good."
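Traceability, in particular, lends itself to a concrete sketch: every decision can be logged with the model version, a fingerprint of the input and the confidence score, so that any outcome can later be audited and reproduced. The example below is illustrative only; the version string, record fields and storage choice are assumptions, not a prescribed standard.

```python
import hashlib
import json
import time

MODEL_VERSION = "claims-model-1.4.2"  # hypothetical version identifier

def log_decision(features: dict, prediction, confidence: float) -> dict:
    """Build an audit record for a single model decision."""
    record = {
        "timestamp": time.time(),
        "model_version": MODEL_VERSION,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
        "confidence": confidence,
    }
    # In production this would be written to an append-only audit store.
    print(json.dumps(record))
    return record

log_decision({"claim_amount": 1200, "region": "EU"}, "approve", 0.97)
```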

AI should be used responsibly not because regulation demands it, but because it's the right thing to do as a community, as humans. Fairness is a value, and as people who care about our values, we need to incorporate them into our daily development work and strategy.

Adopting AI at scale requires a lot of effort, but it is a massively rewarding process. Market trends indicate that 2021 will be a pivotal year for AI. The right people, partners and mindset can help make the leap from the lab to full-scale production. Business leaders who acquire a deep understanding of the technical and operational aspects of AI will have a head start in the race to adopt AI at scale.

Roey Mechrez

CEO and Co-founder of BeyondMinds

Roey Mechrez is the CEO and co-founder of BeyondMinds. As a leading AI pioneer and global visionary, he is passionate about fostering a data-driven culture, using AI as a transformational catalyst to address complex regulatory, operational and business-intelligence challenges.
