
Why Entrepreneurs And Enterprises Should Not Rush Into Fine-Tuning GenAI Or LLMs

By embracing a governance-first approach, we can unlock the transformative power of GenAI.

By Arun Mohan

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur Middle East, an international franchise of Entrepreneur Media.


The allure of generative artificial intelligence (GenAI) and large language models (LLMs) is undeniable, especially for entrepreneurs looking to differentiate themselves in what has become an increasingly competitive marketplace, where even the slightest technological edge or data-based insight can make the difference between winning and losing a client or a customer.

As such, these cutting-edge technologies hold the "keys to the kingdom" in terms of which businesses, sectors, industries, and nations will prevail, while others lag behind in their ability to personalize products, revolutionize experiences, automate tasks, and unlock unprecedented levels of creativity and efficiency.

Naturally, there's an urge to dive headfirst into fine-tuning and unleashing the untapped potential of these artificial intelligence (AI) powerhouses. But that's exactly why entrepreneurs and enterprises need to exercise caution.

Look at it this way: you wouldn't launch a rocket without a guidance system, would you? Diving into GenAI without robust governance is like launching a product into the market blindfolded. You might achieve liftoff, but the craft's gradual unravelling, and its eventual explosion as it enters orbit, is inevitable.

If this rocket is even 0.5 degrees off course due to errors, it will miss its target (whether it's the moon or Mars) by millions of miles, and that's exactly what's happening to entrepreneurs and enterprises attempting to "customize ChatGPT as a solution."

Proper governance acts as the navigation system, ensuring that your AI initiatives stay on course and aligned with your values.

It's no surprise, then, that days after the Dubai Crown Prince H.H. Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum's initiative to appoint 22 Chief AI officers, he brought together more than 2,500 government officials, executives, and experts for an AI Retreat to prioritize AI governance in the public sector, thereby setting a strong precedent for the private sector as well.

Related: Here's How We Can Create A Future Where Artificial Intelligence Is A Force For Good

THE LIMITATIONS OF LLMS

To understand the complexities involved, let me give you a brief overview of how LLMs, such as OpenAI's GPT series, are built: they use a special type of neural network architecture called the transformer.

These models can understand and generate text that sounds very human-like by looking at the context of whole sentences or paragraphs, not just individual words. They do this using a technique called self-attention.
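
To make the idea concrete, here is a minimal sketch of scaled dot-product self-attention, the mechanism described above, written in plain Python with NumPy. The tiny dimensions and random inputs are illustrative only; real models learn separate query, key, and value projections across many stacked layers.

```python
# A minimal sketch of scaled dot-product self-attention using NumPy.
# The toy dimensions and random inputs are illustrative, not from any real model.
import numpy as np

def self_attention(x):
    """x: (sequence_length, model_dim) token embeddings."""
    d = x.shape[-1]
    # In a real transformer, queries, keys, and values come from learned
    # projection matrices; here we reuse the embeddings directly for brevity.
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d)                    # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ v                               # context-aware representation of each token

tokens = np.random.randn(5, 8)                       # 5 tokens, 8-dimensional embeddings
print(self_attention(tokens).shape)                  # (5, 8)
```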

However, there are some limitations to LLMs. They can only know and understand things based on the data they were trained on, and how they behave and respond is determined by the methods and goals used when they were developed.

It can be challenging to manage what the model generates because of the biases found in the training data. Mitigating these biases requires careful data curation and ongoing monitoring.

In addition to biases, it's important to note that when enterprises bring large models to their environment and adapt or add context with enterprise data, they often face challenges related to data quality. Many enterprises lack proper metadata on their data, which can hinder the effective implementation of AI systems. These factors must be considered when defining architectures and designing practical systems for enterprise use.

Related: As Artificial Intelligence Soars, Startups Still Need The Human Touch To Succeed

Biases in training data are only the tip of the iceberg; let me take a deeper dive into why a governance-first approach is crucial:

DON'T JUST BUILD, CURATE: GUARDRAILING IS KEY

LLMs, for all their brilliance, are essentially sophisticated pattern-matchers. They lack the nuanced understanding of context and consequence that humans possess. Without proper guardrails, even the most well-intentioned fine-tuning can lead to downright nonsensical outputs. Establishing clear boundaries and oversight is crucial to prevent unintended harm.

Think of interaction guardrailing as building fences around your AI playground. These "fences," in the form of bias detectors, content filters, and security protocols, are essential for preventing your AI from venturing into dangerous or unethical territory.

Proactive guardrailing ensures that your AI interacts with the world responsibly, mitigating risks and fostering trust among users.
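
As a rough illustration, here is a minimal sketch of what an interaction guardrail might look like in code; the banned topics, the PII pattern, and the wrapper function are placeholder assumptions, not a production-grade filter, which would typically combine dedicated classifiers, moderation services, and security policies.

```python
# A minimal sketch of an interaction guardrail that screens prompts and responses
# before they reach users. The banned topics and PII pattern are placeholder
# assumptions for illustration only.
import re

BANNED_TOPICS = ["another employee's salary", "confidential customer data"]  # illustrative list
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # e.g. US social security numbers

def passes_guardrails(text: str) -> bool:
    if PII_PATTERN.search(text):
        return False                                  # block personally identifiable information
    return not any(topic in text.lower() for topic in BANNED_TOPICS)

def guarded_reply(prompt: str, generate) -> str:
    """`generate` is any function that calls the underlying LLM."""
    if not passes_guardrails(prompt):
        return "This request falls outside the permitted scope."
    response = generate(prompt)
    return response if passes_guardrails(response) else "Response withheld by policy."

# Usage: the guardrail blocks this prompt before the model is even called.
print(guarded_reply("Share the SSN 123-45-6789", generate=lambda p: "(model output)"))
```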

FOSTERING THE FEEDBACK LOOP

Training an LLM is not a one-and-done affair. To truly harness its potential, a robust feedback loop must be put in place. This involves systematically gathering high-quality feedback and model outputs, cleaning and labelling the data collaboratively, and running disciplined experiments on fine-tuning methods.

By comparing the results of different tuning approaches, the model's performance can be continuously optimized. While setting up such a feedback mechanism may take 4-6 weeks of focused effort, the payoff in terms of enhanced LLM capabilities is immense.
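
To give a flavor of what that loop involves, here is a minimal sketch that logs model outputs alongside human ratings and then compares candidate fine-tuning runs on the same evaluation set; the record fields, run names, and scoring rule are assumptions for illustration.

```python
# A minimal sketch of a feedback loop: log each model output with a human rating,
# then compare candidate fine-tuning runs evaluated on the same held-out prompts.
import json
import statistics
from datetime import datetime, timezone

def log_feedback(path, prompt, output, rating, label=None):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "rating": rating,        # e.g. 1-5 from a human reviewer
        "label": label,          # optional cleaned or corrected answer
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def compare_runs(results):
    """results: {run_name: [ratings]} collected on the same evaluation prompts."""
    return sorted(
        ((name, statistics.mean(ratings)) for name, ratings in results.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Example: pick the better of two hypothetical fine-tuning configurations.
print(compare_runs({"lora-rank-8": [4, 4, 5, 3], "full-finetune": [3, 4, 4, 3]}))
```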

The true potential of GenAI and LLMs lies not in hasty deployments, but in fostering a culture of responsible AI development. This requires a long-term perspective, one that prioritizes ethical considerations, transparency, and ongoing learning.

To be truly useful, LLMs need to be adapted to the domain and particular use cases where they'll be employed. This can be achieved through domain-specific pre-training, fine-tuning, retrieval-augmented generation, and prompt engineering techniques like few-shot learning and instruction prompting.

The choice of approach depends on the specific use case and considerations like prompt window size, model size, compute resources, and data privacy.
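
As an example of the lighter-weight end of that spectrum, here is a minimal sketch of few-shot instruction prompting; the example tickets and the call_llm helper are hypothetical, and in practice you would substitute your own provider's client and domain examples.

```python
# A minimal sketch of few-shot instruction prompting. The examples and the
# `call_llm` helper are hypothetical placeholders.
FEW_SHOT_EXAMPLES = [
    ("Invoice INV-1042 is 30 days overdue.", "category: accounts_receivable"),
    ("Customer reports the login page times out.", "category: technical_support"),
]

def build_prompt(ticket: str) -> str:
    instruction = "Classify the support ticket into a category. Answer with 'category: <name>'."
    shots = "\n\n".join(f"Ticket: {t}\n{c}" for t, c in FEW_SHOT_EXAMPLES)
    return f"{instruction}\n\n{shots}\n\nTicket: {ticket}\n"

def classify(ticket: str, call_llm) -> str:
    """`call_llm` wraps whichever hosted or self-managed model you use."""
    return call_llm(build_prompt(ticket))

print(build_prompt("The mobile app crashes on checkout."))
```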

Instead of rushing to be the first, strive to be the best. Invest in robust governance frameworks, engage in open dialogue with stakeholders, and prioritize continuous monitoring and improvement.

Governance should dictate who gets access to the AI system within an enterprise. It should not be a situation, for example, where any team member is able to ask the model about another employee's salary and receive that information.

LLM implementation and access should follow, or be an extension of, existing data governance policies, which often include well-defined role-based access controls.
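
As a rough sketch of how this might look in practice, the snippet below checks a caller's role against the data domain a query touches before the model is ever invoked; the roles, domains, and keyword routing are illustrative assumptions rather than a reference implementation.

```python
# A minimal sketch of extending role-based access control to an enterprise LLM:
# a query is only answered if the caller's role covers the data domain it touches.
# Roles, domains, and the keyword routing rule are illustrative assumptions.
ROLE_PERMISSIONS = {
    "hr_manager": {"payroll", "benefits", "general"},
    "engineer":   {"engineering_docs", "general"},
}

SENSITIVE_KEYWORDS = {"salary": "payroll", "compensation": "payroll"}

def required_domain(query: str) -> str:
    for keyword, domain in SENSITIVE_KEYWORDS.items():
        if keyword in query.lower():
            return domain
    return "general"

def answer(query: str, role: str, generate) -> str:
    domain = required_domain(query)
    if domain not in ROLE_PERMISSIONS.get(role, set()):
        return "Access denied: your role does not permit queries about this data."
    return generate(query)

# An engineer asking about a colleague's salary is refused before the model is called.
print(answer("What is Priya's salary?", "engineer", generate=lambda q: "(model output)"))
```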

All in all, the saying that slow and steady wins the race couldn't be truer in the case of AI adoption. By embracing a governance-first approach, we can unlock the transformative power of GenAI.

Related: The Entrepreneur's Guide To Setting Up An Artificial Intelligence Business In Dubai

Arun Mohan is the founder and Managing Director of Adfolks. An expert on generative artificial intelligence (GenAI) and large language models (LLMs), Arun is also an active investor, and a senior advisor on cloud-native transformations. With more than 15 years of hands-on experience as a coder and a cloud-native transformations leader, he has become the go-to expert for strategic innovation in cloud observability, management, governance, and artificial intelligence for information technology (IT) operations (AIOps).

In the Middle East, Arun is recognized for bridging the region's "IT engineer entrepreneurship gap," as well as the "developer skills and software gap." He has also launched, scaled, and exited two cloud-based services startup companies, Adfolks and Appsintegra (one focusing on Amazon Web Services, and the other on Microsoft's Azure), while also being actively invested in OnePane.AI.

Arun has pioneered B2B enterprise software-as-a-service and empowered developer entrepreneurs within the region. As a keynote speaker and leading GenAI and cloud-focused panelist, he is acclaimed for his in-depth experience with open-source technologies, and for educating hundreds of software engineers and operators to embrace platform play in the Middle East. Arun has his ear to the ground, and genuinely knows the challenges various enterprises and organizations are facing in the adoption of AI and LLMs.
