
Do We Want Computers That Behave Like Humans?

Computers' abilities can exceed our own. Why do we work so hard to make them more like us?

By Baruch Labunski



I consume a lot of content dealing with technological advances. It's my professional field, and it's my passion as well. It's rare, though, that I'm so profoundly captivated and challenged by a podcast that I immediately begin scribbling notes for future thought experiments and article topics.

Tim Ferriss' interview with Eric Schmidt was most definitely one of those experiences. In late 2021, Schmidt made the rounds of radio shows and podcasts promoting his latest book, The Age of AI: And Our Human Future, co-authored with Daniel P. Huttenlocher and Henry Kissinger. Schmidt explores the promises and perils of AI and its role in our current and future worlds. He answers some questions and, naturally, raises even more.

Man vs. machine?

To my mind, one of the most fundamental questions Schmidt and his book raise is why we compare computers to humans at all. For example, one of Schmidt's claims in the Ferriss interview is that computers can see better than humans, and he means that literally and definitively. He even goes on to declare that computers should be driving cars and administering medical exams. My questions are simpler, I suppose, but I want to know whether it's even possible for a computer to really see. Does seeing necessitate a sentient seer? And why is human sight the measure for the sight of a machine?

While Schmidt's book forecasts the inevitable improvements in machine perception and processing that undergird AI, I think he leaves a more basic question unasked: why do we so regularly measure computing capability against human abilities? A computer can perform mathematical calculations in a tiny fraction of the time a human needs. It can detect patterns better than we mere mortals can. It often demonstrates better predictive abilities than we do. We can all cite examples of computer failures, like labeling the image of an elephant in a room as a couch rather than a pachyderm. But the fact is that in many, if not most, cases, computers do see better than we do.

We know that humans have limitations. We don't always see or understand things clearly. We're influenced by emotion, by fatigue and by the limits of our disappointing bodies. We're fragile and fallible, and computers can make us less so. Computers generally, and AI specifically, let us transcend our limitations.

We know that computers' abilities can exceed our own, at least in some circumstances. Why, then, do we work so hard to make computers — and again, more specifically AI — more human-like? Why do we want our computers to behave like humans if we're so fallible?


The person-ification of AI

Eric Schmidt's definition of AGI is telling in terms of the degree to which we conceptualize the development and refinement of AI in relation to human capabilities. Schmidt explains: "AGI stands for artificial general intelligence. AGI refers to computers that are human-like in their strategy and capability. Today's computers and today's algorithms are extraordinarily good at generating content, misinformation, guidance, helping you with science and so forth. But they're not self-determinative. They don't actually have the notion of 'Who am I, and what do I do next?'" Paradoxically, the goal we're driving AI toward is the very thing that fundamentally differentiates us from the computers we create.

And yes… this is a simultaneously terrifying and exciting prospect.

Schmidt gives two examples that highlight the kinds of issues we'll have to navigate as AI develops into AGI, a process he predicts will take roughly fifteen years. First is a technology called the GAN, or generative adversarial network. Schmidt describes how a GAN can produce a genuinely lifelike image of an actor: "the computer can generate candidates, and another network says, 'Not good enough, not good enough, not good enough,' until they find it. GANs have been used, for example, to build pictures of actresses and actors that are fake, but look so real you're sure you know them."

These sorts of deepfakes — images or videos a computer produces convincingly enough to pass for real — can and should make us uneasy. We're creating technology that can be used to fool us, and that's power. When we can't distinguish real from fake, when a computer can mimic a human so successfully that we can't tell the simulation from the person, our ability to make sound judgments about the world around us is imperiled.

And it's that ethical question — is it wise to create AI and AGI with these capabilities? — that is at the heart of Schmidt's book and of my ambivalence about the future of AI. We have to tread carefully, but the possibility does exist that we can avoid creating our own destroyer.


It's on the subject of video surveillance that I think Schmidt touches on one of the ethical centers of AI. He explains: "If you want to eliminate the vast majority of crime in our society, put cameras in every public space with face recognition. It's, by the way, illegal to do so. But if you really care about crime to the exclusion of personal freedom and the freedom of not being surveilled, then we could do it. So the technology gives you these choices, and those are not choices that computer scientists should make, those are choices for society." We can create it, and then it's our responsibility and prerogative to control it, rather than letting it control us.

The point is that we do want computers to learn. We do want computers to evolve to develop more human-like capabilities, and human abilities are a logical way to think about and characterize the growth of AI. But if we value liberty, independence and self-determination, a clear hierarchy must be established, with humans at the top and a firm set of rules about how technology should be employed. Who determines and enforces those rules is an entirely different and essential topic, one we should explore with a transparency that I believe will be difficult to achieve.


Baruch Labunski


CEO of Rank Secure

Baruch Labunski is an entrepreneur, internet marketing expert and author from Toronto. He currently serves as CEO of Rank Secure, an award-winning web design and internet marketing firm.

