Do We Want Computers That Behave Like Humans?

Computers' abilities can exceed our own. Why do we work so hard to make them more like us?


By Baruch Labunski

Opinions expressed by Entrepreneur contributors are their own.

I consume a lot of content dealing with technological advances. It's my professional field, and it's my passion as well. It's rare, though, that I'm so profoundly captivated and challenged by a podcast that I immediately begin scribbling notes for future thought experiments and article topics.

Tim Ferriss' interview with Eric Schmidt was most definitely one of those experiences. In late 2021, Schmidt made the rounds of radio shows and podcasts promoting his latest book, The Age of AI: And Our Human Future, co-authored with Daniel P. Huttenlocher and Henry Kissinger. Schmidt explores the promises and perils of AI and its role in our current and future worlds. He answers some questions and, naturally, raises even more.

Man vs. machine?

To my mind, one of the most fundamental questions Schmidt and his book raise is why we compare computers to humans at all. For example, one of Schmidt's claims in the Ferriss interview is that computers can see better than humans, and he means that literally and definitively. He even goes on to declare that computers should be driving cars and administering medical exams. My questions are more basic, I suppose, but I want to know whether it's even possible for a computer to really see. Does seeing necessitate a sentient seer? And why is human sight the measure for the sight of a machine?

While Schmidt's book forecasts the inevitable improvements in machine perception and processing that undergird AI, I think he leaves a more basic question unasked: why do we so routinely measure computing capability against human abilities? A computer can perform mathematical calculations in a tiny fraction of the time a human needs. It can detect patterns better than we mere mortals can. It often demonstrates better predictive abilities than we do. We can all cite examples of computer failures, like labeling the image of an elephant in a room as a couch rather than a pachyderm. But the fact is that in many, if not most, cases, computers do see better than we do.

We know that humans have limitations. We don't always see or understand things clearly. We're influenced by emotion, by fatigue and by the limits of our disappointing bodies. We're fragile and fallible, and computers can make us less so. Computers generally, and AI specifically, let us transcend our limitations.

We know that computers' abilities can exceed our own, at least in some circumstances. Why, then, do we work so hard to make computers — and again, more specifically AI — more human-like? Why do we want our computers to behave like humans if we're so fallible?

Related: How Machine Learning Is Changing the World -- and Your Everyday Life

The person-ification of AI

Eric Schmidt's definition of AGI is telling in terms of the degree to which we conceptualize the development and refinement of AI in relation to human capabilities. Schmidt explains: "AGI stands for artificial general intelligence. AGI refers to computers that are human-like in their strategy and capability. Today's computers and today's algorithms are extraordinarily good at generating content, misinformation, guidance, helping you with science and so forth. But they're not self-determinative. They don't actually have the notion of 'Who am I, and what do I do next?'" Paradoxically, the goal we're driving AI toward is the very thing that fundamentally differentiates us from the computers we create.

And yes…this is a simultaneously terrifying and exciting prospect.

Schmidt gives two examples that highlight the kinds of issues we'll have to navigate as AI develops into AGI, a process he predicts will take roughly fifteen years. First is a technology called a GAN, which stands for generative adversarial network. Schmidt describes how a GAN can arrive at a genuinely lifelike image of an actor: "the computer can generate candidates, and another network says, 'Not good enough, not good enough, not good enough,' until they find it. GANs have been used, for example, to build pictures of actresses and actors that are fake, but look so real you're sure you know them."
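
Schmidt's generate-and-reject loop can be made concrete with a toy sketch. To be clear, this is not a real GAN — a real one trains two neural networks against each other on images — and every name and number below is an illustrative assumption. Here a simple statistical "critic" plays the discriminator's role and hill climbing stands in for gradient descent, but the adversarial back-and-forth (keep saying "not good enough" until the critic can no longer object) is the same idea:

```python
import random

random.seed(0)

# "Real" data the generator is trying to imitate: numbers clustered near 4.0.
def real_batch(n=200):
    return [random.gauss(4.0, 0.5) for _ in range(n)]

# Critic: scores how distinguishable a batch of fakes is from real data by
# comparing batch means. A real GAN's discriminator is a trained neural
# network, but its role is the same: reject candidates until the fakes are
# statistically indistinguishable from the real thing.
def critic(fakes):
    real_mean = sum(real_batch()) / 200
    fake_mean = sum(fakes) / len(fakes)
    return abs(fake_mean - real_mean)  # 0.0 means "can't tell the difference"

# Generator: starts from a deliberately wrong guess and nudges its one
# parameter in whichever direction lowers the critic's score (simple hill
# climbing standing in for the gradient descent a real GAN would use).
def train_generator(steps=500, lr=0.05):
    mean = 0.0  # the generator's initial, obviously wrong guess
    for _ in range(steps):
        up = [random.gauss(mean + lr, 0.5) for _ in range(200)]
        down = [random.gauss(mean - lr, 0.5) for _ in range(200)]
        mean = mean + lr if critic(up) < critic(down) else mean - lr
    return mean

print(round(train_generator(), 1))  # settles near 4.0: the critic can no longer object
```

The generator never sees the real data directly; it only learns from the critic's rejections, which is exactly the dynamic Schmidt describes.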

These sorts of deep fakes, images or videos a computer produces that convince us they're real, can and should make us uneasy. We're generating technology that can be used to fool us. That's power. When we can't distinguish real from fake, when the computer can mimic us so successfully that we can't tell a simulated human from a real one, our ability to make sound judgments about the world around us is imperiled.

And it's that ethical question — is it wise to create AI and AGI with these capabilities? — that is at the heart of Schmidt's book and my ambivalence about the future of AI. We have to tread carefully. But the possibility does exist that we may be able to avoid creating our own destroyer.

Related: How to Improve Corporate Culture with Artificial Intelligence

It's on the subject of video surveillance that I think Schmidt touches on one of the ethical centers of AI. He explains: "If you want to eliminate the vast majority of crime in our society, put cameras in every public space with face recognition. It's, by the way, illegal to do so. But if you really care about crime to the exclusion of personal freedom and the freedom of not being surveilled, then we could do it. So the technology gives you these choices, and those are not choices that computer scientists should make, those are choices for society." We can create it, and then it's our responsibility and prerogative to control it, rather than letting it control us.

The point is that we do want computers to learn. We do want computers to evolve to develop more human-like capabilities. And human abilities are a logical way to think about and characterize the growth of AI. But if we value liberty, independence and self-determination, a clear hierarchy must be established, with humans at the top and firm rules about how technology should be employed. Who determines and enforces those rules is an entirely different and essential topic, one we should explore with a transparency that I believe will be difficult to achieve.

Related: What Every Entrepreneur Must Know About Artificial Intelligence

Baruch Labunski

Entrepreneur Leadership Network Contributor

CEO of Rank Secure

Baruch Labunski is an entrepreneur, internet marketing expert and author from Toronto. He currently serves as CEO of Rank Secure, an award-winning web design and internet marketing firm.
