
Are Elon Musk's Warnings About AI Manipulating Social Media Coming True?

Elon Musk voiced a dire warning about AI. Should we be worried?

By Yoav Vilner Edited by Dan Bova

Opinions expressed by Entrepreneur contributors are their own.

Tech leader Elon Musk is known for sounding the alarm bells on the risks of artificial intelligence.

Musk has said he believes AI will soon manipulate social media, if it hasn't already, a concern that pales in comparison to his previous predictions of a future humanity governed by an intelligent machine dictator.

A year ago, he told Recode Decode that the relative intelligence ratio between such a dictator and the rest of humanity would resemble the ratio between a person and a cat.

Musk doesn't stand alone in fearing the risks of AI gone wrong. Stephen Hawking and other researchers have warned that intelligent machines could become very dangerous. But there's another, brighter possible future that Musk agrees could materialize as well.

In a Vanity Fair conversation with journalist Maureen Dowd, the Tesla CEO agreed with a prediction from Y Combinator's Sam Altman: "In the next few decades we are either going to head toward self-destruction or toward human descendants eventually colonizing the universe."

The first, Terminator-like option is terrifying. But the latter possibility sees artificial intelligence opening doors for humankind to become a race of space-exploring Han Solos and Princess Leias. So far, we're headed in the right direction, as AI today is improving human lives in various applications, including healthcare, defense, and business.

Healthcare, perhaps, is where machine learning's potential will become increasingly visible in the years to come.

If AI works as optimists hope, it could democratize healthcare by boosting access for underserved communities and lowering costs across the board, all while assisting in the early detection of life-threatening diseases. Already, AI models are changing the way cancer, the leading cause of death in wealthy countries, is diagnosed.

MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed an AI model that can anticipate the development of breast cancer up to five years in advance. When it comes to cancer treatment, time is of the essence, and AI's ability to aid early diagnosis has the potential to save lives.

AI is disrupting the genetic-care space as well.

There are currently around 5,000 geneticists in the world, and though genetic sequencing has become easier and cheaper to perform, making sense of the data is still largely a human effort.

Each test is a tedious mini-research project that takes hours to complete. Genomics company Emedgene, for example, uses AI to power a genetic interpretation platform that helps human geneticists make sense of sequencing data, which in turn can inform how doctors treat various illnesses.

No one can deny that AI is improving human life in various fields, and nowhere is its value more apparent than in medicine.

The alarmists' primary concern today, though, lies with the prospect of machines intruding on people's privacy. With AI-powered, super-convincing doctored videos known as "deepfakes" and the data-privacy controversy surrounding Russian photo-editing app FaceApp dominating headlines this past summer, it's quite reasonable that such fears persist.

But it's important to keep in mind that AI companies are actively working to ensure that models are trained securely and that consumer data remains private.

"Artificial intelligence is set to shape the future of many industries, and public perception of the matter is substantial," says Leif Lundbaek, CEO of XAIN, a company that provides GDPR compliance for AI applications. "The time of misusing personal data is over. Data privacy will become a key competitive factor for machine learning solutions because it will be demanded by both governments and ourselves as users."

It's only a matter of time before AI companies across the globe must comply with increasingly strict government-imposed data-privacy regulations; various U.S. states are already racing to craft rules of their own to keep pace with GDPR. While that alone might not be enough to put the likes of Musk at ease, it does show that governments are taking the right steps toward ensuring the AI of the future is good for humanity.

Let's keep heading in that direction.

Yoav Vilner

Entrepreneur, thought leader and startup mentor

Yoav Vilner has founded several companies, and is currently CEO at Walnut. He is also a startup mentor in accelerators associated with Google, Microsoft, Yahoo and the U.N.
