What Is AI, Anyway? Know Your Stuff With This Go-To Guide.
Scenario: You’re at a startup office. People in hoodies and graphic tees are throwing the term “AI” around like confetti. You nod and try to play along, managing to churn out a brief mention of Elon Musk and Tesla as you look up the definition of “artificial intelligence” on your phone. You try to translate it into plain English. No luck. Relatable?
Never fear: Our trusty guide is here, no prior knowledge required. Let’s talk about what it is -- in layman’s terms -- and how it could affect your life.
What AI is
AI is the development of computer systems that can perform tasks normally requiring human intelligence. Translation: Some things you used to have to do yourself -- or call someone about, or visit a physical location for help with -- can now be done by a computer.
The difference between AI and “machine learning”
Chances are, if you’ve heard the term AI more and more over the last few years, you’ve also heard “machine learning” as a buzzword. Here’s what it means: Machines use large data sets to “learn” by detecting patterns -- then, they use what they’ve learned to make sense of new data they haven’t seen before.
AI and machine learning have a similar relationship to rectangles and squares. Just as every square is a rectangle but not every rectangle is a square, machine learning is one application of AI, but AI is a broader concept that has other uses, too.
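For the curious, the “learn from examples” idea can be sketched in a few lines of code. The program below is never told an explicit rule; it is given labeled examples and classifies new data by finding the most similar example it has seen. The fruit data, labels and `nearest_neighbor` function are all made up for illustration -- this is the flavor of machine learning, not any particular product’s algorithm.

```python
# Toy machine learning: no rule like "big fruit = melon" is ever written
# down. The program infers the pattern from labeled examples and applies
# it to data it has never seen. (Hypothetical data, for illustration.)

def nearest_neighbor(training, query):
    """Predict a label for `query` by finding the closest training example."""
    best_label, best_dist = None, float("inf")
    for features, label in training:
        dist = sum((a - b) ** 2 for a, b in zip(features, query))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Each example: ((weight_grams, diameter_cm), label)
training = [
    ((1500, 20), "melon"),
    ((1800, 25), "melon"),
    ((5, 1), "berry"),
    ((8, 2), "berry"),
]

print(nearest_neighbor(training, (1600, 22)))  # a large, wide fruit -> "melon"
print(nearest_neighbor(training, (6, 1)))      # a small one -> "berry"
```

Real systems use millions of examples and far more sophisticated math, but the principle -- generalize from labeled data to the unknown -- is the same.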
What AI isn’t
Some say AI doesn’t even truly exist yet -- that it will only be possible when computers become more similar to sentient beings. People using that definition would say most companies claiming to use “AI” are incorrect. They’d also tend to argue that “machine learning” isn’t a subset of AI at all, since it works largely by pattern recognition rather than anything more advanced.
But the late John McCarthy, the American computer scientist recognized as having coined the term “artificial intelligence,” did consider pattern recognition to be a branch of AI. He said it had many branches, some of which haven’t even been discovered yet -- and that some were much more advanced than others.
All this to say: As McCarthy wrote, AI encompasses “the science and engineering of making intelligent machines, especially intelligent computer programs.”
How AI affects your life
The idea of AI may sound futuristic and scary. That’s because it is futuristic, and it can be scary -- at least in terms of the amount of personal data in play. But AI can also save people considerable time, money and error margins. And it’s likely already a much larger part of your life than you realize.
Exhibit A: Worried you’re not saving enough money? Personal finance apps can now analyze your spending patterns, then sock away small amounts of money on your behalf that they predict you won’t miss.
Exhibit B: Many hospitals around the country already incorporate AI in an advisory capacity for medical professionals. Since new breakthroughs and research are relatively constant, AI tools help doctors stay up to date on the latest findings, gauge the impact of certain symptoms and make decisions regarding diagnoses.
Exhibit C: Whenever you use a traffic or GPS app to navigate your way to work or a friend’s house, AI has a hand in the route it suggests, using an extensive amount of data from smartphones about speed, routes and traffic incidents. And when you’re using a rideshare app, AI helps determine the price of your ride, which route the driver will take and which other passengers will be picked up when.
Tesla CEO Elon Musk, who incorporates AI into his company’s autonomous cars, fears for what the technology could mean for the future of humanity. “If you're not concerned about AI safety, you should be,” he tweeted in August 2017. “Vastly more risk than North Korea.” He also encouraged the government to regulate the technology before it becomes too advanced. “Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated,” he wrote on Twitter. “AI should be too.”
Mark Zuckerberg, on the other hand, seems to disagree wholeheartedly. The Facebook CEO hosted a 2017 Facebook Live session in which he called his views on AI “really optimistic” and said that those who “drum up doomsday scenarios” about AI are “negative” and, in some ways, “really irresponsible.” People naturally pointed to Elon Musk, who later tweeted, “I’ve talked to Mark about this. His understanding of the subject is limited.”
Key figures at Amazon lean more toward Zuckerberg’s view of the subject, saying the benefits of AI outweigh the risks. “We believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future,” wrote Dr. Matt Wood, general manager of AI at AWS. “The world would be a very different place if we had restricted people from buying computers because it was possible to use that computer to do harm.” The company recently sold its Rekognition facial recognition software -- which identifies and tracks faces in real time, including those of “people of interest” -- to police departments and government agencies. Critics argued it could easily be misused and harm marginalized people.
Sundar Pichai, CEO of Google, recently released new guidelines surrounding the company’s future with AI. His views are more in line with regulation, even if it’s self-regulation, of the company’s use of AI. “We recognize that such powerful technology raises equally powerful questions about its use,” he wrote in a June blog post. “How AI is developed and used will have a significant impact on society for many years to come. … We feel a deep responsibility to get this right.” He clarified that where there’s a material risk of harm, the company will proceed only when it believes the benefits substantially outweigh the risk. The company also said it won’t collaborate on weapons or “other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
Potential for bias
AI has an intrinsic potential for bias, rooted in the data used to train each algorithm to do its job. For example, Google Photos came under fire in 2015 for tagging African American users as gorillas, and in 2017, FaceApp drew criticism when its “beautify” filter lightened users’ skin tones. That’s why it’s vital for AI companies to scrutinize the data they’re using and make sure it’s broad and representative enough to reduce bias.
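To make the mechanism concrete, here is a deliberately oversimplified sketch of how unrepresentative training data produces biased output. The numbers, labels and brightness-based “model” below are entirely invented for illustration and bear no resemblance to how Google Photos or FaceApp actually work -- but the failure mode is analogous.

```python
# Toy sketch of training-data bias: the "model" learns the average pixel
# brightness for each label, then assigns new images to the nearest
# average. Because the training set contains only light-toned face
# examples, a darker-toned face gets misclassified. (Synthetic data,
# for illustration only.)

def train(examples):
    """Learn the mean feature value for each label."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, value):
    """Assign the label whose learned average is closest to the value."""
    return min(model, key=lambda label: abs(model[label] - value))

# Every "face" example in training happens to be bright (light-toned).
examples = [
    (200, "face"), (210, "face"), (190, "face"), (205, "face"),
    (40, "not a face"), (60, "not a face"), (50, "not a face"),
]

model = train(examples)
print(predict(model, 70))  # a darker-toned face -> "not a face"
```

The algorithm itself isn’t prejudiced; it faithfully reproduces the gaps in what it was shown. That’s exactly why the makeup of the training data matters so much.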