Are We Jumping The Gun When We Worry About AI?
Red hot furnaces blaze.
Natty men sporting suspenders, spectacles and pipes puff out their cheeks and blow into a long hollow rod with a molten blob on one end. Soon, an amorphous, limpid shape pushes its way into existence. That nebulous shape will, in a few minutes, be moulded into a dainty wine glass or a solid glass jar.
In his cheerful, bebop-scored, Oscar-winning documentary ‘Glas’ (1958), Dutch film director Bert Haanstra contrasts this fascinating craft with the mechanical wizardry of a glass factory, where bits of molten glass whizz off into a maze of machinery that blows them, moulds them and cools them into thousands of identical bottles.
While beads of sweat dot the brows of skilled men in the first half of the video, the automated machines that replace them clink away tirelessly for hours, churning out bottle after perfect bottle.
Will this happen again with artificial intelligence? And this time, will brows creased in thought be replaced by the soft voice of a robot that will spare us the trouble?
A Eurobarometer study conducted last year recorded public opinion on future innovations, science and technology across Europe. The general consensus was:
“Optimism about the future, but tempered by real concerns.”
And the prevailing concern was how we’ll stack up against the machines, mano-a-mano. Will they leave us all unemployed? Will they control us?
Machines Are Good at Some Things We Do, Not All
It’s inevitable. With AI, machines will get better and better at doing what we do as their capacity to perceive, learn about and understand the world around them improves. But for now, they’re pretty much like calculators: they can do only a few things, but they do them incredibly well.
For instance, Baxter is a multi-purpose robot that can learn how to do a task with anything that’s within arm’s reach. Amelia is an AI platform that can automate IT services. Narrative Science’s Quill is an NLG (natural language generation) AI that can write news stories and conversational reports after analyzing large amounts of data.
Tess and Karim are friendly conversational bots. Tess provides emotional therapy and support over WhatsApp; Karim offers much-needed psychotherapy to Syrian refugees. Atlas can open doors, navigate uneven terrain with ease, stack boxes and self-correct when blocked or pushed over. And watching Kiva work the warehouse floor is nothing short of fascinating.
And there are so many more. Even robots that used to be the stuff of sci-fi have become ubiquitous in our lives today.
But they’re nowhere near replacing humans entirely. Rather, they are the baby steps we’re taking towards the bigger picture: robots that can think for themselves. Robots will one day be able to do most of the things we do for a living today, including jobs like programming and creative work.
That prospect is probably far more alarming.
But let’s keep in mind that fear and skepticism have always been our first reactions to any new technology. In his excellent Slate article, Brian John Davidson describes the four stages we go through whenever a new technology is introduced: after the initial fear that the machines will kill us all passes, moral panic sets in; then the early adopters begin to show interest; and finally, as more and more people use the technology, we forget our initial fears.
From electricity to satellites to smartphones, we’ve met every new technological breakthrough with misgiving and wariness. Anticipating these widespread concerns, most people who build AI today go to some lengths to make their creations seem non-threatening. And that’s a good thing.
Australian roboticist Rodney Brooks says that it’s important to have technologies that ordinary workers can interact with. People need to be able to collaborate with the tools — even like them — and still be a part of the solution. He believes that instead of replacing humans, these machines are upgrading people’s jobs: workers now supervise and program the machines that do the manual drudge work they once had to do themselves.
The robots we know today are different from their industrial predecessors. They have familiar human names. Physical robots are designed to look cute and friendly like hitchBOT. They even speak on stage and hug astrophysicists.
All of this goes some way towards allaying our present, irrational fears about killer robots, but that’s no reason to dismiss other fears entirely. Instead, we need a more balanced way to approach them.
Concern is Good, But Not At The Cost of Rationality
Roy Amara was a researcher, scientist and former president of the Institute for the Future. He is best known for a statement which is now known as Amara’s Law:
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Over the next 40 years, we’re going to see some crazy developments that probably none of us has dreamed of yet. We cannot stop AI from advancing, and it’s going to improve and spread faster than anything else we’ve seen before.
But there’s time left before that future hits us. The spectre of a looming economic crisis or a doomsday for humanity is wide of the mark for the near future.
The change will be gradual, so it’s time to stop thinking about AI either as a panacea for all ills, or as an apocalyptic event that will transform our planet into a dystopian landscape.
However, there are consequences in the long term that we need to think about, like the laws and best practices that have to be set in place to govern the design and use of AI. Like all other technology, AI has the capacity to be both wonderful and terrible in equal measure. It’s up to us to set the boundaries.
As Davidson says,
“We haven’t yet established norms, or language, for what’s socially acceptable and what’s off limits. Gadgets and technology may change quickly, but people and our behavior does not…
…we are currently in the middle of coming to grips with what these devices mean to us. This isn’t a technology problem; it’s a broader cultural conversation about what kind of future we want to live in. We need to have more conversations in our families, in our offices, and in the media about what we want and what’s acceptable.”
Organizations like OpenAI are treating this as a serious issue, focusing on positive human impact rather than just profit. And that’s a step in the right direction. It’s the lack of processes and control measures that engenders fear, not the AI itself.
So it is good to be concerned about the future consequences of AI. But that concern will only be effective if it’s directed at the right things and tempered with optimism.