Do Robots and AI Deserve Rights?
A robot ethicist from MIT's Media Lab helps us parse what our treatment of tech says about us.
When it comes to robot-human relations, the conversation typically centers on the welfare of the sentient. Science fiction paints us as petrified by our own creations; fears of a bot planet have influenced everything from Asimov's "Laws of Robotics" to HAL 9000's homicidal impulses to Skynet's global genocide.
These human-centric anxieties are understandable. However, as our assorted bots and bits gain skills and personalities, should they be afforded some form of protection from us? It's a question people are starting to seriously ponder.
Last month, the European Parliament's legal affairs committee issued a report on the use and creation of robots and artificial intelligence (AI). It recommended creating a form of "electronic personhood" that would afford rights and responsibilities to the most advanced forms of AI.
Many surely bristle at the concept of "rights" being awarded to software. While AI is increasingly capable of performing specific tasks, it's not complex enough to have an opinion on how it is treated. It's completely reasonable to ask whether robot rights are even a debate worth having right now. Indeed, humanity has far more immediate concerns on its plate (the humans of the European Parliament in particular), but the era of personhood-worthy bots isn't as far off as you might think.
While the human-like AI long promised by science fiction has thus far failed to materialize, researchers around the globe are hard at work turning it into reality. I don't expect to see anything resembling Star Trek's Data or Rosie from The Jetsons in the immediate future, but I wouldn't be surprised to meet them in my lifetime: History has shown time and again that technology -- particularly information technology -- doesn't just improve incrementally, it rockets forward exponentially. Consider some of modern AI's very impressive feats and try to imagine what it will be able to accomplish in 10, 20 or 30 years.
I can't say for sure what robots or AI of the future will be able to do. But I can say that if robot ethics doesn't rise to the level of a serious concern for society, then -- at the very least -- robot etiquette should.
The AI among us
The average reasonably connected person in the developed world has probably interacted with modern AI in the form of increasingly capable chatbots or digital assistants (Alexa, Siri, Cortana etc.). But most AI remains hidden below the virtual surface.
A sub-field of AI known as "machine learning" is particularly promising -- this discipline focuses on creating algorithms that improve at tasks over time and can arrive at original conclusions. There are even algorithms that are able to rewrite their own source code in limited scenarios. Taken together, the most advanced algorithms could be said to form a unique identity.
The question then becomes: Will we ever reach a point where this uniqueness rises to the level of being a personality worthy of protection? Few would argue that personhood should be awarded to, say, your smartphone's OS. But your device (including all its networked cloud resources) has a completely unique character unlike any other piece of software. Your phone remembers the Wi-Fi sources it routinely connects to, it learns your commuting habits based on GPS and it even uses algorithms to learn the nuances of your voice commands (it's how Siri and Google get better at understanding you over time).
We can delete all or part of this data and not feel any emotional response. However, we will probably experience a deeper form of attachment if this data takes a physical, touchable form. Humans are inclined to relate to physical objects, no matter how "dumb" they are -- people personify stuffed animals, name their cars or feel bad when their Roomba (one of the dumbest robots you can buy) gets stuck in a corner.
While the gap between the robots we were promised and those we have is even more extreme than the one between promised and actual AI, the field is improving at a frighteningly rapid pace. This development matters to our discussion because it is far less emotionally taxing to "pull the plug" on a text-based chatbot, no matter how advanced, than it would be on a machine with a discernible face.
It might be decades before technology forces us to truly confront the issue of robot rights, but the debate surrounding the ethics of how we treat machines is probably worth having right now.
Recently, I interviewed Dr. Kate Darling, a robot ethicist from MIT's Media Lab, as part of our streaming interview series and podcast, The Convo. While Darling isn't quite on board with electronic personhood (at least not yet), she is interested in how humans interact with their technology, and she believes our choices are ultimately a reflection of us.
"The one thing that does separate robots from other machines is that we tend to treat them like their alive," explains Darling. "I think that there's a Kantian philosophical argument to be made. So Kant's argument for animal rights was always about us and not about the animals. Kant didn't give a shit about animals. He thought 'if we're cruel to animals, that makes us cruel humans.' And I think that applies to robots that are designed in a lifelike way and we treat like living things. We need to ask what does it do to us to be cruel to these things and from a very practical standpoint -- and we don't know the answer to this -- but it might literally turn us into crueler humans if we get used to certain behaviors with these lifelike robots."
While science fiction has gotten a lot wrong in its predictions of what the robo-future would look like, it does provide a laboratory of the imagination. Would you rather live in, say, a Westworld universe filled with humans who feel free to rape and maim the park's mechanical inhabitants, or on the deck of Star Trek: The Next Generation, where advanced robots are treated as equals? The humans of one world seem a lot more welcoming than the other, don't they?
So, when it comes to the question of how we interact with our creations, maybe we should be less concerned with determining their personhood than we are with defining our humanity.