Will robots take our jobs? That’s a question that gets asked in many industries, including the legal one. And for good reason: robots are becoming ubiquitous. Machines now produce automobiles, perform complex surgeries, and even write sports and business articles.
What doesn’t get asked as often is how the law might need to change to reflect these advances in technology. Ed Walters of Fastcase discussed this topic at the Clio Cloud Conference, suggesting that the ability to change laws along with advances in technology might mean the difference between an exciting future and a dystopian nightmare.
Here’s a brief overview of what he had to say:
Law and revolutions
The law has lagged behind major technological changes in the past, leading to dire consequences. For example, issues like slavery and child labor became more pronounced during the industrial revolution as a consequence of the law falling behind changes in industry and society.
“There were social changes, and industrial changes, and business changes, but there weren’t legal changes that went with them,” Ed said. “It would be about 100 years before we abolished slavery, before we enacted the first wage and hour laws, before we created weekends, before we said that you can’t have children working in factories.”
Of course, the industrial revolution wasn’t all bad; in fact, it brought about an “American Century.” But needless suffering resulted from the law falling behind changes in industry.
Law lagged behind at the start of the information age as well.
“All of the law that dealt with the information revolution, that dealt with the internet, lagged behind the technology,” Ed said. “We’re living in this world right now with warrantless searches, net neutrality, and serious epistemological questions about whether Taco Bell is a person under an original reading of the Constitution, or whether corporations can have the same rights as people.”
To have better outcomes in the future, the law must adapt more quickly.
Computers and societal change
The third revolution—the robotics revolution—is happening faster than we think.
“Quietly, robots are all around us,” said Ed. “It’s a testament to how prevalent these machines are in our lives that we don’t even see them anymore. We kind of just take them for granted.”
Robots are becoming more ubiquitous, but they’re also becoming smarter. And, as Ed pointed out, this improvement isn’t linear—it’s exponential.
Consider his example of chess-playing computers. In 1989, Russian chess grandmaster Garry Kasparov easily defeated the computer Deep Thought. In 1996, Deep Thought evolved into Deep Blue, which Garry still handily defeated in a six-game match. Just one year later, in 1997, the tables turned: Garry eked out one win over Deep Blue but drew or lost the rest of their six-game rematch.
“That [first] game was historic, because it was the last time the best human chess player in the world could beat the best machine chess player in the world,” Ed said.
To give a more immediate example, Ed explained that currently, our biggest, best computers have about the processing power of a rat’s brain, but that by 2026, “a single computer will have more processing power than everyone that has ever lived, combined.”
That could mean huge societal changes—necessitating thoughtful, proactive changes to the law.
New technology and new laws
“How do we regulate robots?” Ed asked. “How does our law deal with this impending social change?” To start, he suggested looking for lessons in the legal community’s experience trying to regulate cyberspace over the past 10 to 20 years.
The internet has changed the world as we know it, but when it was still new, some didn’t see the point in coming up with new laws to regulate it. Back at the first conference on law and cyberspace at the University of Chicago in 1996, keynote speaker Judge Frank Easterbrook encouraged young lawyers not to create new laws for the internet, arguing that we don’t need new law for every new thing in the world.
However, Larry Lessig, the keynote speaker for the next day of the conference, had a very different point of view. As Ed explained:
The point of [Larry’s] talk was, sure, Judge Easterbrook is right. That’s what common law does. You always apply existing law to new facts. Common law grows over time.
However, sometimes, the new facts are so new that we have to create exceptions. Sometimes the new facts are so exceptionally different that when you apply the old law to the new facts you get the outcome completely wrong.
For example, in the real world, if a 10-year-old goes to the store and tries to buy beer, the clerk will automatically deny the purchase.
The real world has a self-authenticating feature here. You can tell who someone is. But that’s not true on the internet. There is no kind of self-authentication on the internet. And so if you apply the laws that apply to a 7-Eleven purchase of liquor online, and just require someone to certify that they are 18, people can very easily bypass that. So you may need a new mode of regulation on the internet.
In other words, the internet is so exceptionally different that we need a new set of laws to govern it.
Robots and the rules of the road
Are robots exceptional as well?
“The answer, of course, is never ‘yes, robots are exceptional,’” Ed said. “You look at them one at a time. You take each different set of facts and try to figure out ‘what happens if we apply existing common law in this world? Do we like the outcome, or not?’”
This is also a question of whether or not the values that the original law was meant to support are still being upheld. For example, Ed spoke about the issues that arise when laws surrounding auto accidents are applied to self-driving vehicles.
Auto accident laws are premised on the idea that we compensate people for their injuries when someone negligently hurts them. The central idea of that mechanism is that someone is at fault. Someone makes a mistake. Someone acts negligently. If they don’t, there won’t be an injury.
Well, we’re coming into a world where self-driving cars, or software, might cause an accident, but totally without fault … if we apply the principles of tort law here, we’re not going to like the outcome. The person who is hurt through no fault of their own doesn’t get compensated in this world, because there is no negligence.
As Ed explained, many legal scholars are suggesting that, in a self-driving car world, we may need a no-fault insurance scheme. Everyone would pay a bit more on their insurance, but we’d have fewer accidents.
The law and our future
Beyond self-driving cars, lawmakers will need to consider broader issues like how to apply criminal law to machines, and how to bring machines to justice (they likely won’t care about being powered down for five years in robot jail). These might seem like problems from a science fiction novel, but Ed assured the audience that they are very real, and that they must be dealt with sooner rather than later.
“These are issues for 2016, 2017, and 2018,” Ed said. “Many of our laws are going to have to change, and fast.”
There are a lot of questions to answer when it comes to the law of the future. But today’s lawmakers could help prepare us for a better tomorrow.
“We could be on the cusp of a time where machines help us to be even bigger than ourselves, to have an outsized impact on our future in a way that we never have been able to before,” Ed said. “This new American century really could be ours, if we make it. If we act fast enough, if we don’t create a giant lag between these massive changes in industry, in software, in hardware, in robotics, in culture—if our laws are able to catch up to these changes and in time. This could be our newest American century.”
Want to watch the full talk? See Ed’s presentation here: