Robot Law

Today we’re back to the SF Chronicle, which has an interesting piece by James Temple on the impact of robotics on the law. (There’s a related article on Ars Technica as well.)

Among other things, Temple discusses a recent experiment in automating the detection of traffic violations and the issuing of tickets. One group of programmers was instructed to follow the letter of the law; a second group was given specifications, crafted by a computer scientist and an attorney, that were intended to implement the spirit of the law. Both sets of code were then run against real-world data taken from the black box of a recent-model car. No particular surprise: the first group’s code issued more tickets than the second group’s. What did surprise me was the extent of the difference: the “letter of the law” programs averaged more than 300 times as many tickets as the “spirit of the law” programs.
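
To make the gap concrete, here’s a minimal sketch of my own (not the study’s actual code) contrasting the two interpretations on a made-up stream of speed readings. The posted limit, tolerance band, and duration threshold below are all illustrative assumptions:

```python
SPEED_LIMIT = 65  # mph, hypothetical posted limit

def letter_of_the_law(samples):
    """Ticket every sample that exceeds the posted limit, however briefly."""
    return sum(1 for mph in samples if mph > SPEED_LIMIT)

def spirit_of_the_law(samples, tolerance=5, min_duration=3):
    """Ticket only sustained excursions beyond a tolerance band."""
    tickets = 0
    run = 0  # consecutive samples over (limit + tolerance)
    for mph in samples:
        if mph > SPEED_LIMIT + tolerance:
            run += 1
            if run == min_duration:  # count a sustained violation once
                tickets += 1
        else:
            run = 0
    return tickets

# One-second speed samples: brief drift over the limit while passing,
# then a sustained stretch well over it.
trace = [64, 66, 67, 66, 64, 63, 72, 73, 74, 73, 72, 64]
print(letter_of_the_law(trace))   # 8 -- every momentary excess
print(spirit_of_the_law(trace))   # 1 -- only the sustained excursion
```

Even on this toy trace the strict reading issues eight times as many tickets as the tolerant one; it’s easy to see how real-world driving data could push the ratio into the hundreds.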

Given the current trend of automating everything that can be automated, more and more of the responsibility for enforcing the law will be handed to software. Temple – and the study’s authors – suggest that the discrepancy between the two extremes casts doubt on our ability to automate the interpretation of the law. Temple in particular treats the question as a binary one: “Do we write laws that more accurately get at the behavior we’re truly worried about; or do we write code with tolerance built in?” I think it’s a false dichotomy. The correct answer is to embrace the power of “and” and do both. Granted, getting lawmakers and lawyers to adjust their approach is going to be a slow process, but it’s not as insuperable a problem as it seems at first glance. Many laws already include guidance on how they should be interpreted, and lawyers can work with software engineers to render that guidance into a more code-friendly form. (Perhaps panels of lawyers could be chosen for the duty by random selection, the same way juries are. But I digress.)

Even without changes on the legal side, though, this is something of a solved problem in the software world. Any well-run software project includes a design phase in which the developers (and yes, QA as well) can raise ambiguities in the specifications; here, that phase would let the development team tune the sensitivity of the code to appropriate levels. Any well-run project also gives the people who commissioned it a chance to confirm that the code actually meets their needs; in the legal world, that would mean building the code, running it against available data, and letting the lawmakers and judges who are ultimately responsible for enforcing the law review the results. If the code is too strict or too lenient, tweak the parameters, as sketched below.
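
As a rough sketch of that review-and-tune loop (again my own illustration, with hypothetical names and thresholds rather than any real system’s API), the enforcement code can expose its grace band as a parameter, and a calibration step can widen it until the ticket rate matches what the reviewing officials have signed off on:

```python
from dataclasses import dataclass

SPEED_LIMIT = 65  # mph, same hypothetical limit as above

@dataclass
class EnforcementConfig:
    tolerance_mph: float = 0.0  # grace band above the posted limit

def count_tickets(samples, config):
    """One ticket per sample exceeding the limit plus the grace band."""
    return sum(1 for mph in samples
               if mph > SPEED_LIMIT + config.tolerance_mph)

def calibrate(traces, target_per_trace, step=1.0, max_tolerance=15.0):
    """Widen the grace band until the average ticket count per trace
    drops to the level the reviewers have approved."""
    config = EnforcementConfig()

    def rate(c):
        return sum(count_tickets(t, c) for t in traces) / len(traces)

    while rate(config) > target_per_trace and config.tolerance_mph < max_tolerance:
        config.tolerance_mph += step
    return config

# Example: recorded traces are reviewed by the responsible officials,
# who decide that roughly one ticket per trace is the right strictness.
traces = [[64, 66, 67, 72, 73, 74], [63, 64, 70, 71, 64, 63]]
print(calibrate(traces, target_per_trace=1.0))
# EnforcementConfig(tolerance_mph=7.0)
```

The point isn’t the particular loop; it’s that once the law’s intent is expressed as parameters, adjusting enforcement becomes an ordinary acceptance-testing exercise rather than a rewrite.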

In the final analysis, though, laws are not only written by human beings, they’re also enforced by human beings. So long as the human element remains in the process in the form of judges and juries, those judges and juries will continue to weigh all the shades of gray (mitigating, aggravating, or extenuating circumstances, including the possibility of computer error).
