Additional thoughts on yesterday’s piece on robotics and the law. (The column, by the way, is behind the Chronicle’s paywall, so you may not be able to view it more than once.)
Temple notes that medical device makers are adopting user agreements similar to those computer manufacturers use with respect to warranties and “unauthorized access”. Combine that with reports going back at least to 2008 of unencrypted wifi access to pacemakers, and you come to some troubling conclusions. Put yourself in the mind of a medical device manufacturer. You want to add wireless access so that doctors can reprogram your devices without having to operate on the patient. Which is the more cost-effective solution: developing and implementing a secure communication protocol, or using standard wifi protocols with the limited security WPA offers and relying on the DMCA’s restrictions on bypassing access controls to prevent any research into your device’s functionality and safety?
Before you answer that regulators would prevent the decision from being made on the basis of cost, consider that regulators don’t get to review the software that goes into most medical devices. ZDNet reported last year on a lawyer’s attempts to review the software in her pacemaker. Medical device manufacturers are as secretive about their software as voting machine manufacturers – and the latter have already tried using the DMCA to block research into their software (see, for example, Diebold’s attempted use of the DMCA to block distribution of emails that contained information about their voting devices).
There’s an obvious solution here: give regulators the ability to review the software. I’m dubious about the likelihood of that happening, though. Given the lack of progress in giving the FDA the ability to regulate “herbal cures” and homeopathic “remedies”, there seems to be a lot of resistance to expanding its powers, especially in directions that would reduce profits. Suggestions?
Temple also raises the question of liability. If robotically controlled equipment breaks the law (for example, if a robotic car is ticketed for speeding), who bears the liability: the car, the car’s owner, or the manufacturer? I can’t see it being the car, since it has no ability to make restitution to society (in the example, the car cannot pay the speeding fine). My assumption is that liability would default to the owner: the typical user agreements that come with any computer-related equipment specifically disclaim liability on the part of the manufacturer, and I don’t see that changing. It’s a political problem, and every attempt to shift any part of the responsibility to the manufacturer has failed in the face of software industry resistance. Owner liability opens up a whole big can of worms that should keep lawyers profitably employed for decades. Can the owner be held liable if he isn’t in the car at the time of the offense? Current laws, I believe, require a licensed driver to be in the car, ready to override the robot, but I suspect those laws will be relaxed by the time robotic cars are widely available to the public. What if there’s nobody in the car at all (it’s only a matter of time before robotic cars are parking and returning to pick up the owner/driver on their own), or only minors (how long before people start sending the kids off to school by themselves)?
Interesting times ahead, folks.
PS: A Bay Bridge update for those who find the whole bolt/rod situation as interesting as I do: Caltrans has completed a “reinspection” of other components supplied by Dyson without finding any problems, and additional bolts from a later production batch have been installed and tightened without incident. However, until a decision is made about how to fix the broken bolts (or even whether a fix is needed), the Metropolitan Transportation Commission refuses to say whether the bridge will open September 3rd as planned.