Make It Didn’t Happen

So California has a new law to protect children and teenagers. Yay?

The law has two main threads: it allows minors to request the removal of any content they’ve posted and requires Web companies to comply with those requests, and it forbids companies with mobile apps from marketing to minors products that are illegal for minors to purchase.

Proponents are hailing the law as a victory for privacy and the Safety of Our Children. It’s a chance to undo mistakes and give oneself a fresh start. The problem is that the law doesn’t really do much. Consider:

  • The law only applies to content directly posted by the minor. If someone else posts an embarrassing picture or message about a minor, the law doesn’t apply. To get the content taken down, the minor would have to work through the company’s existing policies and procedures. Presumably, if most companies’ practices were adequate, there wouldn’t have been a need for this law.
  • Similarly, the law does not cover material copied from a minor’s post. If a teen were to post a potentially embarrassing photo to Facebook, for example, he could require that Facebook take it down, but could do nothing about copies residing in various archives (our old friend the Internet Archive, for example), search engine caches (Google Image Search, anyone?), or even the copy his buddy posts to his own Facebook page. Consider, too, a Twitter post: the minor could require Twitter to take down a specific tweet, but could not require the takedown of any retweets.
  • Note the use of the phrase “take it down”. Companies are not required to actually delete content, only remove it from public view. Depending on the company’s actual setup, the content might remain on the servers, vulnerable to deep linking and hacking.
  • Your 18th birthday is on Monday, so you’re partying all weekend? Better send the removal request for all those Twitter updates about where you got your fake ID, which bars you’re hitting, and just how blasted you are before midnight. The law doesn’t apply once you turn 18, so Twitter has no obligation to honor your request come Monday.

Bottom line: The law is intended to protect teenagers from the consequences of their bad judgment. What it’s actually doing is encouraging irresponsible posting and leading minors to develop bad habits. By allowing them unlimited “take backs”, it encourages a “post first, think second” mentality. Post something embarrassing or illegal? No problem! Send a take-down request and it’s gone. Until you’re 18 and head off to college. Suddenly you have to think before posting. In a new environment, with greatly reduced potential for adult supervision. Not such an easy habit to break, is it? Good luck!

Biting the Forbidden Fruit

Things have been getting sexy for Apple lately, and not in a good way.

There has long been a class of malware in the Windows world called “ransomware”. Once it finds its way onto a computer, it blocks functionality or encrypts data and then demands money.

Recently, this type of evil has made its way to the Mac in the form of a JavaScript program that takes over the Safari browser. It displays a web page claiming that the FBI has been monitoring your computer use and has detected that “You have been viewing or distributing prohibited Pornographic content (Child Porno photos and etc were found on your computer).” The code prevents you from leaving the page until you fork over a “release fee” of $300. It also exploits Safari’s ability to reload a page after a crash: if you force-quit Safari, the lock page comes right back when the browser restarts.
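For the curious, nothing here requires an actual security hole; ordinary page scripting is enough. A minimal, purely hypothetical sketch of the general “browser lock” trick (illustration only, not the actual malware code) might look something like this:

    // Hypothetical sketch of a scareware "browser lock": no exploit involved,
    // just abuse of ordinary page events and dialogs.

    // Nag the user whenever they try to navigate away or close the tab.
    window.addEventListener("beforeunload", (event: BeforeUnloadEvent) => {
      event.preventDefault();   // ask the browser to show a "leave this page?" prompt
      event.returnValue = "";   // some browsers need this to trigger the prompt
    });

    // Re-display the demand over and over, so dismissing it once doesn't help.
    for (let i = 0; i < 150; i++) {
      alert("Your browser has been locked. Pay the release fee to continue.");
    }

The endless dialog loop is why force-quitting looks like the only way out, and the crash-recovery reload is what brings the lock page right back.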

Other claimed violations in the warning message include violations of copyright and, amusingly enough, “your PC may be infected by malware”.

No word on whether paying the ransom actually returns control of your browser to you. Fortunately, you can escape quite simply without paying the ransom (the commonly reported fix is to reset Safari), but you will lose your browsing history, saved passwords, and other browser data.

So on the one hand, we have Mac users being falsely accused of possessing kiddy porn and being given the option of paying a small “fine” to sweep it under the rug. Meanwhile, Apple itself is being sued for making pornography available.

The International Business Times is reporting that a man is blaming his porn addiction on his MacBook and is suing Apple. Among his demands: that Apple include a porn filter on every device it makes that is capable of displaying pornography, and that users be required to read and agree to a consumer notice about the evils of pornography in order to turn the filter off.

Reports indicate that the plaintiff, a lawyer, has been barred from practicing law due to mental illness related to Post-Traumatic Stress Disorder, which could certainly explain some of the more… interesting claims made in the suit. Many people are pointing to the claim that Apple is harming the economy by driving sex shops out of business as an indication of the plaintiff’s mental illness. Personally, I think a better example is the claim that equates Apple’s failure to provide porn filters (as the cause of, among other things, ADHD and thrill-seeking) with the U.S. Government’s failure to invade Afghanistan (as the cause of the 9/11 World Trade Center attacks).

Contrary to what Trekkie Monster would have you believe, the Internet is not just for porn. Why, I’ve used my iPad for as much as five minutes at a time without seeing any pornography!

Ahem.

More seriously, the plaintiff’s suit would also require Apple to proactively seek out sites specializing in pornography and work with the FBI to shut them down. An interesting notion: privatizing the determination of what constitutes “pornography”. (Mr. Sevier’s complaint helpfully provides a definition: “any picture, photograph, drawing, sculpture, motion picture film, or similar visual representation or image of a person or a portion of the human body, which depicts nudity, sexual conduct, excess violence, or sadomasochistic abuse, and which is harmful to minors and adult males.”) Apparently women cannot be harmed by such material. But I digress. Other sections of the legal filing make it clear that the last clause is redundant; in Mr. Sevier’s opinion, any such depiction is harmful to adult males (and probably to minors as well).

So we’re not just talking about obscenity here, nor are we talking about legal pornography, we’re talking about any depiction of the unclothed body. And apparently anywhere in the world; Mr. Sevier seems to be unaware that the Internet is global, extending far beyond the FBI’s jurisdiction.

*sigh*

I could go on for hours – the complaint is 50 pages long, and I doubt that there’s a single page that doesn’t contain an outrage against common sense.

The problem here is that if the case isn’t thrown out immediately, Apple will be in a difficult position. The publicity they would receive in fighting the suit would do grave harm to their reputation as a “Family Friendly” company. On the other hand, not fighting the suit and adopting even some of Mr. Sevier’s proposed remedies would cost millions of dollars in creating and maintaining a porn filter that wouldn’t work to Mr. Sevier’s standards anyway (a fact that’s been widely acknowledged since at least the turn of the century). And the intermediate position of paying him a settlement to withdraw the suit and go away would subject them to the same negative publicity as fighting and open them to a potential flood of nuisance suits seeking similar settlements.

Will the suit be tossed? I certainly hope so, but I’m not hugely optimistic. It was filed in Tennessee, a state with a history of prosecuting pornography and obscenity cases across state lines (see, for example, the 1994 “Amateur Action BBS” case, in which the operators of a dial-up BBS in Milpitas, California were charged with distributing obscene materials in Tennessee). Let us hope that Apple meets a happier fate than the Thomases did.

Don’t Say It

Say what, now?

I’m as unhappy about the events playing out in Valley Springs, California as anyone else who’s not directly involved, but for the last couple of days, every story has tripped a mental fuse for me.

In case anyone has missed it, the story in question is that of eight-year-old Leila Fowler, who was stabbed to death in late April. Yesterday, her twelve-year-old brother was arrested. (No links, it’s not hard to find all the coverage anyone could want, and then some.)

What keeps tripping me up is the statement that appears in every story from yesterday and today: “His name is not being released since he is a minor.” Just to be clear, it’s not just this case; it’s every news story reporting on a juvenile accused of a crime.

Yes, I understand the desirability of keeping the names of minors out of the press, especially given the fact that an arrest is far from proof of guilt. For that matter, I hope that all of the various news agencies have updated any earlier stories that gave his name. I’m also in the apparent minority that would be happy to have his name continue to be withheld even if he is tried as an adult.

I’m not suggesting that the news media should give his name. Quite the contrary, in fact.

What I’m getting stuck on is the incessant repetition of that sentence. Is it really necessary to say the same thing every time? It wouldn’t be that hard to find out his name if one were motivated to do so – let’s face it, how many twelve-year-old brothers is she likely to have had? Repeating this sentence over and over just calls attention to the omission and dares someone to start digging.

I’ll grant you that it’s not as easy to find someone’s name as it often appears in mystery novels, but that might just make it worse. If someone goes to the effort of doing the research and learning the brother’s name, he’s going to want to do something with it, and the harder he has to work, the more likely he is to want to show off.

Really, if the paper didn’t say “His name is not being released…” would you notice? Would you care? Most of you probably wouldn’t. Those few who would care are going to care regardless of whether the disclaimer is present; at best, the disclaimer serves no function, and at worst it provokes a few people into digging.

Let’s just drop the disclaimer, state the facts, and move on.

Robot Law, Part 2

Additional thoughts on yesterday’s piece on robotics and the law. (The column, by the way, is behind the Chronicle’s pay wall, so you may not be able to view it more than once.)

Temple notes that medical device makers are adopting user agreements similar to those used by computer manufacturers with respect to warranties and “unauthorized access”. Combine that with reports going back to at least 2008 about unencrypted wireless access to pacemakers, and you come to some troubling conclusions. Put yourself in the mind of a medical device manufacturer. You want to add wireless access so that doctors can reprogram your devices without having to operate on the patient. Which is the more cost-effective solution: developing and implementing a secure communication protocol, or using standard wifi protocols with the limited security offered by WPA and relying on the DMCA’s restrictions on bypassing access controls to prevent any research into your device’s functionality and safety?

Before you answer that regulators would prevent the decision from being made on the basis of cost, consider that regulators don’t get to review the software that goes into most medical devices. ZDNet reported last year on a lawyer’s attempts to review the software in her pacemaker. Medical device manufacturers are as secretive about their software as voting machine manufacturers – and the latter have already tried using the DMCA to block research into their software (see, for example, Diebold’s attempted use of the DMCA to block distribution of emails that contained information about their voting devices).

There’s an obvious solution here: give regulators the ability to review the software. But I’m dubious about the likelihood of that happening. Given the lack of progress in giving the FDA the ability to regulate “herbal cures” and homeopathic “remedies”, there seems to be a lot of resistance to expanding its powers, especially in directions that would reduce profits. Suggestions?

Temple also raises the question of liability. If robotically controlled equipment breaks the law (for example, if a robotic car is ticketed for speeding), who bears the liability: the car, the car’s owner, or the manufacturer? I can’t see it being the car, since the car has no way to make restitution to society (in the example, it cannot pay the speeding fine). My assumption is that liability would default to the owner: the user agreements that come with any computer-related equipment typically disclaim liability on the part of the manufacturer, and I don’t see that changing. It’s a political problem, and every attempt to shift any part of the responsibility to manufacturers has failed in the face of software industry resistance.

Owner liability, though, opens up a whole big can of worms that should keep lawyers profitably employed for decades. Can the owner be held liable if he isn’t in the car at the time of the offense? Current laws, I believe, require a licensed driver to be in the car, ready to override the robot, but I suspect those laws will be relaxed by the time robotic cars are widely available to the public. What if there’s nobody in the car – it’s only a matter of time before robotic cars are parking and returning to pick up the owner/driver on their own – or only minors in the car (how long before people start sending the kids off to school by themselves)?

Interesting times ahead, folks.

PS: A Bay Bridge update for those who find the whole bolt/rod situation as interesting as I do: Caltrans has completed a “reinspection” of the other components supplied by Dyson without finding any problems, and additional bolts from a later production batch have been installed and tightened successfully. However, until a decision is made about how to fix the broken bolts (or even whether a fix is needed), the Metropolitan Transportation Commission refuses to say whether the bridge will open on September 3rd as planned.

Robot Law

Today we’re back to the SF Chronicle, which has an interesting piece by James Temple on the impact of robotics on the law. (There’s a related article on Ars Technica as well.)

Among other things, Temple discusses a recent experiment in automating the detection of traffic violations and the issuing of tickets. One group of programmers was told to follow the letter of the law; a second group was given specifications, crafted by a computer scientist and an attorney, that were intended to implement the spirit of the law. Both sets of code were then run against real-world data taken from the black box of a recent-model car. No particular surprise: the first group’s code issued more tickets than the second group’s. What did surprise me was the extent of the difference: the “letter of the law” programs averaged more than 300 times as many tickets as the “spirit of the law” programs.

Given the current trend to automate everything that can be automated, more and more responsibility for enforcing the law will be handed to automation. Temple – and the study’s authors – suggest that the discrepancy between the two extremes casts doubt on our ability to automate the interpretation of the law. Temple in particular treats the question as a binary one: “Do we write laws that more accurately get at the behavior we’re truly worried about; or do we write code with tolerance built in?” I think it’s a false dichotomy. The correct answer is to embrace the power of “and” and do both. Granted, getting lawmakers and lawyers to adjust their approach will be a slow process, but it’s not as insuperable a problem as it seems at first glance. Many laws already include guidance on how they should be interpreted. Lawyers can work with software engineers to render that guidance into a more code-friendly form. (Perhaps panels of lawyers could be chosen for the duty by random selection, the same way juries are chosen. But I digress.)

Even without changes on the legal side, though, this is something of a solved problem in the software world. Any well-run software project includes a design phase in which the developers (and yes, QA as well) have an opportunity to discuss ambiguities in the specifications; here, that phase would let the development team tune the sensitivity of the code to appropriate levels. Additionally, any well-run project gives the people who commissioned it an opportunity to confirm that the code actually meets their needs; in the legal world, that would mean building the code, running it against available data, and letting the lawmakers and judges who are ultimately responsible for enforcing the law review the results. If the code is too strict or too lenient, tweak the parameters.
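To make “tweak the parameters” concrete, here is a toy sketch (entirely hypothetical; the data layout and thresholds are invented for illustration and are not taken from the study) of how a strict, letter-of-the-law check and a tolerance-based, spirit-of-the-law check over the same speed data might differ:

    // Hypothetical sketch: strict vs. tolerant automated speeding checks.
    // The thresholds below are made-up illustrations, not values from the study.

    interface SpeedSample {
      timestamp: number;   // seconds since the start of the trip
      speedMph: number;    // measured speed
      limitMph: number;    // posted limit at that point
    }

    // "Letter of the law": any sample over the posted limit is a violation.
    function strictViolations(samples: SpeedSample[]): number {
      return samples.filter(s => s.speedMph > s.limitMph).length;
    }

    // "Spirit of the law": allow a small margin, and only flag sustained speeding.
    function tolerantViolations(
      samples: SpeedSample[],
      marginMph = 5,          // tunable: grace above the posted limit
      minDurationSec = 10     // tunable: how long the excess must persist
    ): number {
      let violations = 0;
      let runStart: number | null = null;
      for (const s of samples) {
        if (s.speedMph > s.limitMph + marginMph) {
          runStart ??= s.timestamp;             // start of a sustained run
          if (s.timestamp - runStart >= minDurationSec) {
            violations++;                       // count the sustained run once
            runStart = null;
          }
        } else {
          runStart = null;                      // back under the threshold; reset
        }
      }
      return violations;
    }

In this framing, the lawyers’ interpretive guidance boils down to choosing values like the margin and the minimum duration, which is exactly the kind of review-and-adjust loop a well-run software project already has.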

In the final analysis, though, the laws are not only written by human beings, they’re also enforced by human beings. So long as the human element remains in the process in the form of judges and juries, cases will continue to consider all of the shades of gray (mitigating, aggravating, or extenuating circumstances, including the possibility of computer error).