Well, Scoot

For anyone who hoped the Era of Disruption was almost over, I have one piece of advice: don’t hold your breath.

We’ve made some progress, but far from backing off on the importance of disruption in defining business models, today’s corporate warriors are doubling down.

That’s right, we’ve left the first period of the era and entered the second: the Period of Meta-disruption. We’re now seeing the disruptors disrupted. Nowhere is this clearer than in San Francisco.

Uber, Lyft, and their various brethren set out to disrupt the taxi industry, and in large part they’ve succeeded, especially here in the Bay Area. But now we’re getting a wave of companies out to disrupt the ride-hailing business model.

Three companies–Bird, LimeBike, and Spin–are pushing motorized scooters as superior to ride-hailing over short distances or when traffic is congested–and when is it not?

San Francisco was late in regulating ride-hailing (just as it was late in regulating short-term rentals) and the Board of Supervisors is determined to get ahead of the curve on scooter rentals.

Frankly, they don’t have a choice.

The model all three companies are pursuing is “convenience”. They want to be sure there’s always a scooter nearby. That means depositing caches of them in high-traffic areas and encouraging users to spread them around by leaving them at the end of their rides.

Which is great for the companies, of course, but not so great for the general public who wind up dodging scooters left on the sidewalk, in bus zones, truck loading zones, doorways, and basically anywhere there’s enough room for them.

And that’s without even considering the impracticality of forcing riders to abide by state and city laws requiring helmets and forbidding riding on the sidewalk. After all, if the app refuses to unlock the scooter unless the customer is wearing a helmet, nobody will bother with the service.

I do agree that it’s not the rental companies’ job to enforce the law, but they could certainly do a better job of reminding riders that they shouldn’t ride on the sidewalk. Give ’em a great big warning–a sticker on the footboard, or a click-through screen in the app–and let the police take it from there.

On the other hand, it shouldn’t be necessary to get law enforcement involved on the parking end. It should be technologically possible to use the phone’s camera to take a picture of the parked scooter and then use a bit of AI to determine whether it’s been left in a safe spot. If not, just keep billing the user until they move it*. At fifteen cents a minute, people will figure out fairly quickly that it behooves them to not leave the thing where someone will trip over it.

* Or until someone else rents it, of course. Double-charging would be unethical.
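
For the curious, here’s roughly what that end-of-ride check might look like. A minimal sketch, assuming a hypothetical parking classifier and billing flow–every name in it is my invention, not Bird’s, LimeBike’s, or Spin’s:

```python
# A minimal sketch, assuming a hypothetical classifier and billing API.
# None of this is any scooter company's actual code.

RATE_CENTS_PER_MINUTE = 15  # the fifteen cents a minute mentioned above

def looks_safely_parked(photo: bytes) -> bool:
    """Stand-in for the 'bit of AI': a model that decides whether the
    photographed scooter is clear of sidewalks, doorways, bus zones,
    and loading zones. Stubbed out so the sketch runs."""
    return False  # pretend the model flagged a bad parking job

def try_end_ride(photo: bytes) -> bool:
    """Close out the ride only if the parking photo passes the check."""
    if looks_safely_parked(photo):
        return True  # billing stops; ride is over
    print(f"Not parked safely: still billing at {RATE_CENTS_PER_MINUTE} cents/minute. "
          "Move the scooter and try again.")
    return False  # the meter keeps running

try_end_ride(photo=b"jpeg bytes from the phone camera")
```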

All that said, despite the back-and-forth in the press between the City and the companies, I haven’t seen anyone address the question of privacy.

By design, the apps have to track users: where did they pick up the scooter, where did they leave it, where did they go, and how long did it take? All tied solidly to an identity (or at least to a credit card).
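
To make that concrete, here’s a guess at what a single ride record might contain. The field names are pure invention on my part; the substance is just the list above:

```python
# Hypothetical: my field names, not any company's actual schema.
ride_record = {
    "user_id": "u_8675309",             # tied solidly to an identity...
    "card_fingerprint": "visa-1234",    # ...or at least to a credit card
    "pickup": (37.7793, -122.4193),     # where they picked up the scooter
    "dropoff": (37.7858, -122.4064),    # where they left it
    "route": [(37.7793, -122.4193),     # where they went,
              (37.7858, -122.4064)],    # GPS breadcrumb by breadcrumb
    "duration_minutes": 11,             # and how long it took
}
```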

Who gets access to that information? Do the companies sell information to advertisers? Do the apps continue to track customers between scooter rentals?

Don’t forget, these companies think the way to launch their businesses is to dump a bunch of scooters on the street and let the market sort things out. Do you really want them knowing you used your lunch hour to visit a doctor? A bar–or maybe a strip club? How about a political demonstration?

Uber has certainly been tagged for over-zealous information collection. What safeguards do LimeBike, Spin, and Bird have in place to protect your identity?

That Switch On Your Dashboard

Well, it’s been almost a month since I bitched about the impending End of Civilization As We Know It as brought about by drivers. That’s long enough that I hope you’ll indulge me in another rant along the same lines.

It’s not about the idiots who weave in and out at high speed. They’ve upped their game: it’s no longer enough of a thrill for them to zip across three lanes, missing four cars by no more than six inches, on rain-slick pavement. They’ve begun doing the same thing with the driver’s door open. Yes, really. Saw it myself a couple of days ago.

Nor is it about the lunatics who believe 35 is the minimum speed on residential streets, though Ghu knows there are plenty of those.

No, today’s complaint is about the people who’ve either forgotten or never learned the rules for using their high-beams. As best I can tell, based on this weekend’s random sampling, this group amounts to roughly 90% of the drivers on the road.

The rules aren’t difficult. There are only two.

  1. When approaching the top of a hill or coming around a blind curve, turn the high-beams off.
  2. When following another car–especially if you’re tailgating–turn the high-beams off.

That’s it.

They both boil down to the same bit of common sense: don’t blind a driver who might collide with you if they can’t see.

I don’t blame video games for violent behavior. But I’ve gotta admit it’s really tempting to blame them for stupid behavior.

People, there’s a reason why I haven’t hooked up my Atari 2600 in decades, and it’s not that I can’t find the cables. I sucked at “Night Driver”. Okay, yes, I made it through the other day’s unplanned real-life version* unscathed. Doesn’t mean I enjoyed it, especially on the higher difficulty/no vision setting.

* Is Live Action Videogaming: Ancient (LAVA) a thing? If not, maybe it should be. If it gets a few of the idiots off the road and…uh…on the road, um…

Hang on, let me rethink this one.

Neglect

Google Photos can be scary.

Not their facial recognition, although that can be startling.

Certainly not the automatic grouping of photos by date and location; that’s downright useful.

No, I’m talking about the Assistant function. I think The Algorithm gets bored sometimes.

It’ll find similar pictures and stitch them together into an animated GIF, or apply some crazy color scheme and call it a “stylized photo”. Most of them are useless, and I just delete them.

But every so often, it’ll decide one of the cats isn’t getting enough attention, and it’ll go to a very weird place.

A couple of days ago, it decided Yuki was feeling neglected, and as a result, it created this.

Google Photos. Don’t neglect your cats.

Face It

Thousands–perhaps tens or hundreds of thousands–of people are deleting their Facebook accounts in the wake of the Cambridge Analytica scandal.

And that’s great. I look forward with great anticipation to the day when the exodus reaches critical mass and I can delete my own account.

Keep in mind, I created my account when I started doing the writing thing. In today’s world of publishing, the best thing you can do for yourself as an author is to promote your books. And the best–the only–way to do that is to go where the people are.

It doesn’t do much good to do promotion on MySpace, LiveJournal, or any place else your potential readers aren’t. Today, that means Facebook. Yes, Twitter to a lesser extent. Much lesser.

At Facebook’s current rate of decline, I should be able to delete my account around the end of 2020. And that’s the best-case scenario.

I’m assuming here that Facebook’s claimed figure of two billion users is grossly inflated. I’m also assuming a million account deletions a day, which is, I suspect, also grossly inflated.
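
The back-of-the-envelope version, using my own guesses for both numbers:

```python
# My guesses, not Facebook's figures: call the real user base one billion
# (half the claimed two) and take a million deletions a day at face value.
users = 1_000_000_000
deletions_per_day = 1_000_000
years = users / deletions_per_day / 365
print(round(years, 1))  # ~2.7 years: from early 2018, right around the end of 2020
```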

‘Cause, as Arwa Mahdawi said in The Guardian, “…there is not really a good replacement for Facebook.” She quotes Safiya Noble, a professor of information studies at USC: “For many people, Facebook is an important gateway to the internet. In fact, it is the only version of the internet that some know…”

And it’s true. Remember when millions of people thought AOL was the Internet? I think they’ve all moved to Facebook.

They’re not going to delete their accounts. Neither are the millions of people who say “You don’t have anything to be concerned about from surveillance if you haven’t done anything wrong.” Ditto for the people who still don’t regret voting for Trump and the ones who say “There are so many cameras watching you all the time anyway, what difference does it make if Facebook is watching too?”

Even if there’s a lot of overlap among those groups, that’s still hundreds of millions of accounts.

(Why isn’t the paranoid fringe–the people who literally wear aluminum foil hats to keep the government from controlling their minds–up in arms about Facebook? Is it only because they’re not “the government”? Or am I just not looking for their denunciations in the right places?)

Facebook isn’t going away any time soon. Not until the “new hot” comes along. If the new hot isn’t just Facebook under another name. Don’t forget that Instagram and WhatsApp are Facebook. They’re watching you the same way the parent company is, and if one of them captures the next generation of Internet users, it’ll be “The king is dead! Hail the new king, same as the old king!”

Unfortunately, stereotypes aside, those people who are staying on Facebook do read. And that means I need to keep my account open, touting my wares in their marketplace.

I’ve seen a number of people saying “If you can’t leave Facebook, at least cut down the amount of information you give them.” Which is good advice, but really tricky to do. Even if you follow all of the instructions for telling Facebook to forget what they already know, there are other things they track. You can tell them to forget what you’ve liked, but you can’t tell them to forget how long you looked at each article. (Yes, they do track that, according to credible reports. The assumption is that their algorithms give you more posts similar to ones you’ve spent a long time on.)
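
If the reports are right, the mechanics might be as simple as this toy version–pure guesswork on my part, and certainly not Facebook’s actual algorithm:

```python
# A toy of dwell-time-driven ranking. Posts whose topics you lingered on
# get boosted the next time around.
from collections import defaultdict

def record_dwell(profile, post_topics, seconds_viewed):
    """Weight each topic of a viewed post by how long the user looked at it."""
    for topic in post_topics:
        profile[topic] += seconds_viewed

def score(post_topics, profile):
    """Rank a candidate post by accumulated dwell time on its topics."""
    return sum(profile[t] for t in post_topics)

profile = defaultdict(float)
record_dwell(profile, ["cats", "privacy"], seconds_viewed=45)
record_dwell(profile, ["politics"], seconds_viewed=3)

candidates = {"more cat pictures": ["cats"], "tax policy deep dive": ["politics"]}
ranked = sorted(candidates, key=lambda p: score(candidates[p], profile), reverse=True)
print(ranked)  # ['more cat pictures', 'tax policy deep dive']
```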

And then there are those apps. Those charming, wonderful apps.

I checked my settings to see how many apps I’d allowed to access my information. There were only eight, which puts me way down at the low end of the curve. It’s down to four now, two of which are necessary to have my blog posts show up on Facebook. And when I killed off two of the four, I got popups reminding me that removing their access to Facebook does not delete any data they’ve already gathered.

Should I be concerned that I didn’t get a warning about the other two?

But let’s assume a miracle. Say, half a billion accounts get closed. The FTC fines Facebook an obscene amount of money*. What happens next?

* They almost have to. How many of those 50,000,000 accounts compromised by CA belong to government officials? Officials who are now very worried about what CA–and thus whoever they’ve shared that data with, starting with the Trump family, the Russian government, and who knows who all else–has inferred about their non-governmental activities, health, sexual orientation, and so on. If the FTC doesn’t hammer Facebook, heads will roll, no matter who has control of Congress after the November elections.

Absolutely nothing. Facebook goes on. They make a show of contrition, talk up new controls they’ve put in place to keep anything of the sort from happening again*. And they keep marketing users’ personal information to anyone who might want to advertise.

* It will. We’ve seen every form of access control ever invented hacked. The information exists, it’s valuable, therefore someone will steal it.

That’s their whole business model. They can’t change it. The only thing that might–and I emphasize “might”–kill Facebook would be for them to say, “You know, you’re right. It’s unethical for us to make money by selling your private information. We won’t do it any more. Oh, and effective immediately, Facebook will cost you $9.99 a month.”

Not Even Close

Now there’s a misleading headline!

According to CBS Denver, “Startup Offers ‘100 Percent Fatal’ Procedure To Upload Your Brain”.

Even a cursory reading of the article, something the headline writer must have neglected to do, reveals quite a different story.

What Nectome is actually offering to do is plasticize not-quite-dead people. Or maybe “glassticize” would be a better word; the article says the process will turn a body into “a statue of glass” that will last for centuries.

Regardless, there’s no cloud upload involved. The founders of the company are just hoping to preserve bodies at the instant their process kills their clients in the hope that someday there will be a way to read the memories locked in the glass brains and computerize them.

Assuming this isn’t a hoax–and it wouldn’t be the first time a news agency has been fooled–it’s still a horribly speculative notion. Reaching their goal would require at least three major and separate medical and technological breakthroughs:

  1. There’s no evidence that memories are preserved in the brain after death.
  2. Nobody is anywhere close to reading memories out of a living brain, much less a dead one.
  3. AI technology capable of preserving a human mind is even farther from realization.

I see only one significant difference between Nectome’s approach and the bizarre idea of cutting someone’s head off after they die and freezing it in the hope science will eventually be able to unfreeze it intact and grow it a new body: if you get Nectomed, your heirs can stand you up in the corner of the living room, instead of paying thousands of dollars to a cryogenic facility.

Someone needs to remind Nectome’s founders that it’s only in the performing arts that you can legitimately suggest that someone go out and knock ’em dead.

Salon’s Experiment

Have you heard about Salon’s experiment in revenue generation?

Like most sites offering free content, they show ads to bring in money. And like most ad-supported sites, they’ve been hit hard by the rise of ad-blocking software. So they’re exploring other ways to bring in the bucks.

One of those methods is cryptocurrency mining. If the site detects an ad blocker in use, it’ll pop up a dialog asking the visitor either to disable the blocker for Salon or to allow Coinhive’s mining software to run on the visitor’s computer while they’re looking at the site.

It’s interesting to note that the software they want to run on visitors’ computers is the same mining software used by any number of porn, piracy, and malware sites. The only difference is that Salon asks for permission before launching it.

Which does make me wonder how much money the ads have been bringing in. According to Ars Technica, the software doesn’t generate much cryptocurrency, and Coinhive only passes a small fraction of the proceeds to the site that deployed the software. If that’s enough to make up for the lost ad revenue, it suggests Salon is hurting for bucks.

But I digress.

I approve of Salon earning money they can use to pay their writers (and editors, techies, and even managers*). And I certainly approve of them being upfront about what they’re doing.

* Of course, it’s the writers I really care about, for obvious reasons. Everyone else is there to support the writers, right?

Their experiment won’t affect me directly. I don’t use an ad blocker–although I do use the EFF’s Privacy Badger tool which some sites treat as an ad blocker–and I can’t remember the last time I visited Salon.com.

But cryptocurrency mining is CPU-intensive, and I do tend to keep a lot of browser tabs open. I worry that if the idea catches on, I’ll wind up with half a dozen sites all trying to use my computer to make money at the same time. That seems like a recipe for browser crashes and an unresponsive computer.

Still, it’s an experiment, and if it’s successful, it should mean fewer ads–and hopefully fewer obnoxious ads–to ignore while I’m browsing.

And we’ll see how it works out.

Duck and Cover

Hopefully by now you’ve heard that Hawaii was not attacked with ballistic missiles Saturday. It was, however, attacked by poor software design or, quite possibly, poor QA.

Let’s recap here.

The Hawaii Emergency Management Agency erroneously sent a cell phone warning message to damn near every phone in the state. The message warned of an incoming missile attack. Naturally, this caused a certain amount of chaos, confusion, fear, and panic.

Fortunately, it did not, as far as I can tell, cause any injuries or deaths, nor was there widespread looting.

The backlash has been immense. Any misuse of the cell phone emergency warning message system is going to trigger outrage–does anyone else remember the commotion back in 2013 when the California Highway Patrol used the same functionality to send an AMBER alert to phones across the entire state of California?

Many people turned off the alert function on their phones in the wake of that and similar events elsewhere–although, let’s not forget that one level of warnings cannot legally be turned off. I don’t know if HEMA used the “Presidential” alert level–certainly a nuclear attack would seem to qualify for that level of urgency–but it may be that only the White House can send those messages.

For the record, my current phone doesn’t allow me to disable Presidential or Test messages; the latter seems like an odd exclusion to me. In any case, I’ve turned off AMBER alerts, but have left the “Severe” and “Extreme” messages on. I suspect many who have gotten spurious or questionable alerts have turned those off.

Which puts those charged with public safety in an awkward position. The more often they use the capability, the more people are going to turn off alerts. I hope the people looking into a California wildfire alert system are keeping these lessons in mind.

But I digress. I had intended to talk about the Hawaii contretemps from a software perspective.

The cause of the problem, according to a HEMA spokesperson, was that “Someone clicked the wrong thing on the computer.” Later reports “clarify” that “someone doing a routine test hit the live alert button.” I put “clarify” in quotes, because the explanation actually raises more questions than it answers.

See, for a test to be meaningful, it has to replicate the real scenario as closely as possible. It would be unusual to have one button labeled “Click Here When Testing” and a second one that says “This Is the Real Button.” The more typical situation is for the system to be set to a test mode that disables the connection to the outside world or (better yet) routes it to a test connection that only sends its signal to a special device on the tester’s desk.

Or heck, maybe they do have a test mode switch, and the poor schlub who sent the alert didn’t notice the system wasn’t in test mode. If so, that points to poor system design. The difference between modes should be dramatic, so you can tell at a glance, before clicking that button, how the system is set.
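
Here’s the sort of thing I mean–a hypothetical console, since none of us knows what HEMA’s system actually looks like:

```python
# Hypothetical only; nobody outside HEMA knows what their console looks like.
import enum

class Mode(enum.Enum):
    TEST = "TEST"
    LIVE = "LIVE"

class AlertConsole:
    def __init__(self, mode: Mode):
        self.mode = mode

    def banner(self) -> str:
        """Make the current mode impossible to miss before any button gets clicked."""
        if self.mode is Mode.TEST:
            return "*** TEST MODE: alerts go only to the desk unit ***"
        return "!!! LIVE MODE: alerts go to every phone in the state !!!"

    def send(self, message: str) -> None:
        print(self.banner())
        if self.mode is Mode.TEST:
            self._send_to_desk_unit(message)    # the device on the tester's desk
        else:
            self._broadcast_statewide(message)  # the real thing

    def _send_to_desk_unit(self, message: str) -> None:
        print(f"[desk unit] {message}")

    def _broadcast_statewide(self, message: str) -> None:
        print(f"[EVERY PHONE IN HAWAII] {message}")

AlertConsole(Mode.TEST).send("BALLISTIC MISSILE THREAT INBOUND. THIS IS A DRILL.")
```

The point of the design: the mode lives in one place and gets shouted before every send, so nobody has to remember to check a switch.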

If it’s not poor design, the reports suggest some seriously poor test planning. Though I should emphasize that it probably wasn’t a failure on QA’s part. They probably wanted a test mode, but were overruled on cost or time-to-launch concerns.

Wait, it gets better: now we’re hearing the problem has been solved. According to the news stories, “the agency has changed protocols to require that two people send an alert.” In other words, the problem hasn’t been fixed at all. The possibility of a mistaken alert may have been reduced, but as long as people can click on a live “Send an alert” button while testing, they will.

Better still, by requiring two people to coordinate to send an alert, they’ve made it harder to send a real message. Let’s not forget that emergency messages are time-critical. If the message is warning of, say, a nuclear attack or a volcanic eruption, seconds count.

But have no fear: the Homeland Security Service assures us that we can “trust government systems. We test them every day.”

How nice. In the immortal words of Douglas Adams, “Please do not push this button again.”

Nanny Speaks

A final thought on Spectre and Meltdown: while you’re updating your systems, don’t forget about your video cards. Modern cards have powerful processors. Even if the card itself isn’t vulnerable, there could be interactions between the video card and the main CPU that could be exploited. Nvidia is currently releasing new drivers that eliminate at least one such vulnerability.

Moving on.

In the latest sign of the impending Collapse of Civilization, a couple of Apple’s shareholders, the California State Teachers Retirement System and Jana Partners, are demanding that Apple modify their products to avoid hurting children.

Let that sink in for a moment.

Okay, ready to continue. Yes, there is evidence that overuse of smartphones (or, I suspect more accurately, apps) can result in feelings of isolation, anxiety, and depression. But the key word there is “overuse”.

The groups say that because the iPhone is so successful, Apple has a responsibility to ensure its products aren’t abused.

Apparently, less-successful companies don’t have a similar responsibility to their users, but leave that aside.

Apple certainly doesn’t have an unfulfilled legal responsibility here. So I’m assuming the groups believe Apple’s responsibility is moral. The same moral responsibility that forces companies that make alcoholic beverages to make them less attractive to teenagers and to promote them in ways that don’t make them seem cool. Ditto for the companies that make smoking and smokeless tobacco products, automobiles, and guns.

There are bigger, more important targets for Jana Partners and CSTRS to go after, in other words. But leave that aside too.

What their argument seems to boil down to is that Apple isn’t doing enough to protect the children who use their devices.

Keep in mind that parents can already set restrictions in iOS to limit which apps kids can use (including locking them into one specific app) and to require parental approval to buy apps or make in-app purchases.

The groups’ letter asks that Apple implement even finer degrees of control, so that parents can lock out specific parts of apps while allowing access to others.

Technically, that could be done, but it would be a programming and testing nightmare–and make customer support even more hellish than it already is. Every app would have to be modularized far more completely than they are now. That often results in apps getting larger and more complicated as critical functionality gets duplicated across the app, because developers can’t count on being able to invoke it from another module.
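
To see why, here’s a toy version of per-feature gating–my invention, and nothing like Apple’s actual restrictions API. The check itself is trivial; the nightmare is that every entry point into every feature of every app needs one, and every combination of settings needs testing:

```python
# Hypothetical per-feature gating; not Apple's real API.
PARENTAL_BLOCKLIST = {"messaging.strangers", "camera", "purchases.in_app"}

def feature_allowed(feature: str) -> bool:
    """Return False for any feature a parent has locked out."""
    return feature not in PARENTAL_BLOCKLIST

def open_camera() -> None:
    # Every feature entry point in every app needs a check like this one.
    if not feature_allowed("camera"):
        raise PermissionError("Blocked by parental controls")
    print("camera opened")

try:
    open_camera()
except PermissionError as err:
    print(err)  # Blocked by parental controls
```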

And just how fine-grained would it have to be? Could a parent prevent their kids from, say, messaging anyone with certain words in their user name? Or only prevent them from messaging anyone? Would Apple have to implement time-based or location-based restrictions so certain parent- or teacher-selected functions couldn’t be used at school?

How about a camera restriction that prevents teenagers from taking pictures of anyone under the age of eighteen? That’ll stop sexting dead in its tracks, right?

The groups’ other suggestion is that Apple implement notifications to remind parents to talk about device usage with their kids.

Sorry, but if the parents aren’t already paying attention to what their offspring are doing on their phones, popups aren’t going to suddenly make them behave responsibly.

And that’s really where the responsibility lies: with parents. Responsible parents don’t buy their underage children booze and smokes, they don’t let their kids get behind the wheel on I-5 before they have a driver’s license, and they don’t leave their guns where their rugrats can get to them.

It’s a good goal, guys, but the wrong approach.

The Spectre of Meltdown

I’m seeing so much “OMG, the Earth is doomed!” noise about Meltdown and Spectre, the recently revealed CPU bugs, I just couldn’t resist adding my own.

I know some of you have managed to miss the fuss so far, so here’s a quick rundown of the problem: All Intel CPUs and some other manufacturers’ chips are vulnerable to one or both of a pair of recently discovered issues. That includes the Apple-designed chips in iPhones and iPads; many of the CPUs in Android phones; some, if not all, AMD CPUs; and every Intel processor from the Pentium* on.

* I find it ironic that the bug dates back to the Pentium. Turns out that chip’s early inability to do division was the least of its problems.

Both bugs are related to something called “speculative execution”. The brief explanation is that in order to give faster results, CPUs are designed to guess what work they’ll have to do next and work on it when they would otherwise be idle. If they guess right–and a huge number of engineering hours have gone into establishing how to guess and how far ahead to work–the results are already there when they’re needed. If not, the wrong guesses are thrown away.

The details are way too deep for this blog, but the upshot is that because the bugs are in the hardware, no perfect fix is possible. Meltdown can be patched around, but Spectre is so closely tied into the design of the chips that it can’t realistically be patched at all. It’s going to require complete hardware redesigns, and those aren’t going to come soon. I’ve seen articles speculating that it could be five years before we see Intel CPUs completely immune to Spectre.

Personally, I suspect that’s insanely pessimistic. Yes, it’s a major architecture change, but Intel’s motivation is huge.

More worrisome is how many other hardware bugs are going to turn up, now that researchers are looking for them. Even if we get Spectre-free Intel chips this year–which is as optimistic as five years is pessimistic–the odds are overwhelmingly good we’ll see more such bugs discovered before the Spectre fix rolls out.

It’s also worth noting that the patches for Meltdown aren’t cost-free. According to Intel, depending on what kinds of things you do, you could see your computer running anywhere from five to thirty percent slower. Let’s be blunt here: if you mostly use your computer for email, looking at pictures, and web surfing, you’re not going to notice a five percent drop. You might not even notice thirty percent–but your workload isn’t going to be the kind that has a thirty percent slowdown*. The people who will get the bigger hits are the ones doing work that already stresses their CPUs: video processing, crunching big databases, serving millions of web pages, and so on.

* Unless some website hijacks your computer to mine cryptocurrency. But if that happens, you’d notice your computer slow down anyway.

So the bottom line here: Eventually, replacing your computer will be a good idea, but we’re not there yet. (And yes, given the speed and power increases we’re going to see between now and then, even if it’s possible to just upgrade the CPU, it’ll probably make more sense to replace the whole computer.) And in the meantime, unless you’re running a big server, do what you’ve been doing all along: keep your OS up to date with all the vendor patches, don’t run programs from untrusted sources, and if your search engine tells you a web site is dangerous, don’t go there!

All the News

Kind of a strange news day yesterday.

It started with the Amtrak train derailment in the Seattle area. Nothing inherently weird about the story itself–sad, depressing, and dispiriting, yes, but not weird. What was odd was that the first mention of it I saw was a tweet linking to a news report on an Irish newspaper’s website.

I have mixed feelings about what Robert Heinlein described as “the unhealthy habit of wallowing in the troubles of five billion strangers.” “Think globally, act locally” is appropriate in many cases–climate change springs immediately to mind–but are we really better off as a species when we can find out about every disaster, no matter how small, anywhere in the world? Maybe if the small triumphs were as widely reported as the failures.

But I digress. My original point was that I find it fascinating that not only does news travel so quickly, but so does news about the news. Taken by itself, I find that cause for a certain amount of optimism: it shows that transparency has never been a more attainable goal.

A couple of thoughts about the accident, as long as we’re on the subject. It’s laudable that Amtrak took steps to move their passenger service onto tracks not used by freight service. In theory, sharing tracks shouldn’t be a problem. In practice, the revenue generated by hauling freight has resulted in those trains being given absolute priority. The result has been ever-increasing unreliability in the passenger service, which results in lower ridership, which widens the income gap, and around we go in a spiral that makes it harder and harder to sustain the passenger side of the business.

So there’s that. But the fact that the accident occurred on the very first run over those new tracks suggests strongly that driver training was inadequate. Combine that with American railroads’ persistent unwillingness or inability to adopt train control safety technology that’s been in use everywhere else in the world for decades, and an accident of this severity seems inevitable.

It’s almost enough to make one start thinking in terms of conspiracy theories. Emphasis on “almost”.

Anyway, back to the news.

We also had an unusual example of synchronicity here in the Bay Area. Sunday night, a Richmond police officer began walking around a San Francisco hotel. He was allegedly talking about spirits for some time before he fired half a dozen shots, apparently into the walls. Eventually, he surrendered to the San Francisco police.

Then, apparently to balance the scales in some kind of karmic sense, on Monday a San Francisco police officer pulled into a parking lot in Richmond and shot himself. According to the Chron, he was under investigation, and he was being pursued by a Richmond police officer.

The timing of the two incidents is, of course, coincidental, but they did add a bit of surreality to the day.