Blinders

They can’t all be winners, I suppose.

Ideas, that is.

Case in point, I’ve been sitting here for the last hour, trying to make something entertaining out of my recent discovery that Google Calendar supports time zones.

The key word there, of course, is “trying”.

It’s a useful feature, especially when dealing with an event that spans multiple time zones (hello, plane flight to Missouri). But entertaining? Not so much.

There’s some minor humor in the fact that the feature has been around for nearly a decade–the oldest references I can find to it date back to 2011–but I only discovered it last week. And you all trust me to be on the cutting edge. Sorry about that.

Maybe it says more about the user interface designers than it does about me. Google does have something of a fetish for hiding controls behind menus, so they can display the actual information in a sea of whitespace.

That’s a fetish they share with Apple, by the way. Which means most of the rest of the tech industry falls in line. Arguably, it’s an improvement over the previous state of affairs, where every possible control was squeezed onto the main screen, or at most, moved one menu level down.

There are still some holdouts in the old style–Microsoft’s Ribbon Bar, I’m looking at you–but I digress.

In any case, I can’t blame the UI here. There’s a prominent “Time Zone” button right next to the date and time fields on the event creation/edit page.

Clearly, there’s a lesson here about willful blindness, seeing only what we expect to see, and the triumph of imagination over reality.

Puts a whole different light on climate change deniers, Trump supporters, and anti-vaccination activists, doesn’t it? It’s not that they’re denying the evidence. They literally don’t see it, even though it’s right in front of them.

Not that that’s a legitimate excuse. The Time Zone button is right there, whether or not I saw it.

Does make me wonder what else I’m missing out on, though.

Google I/O 2019

Welcome to my annual Google I/O Keynote snarkfest.

In years past, I’ve used Ars Technica’s live blog as my info source, but this year it appears they’re not at Google I/O. So all the snark that’s fit to print comes to you courtesy of Gizmodo’s reporting.

My apologies, by the way, for the later-than-usual post. Blame it on Rufus. No, not really. Blame it on Google for scheduling the I/O keynote speech at 10:00. But I did have to duck out to take Rufus to the vet for a checkup. He’s fine. The keynote is over. I’m caught up. Enjoy your post.

First up, Google is bringing augmented reality to search on phones. The demo involves getting 3D models in your search results. You can rotate them to see all sides and you can place them in the real world with an assist from your phone’s camera. Why do I suspect the porn industry is going to be all over this technology?

Seriously, though, it’s part of an expansion of the Google Lens technology we’ve been seeing for the past few years, integrating it into search. Other enhancements to Lens include the ability to highlight popular dishes on a menu and to display videos of recipes being made when you point the camera at a printed recipe.

Does anyone really want these features? If I’m at a restaurant, I’m going to pick the dish that sounds the tastiest, not the one the most people have ordered. My tastes aren’t necessarily yours, after all, and sometimes it’s the odd little dishes tucked away in the corner of the menu that are the most interesting. As for the cooking videos, I try to keep my phone in the case in the kitchen. I’d rather not wind up preparing pixel ‘n’ cheese or nexus stew. Silly of me, I know.

Anyway.

Remember last year’s big feature? Duplex, in case your memory is as short as mine. That’s the feature that let your phone make reservations on your behalf. Did anyone use it? Maybe a few people will try this year’s iteration which can make car reservations and buy movie tickets. I can’t say I’m thrilled at the possibilities this opens up.

Assistant, the voice behind “Hey, Google,” gets an update this year, as well. It’ll be able to figure out what you mean by personal references. Want directions to your mother’s house? Just ask. Because it’s good to know that, when you can’t remember where your relatives live, Google can.

Slightly more useful is a new driving mode, intended to reduce distractions. Speaking as someone who nearly got rear-ended yesterday by someone looking at the phone in her lap, I think the only legitimate “driving mode” would be one that turns the damn phone off as soon as you start the engine. Not that anyone is going to implement that.

Moving on.

Google is very, very sorry for whatever biases their machine learning technology has revealed. They’re working very, very hard to reduce bias.

Let’s be honest here. The problem isn’t the machine learning tools. It’s the humans who select the data that the machines learn from. Fix the developers’ biases and the machines fix themselves.

Onward.

More privacy features. Which seem to boil down to giving people more ability to delete whatever Google knows about them, but precious little to prevent them from learning it in the first place.

Oh, wait, one exception: there’s going to be an incognito mode for Maps, so you can get directions to the doctor’s office without Google being easily able to tie the request to your earlier searches. They’ll still know someone searched for the office and there are a number of ways they could tie it to you, but at least they’ll have to work for the data.

I’m a big fan of incognito mode in the browser, and I hope they roll it out everywhere sooner rather than later–and that’s no snark.

Furthermore.

Generating captions for videos on the fly seems like an interesting, if somewhat niche application. Applying the same technology to phone calls, though… If Google can pull that one off, it’d be a big win for anyone who’s ever tried to take a call in a noisy environment or even just sworn at the lousy speaker in their phone. Yes, and for those whose hearing isn’t the aural equivalent of 20/20 vision.

Looks like there’s a related effort to teach their voice recognition software to understand people with conditions that affect their speech. The basic idea there is good–but Google needs to beware of inappropriate extensions of the technology.

Correctly interpreting the speech of someone who’s had, say, a stroke, is a good thing. Suggesting that someone see a doctor because there are stroke-like elements in their speech is moving into dangerous waters, ethically speaking.

On to Android Q.

Support for folding devices, of course. That was inevitable. Moving apps from one screen to another, either literally or figuratively (when the device is folded and the screen dimensions change, for example).

Improved on-device machine learning, which will let phones do voice recognition themselves without help from Google’s servers. That’s a win for privacy and data usage.

Dark mode. Personally, I dislike dark mode; I find white text on a black background hard to read. But I know others feel differently. So enjoy, those of you who like that kind of thing.

More privacy features, including new controls over which apps have access to location data and when they have it.

OS security updates without a reboot? Would that Windows could do that. It’s a small time-saver, but worthwhile.

Focus Mode–which will also be retrofitted to Android Pie–maybe somewhat less useful: you can select apps to be turned off in bulk when you turn on Focus Mode. If the goal is to get you off your phone, this seems like a fairly useless diversion, because who’s going to put their important apps on the list? It does tie in with expanded parental controls, though, so there’s that.

Moving on.

Like your Nest thermostat? That’s cool. (sorry) Now all of Google’s smart home gear will be sold under the Nest name. I guess they figured with the demise of “Nexus,” there was an opportunity for an “N” name to distinguish itself.

So, no more “Google Home Hub”. Now it’s “Nest Hub”. Expect similar rebranding elsewhere. It looks, for instance, like Chromecast (remember Chromecast?) will be moving to Nest. NestCast? Or something stupid like “Google Chromecast from Nest”?

And, speaking of Pixel–we were, a few paragraphs back–we’re getting cheaper Pixel phones, as expected.

The 3a and 3a XL, starting at a mere $399, and coming in three colors. (Yes, we see what you did there, Google.) The usual black and white, naturally, but also something Google is calling purple. Looking at the photos, I’d say it’s faintly lavender, but maybe it’s the lighting.

Judging by the specs, it sounds like you’ll get roughly Pixel 2 levels of performance, except for the camera, which should be the same as the high end Pixel 3 models.

And, unlike Apple, who preannounce their phones*, the Pixel 3a devices are available online and in stores now.

* Remember signing up to get on the list to pre-order an iPhone? Fun times.

Moving on.

Bottom line: once again, we’re not seeing anything wildly new and different here. Granted, some of the incremental advances over the past year are large, but they’re all still evolutionary, not revolutionary.

And no, there weren’t any hints about what the Q in Android Q stands for.

A Happy Thought

There may actually be some positives to the impending arrival of self-driving cars.

Yeah, I know, that sounds odd coming from me, doesn’t it? But it’s true.

Consider the case for predictability. Many of the concerns I see about self-driving cars boil down to “How do I know what it’s going to do in Situation X?”

How do you know what any driver is going to do in Situation X? In truth, you mostly don’t. You guess, based on your experience, your familiarity with the law, and what you’ve seen of the driver’s behavior. Usually you haven’t seen much of the last. If they’re doing something blatant–speeding, weaving back and forth across the lanes, going the wrong way on a one-way street–you pay special attention to them. But for the majority of drivers, you’re guessing.

How many times have you seen someone sit at a green light because traffic is backed up from the next light, so there’s no place to go? Not very often, at least around here. The default assumption is that two or three cars will move into the intersection and still be sitting there when the light changes. Now the cars on the cross street are blocked. Presto! Instant traffic jam.

And yet, this morning I saw three different cars waiting at green lights for the traffic ahead of them to move. It was startling enough that I took special note.

My point is that over time, we’ll build up a category of experience specific to self-driving cars. We’ll assume they’ll wait at green lights instead of blocking intersections*. And we’ll make that assumption because we’ll see them do it every time. Not just on days when they’re not late for work, didn’t have a fight with a loved one, or feel like being passive-aggressive.

* Assuming they weren’t programmed by Bay Area drivers.

We’ll be able to make better predictions about what they’re going to do than we will about all those human drivers on the road.

That’s a good thing, but here’s an even better one.

Even a tiny number of self-driving cars on the road have the potential to break up traffic jams before they start.

Seriously.

Okay, I’ll admit I’m extrapolating wildly from a study I saw back in the days before journal papers went online. But I’m a science fiction writer; wild extrapolation is part of my job description.

The gist of the study was that one of the common reasons traffic jams develop is that a few drivers slow down. The drivers behind them overreact and slow further, then speed up to close the gap. Errors accumulate, and again, Presto! stop-and-go traffic.

I see this every day. There’s a curve on the freeway where most drivers slow down from 65 (or whatever faster-than-the-limit speed they were going) to 60. Outside of commute hours, it doesn’t matter. Everyone slows a bit, then resumes speed on the next straight patch. But at rush hour, that curve always turns into a parking lot.

But the study went further. The investigators found that if a small percentage* of the drivers maintain a constant speed–even if that speed is well below the limit–instead of braking and accelerating, the jam never develops.

* I want to say five percent, but I’m working with twenty-year-old memories, so that may be incorrect. I am sure it was a single digit number.

Self-driving cars, if properly programmed, aren’t going to slow down for a curve they can safely negotiate at the speed limit. More to the point, if they get proper information about traffic conditions ahead of them, they won’t get into the slower/faster/slower/faster cycle that causes jams. They’ll just slow to the maximum speed that won’t result in a collision.
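Since I’m extrapolating wildly anyway, let me extrapolate in code. This is a back-of-the-envelope toy of my own devising, not the study’s actual model: every number in it–the 1.12 overreaction factor, the 65 mph ceiling, the every-twentieth-car spacing–is invented for illustration.

```python
def max_slowdown(n_cars=60, initial_dip=5.0, amplify=1.12, steady_every=0):
    """Toy model of one braking wave moving back through a line of cars.

    Each ordinary driver brakes a little harder than the car ahead
    (amplify > 1), so a small dip grows as it propagates backward.
    A 'steady' driver (every steady_every-th car, if nonzero) holds a
    constant speed with enough of a gap to absorb the dip, so the wave
    dies behind them. Returns the largest slowdown (in mph) any car
    makes; hitting 65 means somebody came to a dead stop.
    """
    dip = initial_dip
    worst = dip
    for i in range(1, n_cars):
        if steady_every and i % steady_every == 0:
            dip = 0.0  # steady driver absorbs the wave entirely
        else:
            # overreaction: brake a bit harder than the car ahead did
            dip = min(65.0, dip * amplify)
        worst = max(worst, dip)
    return worst
```

Run it with no steady drivers and the five-mph dip snowballs until someone is standing still; salt in a steady driver every twentieth car–that’s our single-digit percentage–and the worst slowdown stays well short of a full stop.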

Maybe that doesn’t sound like such a big deal to those of you outside the Bay Area and other commute-infested regions. But not sitting in stationary traffic on that one single stretch of freeway would trim my morning commute by ten minutes. And there are two other spots on my normal route where traffic behaves the same way.

Saving half an hour a day and however much gas the car burns idling in traffic sounds like a very good deal to me.

 

Construction Ahead

Here’s a question for you. No, it’s not a poll, and I don’t insist you answer in the comments. And I’m not sure there is a right answer.

Suppose you’re in the left lane of a three lane road. You pass a sign warning that, due to construction, the two left lanes are closed ahead.

Do you:

  1. Immediately start working your way over to the right lane,
  2. Wait until you can see the lighted arrows where the closure begins, then move to the right,
  3. Stay in your lane until you reach the point where it’s closed, then merge to the right?

As you might have guessed, I’ve got strong feelings about this one.

Remember the Richmond-San Rafael bridge? The one I use to get to and from work? The one where they’re busily replacing the expansion joints? The one where two lanes are closed in each direction for hours at a stretch so the construction can be done safely? Yeah, that one.

The backups are, to put it mildly, horrific.

Once everyone has gotten into a single lane, traffic moves at almost normal speeds. The problem is in getting to that point. Within minutes of the cones and signs going up, all three lanes are filled for miles leading up to the bottleneck.

It’s easy to blame the tie-up on the people who picked the third answer. After all, they’ve taken the “me first” approach. Sure, going all the way up to the point where they have to merge may save time for the first few people who do it, but when they stop and wait for a chance to merge across, they trigger a cascade of stopped cars in all the lanes.

On the other hand, one could just as easily point fingers at the people who were already in the right lane or who moved into it at the first warning sign. If they were more willing to allow late movers to merge, the delays would take longer to develop.

The rule of the road–written or otherwise–used to be “take turns, one from each lane”. That seems to have been kicked to the curb.

The people I don’t understand are the ones who picked the second answer. Do they think the first warning signs are a prank? Do they have to get stuck in the miles-long parking lot before they believe the signs are real? It seems like waiting but not going all the way to the final merge point just gets you the worst of the other two possibilities. But maybe I’m missing something. I await enlightenment.

As I said originally, I’m not sure there’s a right answer to the question, though I’m fairly certain that the second choice is the wrong answer.

But I hope we can all agree that the folks who repeatedly lane-hop into whichever lane is moving fastest and the ones who drive up the shoulder are the absolute worst.

Bridging the Gap

Speaking of the Richmond-San Rafael Bridge (as I was last week) maybe you’ve heard that it’s joined the Bay Area’s roster of troublesome infrastructure?

The problems aren’t as severe as the Bay Bridge’s issues, nor as expensive to resolve as BART’s shortcomings, but they’re still an interesting little tale of terror.

Okay, maybe “terror” is excessive. Trauma, though…that works.

The story, or at least the current phase of it, started earlier this month–but let me give you some background first. The bridge is double-decked. The top deck is for westbound traffic (Richmond to San Rafael). There are two lanes and a wide shoulder, part of which is currently being converted into a bike and pedestrian path. The lower, eastbound deck also has two lanes and a wide shoulder. As I explained in that earlier post, the shoulder is used as a third lane during the evening commute.

The bridge opened in 1956 and has been updated several times since, including undergoing a seismic retrofit in the early 2000s. Of particular note, the majority of the bridge’s joints–795 of 856–were rebuilt during the retrofit. The remaining 61 have been in place since the bridge opened.

Which brings us to February 7 of this year. At approximately 10:30, the California Highway Patrol received a report that chunks of concrete were falling onto the lower deck. Specifically, someone told them a rock had fallen onto the hood of their car, denting it severely. Inspection showed that concrete was falling from around one of the expansion joints on the upper deck. Yes, one of the Original Sixty-One. At 11:20, give or take a few minutes, Caltrans closed the bridge in both directions.

Fortunately, the morning rush hour was mostly over by the time the bridge closed. And, for the curious, yes, I had driven over the bridge that morning, headed for San Rafael. And no, my car did not knock loose the chunk of concrete that was the cause of the CHP being called in. I’d passed that part of the bridge about fifteen minutes before the caller’s hood was crushed. Not guilty.

Without the bridge, there really isn’t a good way to get from San Rafael to the East Bay. You can use the Bay Bridge, but that means going through San Francisco, which is a nightmare of a commute even in the best of circumstances. Or you can go around to the north, via Novato, Vallejo, and Crockett, which involves a long stretch on the one-lane-in-each-direction Highway 37.

The bridge remained closed until shortly before 3:00. By then, of course, the evening commute was totally snarled. Opening one lane in either direction didn’t help much, and when more concrete fell, those lanes were closed again. (Again, I lucked out: I left work at three and made it across just before the 3:45 re-closure.)

After that, the upper deck stayed closed. A single lane on the lower deck opened around 4:30, but by then any commute anywhere in the Bay Area was a multi-hour affair.

Caltrans got a temporary patch in place–metal plates on the top and bottom of the upper deck–and reopened the bridge around 8:30. Amazingly, the congestion had all cleared by the following morning, and my commute to work was no worse than usual, aside from the jolt to my car’s suspension going over the temporary patch.

The upshot is that the Original Sixty-One are now being replaced. At least in theory. It’s been too wet for actual repairs to be carried out, which means the planned completion date of March 5 is totally out the window. The repairs and the delays to the repairs also mean the bike lane is going to be delayed by at least two months.

To be fair, the rain is hardly Caltrans’ fault. And, as far as I can tell, the delay isn’t going to raise the cost of the repairs (about $10,000,000 for the 31 joints on the upper deck; the 30 on the lower deck were actually planned for replacement later this year in a separate rehabilitation project.)

But I doubt there are many Bay Area commuters looking forward to weeks or months of overnight lane closures.

And, even though there’s no evidence of problems at any of the other commuter bridges–and yes, that includes the Golden Gate–I doubt I’m the only person who has second thoughts about driving on the Carquinez, San Mateo, or Dumbarton Bridges.

I mean, really, how much bridge luck can I reasonably expect to have?

Unfolding Before Your Eyes

The future is here–or will be on April 26–and it ain’t cheap.

Unless someone sneaks out a surprise, two months from now, Samsung will have the first folding phone commercially available in the US: the Galaxy Fold.

Though that’s actually a bit of a misnomer. When the device is folded, it looks like a fairly standard high-end phone, albeit one with an unusually narrow screen (1960×840) and really, really wide bezels.

Unfold it and it’s not really a phone anymore. The phone screen winds up on the back (here’s hoping they disable that screen when the device is unfolded) and you get a front-facing seven-inch tablet with a more-than-decent 2152×1536 resolution.

So what do you call it? Ars is saying “phone-tablet hybrid” but that’s a bit of a mouthful. Phablet is already in use and tablone isn’t very inspiring–and it sounds too much like Toblerone.

There’s been a lot of speculation about how well Android is going to handle folding screens, but largely in the context of a screen that folds into a different size and shape. In this case, you’re either using one screen or the other with no on-the-fly reconfiguration. Though, to be fair, it sounds like there’s some communication between screens. That’s a slightly different situation, however, and one that developers already know something about.

Frankly, I can’t see this gaining much traction, even among the early adopters who need every new thing that comes along. It looks prone to breakage (remember Apple’s butterfly keyboard?) and, because the folding screen can’t have a glass cover, likely to scratch easily.

Personally, I think a seven-inch tablet is exactly the right size, but by and large, the market doesn’t agree with me. Fans of eight to ten inch tablets are going to find the Fold’s tablet mode cramped, especially if they try to multitask. Samsung is saying you can display three apps at once, but how large are they going to be when they’ve divvied up those seven inches? I can’t be the only person who’s worried that text will be either too small to read or too large to fit well on a phone-optimized UI.

More important, however, is the price tag. At a whisker short of $2000, there aren’t a whole lot of people who’ll pick one up on impulse. And, as the iPhone X has shown, even Apple is having trouble convincing the general public to shell out four figures for a phone, no matter how large its screen may be.

When you can pick up a good phone and decent tablet for half the price of the Fold, two grand is going to be a hard sell. That folding screen has to deliver some solid value as a display or it’s going to come off as a gimmick.

Don’t get me wrong. I love the idea of a folding display. A tablet I could legitimately fold up and tuck in a pocket sounds like a winning idea.

I just don’t think the Galaxy Fold is the right implementation. Even if I had $2000 to spend on a phone or tablet right now (I don’t), I’d sit back and see what other phone makers come up with. And I suspect a big chunk of Samsung’s potential market will too.

Follow the Leader

Can we talk about self-driving cars again? Oh, good. Thanks.

It occurred to me the other day that the public press (as opposed to the technical press) isn’t paying much attention to one particular aspect of autonomous vehicles: interoperation.

Every article I’ve seen that touches on the subject makes mention of emerging standards and the need for inter-vehicle communication, but they all seem to assume that having standards is the solution to all the potential problems.

Believe me, it ain’t. For one thing, there’s the ever-popular catchphrase “the great thing about standards is that there are so many of them”. Just because a car implements a particular standard, that doesn’t mean it implements every standard. And which version of the standard? They do evolve as the technology changes. Will a car that’s compliant with version 1.2 of the car standard for signaling a left turn recognize the intention of the oncoming truck that’s still using version 1.1 of the truck standard?
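For what it’s worth, network protocols have a standard answer to the version 1.2 car meeting the version 1.1 truck: both sides announce the versions they speak and use the newest one in common. Here’s a toy sketch of that idea–purely illustrative, and no automotive standard I know of actually works this way.

```python
def negotiate(mine, theirs):
    """Toy version negotiation, the way network protocols usually
    handle it: each side offers the spec versions it supports (as
    (major, minor) tuples), and both use the newest one they share.
    Returns None if there's no overlap at all.
    """
    common = set(mine) & set(theirs)
    return max(common) if common else None
```

The catch is that last case. If my car speaks only 1.2 and your truck speaks only 1.1, negotiation doesn’t degrade gracefully–it fails outright, and the two vehicles are back to guessing about each other, just like human drivers.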

Lest you think I’m exaggerating the problem, consider the rules (not a standard, but similar in intent and function) for the noise-making apparatus in electric vehicles. (I talked about it several years ago.) That one document runs to 370 pages. Do you really think there are no errors that will require updates? Or a significant amendment to cover cars made in other countries? Or a missing subsection for retrofitting the technology to older electric cars released before the rules were finalized?

And, speaking of those 370 pages, that brings us to the second problem. Even assuming the best will in the world, no spec is ever totally unambiguous. Consider web browsers. Remember back around the turn of the century, when we had Internet Explorer, Netscape, and AOL’s customized versions of IE? All theoretically compliant with web standards, all delivering different user experiences–rendering pages slightly–or extremely–differently.

Nor do they do anything to prevent developers from introducing non-standard extensions. Do we really want some latter-day Netscape wannabe coming up with an automotive blink tag while their competitors over at Microsoft-like Motors are pushing their equivalent of the scrolling marquee tag?

But I digress slightly.

What started this train of thought was wondering how autonomous vehicle developers are going to handle weird, one-off situations. We know some of them are working up plans for turning control over to remote drivers (like OnStar on steroids). But how well is that going to work at 60 MPH?

Case in point: The Richmond-San Rafael Bridge has a part-time lane. For most of the day, it’s actually the shoulder on the eastbound part of the bridge. But during the afternoon rush hour, it becomes a traffic lane. There are lights to signal when it’s open to traffic–and the open hours are scheduled–but it can be taken out of service when necessary. That means developers can’t count on programming open times. Cars may or may not be able to read the signal lights. Maybe there’s a standard compliant (for some standard or other) radio signal as well.

But the critical point here is that the lane markings are, well, weird. There’s a diagonal stripe that cuts across the lane; when the lane is open, drivers are expected to ignore the line, but at other times, they’re supposed to follow it in merging into the next lane over.

How is the car supposed to know when to follow the line? (Come to think of it, how do current lane assist technologies handle that stretch of road?) How are the programmers prioritizing lane markings versus other signals?

Maybe, I thought, in ambiguous situations, the rule could be “follow the car in front of you”. That could work. Sooner or later, the chain of cars deferring to the next one forward will reach a human-driven car which can resolve the conflict. Hopefully that driver is experienced enough to get it right and neither drunk nor distracted by their cell phone.

But how are the cars going to know if the car in front of them is trustworthy–i.e. is following the same “follow the car in front of me” rule? Is your Toyota going to trust that Ford in front of it? Or will it only follow other Japanese manufactured vehicles? Maybe the standard can include a “I’m following the car in front of me” signal. But what if the signal changes in version 2.2a of the specification?

There’s a classic short story* in which cars and trucks have evolved from trains. Each manufacturer’s vehicles require a different shape of track and a different width between tracks. Some are nearly compatible, able to use a competitor’s tracks under certain special circumstances. As you might imagine, the roads are a mess, with multiple tracks on every street, except where a city has signed an exclusive deal with one manufacturer.

* Whose title and author currently escape me, darn it. If you recognize it, please let me know in the comments.

The story is an allegory on the early personal computer industry with its plethora of competing standards and almost-compatible hardware, but I can’t help wondering if we’re about to see it play out in real life on our roads.

A Modern Headache

Need a break? Too much going on in your life, and you just need to veg out for a while? Kick back, turn on the TV. You pick the channel, it doesn’t matter.

Because your relaxation will be interrupted. Probably by a telemarketer–but that’s a subject for a different post. No, I’m talking about the commercials. Specifically, the drug commercials.

Annoying as all get-out, aren’t they? Most likely you don’t have the condition the drug they’re touting is intended to cure. Even if you do, the list of side effects would make any rational person flee in terror.

I’m especially confused by the ads that say “Don’t take this if you’re allergic to it.” How are you supposed to know you’re allergic to it unless you’re already taking it?

But I digress.

What really puzzles me about the whole phenomenon is how many people think this is new.

It’s not. Consider Allan Sherman’s classic paean to one class of medical ads from 1963:

Sounds familiar, doesn’t it? Disturbing scenes of body parts you’d rather not see. Appeals to bypass authority. Untested claims of efficacy.

Replace “Bayer Aspirin” with “Otezla” and the only way the audience could tell the difference between the 1963 commercial and the 2019 commercial would be that the older one is in black and white*.

* Anyone else remember seeing “The following program is brought to you in glorious, living color” on a black and white TV set?

Bottom line, this kind of ad has built more than fifty years of inertia. That means they must work, or the advertisers would have tried something different. And that means they’re not going away, no matter how many people scream for legislation.

Let’s face it: Allan had it right. The only way to ensure you’ll never be bothered by a drug ad again is to eat your TV.

Slow Development

I’m waiting for the next big advance in automotive safety. As you might have gathered from my post last week, it’s not noise cancellation.

No, this isn’t a guessing game. It’s the logical outgrowth of the lane assist and automatic braking technology we already have.

When are we getting…um…heck, I can’t think of a good advertisable name for it. Which might just be part of the reason we haven’t seen it deployed yet. I’m talking about some kind of warning system that alerts a driver who’s following too closely.

It could just be an alert, like the lane monitoring routines that trigger if you cross the lines without signaling for a lane change. Or it could be a proactive solution, slowing down your car to increase the space in front of you, in the same way that automatic braking takes over the vehicle.

Yes, it’s a complicated problem. Off the top of my head, it would need to consider your speed and the speed of the cars around you, weather conditions, the height and direction of the sun, the state of the road, and even the age and condition of your tires. And there are other questions that would need to be addressed. Should the technology shut off in parking lots and other areas where the typical top speed is measured in single digits? What about in bumper-to-bumper freeway traffic? Can the driver shut it off, either temporarily or permanently?
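Mind you, the core arithmetic isn’t the hard part. Here’s a toy sketch of the kind of check I have in mind–pure illustration of my own devising, not anybody’s actual product, and the padding numbers are guesses.

```python
def tailgating_alert(speed_mph, gap_feet, wet=False, worn_tires=False):
    """Toy following-distance monitor (invented for illustration).

    Starts from the old two-second rule and pads the required
    following time for rain and tire wear, then flags the driver if
    the actual gap is shorter than the distance covered in that time.
    """
    follow_seconds = 2.0
    if wet:
        follow_seconds += 1.0  # guess: rain means longer stopping distance
    if worn_tires:
        follow_seconds += 0.5  # guess: bald tires grip less
    feet_per_second = speed_mph * 5280 / 3600  # convert mph to ft/s
    return gap_feet < follow_seconds * feet_per_second
```

The hard parts are everything around that comparison: measuring the gap accurately, knowing the road is wet, and deciding when the system should keep its mouth shut.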

But complicated doesn’t mean impossible, and it’s a problem that’s going to have to be solved for autonomous vehicles. That means there are plenty of bright people (and history suggests there are even more not-so-bright people) working on it right now.

I’d even be willing to bet that there’s at least one auto manufacturer who has it solid enough that they could deploy it by 2021–the same year as Bose’s noise cancellation.

But we’re probably not going to see it that soon, if ever, because if you thought the technology was complicated, give some thought to the marketing!

To be blunt, the people who most need this are the ones who are least likely to buy a car that has it. Do you think that guy who rides your bumper and goes zipping across three lanes of traffic will be willing to pay for his car to slow him down (or even nag him to back off)? How about the woman who’s trying to improve her fuel economy a little by drafting behind a big rig?

So, no, we’re not going to see Safe Distance any time soon. Not until some smart marketer comes up with a more salable name for it and all manufacturers are ready to deploy it–or there’s a legal mandate to include it in all new cars sold.

I’ll be dreaming of the day.

Quietly Bad

One bit of tech news that hasn’t gotten as much attention as I expected is Bose’s announcement that they’ve come up with noise reduction technology for cars.

They’re not making the cars quieter. They’re reducing the amount of road noise inside the car. Yes, like noise-canceling headphones, only for an entire vehicle instead of one person’s ears.

This is, IMNSHO, a bad idea.

Maybe not as bad as electronic license plates or the no-pitch intentional walk. Not quite.

Look, I don’t fly without my headphones. They work brilliantly at filtering out continuous sounds–like the plane’s engines–and not quite as well on repetitious sounds–like the crying baby in the seat behind me. But you know who doesn’t wear noise-canceling headphones on a plane? The flight attendants and the flight deck crew. In other words, the people who are responsible for the safety and comfort of the passengers.

Because the technology isn’t perfect. It also partially eliminates conversation. It glitches occasionally, allowing the background noise to leak through. Those glitches are distracting, and the unintended reduction of non-continuous sounds is a potential safety concern.

Consider how this would apply in your car.

Will Bose’s technology filter or reduce the siren of the police car behind you? Will it make your navigator–human or GPS–quieter? Will it be smart enough to know that droning noise is your favorite bagpipe CD, or will it filter out part of your music? Except, of course, for the occasional glitch where it cuts out and lets through a sudden burst of B flat.

All that aside, even if the technology was perfect, reducing only road noise, without hiccups or glitches, it’s still a bad idea.

Road noise is one of the signals a driver uses to keep tabs on the state of the car and the road. The pitch is part of the feedback system that lets you hold a constant speed on the freeway (traffic permitting, of course). Sudden changes in the sound signal a change in the road surface, alerting you to the possibility of potholes or eroded asphalt.

Do we really want to increase driver distractions and decrease their awareness of what’s going on outside their cars?

Apparently we do. Bose’s announcement says the technology “is planned to be in production models by the end of 2021.” Given the lead time involved in automotive design, that means contracts have been signed and engineers are hard at work now.

I’d offer congratulations to Bose, but they probably wouldn’t hear me.