A Forgotten Virtue

Seen this paean to obsolescence? I hadn’t until Jackie brought it to my attention, and now I’m passing the favor on to you.

It is fairly lengthy–though that shouldn’t bother anyone who reads my ramblings here–but if you don’t have the patience right now, the tl;dr is that the author, one Ian Bogost, believes that computing technology reached its peak in the early 1990s.

He argues that all of the advances since then–the ability to run quietly, multitask, go online without dialup, use a display big enough to see clearly, and so on–are actually regressions.

I detect a certain amount of Mr. Bogost’s tongue in his cheek, yet the final impression is that he’s quite serious in his praise of archaeo-computing*.

* Yes, I know that the term “retrocomputing” is in common parlance. Mr. Bogost, however, takes the concept to a whole ‘nother level.

Look, I’m not immune to the lure of the small, underpowered computer. You know my love for my Windows tablet. I’ve got a couple of netbooks*. They still work, and I still use them occasionally.

* I’m convinced that what doomed the netbook was not its lack of power, but its lack of screen resolution. 1366×768 just isn’t big enough to get any serious work done in a GUI environment. Give me a ten-inch screen with enough resolution to display something close to a full page of text at a readable size, and I’m in. Why do you think the iPad is so popular? It’s basically a netbook that swaps the keyboard for a high-resolution screen.

But there are things that just can’t be done with a small computer. Writing, sure. Editing? Probably. Software development? Only if you’re building something to run on that same device. Art? Video editing? Forget it, unless you’re okay with an input lag measured in seconds and rendering times measured in weeks. Games? Anything more taxing than a crossword puzzle or hand of solitaire is going to run slower than real time.

Mr. Bogost, it appears, considers the greater part of the last two decades to have been wasted effort. There is, he says, virtue in a computer that makes you wait and that pummels you with noise while you twiddle your thumbs.

The lack of capability, the lack of speed, and the noise all combined to force computer owners to limit their screen time (to use an expression that dates to 1921).

Apparently he missed–or has forgotten–the online communities of the time. There might not have been a Facebook sucking up hours of users’ time. But there was GEnie. Prodigy. AOL. Usenet, for crying out loud.

Text adventures. I can’t count how many hours I spent on the computer game of “The Hitchhiker’s Guide to the Galaxy”. Heck, anyone else remember “Leisure Suit Larry”? I wonder if Mr. Bogost remembers “Zork”.

It’s the final paragraph of Mr. Bogost’s piece that really sets my teeth to grinding. He concludes by turning off his ancient computer and declaring that act to be literally impossible today.

I’ve got news for him. Every flipping piece of technology he references–his laptop, his tablet, and his smartphone–has a power switch. He can do exactly the same thing as with his Macintosh SE.

Why doesn’t he? Because he doesn’t want to wait for them to turn back on. Waiting, it seems, is only a virtue when you have no choice.

WWDC 2019

I’m back from Sedalia, mostly caught up on everything that’s been going on in the world while I’ve been out of touch, and feeling guilty about not having commented on Apple’s WWDC last week. I’m sure we can all agree that Apple’s plans for the coming year are far more important than anything else that’s happening (Trade tariffs? Disaster relief? What are those?), so I’ll start there.

Of course, the keynote address, which is where I get all my information, was Monday–while I was driving halfway across Missouri–so you’ve probably seen some of this in your local newspapers already. But that’s okay. The extra days should allow me to give a more nuanced, thoughtful take on the story.

And if you believe that, perhaps I can interest you in my new business: selling snowplows to airports in the tropics. (Don’t laugh. Turns out snowplows are the most efficient way known to humanity for clearing storm debris off of airport runways.)

Anyway, the opening announcement gave quick references to Apple News+, Apple Arcade (later this year), Apple Card (later this summer), and Apple TV+ (this fall). Three of the four are extensions to existing things. The fourth? Dunno about you, but I’m not sure I’m ready to have the credit card reinvented. Didn’t it cause enough trouble the first time it was invented?

Moving on.

tvOS, which powers the Apple TV boxes, is getting a facelift with a new homescreen. It’s also going to handle Apple Music, and games in the Apple Arcade will support controllers from your PlayStation 4 and Xbox One. That’s a nice ecumenical gesture on Apple’s part. Gamers can be passionate about the One True Controller, so there’s a lot of goodwill in letting them bring their favorite to an otherwise tightly controlled garden.

Moving on.

Apple Watches are also getting enhancements, of course. New faces. Chimes that include physical taps–I like this idea, actually. It should cut down on the “Whose phone just rang?” dance. Better audio support–voice memos and audio books. A calculator (really? It took five iterations of the Apple Watch to bring out a calculator?). App Store support, so you can still buy apps even if you leave your phone in your backpack.

Naturally, there are also updates to the health features. Progress tracking over the past ninety days with nags if you’re falling behind on your goals. I’m sure those will be amazingly persuasive in getting us off our lazy behinds to exercise harder.

Hey, I like this one: Apple Watch will monitor noise levels and alert you if they reach levels that could damage your hearing. An actual use case for those new chimes, since you probably won’t be able to hear the old ones. Good to know my watch will be ready to distract me from the music at the next BABYMETAL concert.

Cycle tracking. That one sounds useful. Useful enough that they’re making it available in iOS so even women without an Apple Watch can get the benefits. It looks like initial features are somewhat limited, but I expect enhancements over the next few iterations of watchOS.

And, of course, it wouldn’t be WWDC without the announcement of new Watch bands–including a Pride edition.

Moving on.

iOS 13 will, of course, be much faster than the ancient iOS 12 that came out last year. Apps will download faster, install faster, and launch faster. One hopes they’ll also run faster once they’re launched, but Apple was curiously quiet about that aspect.

There’s a Dark Mode. For all you fans of Darth Vader, I suppose. Personally, I dislike Dark Mode: I find white text on a black background hard to read. But different strokes. Enjoy.

The keyboard now supports swiping. Only about five years behind Google on that one. But, to be fair, Google’s swiped more than a few tricks from Apple during those five years.

Lots of changes in the default apps around text formatting and image handling. Maps are updated with more detail and more 3D geometry. Integration with street level photographs (more maintenance of feature parity with Google).

More enhancements to privacy. One-time permissions: you can require an app to ask you every time it wants access to your location. (I wonder if that applies to Apple’s own apps, or if it’s only for third-party apps.) If you give it blanket permission, Apple will send you reports on what the app knows. They’re also making it harder for apps to use Bluetooth and Wi-Fi information to figure out your location. That’s a nice improvement that’s going to piss off a lot of app makers who haven’t been able to come up with a good excuse to ask for location data.

Here’s a cool one: Apple is introducing a “Sign in with Apple” feature that uses Face ID to authenticate you to websites and apps. The cool part is that it can create single-use email addresses that you can give to websites that require an address. The site never sees your real email address, and Apple will automatically forward messages from the fake address to the real one. Hopefully it’ll also work the other way, so if you reply to an email from a company, it’ll go out under the fake address.

Homekit now supports handling video (motion detection, alerts, and all the other good stuff) on your device instead of sending everything to the cloud. That’s a big win.

A few more quickies: more flexible memoji, if that’s your thing. Improvements to photo taking and editing. Adding camera filters to video. Automatic categorization of photos and AI-generated displays that try to be context-aware. (I suspect the key word there is “try”.)

Moving on.

More capable Siri in AirPods. Allowing temporary pairing of AirPods (so you can share your audio with somebody for the length of a song or a movie and not have them automatically able to hear everything you do from then on.) Handing audio from iPhone to HomePod and vice-versa. Access to streaming radio stations. HomePod can recognize individuals and give them different experiences.

The big change is that iPads are going to get a customized version of iOS, inevitably called iPadOS. Lots of tweaks to take advantage of the larger screen, like widgets on the home screen. Apps can have multiple windows open at once. I love that idea: being able to have two Word documents open side by side, for example, is a major productivity booster when editing.

Support in the Files app for USB drives and SD cards. That’s great for photos, when you want to import or export just a few images without copying the entire photo roll over Wi-Fi.

Safari on iPads can now get the desktop version of a site instead of the mobile version.

Lots of tweaks to editing as well, mostly around three-finger gestures for copy/paste/undo.

I have to wonder if all these goodies are going to make it onto all the supported iPads–for that matter, will iPadOS be available to older iPads at all?

Moving on.

There’s a new Mac Pro. Hugely powerful and much more expandable than the previous version. And a matching monitor. Would you believe 32-inch, 6016×3384 display? Believe it.

Believe the price tags, too. The Mac Pro starts at $6,000 and goes up from there. Which is actually not out of line for its capabilities. Want that lovely monitor (or several of them–supposedly the Pro can use up to six of them at once)? Plan on spending $5,000 for each of those. (Again, not unreasonable for the feature set.) Oh, and don’t forget the $999 for the monitor stand. Now that’s just ridiculous. Yes, the stand can raise and lower the monitor, tilt it, and rotate it to portrait mode. But there are plenty of third-party monitor stands that will do all the same things for a tenth of the price.

New year, new operating system. This year’s version of macOS is “Catalina”.

Thankfully, iTunes is getting broken up into three separate programs. One to handle music, one for podcasts, and one for video. That should make life considerably simpler for anyone who only does music, and it should end the current view of TV programs and movies as music that happens to have an inconvenient video track.

Got an iPad and a Mac? Of course you do; doesn’t everyone? With Catalina, you’ll be able to use the iPad as an external monitor for the Mac. That’s been possible with third-party apps, but now it’ll be built into the OS. And yes, it’ll support all of the iPads’ touch functionality, including Apple Pencil, and it’ll do it over Wi-Fi. Very handy, indeed.

Voice control. Find My Mac. Activation lock. For developers, a path to quickly convert iPad apps to Mac apps.

Actually, quite a lot for developers. Much convergence between iOS and macOS. Though the claims that companies will be able to do apps that support all Apple products without adding specialized developers sound suspect. Maybe they won’t need separate Mac and iPhone teams, but they’re still going to need the people–and my cynical side suggests that any developer savings will be totally wiped out by the need for more QA folk who can test cross-platform.

Bottom line here is that, unlike the last couple of years, Apple has promised some things that sound genuinely exciting. Not necessarily revolutionary, but well worth having if you’re in the Apple infrastructure. Just don’t get your hopes high for a continuation next year. Odds are good that 2020 will be a year of minor tweaks and enhancements to the goodies that show up this fall.

Blinders

They can’t all be winners, I suppose.

Ideas, that is.

Case in point, I’ve been sitting here for the last hour, trying to make something entertaining out of my recent discovery that Google Calendar supports time zones.

The key word there, of course, is “trying”.

It’s a useful feature, especially when dealing with an event that spans multiple time zones (hello, plane flight to Missouri). But entertaining? Not so much.

There’s some minor humor in the fact that the feature has been around for nearly a decade–the oldest references I can find to it date back to 2011–but I only discovered it last week. And you all trust me to be on the cutting edge. Sorry about that.

Maybe it says more about the user interface designers than it does about me. Google does have something of a fetish for hiding controls behind menus, so they can display the actual information in a sea of whitespace.

That’s a fetish they share with Apple, by the way. Which means most of the rest of the tech industry falls in line. Arguably, it’s an improvement over the previous state of affairs, where every possible control was squeezed onto the main screen, or at most, moved one menu level down.

There are still some holdouts in the old style–Microsoft’s Ribbon Bar, I’m looking at you–but I digress.

In any case, I can’t blame the UI here. There’s a prominent “Time Zone” button right next to the date and time fields on the event creation/edit page.

Clearly, there’s a lesson here about willful blindness, seeing only what we expect to see, and the triumph of imagination over reality.

Puts a whole different light on climate change deniers, Trump supporters, and anti-vaccination activists, doesn’t it? It’s not that they’re denying the evidence. They literally don’t see it, even though it’s right in front of them.

Not that that’s a legitimate excuse. The Time Zone button is right there, whether or not I saw it.

Does make me wonder what else I’m missing out on, though.

Google I/O 2019

Welcome to my annual Google I/O Keynote snarkfest.

In years past, I’ve used Ars Technica’s live blog as my info source, but this year it appears they’re not at Google I/O. So all the snark that’s fit to print comes to you courtesy of Gizmodo’s reporting.

My apologies, by the way, for the later-than-usual post. Blame it on Rufus. No, not really. Blame it on Google for scheduling the I/O keynote speech at 10:00. But I did have to duck out to take Rufus to the vet for a checkup. He’s fine. The keynote is over. I’m caught up. Enjoy your post.

First up, Google is bringing augmented reality to search on phones. The demo involves getting 3D models in your search results. You can rotate them to see all sides and you can place them in the real world with an assist from your phone’s camera. Why do I suspect the porn industry is going to be all over this technology?

Seriously, though, it’s part of an effort to expand the Google Lens technology we’ve been seeing for the past few years and integrate it into search. Other enhancements to Lens include the ability to highlight popular items on a menu and to display videos of recipes being made when you point the camera at a printed recipe.

Does anyone really want these features? If I’m at a restaurant, I’m going to pick the dish that sounds the tastiest, not the one the most people have ordered. My tastes aren’t necessarily yours, after all, and sometimes it’s the odd little dishes tucked away in the corner of the menu that are the most interesting. As for the cooking videos, I try to keep my phone in the case in the kitchen. I’d rather not wind up preparing pixel ‘n’ cheese or nexus stew. Silly of me, I know.

Anyway.

Remember last year’s big feature? Duplex, in case your memory is as short as mine. That’s the feature that let your phone make reservations on your behalf. Did anyone use it? Maybe a few people will try this year’s iteration which can make car reservations and buy movie tickets. I can’t say I’m thrilled at the possibilities this opens up.

Assistant, the voice behind “Hey, Google,” gets an update this year, as well. It’ll be able to figure out what you mean by personal references. Want directions to your mother’s house? Just ask. Because it’s good to know that, when you can’t remember where your relatives live, Google can.

Slightly more useful is a new driving mode, intended to reduce distractions. Speaking as someone who nearly got rear-ended yesterday by someone looking at the phone in her lap, I think the only legitimate “driving mode” would be one that turns the damn phone off as soon as you start the engine. Not that anyone is going to implement that.

Moving on.

Google is very, very sorry for whatever biases their machine learning technology has revealed. They’re working very, very hard to reduce bias.

Let’s be honest here. The problem isn’t the machine learning tools. It’s the humans who select the data that the machines learn from. Fix the developers’ biases and the machines fix themselves.

Onward.

More privacy features. Which seem to boil down to giving people more ability to delete whatever Google knows about them, but precious little to prevent them from learning it in the first place.

Oh, wait, one exception: there’s going to be an incognito mode for Maps, so you can get directions to the doctor’s office without Google being easily able to tie the request to your earlier searches. They’ll still know someone searched for the office and there are a number of ways they could tie it to you, but at least they’ll have to work for the data.

I’m a big fan of incognito mode in the browser, and I hope they roll it out everywhere sooner rather than later–and that’s no snark.

Furthermore.

Generating captions for videos on the fly seems like an interesting, if somewhat niche application. Applying the same technology to phone calls, though… If Google can pull that one off, it’d be a big win for anyone who’s ever tried to take a call in a noisy environment or even just sworn at the lousy speaker in their phone. Yes, and for those whose hearing isn’t the aural equivalent of 20/20 vision.

Looks like there’s a related effort to teach their voice recognition software to understand people with conditions that affect their speech. The basic idea there is good–but Google needs to beware of inappropriate extensions of the technology.

Correctly interpreting the speech of someone who’s had, say, a stroke, is a good thing. Suggesting that someone see a doctor because there are stroke-like elements in their speech is moving into dangerous waters, ethically speaking.

On to Android Q.

Support for folding devices, of course. That was inevitable. Moving apps from one screen to another, either literally or figuratively (when the device is folded and the screen dimensions change, for example).

Improved on-device machine learning, which will let phones do voice recognition themselves without help from Google’s servers. That’s a win for privacy and data usage.

Dark mode. Personally, I dislike dark mode; I find white text on a black background hard to read. But I know others feel differently. So enjoy, those of you who like that kind of thing.

More privacy features, including new controls over which apps have access to location data and when they have it.

OS security updates without a reboot? Would that Windows could do that. It’s a small time-saver, but worthwhile.

Focus Mode–which will also be retrofitted to Android Pie–may be somewhat less useful: you can select apps to be turned off in bulk when you turn on Focus Mode. If the goal is to get you off your phone, this seems like a fairly useless diversion, because who’s going to put their important apps on the list? It does tie in with expanded parental controls, though, so there’s that.

Moving on.

Like your Nest thermostat? That’s cool. (sorry) Now all of Google’s smart home gear will be sold under the Nest name. I guess they figured with the demise of “Nexus,” there was an opportunity for an “N” name to distinguish itself.

So, no more “Google Home Hub”. Now it’s “Nest Hub”. Expect similar rebranding elsewhere. It looks, for instance, like Chromecast (remember Chromecast?) will be moving to Nest. NestCast? Or something stupid like “Google Chromecast from Nest”?

And, speaking of Pixel–we were, a few paragraphs back–we’re getting cheaper Pixel phones, as expected.

The 3a and 3a XL, starting at a mere $399, and coming in three colors. (Yes, we see what you did there, Google.) The usual black and white, naturally, but also something Google is calling purple. Looking at the photos, I’d say it’s faintly lavender, but maybe it’s the lighting.

Judging by the specs, it sounds like you’ll get roughly Pixel 2 levels of performance, except for the camera, which should be the same as the high end Pixel 3 models.

And, unlike Apple, who preannounce their phones*, the Pixel 3a devices are available online and in stores now.

* Remember signing up to get on the list to pre-order an iPhone? Fun times.

Moving on.

Bottom line: once again, we’re not seeing anything wildly new and different here. Granted, some of the incremental advances over the past year are large, but they’re all still evolutionary, not revolutionary.

And no, there weren’t any hints about what the Q in Android Q stands for.

A Happy Thought

There may actually be some positives to the impending arrival of self-driving cars.

Yeah, I know, that sounds odd coming from me, doesn’t it? But it’s true.

Consider the case for predictability. Many of the concerns I see about self-driving cars boil down to “How do I know what it’s going to do in Situation X?”

How do you know what any driver is going to do in Situation X? In truth, you mostly don’t. You guess, based on your experience, your familiarity with the law, and what you’ve seen of the driver’s behavior. Usually you haven’t seen much of the last. If they’re doing something blatant–speeding, weaving back and forth across the lanes, going the wrong way on a one-way street–you pay special attention to them. But for the majority of drivers, you’re guessing.

How many times have you seen someone sit at a green light because traffic is backed up from the next light, so there’s no place to go? Not very often, at least around here. The default assumption is that two or three cars will move into the intersection and still be sitting there when the light changes. Now the cars on the cross street are blocked. Presto! Instant traffic jam.

And yet, this morning I saw three different cars waiting at green lights for the traffic ahead of them to move. It was startling enough that I took special note.

My point is that over time, we’ll build up a category of experience specific to self-driving cars. We’ll assume they’ll wait at green lights instead of blocking intersections*. And we’ll make that assumption because we’ll see them do it every time. Not just on days when they’re not late for work, didn’t have a fight with a loved one, or feel like being passive-aggressive.

* Assuming they weren’t programmed by Bay Area drivers.

We’ll be able to make better predictions about what they’re going to do than we will about all those human drivers on the road.

That’s a good thing, but here’s an even better one.

Even a tiny number of self-driving cars on the road have the potential to break up traffic jams before they start.

Seriously.

Okay, I’ll admit I’m extrapolating wildly from a study I saw back in the days before journal papers went online. But I’m a science fiction writer; wild extrapolation is part of my job description.

The gist of the study was that one of the common reasons traffic jams develop is that a few drivers slow down. The drivers behind them overreact and slow further, then speed up to close the gap. Errors accumulate, and again, Presto! stop-and-go traffic.

I see this every day. There’s a curve on the freeway where most drivers slow down from 65 (or whatever faster-than-the-limit speed they were going) to 60. Outside of commute hours, it doesn’t matter. Everyone slows a bit, then resumes speed on the next straight patch. But at rush hour, that curve always turns into a parking lot.

But the study went further. The investigators found that if a small percentage* of the drivers maintain a constant speed–even if that speed is well below the limit–instead of braking and accelerating, the jam never develops.

* I want to say five percent, but I’m working with twenty-year-old memories, so that may be incorrect. I am sure it was a single digit number.

Self-driving cars, if properly programmed, aren’t going to slow down for a curve they can safely negotiate at the speed limit. More to the point, if they get proper information about traffic conditions ahead of them, they won’t get into the slower/faster/slower/faster cycle that causes jams. They’ll just slow to the maximum speed that won’t result in a collision.
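Since I can’t point you at the original study, here’s a toy model I cooked up to show the mechanism–my own wild extrapolation in code form, not the researchers’ model. It’s a single-file chain of cars where each driver overreacts to the car ahead (a gain greater than one), while optional “pacer” cars simply hold a constant speed. It only tracks the speed signal passing from driver to driver; spacing and collisions are ignored entirely.

```python
def simulate(n_cars=20, steps=80, gain=1.5, pacers=frozenset()):
    """Toy car-following chain. Car 0 is the leader; each follower
    adjusts toward the speed of the car ahead, overshooting (gain > 1).
    Cars listed in `pacers` ignore the car ahead and hold 60 mph.
    Returns the minimum speed (mph) each car ever reaches."""
    v = [60.0] * n_cars
    min_speed = list(v)
    for t in range(steps):
        # The leader slows from 60 to 50 for a curve, then resumes speed.
        new_v = [50.0 if 10 <= t < 15 else 60.0]
        for i in range(1, n_cars):
            if i in pacers:
                new_v.append(60.0)  # pacer: constant speed, no reaction
            else:
                # Overreaction: chase the car ahead with gain > 1, so each
                # driver brakes a little harder than the one in front.
                new_v.append(max(0.0, v[i] + gain * (new_v[i - 1] - v[i])))
        v = new_v
        min_speed = [min(m, s) for m, s in zip(min_speed, v)]
    return min_speed
```

Run it with no pacers and the leader’s brief dip from 60 to 50 amplifies down the chain until cars a few positions back are stopped dead–instant phantom jam. Drop a single constant-speed car into the line and everything behind it never slows at all, because the braking wave dies right there.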

Maybe that doesn’t sound like such a big deal to those of you outside the Bay Area and other commute-infested regions. But not sitting in stationary traffic on that one single stretch of freeway would trim my morning commute by ten minutes. And there are two other spots on my normal route where traffic behaves the same way.

Saving half an hour a day and however much gas the car burns idling in traffic sounds like a very good deal to me.


Construction Ahead

Here’s a question for you. No, it’s not a poll, and I don’t insist you answer in the comments. And I’m not sure there is a right answer.

Suppose you’re in the left lane of a three-lane road. You pass a sign warning that, due to construction, the two left lanes are closed ahead.

Do you:

  1. Immediately start working your way over to the right lane,
  2. Wait until you can see the lighted arrows where the closure begins, then move to the right,
  3. Stay in your lane until you reach the point where it’s closed, then merge to the right?

As you might have guessed, I’ve got strong feelings about this one.

Remember the Richmond-San Rafael bridge? The one I use to get to and from work? The one where they’re busily replacing the expansion joints? The one where two lanes are closed in each direction for hours at a stretch so the construction can be done safely? Yeah, that one.

The backups are, to put it mildly, horrific.

Once everyone has gotten into a single lane, traffic moves at almost normal speeds. The problem is in getting to that point. Within minutes of the cones and signs going up, all three lanes are filled for miles leading up to the bottleneck.

It’s easy to blame the tie-up on the people who picked the third answer. After all, they’ve taken the “me first” approach. Sure, going all the way up to the point where they have to merge may save time for the first few people who do it, but when they stop and wait for a chance to merge across, they trigger a cascade of stopped cars in all the lanes.

On the other hand, one could just as easily point fingers at the people who were already in the right lane or who moved into it at the first warning sign. If they were more willing to allow late movers to merge, the delays would take longer to develop.

The rule of the road–written or otherwise–used to be “take turns, one from each lane”. That seems to have been kicked to the curb.

The people I don’t understand are the ones who picked the second answer. Do they think the first warning signs are a prank? Do they have to get stuck in the miles-long parking lot before they believe the signs are real? It seems like waiting but not going all the way to the final merge point just gets you the worst of the other two possibilities. But maybe I’m missing something. I await enlightenment.

As I said originally, I’m not sure there’s a right answer to the question, though I’m fairly certain that the second choice is the wrong answer.

But I hope we can all agree that the folks who repeatedly lane-hop into whichever lane is moving fastest and the ones who drive up the shoulder are the absolute worst.

Bridging the Gap

Speaking of the Richmond-San Rafael Bridge (as I was last week), maybe you’ve heard that it’s joined the Bay Area’s roster of troublesome infrastructure?

The problems aren’t as severe as the Bay Bridge’s issues, nor as expensive to resolve as BART’s shortcomings, but they’re still an interesting little tale of terror.

Okay, maybe “terror” is excessive. Trauma, though…that works.

The story, or at least the current phase of it, started earlier this month–but let me give you some background first. The bridge is double-decked. The top deck is for westbound traffic (Richmond to San Rafael). There are two lanes and a wide shoulder, part of which is currently being converted into a bike and pedestrian path. The lower, eastbound deck also has two lanes and a wide shoulder. As I explained in that earlier post, the shoulder is used as a third lane during the evening commute.

The bridge opened in 1956 and has been updated several times since, including undergoing a seismic retrofit in the early 2000s. Of particular note, the majority of the bridge’s joints–795 of 856–were rebuilt during the retrofit. The remaining 61 have been in place since the bridge opened.

Which brings us to February 7 of this year. At approximately 10:30, the California Highway Patrol received a report that chunks of concrete were falling onto the lower deck. Specifically, someone told them a rock had fallen onto the hood of their car, denting it severely. Inspection showed that concrete was falling from around one of the expansion joints on the upper deck. Yes, one of the Original Sixty-One. At 11:20, give or take a few minutes, Caltrans closed the bridge in both directions.

Fortunately, the morning rush hour was mostly over by the time the bridge closed. And, for the curious, yes, I had driven over the bridge that morning, headed for San Rafael. And no, my car did not knock loose the chunk of concrete that was the cause of the CHP being called in. I’d passed that part of the bridge about fifteen minutes before the caller’s hood was crushed. Not guilty.

Without the bridge, there really isn’t a good way to get from San Rafael to the East Bay. You can use the Bay Bridge, but that means going through San Francisco, which is a nightmare of a commute even in the best of circumstances. Or you can go around to the north, via Novato, Vallejo, and Crockett, which involves a long stretch on the one-lane-in-each-direction Highway 37.

The bridge remained closed until shortly before 3:00. By then, of course, the evening commute was totally snarled. Opening one lane in either direction didn’t help much, and when more concrete fell, those lanes were closed again. (Again, I lucked out: I left work at three and made it across just before the 3:45 re-closure.)

After that, the upper deck stayed closed. A single lane on the lower deck opened around 4:30, but by then any commute anywhere in the Bay Area was a multi-hour affair.

Caltrans got a temporary patch in place–metal plates on the top and bottom of the upper deck–and reopened the bridge around 8:30. Amazingly, the congestion had all cleared by the following morning, and my commute to work was no worse than usual, aside from the jolt to my car’s suspension going over the temporary patch.

The upshot is that the Original Sixty-One are now being replaced. At least in theory. It’s been too wet for actual repairs to be carried out, which means the planned completion date of March 5 is totally out the window. The repairs and the delays to the repairs also mean the bike lane is going to be delayed by at least two months.

To be fair, the rain is hardly Caltrans’ fault. And, as far as I can tell, the delay isn’t going to raise the cost of the repairs (about $10,000,000 for the 31 joints on the upper deck; the 30 on the lower deck were already planned for replacement later this year in a separate rehabilitation project).

But I doubt there are many Bay Area commuters looking forward to weeks or months of overnight lane closures.

And, even though there’s no evidence of problems at any of the other commuter bridges–and yes, that includes the Golden Gate–I doubt I’m the only person who has second thoughts about driving on the Carquinez, San Mateo, or Dumbarton Bridges.

I mean, really, how much bridge luck can I reasonably expect to have?

Unfolding Before Your Eyes

The future is here–or will be on April 26–and it ain’t cheap.

Unless someone sneaks out a surprise, two months from now, Samsung will have the first folding phone commercially available in the US: the Galaxy Fold.

Though that’s actually a bit of a misnomer. When the device is folded, it looks like a fairly standard high-end phone, albeit one with an unusually narrow screen (1960×840) and really, really wide bezels.

Unfold it and it’s not really a phone anymore. The phone screen winds up on the back (here’s hoping they disable that screen when the device is unfolded) and you get a front-facing seven-inch tablet with a more-than-decent 2152×1536 resolution.

So what do you call it? Ars is saying “phone-tablet hybrid” but that’s a bit of a mouthful. Phablet is already in use and tablone isn’t very inspiring–and it sounds too much like Toblerone.

There’s been a lot of speculation about how well Android is going to handle folding screens, but largely in the context of a screen that folds into a different size and shape. In this case, you’re either using one screen or the other with no on-the-fly reconfiguration. Though, to be fair, it sounds like there’s some communication between screens. That’s a slightly different situation, however, and one that developers already know something about.

Frankly, I can’t see this gaining much traction, even among the early adopters who need every new thing that comes along. It looks prone to breakage (remember Apple’s butterfly keyboard?) and, because the folding screen can’t have a glass cover, likely to scratch easily.

Personally, I think a seven-inch tablet is exactly the right size, but by and large, the market doesn’t agree with me. Fans of eight to ten inch tablets are going to find the Fold’s tablet mode cramped, especially if they try to multitask. Samsung is saying you can display three apps at once, but how large are they going to be when they’ve divvied up those seven inches? I can’t be the only person who’s worried that text will be either too small to read or too large to fit well on a phone-optimized UI.

More important, however, is the price tag. At a whisker short of $2000, there aren’t a whole lot of people who’ll pick one up on impulse. And, as the iPhone X has shown, even Apple is having trouble convincing the general public to shell out four figures for a phone, no matter how large its screen may be.

When you can pick up a good phone and decent tablet for half the price of the Fold, two grand is going to be a hard sell. That folding screen has to deliver some solid value as a display or it’s going to come off as a gimmick.

Don’t get me wrong. I love the idea of a folding display. A tablet I could legitimately fold up and tuck in a pocket sounds like a winning idea.

I just don’t think the Galaxy Fold is the right implementation. Even if I had $2000 to spend on a phone or tablet right now (I don’t), I’d sit back and see what other phone makers come up with. And I suspect a big chunk of Samsung’s potential market will too.

Follow the Leader

Can we talk about self-driving cars again? Oh, good. Thanks.

It occurred to me the other day that the public press (as opposed to the technical press) isn’t paying much attention to one particular aspect of autonomous vehicles: interoperation.

Every article I’ve seen that touches on the subject makes mention of emerging standards and the need for inter-vehicle communication, but they all seem to assume that having standards is the solution to all the potential problems.

Believe me, it ain’t. For one thing, there’s the ever-popular catchphrase “the great thing about standards is that there are so many of them”. Just because a car implements a particular standard, that doesn’t mean it implements every standard. And which version of the standard? They do evolve as the technology changes. Will a car that’s compliant with version 1.2 of the car standard for signaling a left turn recognize the intention of the oncoming truck that’s still using version 1.1 of the truck standard?
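To make the version-skew problem concrete, here’s a toy sketch. Everything in it is invented for illustration–the message type, the version fields, the acceptance rule–and it models no real vehicle-to-vehicle standard. The point is just how easily a strict version check turns two perfectly functional vehicles into strangers:

```python
# Toy illustration of the version-skew problem: a hypothetical turn-signal
# message, tagged with the (major, minor) version of the "standard" the
# sender implements. None of this models any real V2V specification.

from dataclasses import dataclass

@dataclass
class TurnSignalMessage:
    spec_version: tuple  # (major, minor) version the sender implements
    intent: str          # e.g. "left_turn"

def can_interpret(receiver_version: tuple, msg: TurnSignalMessage) -> bool:
    """A conservative receiver: accept only messages from its own major
    version, and from minor versions no newer than its own."""
    rx_major, rx_minor = receiver_version
    tx_major, tx_minor = msg.spec_version
    # Different major version: field layouts may have changed entirely.
    if tx_major != rx_major:
        return False
    # A newer minor version may carry fields this receiver never learned.
    return tx_minor <= rx_minor

# A car on v1.2 hears from a truck still on v1.1: understood.
print(can_interpret((1, 2), TurnSignalMessage((1, 1), "left_turn")))  # True
# The v1.1 truck hears the car's v1.2 message: rejected.
print(can_interpret((1, 1), TurnSignalMessage((1, 2), "left_turn")))  # False
```

Note that the conservative policy is the *safe* one–guessing at an unknown message format while driving would be worse–which is exactly why version fragmentation bites: the correct behavior for each car individually produces silence between them.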

Lest you think I’m exaggerating the problem, consider the rules (not a standard, but similar in intent and function) for the noise-making apparatus in electric vehicles. (I talked about it several years ago.) That one document runs to 370 pages. Do you really think there are no errors that will require updates? Or a significant amendment to cover cars made in other countries? Or a missing subsection for retrofitting the technology to older electric cars released before the rules were finalized?

And, speaking of those 370 pages, that brings us to the second problem. Even assuming the best will in the world, no spec is ever totally unambiguous. Consider web browsers. Remember back around the turn of the century, when we had Internet Explorer, Netscape, and AOL’s customized versions of IE? All theoretically compliant with web standards, yet all delivering different user experiences–rendering pages slightly–or extremely–differently.

Nor do standards do anything to prevent developers from introducing non-standard extensions. Do we really want some latter-day Netscape wannabe coming up with an automotive blink tag while their competitors over at Microsoft-like Motors are pushing their equivalent of the scrolling marquee tag?

But I digress slightly.

What started this train of thought was wondering how autonomous vehicle developers are going to handle weird, one-off situations. We know some of them are working up plans for turning control over to remote drivers (like OnStar on steroids). But how well is that going to work at 60 MPH?

Case in point: The Richmond-San Rafael Bridge has a part-time lane. For most of the day, it’s actually the shoulder on the eastbound part of the bridge. But during the afternoon rush hour, it becomes a traffic lane. There are lights to signal when it’s open to traffic–and the open hours are scheduled–but it can be taken out of service when necessary. That means developers can’t count on programming open times. Cars may or may not be able to read the signal lights. Maybe there’s a standards-compliant (for some standard or other) radio signal as well.

But the critical point here is that the lane markings are, well, weird. There’s a diagonal stripe that cuts across the lane; when the lane is open, drivers are expected to ignore the line, but at other times, they’re supposed to follow it in merging into the next lane over.

How is the car supposed to know when to follow the line? (Come to think of it, how do current lane assist technologies handle that stretch of road?) How are the programmers prioritizing lane markings versus other signals?

Maybe, I thought, in ambiguous situations, the rule could be “follow the car in front of you”. That could work. Sooner or later, the chain of cars deferring to the next one forward will reach a human-driven car which can resolve the conflict. Hopefully that driver is experienced enough to get it right and neither drunk nor distracted by their cell phone.

But how are the cars going to know if the car in front of them is trustworthy–i.e. is following the same “follow the car in front of me” rule? Is your Toyota going to trust that Ford in front of it? Or will it only follow other Japanese-manufactured vehicles? Maybe the standard can include an “I’m following the car in front of me” signal. But what if the signal changes in version 2.2a of the specification?
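The trust question above can be sketched the same way. Again, every name here is hypothetical–there is no real “following_mode” signal–but it shows how a renamed or re-versioned field in some future spec revision would silently break the chain of deference:

```python
# Toy model of the "follow the car in front of me" trust check discussed
# above. The "following_mode" signal and the version strings are invented
# for illustration; no real standard defines them.

def should_follow(leader_signals: dict, my_spec: str = "2.1") -> bool:
    """Defer to the car ahead only if it advertises the follow-the-leader
    convention in a spec version we recognize."""
    signal = leader_signals.get("following_mode")
    if signal is None:
        # No signal at all: a human driver, or a car on an incompatible
        # standard. Don't blindly defer.
        return False
    # If a later revision restructured the signal, this strict equality
    # check fails and the chain of deference breaks.
    return signal.get("spec") == my_spec

print(should_follow({"following_mode": {"spec": "2.1"}}))   # True
print(should_follow({"following_mode": {"spec": "2.2a"}}))  # False
print(should_follow({}))                                    # False
```

The middle case is the worrying one: both cars implement the convention, both are behaving correctly by their own spec, and they still fail to cooperate.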

There’s a classic short story* in which cars and trucks have evolved from trains. Each manufacturer’s vehicles require a different shape of track and a different width between tracks. Some are nearly compatible, able to use a competitor’s tracks under certain special circumstances. As you might imagine, the roads are a mess, with multiple tracks on every street, except where a city has signed an exclusive deal with one manufacturer.

* Whose title and author currently escape me, darn it. If you recognize it, please let me know in the comments.

The story is an allegory of the early personal computer industry with its plethora of competing standards and almost-compatible hardware, but I can’t help wondering if we’re about to see it play out in real life on our roads.

A Modern Headache

Need a break? Too much going on in your life, and you just need to veg out for a while? Kick back, turn on the TV. You pick the channel, it doesn’t matter.

Because your relaxation will be interrupted. Probably by a telemarketer–but that’s a subject for a different post. No, I’m talking about the commercials. Specifically, the drug commercials.

Annoying as all get-out, aren’t they? Most likely you don’t have the condition the drug they’re touting is intended to cure. Even if you do, the list of side effects would make any rational person flee in terror.

I’m especially confused by the ads that say “Don’t take this if you’re allergic to it.” How are you supposed to know you’re allergic to it unless you’re already taking it?

But I digress.

What really puzzles me about the whole phenomenon is how many people think this is new.

It’s not. Consider Allan Sherman’s classic paean to one class of medical ads from 1963:

Sounds familiar, doesn’t it? Disturbing scenes of body parts you’d rather not see. Appeals to bypass authority. Untested claims of efficacy.

Replace “Bayer Aspirin” with “Otezla” and the only way the audience could tell the difference between the 1960s commercial and the 2019 commercial would be that the older one is in black and white*.

* Anyone else remember seeing “The following program is brought to you in glorious, living color” on a black and white TV set?

Bottom line, this kind of ad has built up more than fifty years of inertia. That means they must work, or the advertisers would have tried something different. And that means they’re not going away, no matter how many people scream for legislation.

Let’s face it: Allan had it right. The only way to ensure you’ll never be bothered by a drug ad again is to eat your TV.