AI AI AI/O 2023

As promised, a few thoughts about Google’s I/O announcements. But first, I want to offer congratulations to the Kraken.

Making the playoffs in their second season. Getting through the first round, pushing the second round to seven games–and coming within inches of forcing overtime in that seventh game. Nice job, gang, nice job. Not the outcome we all wanted, of course, but on the up-side, it gives them something to build on next year. Thank you for the excitement.

Now, Google. AI is, of course, the flavor of the month, and Google has been bingeing on it. As many commentators have pointed out, “AI” appeared in every product announcement–nearly every sentence. Oddly, as someone pointed out (my apologies to them for forgetting who it was), the only product conspicuously missing is the one that would seem the natural spot for an AI touch-up: Google Assistant. I can’t imagine GA is going to vanish, but the lack of mention at I/O does make one wonder if its days as a separate product are limited.

Anyway, the first notable announcements were “Help Me Write” in Gmail (and later, in Google Docs) and the “Magic Editor” in the Camera app (and possibly as a standalone, presumably web-based, application).

Last year’s “Magic Eraser” worked well, within limits, so adding additional tools to help with photo editing seems the logical next step. Once you’ve selected an object, why limit yourself to deleting it? Move it around, change colors (the enhanced version of the “camouflage” function we already had), resize it–all logical. Sure, you’re rewriting history, but your memory does that anyway.

Similarly, given that Gmail already has suggested responses and autocorrect/as-you-write predictions, “Help Me Write” isn’t exactly a major cognitive stretch. Feed your AI a few words to suggest where you want to go, and watch it throw something together for you. How long before it starts arguing with you when you make changes to its “suggestions”? Think I’m kidding? Have you ever had your GPS get ticked off at you when you don’t follow its preferred route?

Those last couple of paragraphs sound pretty negative. In all seriousness, I think both tools could be useful, used correctly. But how many people are going to use them to improve what they create–and how many are going to hand the controls over to them entirely? (Case in point: if your phone is set to use one of the features to automatically pick the “best” picture–HDR, “Top Shot”, and so on–how often do you overrule it, or even look at the alternatives it rejected?)

Then there’s that “AI Prompts” feature for Google Docs. I can sort of see the utility of something that sees you’re stuck and pops up with a helpful suggestion or two. But it seems like that’s going to be much too easily abused. First it suggests something to get you unstuck, then it offers to write something to match the suggestion, and the next thing you know, it’s written your whole term paper/research article/novel. And, frankly, how is it going to know you’re stuck? Half the time when I’m not typing, it’s because I’m staring at the ceiling trying to find just the right word to come next; the other half, I’ve gone to the bathroom, or down to the kitchen for some tea, or I’m otherwise away from the computer. Either way, having that suggestion pop up isn’t likely to help much. Hopefully the feature can be turned off.

Naturally, AI is going to fuel Google’s traditional core business: search. It will, we’re told, allow for more complex searches that currently would require multiple searches and a manual combination of the results. The example we got was asking which vacation destination would be better for a family with kids and a dog; currently you would need to ask about the destinations independently, and figure out their individual kid- and dog-friendliness. It should also allow for chaining searches together implicitly. After you finish asking about the vacation destinations, if you then search for flights, Google might prefill the destination search based on the vacation query, and maybe even limit the search to airlines that allow pets and trim out the flights that require transfers. All automAtIcally.

That’s another one that sounds nice but raises concerns. How clear will it be what the source of the search results is? Will state or local tourist boards try to bias results to favor their regions as vacation destinations? (Yes, that’s a rhetorical question.) How long will Google retain search information? Some questions have a much longer shelf life than others. Will I have to tell the AI I don’t care about the trip I took last summer, now that it’s Christmas time? Or remind it about the search I did last year on alternative energy sources so it knows to prioritize anything new?

Then, of course, there’s that whole business about AI-generated art. Google says anything created by their AI will have metadata that reveals that fact. That’s nice, but metadata is easy to remove or alter. Heck, I do it on almost every picture I post to the blog: I strip out the GPS coordinates, the camera details, and pretty much everything else, and I add a copyright statement. Takes all of five seconds with a command line tool. If I can do that much, imagine what someone who knows what they’re doing could accomplish!
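(For the curious: here’s roughly what that five-second strip does under the hood. This little Python sketch is my own illustration, not the actual tool I use–it walks a JPEG’s segments and drops the APP1 block, which is where Exif data like GPS coordinates and camera details lives.)

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with APP1 (Exif) segments removed."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")  # keep the Start Of Image marker
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            out += jpeg_bytes[i:]  # unexpected data; copy the rest verbatim
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS marker: compressed image data follows, copy it all
            out += jpeg_bytes[i:]
            break
        if marker == 0xD9:  # EOI marker: end of image
            out += b"\xff\xd9"
            break
        # Other segments here carry a two-byte big-endian length that counts itself
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker != 0xE1:  # keep everything except APP1, where Exif lives
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

A real tool also handles XMP, IPTC, thumbnails, and the odd malformed file–which is exactly why I let the command line do it. But the point stands: this is not heavy lifting.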

They’re also planning an “About This Image” feature that will, among other things, tell you where else a picture has appeared. That’s nice. But I have to say, I’ve never been very impressed with Google’s reverse image search–TinEye works much better, in my opinion. And if the feature relies on metadata to spot AI-generated images, well, see the previous paragraph.

Other items: Why does everyone assume I’m happy to let them use my Bluetooth, battery, and cellular data to help find other people’s lost keys? Just because Apple is doing it doesn’t mean Google has to. Yet, here we are.

Emoji wallpapers? Who the hell asked for this?

The cinematic wallpapers are less annoying, but does it really improve your life to have the picture on your desktop/home page appear in simulated 3D?

Then, of course, there’s the hardware. There weren’t any unleaked surprises, but let’s touch on the high and low points.

The Pixel 7a is, as best I can tell, essentially a Pixel 6 with a few upgrades–upgrades that don’t include the camera, which for many people is a major selling point. Why would you get a 7a for roughly the same price as a 6?

The Pixel Tablet. Did we really need another device with an 11 inch screen? That’s my major complaint about iPads: the screen looks nice, but it sucks when it comes to portability. It won’t fit in a pocket, even a large jacket pocket. For that matter, tablets in the 10+ inch range weigh too darn much for kicking back in bed and vegging out. Google’s trying for the value-add by including the base with its (presumably) decent speakers, charger, and the tablet’s Hub mode. So now you have a device that can sit on a table and act as a TV (but much smaller), photo frame, and smart controller for your home automation gadgets. How much of an improvement over the current Google Home experience does that really amount to?

The only device I actually liked the sound of is the Pixel Fold. It’s a phone when you need one (like, say, to make a call, or just shove it in a pocket), but it unfolds into a 7.6 inch tablet. As I said after Google I/O 2016, “I strongly feel that seven inches is exactly the right size for a light entertainment device–something that fits into the space between a phone you can hold to your ear and a TV you watch from across the room. I’m deeply disappointed to learn that Google apparently doesn’t see that as a viable niche.” I’m delighted that it only took Google seven years to come to the right conclusion.

They hyped the “use it as its own tripod” feature, which amuses me highly, considering that Samsung got there first. But regardless of who invented it, it’s a useful tool, especially since it lets you use the high quality “rear” camera for selfies.

The only downside I see is the price. $1800? Ouch. For that price, you could superglue three Pixel 7a phones together with hinges from your local hardware store, and have a truly humongous folding phone. Still, it’s the same price as Samsung’s latest foldable phone–and we’re already seeing discounts. Google’s own Fi store is offering $700 off (over two years), which puts it in the same ballpark as an iPhone 14 or Samsung S23. Or if you trade in the right phone, they’ll knock off a full thousand bucks over the same two years.

Google I/O 2022

And here we go.

Historically, Google I/O has been light on hardware announcements–fair enough, given that it’s really intended as a forum to alert developers to what’s coming. This year, though, Google did twin keynotes, one for developers, and one for the hardware enthusiasts. Unless you’re writing code for Android devices, you’re probably not interested in the former at all, and if you are interested, you don’t need me to explain what’s what. So let’s just take a look at the keynote for the rest of us.

Twenty-four new languages in Google Translate. Handy, especially if you’re planning a trip or doing business internationally. And if you’re not, the new languages won’t get in your way; language packs for Translate are optional add-ons.

I have my doubts about the forthcoming “immersive view” for Maps. Putting you at the center of a 3D reproduction of the city you’re navigating–complete with machine-generated interior views of restaurants–sounds both entertaining and fascinating. I can only hope, however, that it only works when you’re on foot. Way too distracting when driving. And Google hasn’t yet figured out how to tell if you’re a passenger or driver without relying on self-reporting.

A bunch of enhancements to YouTube, Google Meet, and Search around video quality and pictures. All cool stuff, and especially the part about incorporating skin tone data to improve video quality and searching while avoiding future iterations of the infamous “gorilla” effect.

Over on the Android side of things, we will, of course, be getting Android 13. The most interesting thing I see coming there is the ability to set different languages for different apps. Mind you, I didn’t say “useful”, though I’m sure many people will find it so. But from the standpoint of being able to customize your device to work better with the way you think, it’s a fascinating tweak. I’m sure Google will be collecting data on how it’s used–what apps most often get set to a different language than the device’s default, for example. I rather hope some of that information gets shared.

Moving on to hardware.

The Pixel 6a, a budget version of the Pixel 6, will be out around the end of July. $449 gets you essentially the same camera hardware as the 6, the same 5G capabilities, and, of course, the same computational photography abilities–including that improved skin tone management as it rolls out across the entire line of Google devices and software.

Improved earbuds, inevitably tagged “Pixel Buds Pro” will also be out at the end of July. Active noise cancellation, of course, and the now-mandatory “transparency mode” to let some outside sound in so you don’t get run over crossing the street. If you remember to turn it on…

Looking a bit further out, probably just in time for the Christmas season, we’ll apparently be getting the Pixel 7. Not much in the way of details on that; I imagine we won’t hear anything much officially until after the Pixel 6a launch. Don’t want to cannibalize the market, after all.

And around the same time, we’ll be getting a Pixel Watch. As rumored. Sounds like Google will be folding much of the Fitbit’s functionality into the watch. That’s a no-brainer if it’s going to compete with the Apple Watch. And, no surprise, tap-to-pay functionality and the ability to control smart devices around the house. Reminder: tap gently. Smart watches are more fragile than the manufacturers would have you believe, and they’re expensive to repair.

Even further in the future–sometime in 2023–Google is going to release a new tablet. Interestingly, they’re not positioning it as a standalone device, but rather as a “companion” to a Pixel phone. Whether that means it’ll primarily act as a large replacement for the phone’s screen or if it will “intelligently” display contextual information to enhance whatever you’re doing on the phone remains to be seen. The former strikes me as rather a niche market; the latter could be very handy. Imagine pulling up a map search for restaurants on the phone and having the tablet immediately start showing menus, reviews, and online ordering, while the phone stays focused on where the places are and how to get there.

Bottom line: Google is innovating. Not in big, “revolutionary” ways, but in little ways. It would be a bit unfair to call what’s coming “evolutionary”, but it’s certainly closer to evolution than revolution. Recent years have seen a lot of “catching up with Apple”. This year seems to be declaring that a done deal and trying some different things to see what sticks.

Google I/O 2021

Sorry about the late post. I’ve been trying to get a handle on all of the critical news coming out of yesterday’s Google I/O keynote. Unlike years past, nobody’s done a live-blog of the presentation–not unreasonable, considering that it’s online for anyone to watch, but annoying for those of us who don’t want to be limited to realtime speed. (I don’t know about you, but I read a lot faster than I watch, and I don’t have much patience for videos that are, in effect, advertising.)

Anyway.

Google wants to be helpful. Well, Google wants to make lots of money, but if they can do it by being helpful, why not? So they’re introducing features like Google Maps routings that are optimized for fuel efficiency or weather-and-traffic safety. Seems like a useful initiative. I don’t have any more qualms about it than I do over any use Google makes of the information they’re gathering about us.

Google Workspace–the business-oriented set of tools that includes Docs, GMail, Meet, and so on–is getting something called “Smart Canvas”. It’s supposed to allow for better collaboration. For example, having a Meet video conference while collaboratively editing a Sheets document. Again, useful, at least for that subset of the world that needs it. And not hugely more intrusive than any of the individual tools alone.

Here’s one that really looks good: Google is adding an automatic password change feature to Chrome. This builds on the existing feature that alerts you if your password has been exposed in a data breach. Now it’ll have an option to take you to the site and walk through the password change process for you. I do wonder if you’ll be able to use it to change your Google password periodically; that’s something you should do anyway, and especially if you’re using the password manager built into Chrome. Speaking of which, that Chrome password manager will soon be able to import passwords from other password managers. Handy if you’re using Chrome everywhere, but those of us who use other browsers occasionally will probably want to stick with a third-party manager.

Some privacy additions here and there. A locked folder for Google Photos is a natural; we should have had that years ago.

Then there are the counter-privacy features. The keynote highlighted updates to Google Lens that will let you take a picture of somebody and find out where their shoes are sold. Because that’s not creepy at all.

What else?

New look and feel in Android 12–which will probably be released in the Fall–and some iOS-catchup features. Showing on-screen indicators when an app is using the camera or microphone is worthwhile, but they’re also introducing one of my least-favorite iOS features: using your phone as your car key. That’s coming first to BMW, which somehow doesn’t surprise me a bit. I suspect Lexus won’t be far behind. But I digress.

One useful update (for a small subset of users) is the ability to use your phone as a TV remote control. Mind you, it’ll only work with devices running Android TV OS. That does include the Chromecast with Google TV and NVidia Shield devices, so there’s a decent pool of users, but it’s unlikely to replace all your current remotes. There’s an opportunity here for somebody to step up and fill the universal remote void that Logitech’s decision to stop making Harmony remotes is leaving.

And that’s pretty much it. No hardware announcements. We’ll probably get those sometime in late summer, when we get close to the Android 12 release.

Google I/O 2019

Welcome to my annual Google I/O Keynote snarkfest.

In years past, I’ve used Ars Technica’s live blog as my info source, but this year it appears they’re not at Google I/O. So all the snark that’s fit to print comes to you courtesy of Gizmodo’s reporting.

My apologies, by the way, for the later-than-usual post. Blame it on Rufus. No, not really. Blame it on Google for scheduling the I/O keynote speech at 10:00. But I did have to duck out to take Rufus to the vet for a checkup. He’s fine. The keynote is over. I’m caught up. Enjoy your post.

First up, Google is bringing augmented reality to search on phones. The demo involves getting 3D models in your search results. You can rotate them to see all sides and you can place them in the real world with an assist from your phone’s camera. Why do I suspect the porn industry is going to be all over this technology?

Seriously, though, it’s part of an expansion of the Google Lens technology we’ve been seeing for the past few years, integrating it into search. Other enhancements to Lens include the ability to highlight popular items on a menu and to display videos of recipes being made when you point the camera at a printed recipe.

Does anyone really want these features? If I’m at a restaurant, I’m going to pick the dish that sounds the tastiest, not the one the most people have ordered. My tastes aren’t necessarily yours, after all, and sometimes it’s the odd little dishes tucked away in the corner of the menu that are the most interesting. As for the cooking videos, I try to keep my phone in the case in the kitchen. I’d rather not wind up preparing pixel ‘n’ cheese or nexus stew. Silly of me, I know.

Anyway.

Remember last year’s big feature? Duplex, in case your memory is as short as mine. That’s the feature that let your phone make reservations on your behalf. Did anyone use it? Maybe a few people will try this year’s iteration, which can make car reservations and buy movie tickets. I can’t say I’m thrilled at the possibilities this opens up.

Assistant, the voice behind “Hey, Google,” gets an update this year, as well. It’ll be able to figure out what you mean by personal references. Want directions to your mother’s house? Just ask. Because it’s good to know that, when you can’t remember where your relatives live, Google can.

Slightly more useful is a new driving mode, intended to reduce distractions. Speaking as someone who nearly got rear-ended yesterday by someone looking at the phone in her lap, I think the only legitimate “driving mode” would be one that turns the damn phone off as soon as you start the engine. Not that anyone is going to implement that.

Moving on.

Google is very, very sorry for whatever biases their machine learning technology has revealed. They’re working very, very hard to reduce bias.

Let’s be honest here. The problem isn’t the machine learning tools. It’s the humans who select the data that the machines learn from. Fix the developers’ biases and the machines fix themselves.

Onward.

More privacy features. Which seem to boil down to giving people more ability to delete whatever Google knows about them, but precious little to prevent them from learning it in the first place.

Oh, wait, one exception: there’s going to be an incognito mode for Maps, so you can get directions to the doctor’s office without Google being easily able to tie the request to your earlier searches. They’ll still know someone searched for the office and there are a number of ways they could tie it to you, but at least they’ll have to work for the data.

I’m a big fan of incognito mode in the browser, and I hope they roll it out everywhere sooner rather than later–and that’s no snark.

Furthermore.

Generating captions for videos on the fly seems like an interesting, if somewhat niche application. Applying the same technology to phone calls, though… If Google can pull that one off, it’d be a big win for anyone who’s ever tried to take a call in a noisy environment or even just sworn at the lousy speaker in their phone. Yes, and for those whose hearing isn’t the aural equivalent of 20/20 vision.

Looks like there’s a related effort to teach their voice recognition software to understand people with conditions that affect their speech. The basic idea there is good–but Google needs to beware of inappropriate extensions of the technology.

Correctly interpreting the speech of someone who’s had, say, a stroke, is a good thing. Suggesting that someone see a doctor because there are stroke-like elements in their speech is moving into dangerous waters, ethically speaking.

On to Android Q.

Support for folding devices, of course. That was inevitable. Moving apps from one screen to another, either literally or figuratively (when the device is folded and the screen dimensions change, for example).

Improved on-device machine learning, which will let phones do voice recognition themselves without help from Google’s servers. That’s a win for privacy and data usage.

Dark mode. Personally, I dislike dark mode; I find white text on a black background hard to read. But I know others feel differently. So enjoy, those of you who like that kind of thing.

More privacy features, including new controls over which apps have access to location data and when they have it.

OS security updates without a reboot? Would that Windows could do that. It’s a small time-saver, but worthwhile.

Focus Mode–which will also be retrofitted to Android Pie–maybe somewhat less useful: you can select apps to be turned off in bulk when you turn on Focus Mode. If the goal is to get you off your phone, this seems like a fairly useless diversion, because who’s going to put their important apps on the list? It does tie in with expanded parental controls, though, so there’s that.

Moving on.

Like your Nest thermostat? That’s cool. (sorry) Now all of Google’s smart home gear will be sold under the Nest name. I guess they figured with the demise of “Nexus,” there was an opportunity for an “N” name to distinguish itself.

So, no more “Google Home Hub”. Now it’s “Nest Hub”. Expect similar rebranding elsewhere. It looks, for instance, like Chromecast (remember Chromecast?) will be moving to Nest. NestCast? Or something stupid like “Google Chromecast from Nest”?

And, speaking of Pixel–we were, a few paragraphs back–we’re getting cheaper Pixel phones, as expected.

The 3a and 3a XL, starting at a mere $399, and coming in three colors. (Yes, we see what you did there, Google.) The usual black and white, naturally, but also something Google is calling purple. Looking at the photos, I’d say it’s faintly lavender, but maybe it’s the lighting.

Judging by the specs, it sounds like you’ll get roughly Pixel 2 levels of performance, except for the camera, which should be the same as the high end Pixel 3 models.

And, unlike Apple, who preannounce their phones*, the Pixel 3a devices are available online and in stores now.

* Remember signing up to get on the list to pre-order an iPhone? Fun times.

Moving on.

Bottom line: once again, we’re not seeing anything wildly new and different here. Granted, some of the incremental advances over the past year are large, but they’re all still evolutionary, not revolutionary.

And no, there weren’t any hints about what the Q in Android Q stands for.

Google I/O 2018

As promised, here’s my usual cynical rundown of all the exciting things Google announced in the I/O keynote. As usual, thanks to Ars for the live stream.

Looks like a great year ahead, doesn’t it? See you Thursday.


Okay, okay. I just had to get that out of my system.

First up, Sundar admitted to Google’s well-publicized failures with the cheeseburger and beer emojis. It’s great that they’ve been fixed and that Google has apologized publicly. But when are they going to apologize for their role in inflicting emojis on us in the first place?

Anyway.

Google has been testing their AI’s ability to diagnose and predict diabetic retinopathy and other health conditions. I’m hoping this is not being done via smartphone. Or, if it is, it’s fully disclosed and opt-in. I’m quite happy with my medical professional, thanks, and I really don’t want my phone to suddenly pop up a notification, “Hey, I think you should see an ophthalmologist ASAP. Want me to book you an appointment?”

I do like the keyboard that accepts morse code input. That’s a nice accessibility win that doesn’t have any glaring detrimental impact on people who don’t need it.

That said, I’m less enthusiastic about “Smart Compose”. I’m not going to turn over writing duties to any AI. Not even in email.

But I do have to wonder: would it improve the grammar and vocabulary of the typical Internet troll, or will it learn to predict the users’ preferences and over time start composing death threats with misspellings, incoherent grammar, and repetitive profanity? Remember what happened with Microsoft’s conversational AI.

And I’ve got mixed feelings about the AI-based features coming to Google Photos. I pointed out the privacy concerns about offering to share photos with the people in them when Google mentioned it last year. Now they’re going to offer the ability to colorize black and white photos. Didn’t Ted Turner get into trouble for doing something of the sort?

More to the point, how many smartphones have black and white cameras? Taking a B&W photo is a conscious decision these days. Why would you want Google to colorize it for you?

Fixing the brightness of a dark photo, though, I could totally get behind.

Moving on.

Google Assistant is getting six new voices, including John Legend’s. Anyone remember when adding new voices to your GPS was the Hot Thing?

More usefully, it’ll remain active for a few seconds after you ask a question so you don’t have to say “Hey, Google,” again. Which is great, as long as it doesn’t keep listening too long.

That said, it’ll help with continuing conversations, where you ask a series of questions or give a sequence of commands; for example, looking up flights, narrowing down the list, and booking tickets.

And, of course, they’re rolling out the obligatory “teach little kids manners by forcing them to say please” module. If it starts responding to “Thank you,” with “No problem,” I will make it my life mission to destroy Google and all its works.

Moving on.

Smart displays–basically, Google Home with a screen–will start coming out in July. I can see the utility in some areas, but I’m not going to be getting one. On the other hand, I haven’t gotten a screenless GH, nor have I enabled Google Assistant on my phone. I just don’t want anything with a network connection listening to me all the time. But if you’re okay with that, you probably ought to look into the smart displays. It will significantly add to the functionality of the home assistant technology.

Good grief! You thought I was joking about your phone offering to make a medical appointment for you? Google isn’t. They’re going to be rolling out experimental tech to do exactly that: your phone will call the doctor’s office and talk to the receptionist on your behalf.

Not just no. Not just hell no. Fuck no! No piece of AI is going to understand my personal constraints about acceptable days and times, the need to coordinate with Maggie’s schedule, and not blocking my best writing times.

Moving on.

Google is rolling out a “digital wellbeing initiative” to encourage users to get off the phone and spend time with human beings.

Just not, apparently, receptionists and customer service representatives.

It’s a worthy cause, but let’s face it: the people who would benefit most won’t use it, either because they don’t recognize the problem, or because being connected 24/7 is a condition of employment. I’m sure I’m not the first to point out that Google employees are likely to be among the most in need of the technology and the least likely to use it.

Moving on.

The new Google News app will use your evolving profile to show you news stories it predicts will interest you. No word on whether it’ll include any attempts to present multiple viewpoints on hot-button topics, or if it’ll just do its best to keep users in their familiar silos. Yes, they do say it’ll give coverage “from multiple sources” but how much is that worth if all the sources have the same political biases, based on your history of searches? Let’s not forget that Google’s current apps with similar functionality allow you to turn off any news source.

Moving on.

Android P (and, as usual, we won’t find out what the P dessert is until the OS is released) will learn your usage patterns so it can be more aggressive about shutting down apps you don’t use.

It’ll offer “App Actions” so you can go straight from the home screen to the function you want instead of launching the app and navigating through it.

Developers can export some of their content to appear in other apps, including your Google searches.

The AI and machine learning functionality will be accessible to developers. Aren’t you thrilled to know that Uber will be able to learn your preferences and proactively offer you a ride to the theater?

And, of course, the much-ballyhooed navigation designed for a single thumb. The “recent apps” button will go away and the “Back” button will only appear when Android thinks it’s needed. And some functionality will be accessible via swipes starting at the “Home” button. Because the “Back” button wasn’t confusing enough already.

I do like the sound of a “shush” mode that triggers when you put the phone face down. I’m using a third-party app to do that with my phone now. Very handy when you want to be able to check in periodically, but don’t want to be interrupted. Sure, you can set the phone to silent, but putting it face down is faster and you don’t have to remember to turn notifications back on.

On to Google Maps.

It’s going to start letting you know about hot and trending places near you and rate them according to how good a fit they are for you. I’ve got serious questions about how well that’s going to work, given the number of times Google’s guessed wrong about which business I’m visiting. If they start telling me about popular Chinese restaurants because there’s a Panda Express next door to the library, I’m gonna be really peeved.

Oh, and businesses will be able to promote themselves in your personalized recommendations. How delightful. Thanks, Google!

Okay, the new walking navigation sounds useful. Hopefully it will learn how quickly you walk so it can give reasonably accurate travel time estimates. Hopefully there’s also a way to get it to make accommodations for handicaps.

Of course, if you don’t want to walk, Google–well, Waymo–will be happy to drive you. Their self-driving program will launch in Phoenix sometime this year. Which seems like a good choice, since they’re unlikely to have to deal with snow this winter.

I guess people in Phoenix will be getting a real preview of Google’s future. Not only will their phones preemptively book their medical appointments, but they’ll also schedule a self-driving car to get them there. Will they also send someone along to help you put on the stylish white jacket with extra-long sleeves and ensure you get into the nice car?

Google I/O 2017

So, yeah, Google I/O again. Are you as thrilled as I am? You’re not? But they’ve announced such exciting things!

Well, OK, when you come right down to it, they really only announced one thing: Google’s focus is changing from “mobile first” to “AI first”. And let’s be honest here: that’s pretty much what they said last year, too.

But what does AI first look like?

For starters, Gmail will start doing “Smart Reply”. This is the same idea as in last year’s Allo text messaging app: pre-written, context-sensitive messages. I haven’t used Allo–anyone want to comment on whether the smart replies are any more accurate than the word suggestions when you’re typing?

Potentially more exciting is their application of image recognition technology. Their example is being able to take a picture of a flower and have your phone tell you what kind it is and whether it’s going to trigger your hay fever. Since I’m sitting here sniffling despite massive doses of anti-histamines, I have to admit that actually sounds like a good use of technology. Presumably over time, the tech will learn about non-botanical parts of the world.

Yes, I’m kidding. It can also recognize restaurants and show Yelp reviews. That’s nice, but not nearly as useful. Ooh, and it can translate signs. (Their demo showed Japanese-to-English translation. I want to know if it can handle Corporate-to-English.) If there are dates on the sign–for example, an ad for a concert–it can add the event to your calendar. It can even ask if you want it to buy tickets.

Basically, it’s playing catch-up with Alexa–including adding third-party programmable actions and voice calling–with a few little steps ahead of Amazon.

Case in point: Google Assistant, the brains behind “OK, Google”, is getting more smarts and the ability to hold a typed conversation. You’ll get a running record of your interaction, so when you realize you’ve been following one association after another, you can scroll back and check the answer to your original question. Could be handy, especially if you get stuck on TV Tropes.

Moving on.

AI first also means Google Photos is getting added smarts, starting with something Google calls “Suggested sharing”. Yup. It’ll nag you to share your photos with the people in them. 95% of the pictures I take seem to be of the cats. Is it going to create Google accounts for them so I can share the photos? Or do they already have accounts?

More seriously, if Google knows who the people are, but they’re not in my address book, will it still urge me to share the photos? Sounds like that’s an invasion of privacy just waiting to happen.

Moving on.

Android O (no name announced yet, naturally; they’ll undoubtedly wait until release time for that) is getting the usual slew of features and tweaks. Picture-in-picture, notifications on Home screen icons, improved copy/paste. That last will not only let you select an entire address with a single tap, but offer to show it in Maps. I’d rather it offered to add it to my contacts for future reference, but maybe that’s just me.

Google also made a point of stressing that all of these new “AI first” features happen on your device, without any communication back to Google. That’s actually reassuring. I’m sure the results are reported back–your phone will tell Google you were checking on the hay fever potential of that weird flower that appeared in your back yard, but at least the actual picture won’t wind up in Google’s archives waiting for a hacker to drop by.

There’s also going to be an Android O lite. Called Android Go, it’ll be stripped down to work on cheap phones with limited memory. I wonder if that means they’ll start offering it for popular but abandoned devices that can’t handle recent Android versions. Nexus 7, anyone? Nexus 9, for that matter?

Moving again.

Yes, the rumors are true: Google is working with third-parties to launch a VR headset that doesn’t need a separate phone. Hey, anyone remember how big 3D was a few years ago? How long before VR is as critical to the entertainment experience as 3D?

And one last move.

Ever used Google to find out what movies are playing nearby? Soon you’ll be able to use it to find out what jobs are available nearby. Searching by title, date, and commute time. Why do I think the popularity of that last filter is going to be very strongly geographically linked?

Honestly, I’m not seeing anything here that gives me a major “gosh-wow” feeling. Some interesting possibilities and appeals to niche markets, yes, but most of what they’ve announced are obvious extensions of last year’s announcements. We can give them points for consistency, I suppose.

Google I/O 2016

We’re in Google I/O week, so I suppose I should do my annual summation of the keynote and highlight what we can expect to see heading our way.

Google is very excited about “the Google Assistant”. It’s a collection of technologies–natural language processing, voice recognition, geographic awareness, and on and on–intended to provide context-aware help and advice.

From what I can see, a large part of it is the next stage in the evolution of “Google Now” and “Now on Tap”. Ask the assistant about movies, and it’ll give recommendations tailored to your local theaters, what you tell* it (or what it already knows!) about your family and your tastes, and let you buy tickets. All from within the search app.

* Yes, “tell” as in “speak aloud”. Voice recognition, you dig?

Nothing new and earthshaking, but definitely keeping the pressure on Apple and Amazon. Especially Amazon–there’s going to be a “Google Home” device later this year that’s built around the Google Assistant technology. Like Amazon’s Echo–but since it’s from Google, of course it’ll be zillions of times better.

Google Assistant will also be part of two new apps: “Allo” and “Duo”. Allo is the next generation of text messaging, replacing “Hangouts”. The GA will listen in on your exchange of messages, allowing it to pre-write replies for you (presumably going beyond simple “yes” and “no” answers) and letting you ask it for context-sensitive help. Their example of the latter is giving you restaurant recommendations based on your current location (or an area you’ve been discussing) and food preferences. Oh, and it’s got emoticons and variable font sizes. Yay.

Duo is video chat. Call screening, performs well when bandwidth is tight, switches between wi-fi and cellular as appropriate. What can you say about video chat? Oh, it’s cross-platform, Android and iOS. I doubt any Apple-only conversations will move off of FaceTime, but it ought to be nice for integrated families and businesses. (Maybe it doesn’t have GA. If not, look for that at next year’s I/O.)

Moving on.

Google can’t decide what to call Android N. They’re taking suggestions from the Internet. If you’ve got any ideas, go to https://android.com/n/. And no, they’re not offering any prizes. I’d suggest “Nutmeg,” but how would you turn that into a statue for the front lawn? There’s still the possibility of another corporate tie-in. “Nerds,” anybody?

We already know a lot about what’s new in N–new graphics APIs, split screen/multitasking, compiler improvements (and a partial return of the Just-in-Time compiler that was removed in Lollipop). The idea seems to be to provide faster installs by letting apps run with the JIT compiler at first, then compiling them in the background, presumably while you’re not using the device for anything else. The user messaging for background compilation failures will be interesting. “Why does it say I need to delete some pictures to install Duo? It’s already installed and working fine!”

Other changes: Encryption will be done at the file level instead of the disk level. Other than developers and the NSA, nobody will notice. Background OS updates: assuming your carrier actually approves an update, your phone will install it in the background, then make it live with a simple reboot. No more half-hour waits for the monthly security patches to install. Assuming you get the patches, of course.

Virtual reality. Yep, as expected, Google is joining the VR craze with support for it baked into Android–on capable devices, naturally. Even some current Nexus phones fall short–Nexus 5X, I’m looking at you.

Android Wear 2.0. Hey, your watch can do more stuff without talking to your phone. Sigh.

Instant Apps. It’s not strictly correct in a technical sense, but think of a bundle of web pages packaged as an app that runs on your device without installation. Seems useful, especially if you’ve got limited bandwidth, but unless you’re a developer, you probably won’t even notice when you transition from the Web to an Instant App.

So, some interesting stuff, and–as usual–a lot of “meh”.

The Decline of Civilization–And Google I/O, too.

Today is the first day of Google I/O, the Big G’s annual excuse to shut down a couple of blocks around San Francisco’s Moscone Center. As always, I’ll be giving you my first reactions to their plans for the coming year–at least those plans that they warn us about.

While we’re waiting for the keynote address, though, I wanted to vent about a couple of signs of the encroaching End of Civilization As We Know It. If you’re not in the mood for my curmudgeonly rantings, feel free to skip ahead.

Still here? Good.

According to today’s SF Chronicle, Ross Dress for Less has settled a lawsuit brought by 2,400 of its janitors. The suit alleged that Ross and its janitorial contractor, USM Inc., failed to pay the janitors minimum wage and overtime between 2009 and earlier this year.

The settlement? $1 million. That’s right. Each of the janitors will receive a smidgen over $400 to compensate them for as much as five years of missing wages. Rubbing salt in the wound, Ross is also paying $1.3 million to the lawyers who negotiated the settlement.

Two questions: Are Ross and USM facing a criminal investigation into whether they did in fact conspire to cheat their janitors? (The newspaper article doesn’t say anything one way or the other; my guess is no.) And, has anyone checked with the janitors at the lawyers’ offices to see if they’re getting minimum wage and overtime? (Again, my guess is no.)

Moving on.

As I’ve said before, I don’t much care for basketball. Living in the Bay Area, though, it’s hard to avoid getting caught up in the current excitement over the Warriors*. So I watched about fifteen minutes of last night’s game while I was exercising. Mind you, that was about five minutes of actual game time.

* For those of you who don’t have the excuse of headlines screaming “40 YEARS IN THE MAKING” to clue you in, the Warriors are the local professional basketball team. They just made it to the finals, the NBA’s equivalent of the World Series, for the first time since James Naismith crossed the Delaware and brought a burning bush to the basketball-impoverished masses. Or something like that.

The game has changed a lot since I watched it in my misspent youth. Back then, when a team put up a shot, most of the players from both teams converged on the basket to go after a rebound. Today, the offensive team mostly heads for their own basket to play defense, conceding the rebound.

And that’s the other thing that’s changed. Back in my day (Damn kids!), after scoring, the smart teams put pressure on their opponents, making it difficult for them to move the ball into shooting range. Today, they just foul the worst free-throw shooter on the court.

According to the commentators, this is the height of strategy. And why not? It’s the same kind of thinking that figures it’s cheaper to settle a lawsuit than to pay the legally-mandated minimum wage.

Sorry, I don’t buy it. If the rules of the game are structured so that you’re better off breaking the rules than actually playing the game, then your sport needs to be fixed.

It’s an easy fix, too. All you have to do is award five points for a successful free throw. When it’s more expensive to commit a foul than to play the game, teams will stop committing strategic fouls.
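The back-of-the-envelope arithmetic is easy to sketch. The shooting percentage and shots-per-foul below are made-up illustrative numbers, not league statistics:

```python
# Expected points handed to the opponent by an intentional foul,
# under current rules vs. the five-point proposal.
# Assumptions (hypothetical): a 50% free-throw shooter, two shots per foul.
ft_pct = 0.50   # assumed free-throw percentage of the player being fouled
shots = 2       # free throws awarded per foul

expected_today = ft_pct * shots * 1     # current rules: 1 point per make
expected_proposed = ft_pct * shots * 5  # proposal: 5 points per make

print(expected_today, expected_proposed)  # → 1.0 5.0
```

With a typical possession worth roughly a point, giving up about one expected point at the line can be a bargain; at five points per make, it never is.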

Until that happens, though, I won’t be watching any more basketball.

Enough. On to Google I/O.

  • Android M – Lots of bug fixes. Oh, and a few improvements. A couple of them are even interesting.

    Apps will now request permission when they try to do stuff instead of getting blanket permissions when you install them. That means you can block some actions but allow others. Don’t want to let that new game have access to your address book? On the whole, that’s a win for users, but it’ll be interesting to see how developers handle the brave new world where users can block apps’ access to the ad network.

    Apps can “claim” web pages, so if you try to go to a particular website, you’ll get the equivalent app instead. From a user perspective, I think this one’s a step backward. I have the WordPress app installed on my tablet and use it occasionally for managing this blog. That doesn’t mean I want the app to open every time I try to access a WordPress blog–or even my blog.
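This sounds like the Android M “App Links” feature. As a rough sketch of how an app claims a set of web pages–the host here is made up, and the details are from my reading of the preview documentation, not from testing–the app declares an auto-verified intent filter in its AndroidManifest.xml:

```xml
<!-- Hypothetical sketch: an app claiming https URLs for a blog it "owns".
     With autoVerify, Android checks the site actually vouches for the app;
     verified links then open in the app instead of the browser. -->
<intent-filter android:autoVerify="true">
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="https" android:host="example.wordpress.com" />
</intent-filter>
```

The verification step is the one concession to users: an app can only auto-claim sites that explicitly vouch for it, which at least keeps random apps from hijacking every WordPress blog.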

    Android Pay is getting a facelift. You won’t need to open the app anymore. Whoopie. I hope they’re also improving the reliability. I got so many failures to connect with the terminal that I’ve given up on Android Pay.

    Doze sounds promising: if the tablet doesn’t move for an extended period, it’ll go into a power-saving deep sleep mode. If users can control the timeout, it’ll be a big win. And an even bigger one if we can control what happens when it wakes up and all the suspended apps try to grab updates at once…

    Interestingly, the preview of Android M is only available for the Nexus 5, 6, 9, and Player. No Nexus 7. Apparently that “might” come later. Combined with the outrageous delay in bringing Android 5.1 to the Nexus 9, it does suggest that Google’s Android team may be a bit overextended, and that the Nexus 7 is going to be completely unsupported soon.

    I haven’t seen any hints of what the dessert name for M will be. I’d love it to be Marshmallow, if only because I want to see the statue they put on the Google lawn. I suspect we’ll get some hints once people start poking at the developer preview.

  • Brillo & Weave – A slimmed down Android for connected devices and a protocol to tie them together. We’ve talked about the security risks in “Internet of Things” devices before. I’m not sure I really want Google making it easier to create app-enabled locks.
  • Machine Learning/Context Sensitivity – They made a big deal out of this across all their products. Searches that understand pronouns and references to the data you’re looking at. Enhancements to Google Now to be more aware of where you are and what you’re doing–they’re calling it “Now on Tap”. (The example was recognizing that you’ve just landed at the airport and offering a Google Now card to “order an uber”. Given Uber’s recent bad press–quite the antithesis of Google’s “Don’t Be Evil” mantra–is that really a company Google wants users to associate them with?)

    The new Google Photos sounds potentially useful, though. Every picture you store will be automatically tagged so you can search for things like “Photos of my nephew at Folklife last year.” If the recognition works well, the advantages are obvious. If it doesn’t work well, then we’ve got a repeat of Flickr’s recent image tagging fiasco. The fast, simple sharing functions sound good too. As always, the gotchas are in the implementation details (security, security, security!).

  • I’m going to skip most of the rest of the goodies. Many of them are around ease of use. Good to know, but not all that interesting in detail. I did find one announcement interesting, though: the enhancements to the developers’ tools will include the “Cloud Test Lab”. Google will perform some level of automated tests on your app across multiple devices with different hardware and software configurations. This kind of testing is, IMNSHO, not hugely useful for large, complicated apps, and there are definitely potential security concerns when the app needs to connect back to your corporate network for test data. But it can be useful. If any of my former cow-orkers use the Cloud Test Lab, I’d be interested in hearing how you like it.
  • Of course, Google is also working on a number of other projects: driverless cars, wireless Internet access via balloons, and so on. All part of this nutritious breakfast–er, of using “technology to solve problems for everyone in the world”. That includes a new version of last year’s favorite Google I/O gizmo: Cardboard, the low-cost virtual reality device. The new version supports larger phones and is easier to construct. The software is also supposedly significantly improved. Last year, it took a few days for templates to show up online. If the same holds true this year, all of you with those lovely phablets will have a chance to check out VR on a budget.

Bottom line from my perspective: Google’s making some useful moves, playing some catch-up to Apple, and really only making one dumb move. If Brillo and Weave meet a quick death or get stuck in an endless pre-development stage, I’ll consider this the most worthwhile Google I/O yet.

Google I/O 2014

A couple of weeks ago, I hit the high points of Apple’s WWDC keynote. In the interest of fairness and equal time, here’s a look at the early announcements from Google I/O.

If there’s a unifying theme of Google’s announcements this year, it’s “unification.” A platform for wearable devices (currently a codeword for “watches”) that ties the watch to a phone with shared notifications and alerts; a platform for cars that essentially allows your phone to display information and apps on a dashboard screen; a single card-based design* across all platforms; an “Android TV”; the ability to use a watch as a security fob for a phone or tablet; Android apps running in Chrome OS; cross-platform cloud APIs allowing status to be seamlessly moved among Android, iOS, and desktop applications; mirror any (recent) Android device to Chromecast; health APIs to integrate health data across apps; everything is voice activated and context-aware. I’ve probably missed a few, but you get the idea.

* Does anyone else remember Palm’s card-based UI for PalmOS (later WebOS)? Everything old is new again…

We did see previews of the next version of Android, and we’ll see many more over the next few months. Google is releasing a developers’ preview of the so-called “L release” today, ahead of the public release this fall. We still don’t know the most important piece of information about the release: the food name. Speculation is rampant, with “Lollipop” the leading candidate, but Google remains quiet on the subject, fueling speculation about the possibility of another corporate tie-in. “Laffy Taffy,” anyone? (I hope Google does do a few more corporate tie-ins. I’d love to see Android 7 hit the market in 2016 under the name “Nerds”.)

So everything Google touches can talk to everything else Google touches. They look the same, they talk the same language. For good or bad, this sounds like Apple’s tightly integrated, similar-appearance infrastructure. Google’s variation on the theme relies on third parties for most of the hardware, but the core is the same: once you buy one Google device, it’s much easier for your next device to also be Google.

As with Apple’s WWDC announcements, Google has a lot of evolution going on, but nothing truly revolutionary.

The revolution is happening outside of Moscone Center. As it happens, I was in San Francisco yesterday, and happened to go past Moscone shortly before the keynote. Here’s what was happening:
[Photo: the scene outside Moscone Center–protesters, complete with brass band]

That’s right. You know it’s a serious protest when there’s a brass band! (Ars is reporting that a couple of protesters even managed to briefly interrupt the keynote.)

Apparently Google is solely responsible for San Francisco’s apartment evictions and the world-wide inability of non-tech workers to earn a living wage. According to a flier* the protesters were handing out, and to the bits of the loudspeaker-delivered speech I heard, Google has an obligation to increase wages for employees of other companies, support tenant rights, and (my favorite) “End all tax avoidance schemes.”

* The flier is a bit of a WQTS moment, by the way. The illustration is poorly centered, and three of the five sentences include grammatical errors. My favorite: “Do you have an idea for an app that would alleviate the imbalances in Silicon Valley or have other thoughts to share?” Wouldn’t it be nice if somebody could write an app that would have thoughts to share?

Guys, Google may be big, but they aren’t that big, and they really have no moral, ethical, or legal obligation to solve all of the world’s problems.

Even if they did, do you really want to live in a world where Google is responsible for setting fair wages and policing housing markets? I don’t, and I’d be surprised if the protesters would either.