The freezing waters underneath Antarctic ice shelves and the underside of the ice itself are of great interest to scientists… but who wants to go down there? Leave it to the robots. They won’t complain! And indeed, a pair of autonomous subs have been nosing around the ice for a full year now, producing data unlike any other expedition ever has.
The mission began way back in 2017, with a grant from the late Paul Allen. With climate change affecting sea ice around the world, precise measurement and study of these frozen climes are more important than ever. And fortunately, robotic exploration technology had reached a point where long-term missions under and around ice shelves were possible.
The project would use a proven autonomous seagoing vehicle called the Seaglider, which has been around for some time but was redesigned to perform long-term operations in these dark, sealed-over environments. One of the craft’s co-creators, UW’s Chris Lee, said of the mission at the time: “This is a high-risk, proof-of-concept test of using robotic technology in a very risky marine environment.”
The risks seem to have paid off, as an update on the project shows. The modified craft have traveled hundreds of miles during a year straight of autonomous operation.
For a lot of reasons, it’s not easy to stick around on the Antarctic coast for any length of time. But leaving robots behind to work while you go relax elsewhere for a month or two is definitely doable.
“This is the first time we’ve been able to maintain a persistent presence over the span of an entire year,” Lee said in a UW news release today. “Gliders were able to navigate at will to survey the cavity interior… This is the first time any of the modern, long-endurance platforms have made sustained measurements under an ice shelf.”
You can see the paths of the robotic platforms below as they scout around near the edge of the ice and then dive under in trips of increasing length and complexity:
They navigate in the dark by monitoring their position relative to a pair of underwater acoustic beacons fixed in place by cables. The blue dots are floats that drift with the natural currents to travel long distances on little or no power. Both are equipped with sensors to monitor the shape of the ice above, the temperature of the water, and other interesting data points.
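To give a rough sense of how a pair of fixed beacons can pin down a position, here’s a minimal sketch of two-beacon range positioning. This is not the mission’s actual code, and the beacon coordinates and ranges below are made up: each beacon’s acoustic travel time yields a range, the two range circles intersect at two candidate points, and other cues (dead reckoning, depth) break the tie.

```python
import math

def locate(b1, b2, r1, r2):
    """Estimate a 2D position from two beacon positions and measured ranges.

    Two range circles generally intersect at two points; a real system
    would use heading or dead reckoning to pick between them.
    """
    (x1, y1), (x2, y2) = b1, b2
    d = math.hypot(x2 - x1, y2 - y1)
    if d > r1 + r2 or d < abs(r1 - r2):
        raise ValueError("range circles do not intersect")
    # Distance from beacon 1 along the baseline to the chord's midpoint
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    # Offsetting perpendicular to the baseline gives the two candidates
    p1 = (mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d)
    p2 = (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d)
    return p1, p2

# Hypothetical numbers: beacons 10 km apart; ranges come from acoustic
# travel times (sound moves at roughly 1,500 m/s in seawater).
print(locate((0, 0), (10_000, 0), r1=6_000, r2=7_000))
```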
It isn’t the first robotic expedition under the ice shelves by a long shot, but it’s definitely the longest-term and potentially the most fruitful. The Seagliders are smaller, lighter, and better equipped for long-term missions than their predecessors. One went 87 miles in a single trip!
The mission isn’t over, either: two of the three initial Seagliders are still operational and ready to continue their work.
Gmail emoji reactions below an email (left) and the “add emoji” bar on the right.
Finally, the feature everyone has been asking for: Gmail 👏 emoji 👏 reactions 👏.
You can now reply to an email just as if it were an instant messaging chat, tacking on a “crying laughing” emoji instead of writing a response. Google has a whole support article detailing the new feature, which lets you “express yourself and quickly respond to emails with emojis.” As in a messaging app, a row of emoji reaction counts appears below your email, and other people on the thread can tap to add to the count. Currently, the feature is only in the Android Gmail app, but it’s presumably coming to other Gmail clients.
Of course, email dates to the 1970s and does not natively support emoji reactions. That makes this a Gmail-proprietary feature, which is a problem for a federated system that’s expected to work across a million different clients and providers. If you send an emoji reaction and someone on the email chain is not using an official Gmail client, they will get a new, additional email containing your lone reaction emoji. Google is not messing with the email standard itself, so people not using Gmail will be the most affected.
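Google hasn’t published the exact on-the-wire format, but since the fallback is just an ordinary reply threaded into the conversation, a rough sketch using Python’s standard email library (addresses and message IDs made up) might look something like this:

```python
from email.message import EmailMessage

# Hypothetical reconstruction of a fallback "reaction" email as a
# non-Gmail client might receive it; all values here are invented.
original_id = "<abc123@mail.gmail.com>"

msg = EmailMessage()
msg["From"] = "alice@gmail.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Re: Weekend plans"
# Standard threading headers keep the reaction attached to the conversation
msg["In-Reply-To"] = original_id
msg["References"] = original_id
# The entire body is just the reaction emoji (base64-encoded UTF-8)
msg.set_content("\U0001F602", cte="base64")

print(msg)
```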
Another weird quirk is that because emoji reactions are just emails (which Gmail sends to other clients and hides from its own interface), any reaction you send can’t be taken back. Your only recourse is Gmail’s “Undo send” feature, which delays outgoing mail for up to 30 seconds so you can second-guess yourself. After that, you’re creating a permanent emoji reaction paper trail.
Thankfully, there are some limits on this. It won’t work on business or school accounts, so you can’t respond to your boss’s email with a poop emoji. Emoji reactions are only for casual emails that people apparently send to friends. (Do these people not have group chats?) Emoji reactions also aren’t available for group email lists, messages with more than 20 recipients, emails on which you’re BCC’d, encrypted emails, and emails where the sender has a custom reply-to address.
If the idea of emoji reactions to email has you reaching for the puke emoji, bad news: as far as we can tell, there’s no way to just turn this off.
Claus Scholz is offered tea and moral encouragement by his robots, MM7 and MM8, also known as “Psychotrons,” in 1950 Vienna. This could be us, but many home automation platforms are only playing at being helpful.
Google today released a new Android OS with some modest improvements, a smartwatch with an old-but-still-newer chip, and a Pixel 8 whose biggest new feature is seven years of updates. But buried inside all the Google news this week is something that could be genuinely, actually helpful to the humans who get into this kind of gear—help for people setting up automations in their homes.
It’s easy to buy smart home gear, and it’s occasionally easy to set it up, but figuring out all the ways that devices can work with one another can be daunting. Even smart home systems with robust scripting abilities mostly leave it to users to come up with great ideas for connecting two or more devices. That’s where, according to Google, AI can help.
Google says it will use AI (the company’s broad definition of AI, at least) at two different levels. At the app level, Google Home can start condensing all the notifications from cameras, sensors, and other devices into a streamlined summary patched together by generative AI, which you can respond to in natural language.
Google’s Rick Osterloh describing an AI-flavored feature to help build home automation routines.
Screenshot from Google Home demonstration, showing Google Home suggesting package delivery automations.
What caught my attention was not the fact that your doorbell camera can recognize a package or that you can ask about it in English text—that’s a pretty standard Nest/Google/AI feat by now. What’s neat is that the Home app will now suggest automations that can follow from recognizing that package. In Google’s example, you could have certain lights in your home blink three times and have speakers play a chime—but only if somebody is home. (Presumably, you could set up an alternate notification solution for when you’re away.)
Earlier this week, Google announced another way that AI could help even seasoned smart home enthusiasts get more control. “Help me script” is a code automation tool that turns natural language—like, “When I arrive home and the garage door closes, turn on the downstairs lights”—into Google Home scripts. You might not have known that Google Home has a script editor or a Web interface, but it does, at least in a “Public Preview.”
“Help me script” is due to arrive “later this year in Public Preview,” while the app-based AI routine starters are an “experimental feature” that will be “rolling out” to (presumably Nest) subscribers next year. Google’s presentation, as is typical of Google generally, has fuzzy timing and availability details, so it’s hard to say whether the app-based automation AI will remain a subscriber-only feature.
It would be great to see Google—or any major hub maker in the smart home space—push automation and routine discovery forward, be it through generative AI or just smart code. Buying a light bulb that can be controlled by Bluetooth, Wi-Fi, Zigbee, or even Thread is something you can do at Home Depot. The same goes for motion sensors, sprinkler controllers, and many other gadgets. Hooking them up to Google Home, Alexa, Apple’s Home, or Home Assistant varies by device and system but should be achievable. Matter, which promised to make that last bit easier, hasn’t done so, but maybe give it more time.
Once you’ve got a bunch of things that you can toggle and control from a phone or a speaker, what then? What should these things do when you’re not looking? What would be the most helpful routine you might not have thought of—perhaps one that owners of similar devices have set up?
I thought of this recently when a few friends visited my house. I had set up a motion sensor in my entryway, put a smart deadbolt in the door, and replaced the bulbs in two recessed fixtures with smart Wi-Fi bulbs. Using Home Assistant, I set up the area with a few rules (sketched in code after the list):
When the door unlocks, turn on the lights for three minutes.
When motion is detected, turn on the lights until presence is no longer detected.
Don’t turn on the lights for motion after 11 pm; only for door unlocks (the roaming-cat rule).
If the lights turn on three times within five minutes, keep them on for 10 minutes.
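For the curious, here’s roughly what the first two rules could look like in AppDaemon, a Python automation framework that runs alongside Home Assistant. This is a sketch, not my actual configuration, and the entity IDs are hypothetical:

```python
import appdaemon.plugins.hass.hassapi as hass

class EntrywayLights(hass.Hass):
    """First two entryway rules: lights on unlock (3 min) and on motion."""

    def initialize(self):
        self.unlock_timer = None
        # Rule 1: door unlocks -> lights on for three minutes
        self.listen_state(self.on_unlock, "lock.entry_deadbolt", new="unlocked")
        # Rule 2: motion -> lights on until presence clears
        self.listen_state(self.on_motion, "binary_sensor.entry_motion", new="on")
        self.listen_state(self.on_clear, "binary_sensor.entry_motion", new="off")

    def on_unlock(self, entity, attribute, old, new, kwargs):
        self.turn_on("light.entryway")
        # Restart the three-minute countdown on every unlock
        if self.unlock_timer:
            self.cancel_timer(self.unlock_timer)
        self.unlock_timer = self.run_in(self.lights_off, 3 * 60)

    def on_motion(self, entity, attribute, old, new, kwargs):
        # Rule 3 (the roaming-cat rule) would add a time check here,
        # e.g. self.now_is_between("06:00:00", "23:00:00")
        self.turn_on("light.entryway")

    def on_clear(self, entity, attribute, old, new, kwargs):
        self.turn_off("light.entryway")

    def lights_off(self, kwargs):
        self.turn_off("light.entryway")
```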
One friend played right to my nerdy ego and expressed admiration for the work. The friend then asked how they could get a similar setup at their house, and perhaps even for their backyard. I listed the brands of gear I’d bought and the particular timings. “Okay, but how do I set all that up without flying you to my house?” my friend asked. I was, again, flattered, but at the same time, I realized how much easier acquisition is than setup these days.
Most home apps—including those from Google, Amazon, and Apple—are annoying to use for automations. Apple’s Home demands you have a HomePod or Apple TV on your network before you can even start messing with automations. Google and Alexa routines tend to lean on you saying things to their assistants and speakers, and they don’t reach into the deeper aspects of most devices for triggers and actions.
The first Automation prompt for Home Assistant.
What are all these things? How do they work? How much time do you have?
Here’s what a working automation looks like when it’s (mostly) working. There’s a lot to unpack inside each bit.
Home Assistant, of course, gives you a blank slate for automations and routines, but it is likely a bit too blank for anyone not willing to do a lot of reading and experimentation. Even with years of experience using it, I regularly hit a wall with some of my ambitions or discover new ways of achieving things that are at once impressive and mystifying. Setting up a “Turn on my porch light at sunset” trigger led to the discovery that, actually, “sunset” is more of a concept involving sun angle, elevation, topography, and other variables, so you should set up that light based on an offset angle of the sun.
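To illustrate why “sunset” is fuzzier than it sounds, here’s a small sketch using the astral Python library; the coordinates and the -4° threshold are placeholder values, not a recommendation:

```python
from datetime import datetime, timezone

from astral import LocationInfo
from astral.sun import elevation, sunset

# Hypothetical coordinates; swap in your own
home = LocationInfo("Home", "USA", "UTC", 41.88, -87.63)

# Nominal sunset is a single clock time (UTC here)...
print("Nominal sunset:", sunset(home.observer))

# ...but "dark outside" tracks the sun's elevation angle, which is what
# horizon, terrain, and season actually act on. A common trick is to
# trigger lights once elevation drops below a small negative angle.
angle = elevation(home.observer, datetime.now(timezone.utc))
print(f"Sun elevation now: {angle:.1f} degrees")
if angle < -4.0:  # threshold is a matter of taste and topography
    print("Dark enough: turn on the porch light")
```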
There’s a community of blueprint submissions, but it’s a loose pile of YAML code offered up for your tinkering. I’ve read a lot of docs, tinkered with entity variables, played with Node-RED, and generally gotten my gear into some useful configurations. But there have to be many ways to make connecting your smart home gear far easier.
You can make home automation easier on yourself in the short term by buying into a customized total-home system, the kind installed by contractors and controlled with wall-mounted tablets. Or you can buy only devices from within one company’s ecosystem. Or you can stick entirely to things that happen to work with your preferred home app provider. But betting on one company to always be there for you is not something we generally recommend.
This is why the idea of Google—or any company—offering help with the deeper and more difficult parts of a smart home setup is so intriguing to me. There are a lot of variables involved in Google delivering this kind of technology, making it widely available, and sticking with it. But offering any kind of help with automation ideas, discovery, and deeper connections is better than what most people get today.
When Apple released its statement about iPhone 15 Pro overheating issues earlier this week, the company indicated that an iOS update would partially address the problem. That update arrived today as iOS 17.0.3, which claims to fix “an issue that may cause iPhone to run warmer than expected” and also patches a pair of security exploits.
Apple also said that specific apps like Instagram and Uber were causing phones to heat up and that it was working with developers on fixes. The iPhonedo YouTube channel recently demonstrated that version 302.0 of the Instagram app running on iOS 17 could make iPhone 14 Pro phones and even an iPad Pro run hot, confirming that the issue wasn’t unique to the new phones.
Initial reports claimed that the iPhone 15 Pro’s new Apple A17 Pro chip, its new 3 nm manufacturing process, and/or the phone’s new titanium frame could be causing or exacerbating the heat problems. Apple has denied these claims. Even after the fix, you can still expect a new iPhone to run a bit warm during and immediately after initial setup, as it downloads apps and data and performs other background tasks.
The security updates include one patch for a kernel flaw (CVE-2023-42824) that Apple says is being actively exploited but requires local access to your device. A WebRTC bug (CVE-2023-5217) was also fixed, but to Apple’s knowledge, the bug isn’t being actively exploited.
This is the third minor update Apple has released for iOS 17 in the last three weeks. Version 17.0.1 also patched security flaws, while version 17.0.2 resolved a bug that could cause problems for people transferring data from an older iPhone to a new iPhone 15 or iPhone 15 Pro. The 17.0.2 update was initially only released for the iPhone 15 models, but Apple released it for all iPhone and iPad users a few days later.
It’s common for new iPhones to get specific iOS fixes in rapid succession since the new phones and new OS ship around the same time every year. Older devices also get more thorough vetting during the months-long developer and public beta programs, which Apple has made even easier to use in recent releases.
The first major update to iOS 17, version 17.1, is currently in beta testing. So far, it mostly seems to refine a few of iOS 17’s new features, including the StandBy smart display mode—MacRumors has a good roundup of the changes. If Apple follows its usual schedule, the 17.1 update should roll out for all iPhone and iPad users within the next few weeks.