Developments in the self-driving car world can sometimes be a bit dry: a million miles without an accident, a 10 percent increase in pedestrian detection range, and so on. But this research has both an interesting idea behind it and a surprisingly hands-on method of testing: pitting the vehicle against a real racing driver on a course.
To set expectations here: this isn’t some stunt. It’s actually warranted given the nature of the research, and it’s not like they were trading positions, jockeying for entry lines, and generally rubbing bumpers. They went separately, and the researcher, whom I contacted, politely declined to provide the actual lap times. This is science, people. Please!
The question Nathan Spielberg and his colleagues at Stanford were interested in answering has to do with autonomous vehicles operating under extreme conditions. The simple fact is that a huge proportion of the miles driven by these systems are at normal speeds, in good conditions. And most obstacle encounters are similarly ordinary.
If the worst should happen and a car needs to exceed these ordinary bounds of handling — specifically friction limits — can it be trusted to do so? And how would you build an AI agent that can do so?
The researchers’ paper, published today in the journal Science Robotics, begins with the assumption that a physics-based model just isn’t adequate for the job. These are computer models that simulate the car’s motion in terms of weight, speed, road surface, and other conditions. But they are necessarily simplified, and their assumptions tend to produce increasingly inaccurate results as conditions push past ordinary limits.
Imagine a simulator that reduces each wheel to a point or a line, when during a slide it matters a great deal which part of the tire is experiencing the most friction. Simulations detailed enough to capture that are beyond the ability of current hardware to run quickly or accurately enough. But the results of such simulations can be summarized into an input and output, and that data can be fed into a neural network — one that turns out to be remarkably good at taking turns.
The simulation provides the basics of how a car of this make and weight should move when it is going at speed X and needs to turn at angle Y — obviously it’s more complicated than that, but you get the idea. The model then consults its training, but it is also informed by the real-world results, which may differ from theory.
So the car goes into a turn knowing that, theoretically, it should have to move the wheel this much to the left, then this much more at this point, and so on. But the sensors in the car report that despite this, the car is drifting a bit off the intended line — and this input is taken into account, causing the agent to turn the wheel a bit more, or less, or whatever the case may be.
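To make that loop concrete, here is a minimal sketch of the feedforward-plus-feedback idea in Python. It is not the team’s actual controller: the learned_feedforward stand-in, the gains, and every number below are invented for illustration.

```python
# Hypothetical sketch of a feedforward + feedback steering loop.
# The "learned" model and the gains are placeholders, not the
# Stanford team's actual network or tuning.

def learned_feedforward(speed_mps, path_curvature):
    """Stand-in for a neural network trained on simulation (and track)
    data: given speed and the curvature of the planned path, return a
    nominal steering angle in radians. A real model would also use
    speed; this toy version ignores it."""
    wheelbase_m = 2.5  # rough value for a small coupe
    return path_curvature * wheelbase_m  # simple kinematic guess


def steering_command(speed_mps, path_curvature, lateral_error_m, heading_error_rad):
    # Feedforward: what the learned model says the wheel should do in theory.
    ff = learned_feedforward(speed_mps, path_curvature)
    # Feedback: nudge the wheel when sensors report the car drifting
    # off the intended line.
    k_lat, k_head = 0.5, 1.0  # made-up gains
    fb = -k_lat * lateral_error_m - k_head * heading_error_rad
    return ff + fb


# Mid-corner at speed, sitting 0.3 m wide of the racing line.
print(steering_command(speed_mps=40.0, path_curvature=0.02,
                       lateral_error_m=0.3, heading_error_rad=0.05))
```

The split is the point: theory supplies the baseline steering, and the correction term absorbs whatever the theory got wrong.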
And where does the racing driver come into it, you ask? Well, the researchers needed to compare the car’s performance with a human driver who knows from experience how to control a car at its friction limits, and that’s pretty much the definition of a racer. If your tires aren’t hot, you’re probably going too slow.
The team had the racer (a “champion amateur race car driver,” as they put it) drive around the Thunderhill Raceway Park in California, then sent Shelley — their modified, self-driving 2009 Audi TTS — around as well, ten times each. And it wasn’t a relaxing Sunday ramble. As the paper reads:
Both the automated vehicle and human participant attempted to complete the course in the minimum amount of time. This consisted of driving at accelerations nearing 0.95g while tracking a minimum time racing trajectory at the physical limits of tire adhesion. At this combined level of longitudinal and lateral acceleration, the vehicle was able to approach speeds of 95 miles per hour (mph) on portions of the track.
Even under these extreme driving conditions, the controller was able to consistently track the racing line with the mean path tracking error below 40 cm everywhere on the track.
In other words, while pulling a G and hitting 95, the self-driving Audi was never more than a foot and a half off its ideal racing line. The human driver had much wider variation, but this is by no means considered an error — they were changing the line for their own reasons.
“We focused on a segment of the track with a variety of turns that provided the comparison we needed and allowed us to gather more data sets,” wrote Spielberg in an email to TechCrunch. “We have done full lap comparisons and the same trends hold. Shelley has an advantage of consistency while the human drivers have the advantage of changing their line as the car changes, something we are currently implementing.”
Shelley showed far lower variation in its times than the racer, but the racer also posted considerably lower times on several laps. The averages for the segments evaluated were about comparable, with a slight edge going to the human.
This is pretty impressive considering the simplicity of the self-driving model. It had very little real-world knowledge going into its systems, mostly the results of a simulation giving it an approximate idea of how it ought to be handling moment by moment. And its feedback was very limited — it didn’t have access to all the advanced telemetry that self-driving systems often use to flesh out the scene.
The conclusion is that this type of approach, with a relatively simple model controlling the car beyond ordinary handling conditions, is promising. It would need to be tweaked for each surface and setup — obviously a rear-wheel-drive car on a dirt road would be different than front-wheel on tarmac. How best to create and test such models is a matter for future investigation, though the team seemed confident it was a mere engineering challenge.
The experiment was undertaken in order to pursue the still-distant goal of self-driving cars being superior to humans on all driving tasks. The results from these early tests are promising, but there’s still a long way to go before an AV can take on a pro head-to-head. But I look forward to the occasion.
Gmail emoji reactions below an email (left) and the “add emoji” bar on the right.
Google
Finally, the feature everyone has been asking for: Gmail 👏 emoji 👏 reactions 👏.
You can now reply to an email just like it’s an instant messaging chat, tacking on a “crying laughing” emoji instead of typing a reply. Google has a whole support article detailing the new feature, which allows you to “express yourself and quickly respond to emails with emojis.” As in a messaging app, a row of emoji reaction counts now appears below your email, and other people on the thread can tap to add to the count. Currently, it’s only on the Android Gmail app, but it’s presumably coming to other Gmail clients.
Of course, email is from the 1970s and does not natively support emoji reactions. That makes this a Gmail-proprietary feature, which is a problem for a federated system that’s expected to work across a million different clients and providers. If you send an emoji reaction and someone on the email chain isn’t using an official Gmail client, they will get a new, additional email containing your singular reactive emoji. Google is not messing with the email standard, so people not using Gmail will be the most affected.
Another weird quirk is that because emoji reactions are just emails (which Gmail sends to other clients and hides for itself), any reactions you send can’t be taken back. Your only recourse is Gmail’s “Undo send” feature, which delays outgoing messages for about 30 seconds so you can second-guess yourself. After that, you’re creating a permanent emoji reaction paper trail.
Thankfully, there are some limits on this. It won’t work on business or school accounts, so you can’t respond to your boss’s email with a poop emoji. Emoji reactions are only for casual emails that people apparently send to friends. (Do these people not have group chats?) Emoji reactions also aren’t available for group email lists, messages with more than 20 recipients, emails on which you’re BCC’d, encrypted emails, and emails where the sender has a custom reply-to address.
If the idea of emoji reactions to email has you selecting the puke emoji, as far as we can tell, there’s no way to just turn this off.
Claus Scholz is offered tea and moral encouragement by his robots, MM7 and MM8, also known as “Psychotrons,” in 1950 Vienna. This could be us, but many home automation platforms are only playing at being helpful.
Gamma-Keystone via Getty Images
Google today released a new Android OS with some modest improvements, a smartwatch with an old-but-still-newer chip, and a Pixel 8 whose biggest new feature is seven years of updates. But buried inside all the Google news this week is something that could be genuinely, actually helpful to the humans who get into this kind of gear—help for people setting up automations in their homes.
It’s easy to buy smart home gear, and it’s occasionally easy to set it up, but figuring out all the ways that devices can work with one another can be daunting. Even smart home systems with robust scripting abilities mostly leave it to users to come up with the ideas for connecting two or more devices. That’s where, according to Google, AI can help.
Google says it will use AI (the company’s broad definition of AI, at least) at two different levels. At the app level, Google Home can start condensing all the notifications from cameras, sensors, and other devices into a streamlined summary patched together by generative AI, one you can respond to with natural language.
Google’s Rick Osterloh describing an AI-flavored feature to help build home automation routines.
Screenshot from Google Home demonstration, showing Google Home suggesting package delivery automations.
Google
What caught my attention was not the fact that your doorbell camera can recognize a package or that you can ask about it in English text—that’s a pretty standard Nest/Google/AI feat by now. What’s neat is that the Home app will now suggest automations that can follow from recognizing that package. In Google’s example, you could have certain lights in your home blink three times and have speakers play a chime—but only if somebody is home. (Presumably, you could set up an alternate notification solution for when you’re away.)
Earlier this week, Google announced another way that AI could help even seasoned smart home enthusiasts get more control. “Help me script” is a code automation tool that turns natural language—like, “When I arrive home and the garage door closes, turn on the downstairs lights”—into Google Home scripts. You might not have known that Google Home has a script editor or a Web interface, but it does, at least in a “Public Preview.”
“Help me script” is due to arrive “later this year in Public Preview,” while the app-based AI routine starters are an “experimental feature” that will be “rolling out” to (presumably Nest) subscribers next year. Google’s presentation, as is typical of Google generally, has fuzzy timing and availability details, so it’s hard to say whether the app-based automation AI will remain a subscriber-only feature.
It would be great to see Google—or any major hub maker in the smart home space—push automation and routine discovery forward, be it through generative AI or just smart code. Buying a light bulb that can be controlled by Bluetooth, Wi-Fi, Zigbee, or even Thread is something you can do at Home Depot. The same goes for motion sensors, sprinkler controllers, and many other gadgets. Hooking them up to Google Home, Alexa, Apple’s Home, or Home Assistant varies by device and system but should be achievable. Matter, which promised to make that last bit easier, hasn’t done so, but maybe give it more time.
Once you’ve got a bunch of things that you can toggle and control from a phone or a speaker, what then? What should these things do when you’re not looking? What would be the most helpful routine you might not have thought of—perhaps one that owners of similar devices have set up?
I thought of this recently when a few friends visited my house. I had set up a motion sensor in my entryway, already had a smart deadbolt in the door, and replaced the bulbs in two recessed fixtures with smart Wi-Fi bulbs. Using Home Assistant, I set up the area with a few rules (one of which is sketched in code after the list):
When the door unlocks, turn on the lights for three minutes.
When motion is detected, turn on the lights until presence is no longer detected.
After 11 pm, don’t turn on the lights for motion; only the door lock can trigger them (the roaming-cat rule).
If the lights turn on three times within five minutes, keep them on for 10 minutes.
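For anyone wondering what rules like these look like in practice, here is a rough sketch of the motion rule and the late-night exception written for AppDaemon, a Python automation companion to Home Assistant. The entity names are placeholders rather than a copy of my actual configuration.

```python
# Hypothetical AppDaemon app; entity names below are placeholders.
import appdaemon.plugins.hass.hassapi as hass


class EntryLights(hass.Hass):
    def initialize(self):
        # Run callbacks when the entryway motion sensor flips on or off.
        self.listen_state(self.motion_on, "binary_sensor.entry_motion", new="on")
        self.listen_state(self.motion_off, "binary_sensor.entry_motion", new="off")

    def motion_on(self, entity, attribute, old, new, kwargs):
        # Roaming-cat rule: ignore motion late at night, so only the
        # door lock can turn the lights on then.
        if self.now_is_between("23:00:00", "06:00:00"):
            return
        self.turn_on("light.entry")

    def motion_off(self, entity, attribute, old, new, kwargs):
        # Presence no longer detected, so shut the lights off.
        self.turn_off("light.entry")
```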
One friend played right to my nerdy ego and expressed admiration for the work. The friend then asked how they could get a similar setup at their house, and perhaps even for their backyard. I listed the brands of gear I’d bought and the particular timings. “Okay, but how do I set all that up without flying you to my house?” my friend asked. I was, again, flattered, but at the same time, I realized how much easier acquisition is than setup these days.
Most home apps—including those from Google, Amazon, and Apple—are annoying to use for automations. Apple’s Home demands you have a HomePod or Apple TV on your network before you can even start messing with automations. Google and Alexa routines tend to lean on you saying things to their assistants and speakers, and they don’t reach into the deeper aspects of most devices for triggers and actions.
The first Automation prompt for Home Assistant.
What are all these things? How do they work? How much time do you have?
Here’s what a working automation looks like when it’s (mostly) working. There’s a lot to unpack inside each bit.
Home Assistant, of course, gives you a blank slate for automations and routines, but it is likely a bit too blank for anyone not willing to do a lot of reading and experimentation. Even with years of experience using it, I regularly hit a wall with some of my ambitions or discover new ways of achieving things that are at once impressive and mystifying. Setting up a “Turn on my porch light at sunset” trigger led to the discovery that, actually, “sunset” is more of a concept involving sun angle, elevation, topography, and other variables, so you should set up that light based on an offset angle of the sun.
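As a small illustration of that workaround, here is what watching the sun’s elevation (rather than a literal sunset event) can look like, again as AppDaemon-style Python with placeholder entity names; the threshold is arbitrary.

```python
# Hypothetical sketch: trigger a porch light off the sun's elevation
# instead of the nominal "sunset" time. Entity names and the -4 degree
# threshold are placeholders.
import appdaemon.plugins.hass.hassapi as hass


class PorchLight(hass.Hass):
    def initialize(self):
        # Home Assistant exposes the sun's elevation as an attribute of
        # the sun.sun entity; watch that attribute instead of an event.
        self.listen_state(self.elevation_changed, "sun.sun", attribute="elevation")

    def elevation_changed(self, entity, attribute, old, new, kwargs):
        # Turn the light on once the sun dips far enough below the horizon.
        if new is not None and float(new) < -4.0:
            self.turn_on("light.porch")
```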
There’s a community of blueprint submissions, but they’re a loose pile, provided as YAML code for your tinkering. I’ve read a lot of docs, tinkered with entity variables, played with Node-RED, and generally gotten my gear into some useful configurations. But there have to be ways to make connecting your smart home gear far easier.
You can make home automation easier on yourself in the short term by buying into a customized total-home system, the kind installed by contractors and controlled with wall-mounted tablets. Or you can buy only devices from within one company’s ecosystem. Or you can stick entirely to things that happen to work with your preferred home app provider. But betting on one company to always be there for you is not something we generally recommend.
This is why the idea of Google—or any company—offering help with the deeper and more difficult parts of a smart home setup is so intriguing to me. There are a lot of variables involved in Google delivering this kind of technology, making it widely available, and sticking with it. But offering any kind of help with automation ideas, discovery, and deeper connections is better than what most people get today.
When Apple released its statement about iPhone 15 Pro overheating issues earlier this week, the company indicated that an iOS update would be able to partially address the problem. That update arrived today in the form of iOS 17.0.3, which claims to address “an issue that may cause iPhone to run warmer than expected,” as well as patching a pair of security exploits.
Apple also said that specific apps like Instagram and Uber were also causing phones to heat up and that it was working with developers on fixes. The iPhonedo YouTube channel recently demonstrated that version 302.0 of the Instagram app running on iOS 17 could also make iPhone 14 Pro phones and even an iPad Pro run hot, confirming that the issue wasn’t unique to the new phones.
Initial reports claimed that the iPhone 15 Pro’s new Apple A17 Pro chip, its new 3 nm manufacturing process, and/or the phone’s new titanium frame could be causing or exacerbating the heat problems. Apple has denied these claims. Even after the fix, you can still expect a new iPhone to run a bit warm during and immediately after initial setup, as it downloads apps and data and performs other background tasks.
The security updates include one patch for a kernel flaw (CVE-2023-42824) that Apple says is being actively exploited but requires local access to your device. A WebRTC bug (CVE-2023-5217) was also fixed, but to Apple’s knowledge, the bug isn’t being actively exploited.
This is the third minor update Apple has released for iOS 17 in the last three weeks. Version 17.0.1 also patched security flaws, while version 17.0.2 resolved a bug that could cause problems for people transferring data from an older iPhone to a new iPhone 15 or iPhone 15 Pro. The 17.0.2 update was initially only released for the iPhone 15 models, but Apple released it for all iPhone and iPad users a few days later.
It’s common for new iPhones to get specific iOS fixes in rapid succession since the new phones and new OS ship around the same time every year. Older devices also get more thorough vetting during the months-long developer and public beta programs, which Apple has made even easier to use in recent releases.
The first major update to iOS 17, version 17.1, is currently in beta testing. So far, it mostly seems to refine a few of iOS 17’s new features, including the StandBy smart display mode—MacRumors has a good roundup of the changes. If Apple follows its usual schedule, the 17.1 update should roll out for all iPhone and iPad users within the next few weeks.