Microsoft’s yearly Imagine Cup student startup competition crowned its latest winner today: EasyGlucose, a non-invasive, smartphone-based method for diabetics to test their blood glucose. It and the two other similarly beneficial finalists presented today at Microsoft’s Build developer conference.
The Imagine Cup brings together winners of many local student competitions around the world, with a focus on social good and, of course, Microsoft services like Azure. Last year’s winner was a smart prosthetic forearm that uses a camera in the palm to identify the object it is meant to grasp. (They were on hand today as well, with an improved prototype.)
The three finalists hailed from the U.K., India and the U.S.; EasyGlucose was a one-person team from my alma mater UCLA.
EasyGlucose takes advantage of machine learning’s knack for spotting the signal in noisy data, in this case the tiny details of the eye’s iris. It turns out, as creator Bryan Chiang explained in his presentation, that the iris’s “ridges, crypts and furrows” hide tiny hints as to their owner’s blood glucose levels.
EasyGlucose presents at the Imagine Cup finals
These features aren’t the kind of thing you can see with the naked eye (or rather, on the naked eye), but by clipping a macro lens onto a smartphone camera, Chiang was able to get images clear enough for his computer vision algorithms to analyze those features.
The resulting blood glucose measurement is significantly better than any existing non-invasive method and more than good enough to serve in place of the most common one used by diabetics: stabbing themselves with a needle every couple of hours. Currently EasyGlucose gets within 7% of the pinprick method, comfortably inside the threshold for “clinical accuracy,” and Chiang is working on closing that gap. No doubt the community will welcome the innovation warmly, along with its low cost: $10 for the lens adapter and $20 per month for continued support via the app.
It’s not a home run, or not just yet: Naturally, a technology like this can’t go straight from the lab (or in this case, the dorm) to global deployment. It needs FDA approval first, though it likely won’t have as protracted a review period as, say, a new cancer treatment or surgical device. In the meantime, EasyGlucose has a patent pending, so no one can eat its lunch while it navigates the red tape.
As the winner, Chiang gets $100,000, plus $50,000 in Azure credit, plus the coveted one-on-one mentoring session with Microsoft CEO Satya Nadella.
The other two Imagine Cup finalists also used computer vision (among other things) in service of social good.
Caeli is taking on the issue of air pollution by producing custom high-performance air filter masks intended for people with chronic respiratory conditions who have to live in polluted areas. This is a serious problem in many places that cheap or off-the-shelf filters can’t really solve.
It uses your phone’s front-facing camera to scan your face and pick the mask shape that makes the best seal against your face. What’s the point of a high-tech filter if the unwanted particles just creep in the sides?
Part of the mask is a custom-designed compact nebulizer for anyone who needs medication delivered in mist form, for example someone with asthma. The medicine is delivered automatically according to the dosage and schedule set in the app — which also tracks pollution levels in the area so the user can avoid hot zones.
Finderr is an interesting solution to the problem of visually impaired people being unable to find items they’ve left around their home. By using a custom camera and computer vision algorithm, the service watches the home and tracks the placement of everyday items: keys, bags, groceries and so on. Just don’t lose your phone, as you’ll need that to find the other stuff.
You call up the app and tell it (by speaking) what you’re looking for; the phone’s camera then determines your location relative to the item and gives you audio feedback that guides you to it in a sort of “getting warmer” style, plus a big visual indicator for those who can see it.
After their presentations, I asked the creators a few questions about upcoming challenges, since as is usual in the Imagine Cup, these companies are extremely early-stage.
Right now EasyGlucose is working well, but Chiang emphasized that the model still needs a lot more data and testing across multiple demographics. It’s trained on 15,000 eye images, but many more will be necessary to build the kind of dataset he’ll need to present to the FDA.
Finderr recognizes all the object categories in the widely used ImageNet database, but the team’s Ferdinand Loesch pointed out that new ones can be added easily with just 100 training images. As for the upfront cost, the U.K. offers a £500 grant to visually impaired people for this sort of thing, and the team engineered the 360-degree ceiling-mounted camera to minimize the number needed to cover a home.
Caeli noted that the nebulizer, which really is a medical device in its own right, could be sold and promoted on its own, perhaps licensed to medical device manufacturers. There are other smart masks coming out, but he had a pretty low opinion of them (not surprising from a competitor, though there isn’t some big market leader they need to dethrone). He also pointed out that in the target market of India (from which they plan to expand later), it isn’t as difficult to get insurance to cover this kind of device.
While these are early-stage companies, they aren’t hobbies — though, admittedly, many of their founders are working on them between classes. I wouldn’t be surprised to hear more about them and others from Imagine Cup pulling in funding and hiring in the next year.
Gmail emoji reactions below an email (left) and the “add emoji” bar on the right.
Google
Finally, the feature everyone has been asking for: Gmail 👏 emoji 👏 reactions 👏.
You can now reply to an email just like it’s an instant messaging chat, tacking a “crying laughing” emoji onto an email instead of replying. Google has a whole support article detailing the new feature, which allows you to “express yourself and quickly respond to emails with emojis.” As in a messaging app, a row of emoji reaction counts now appears below an email, and other people on the thread can tap to add to the count. Currently, it’s only in the Android Gmail app, but it’s presumably coming to other Gmail clients.
Of course, email dates from the 1970s and does not natively support emoji reactions. That makes this a Gmail-proprietary feature, which is a problem for a federated system that’s expected to work with a million different clients and providers. If you send an emoji reaction and someone on the email chain is not using an official Gmail client, they will get a new, additional email containing your singular reactive emoji. Google is not messing with the email standard, so people not using Gmail will be the most affected.
Another weird quirk is that because emoji reactions are just emails (that Gmail sends to other clients and hides for itself), any emoji reactions you send can’t be taken back. There’s only Gmail’s “Undo send” feature for taking back reactions, which delays sending emails for about 30 seconds, so you can second-guess yourself. After that, you’re creating a permanent emoji reaction paper trail.
Thankfully, there are some limits on this. It won’t work on business or school accounts, so you can’t respond to your boss’s email with a poop emoji. Emoji reactions are only for casual emails that people apparently send to friends. (Do these people not have group chats?) Emoji reactions also aren’t available for group email lists, messages with more than 20 recipients, emails on which you’re BCC’d, encrypted emails, and emails where the sender has a custom reply-to address.
If the idea of emoji reactions to email has you reaching for the puke emoji, bad news: as far as we can tell, there’s no way to just turn this off.
Claus Scholz is offered tea and moral encouragement by his robots, MM7 and MM8, also known as “Psychotrons,” in 1950 Vienna. This could be us, but many home automation platforms are only playing at being helpful.
Gamma-Keystone via Getty Images
Google today released a new Android OS with some modest improvements, a smartwatch with an old-but-still-newer chip, and a Pixel 8 whose biggest new feature is seven years of updates. But buried inside all the Google news this week is something that could be genuinely, actually helpful to the humans who get into this kind of gear—help for people setting up automations in their homes.
It’s easy to buy smart home gear, and it’s occasionally easy to set it up, but figuring out all the ways that devices can work with one another can be daunting. Even smart home systems with robust scripting abilities mostly leave it to users to come up with good ideas for connecting two or more devices. That’s where, according to Google, AI can help.
Google says it will use AI (the company’s broad definition of AI, at least) at two different levels. At the app level, Google Home can start condensing all the notifications from cameras, sensors, and other devices into a streamlined summary patched together by generative AI, which you can respond to in natural language.
Google’s Rick Osterloh describing an AI-flavored feature to help build home automation routines.
Screenshot from Google Home demonstration, showing Google Home suggesting package delivery automations.
Google
What caught my attention was not the fact that your doorbell camera can recognize a package or that you can ask about it in English text—that’s a pretty standard Nest/Google/AI feat by now. What’s neat is that the Home app will now suggest automations that can follow from recognizing that package. In Google’s example, you could have certain lights in your home blink three times and have speakers play a chime—but only if somebody is home. (Presumably, you could set up an alternate notification solution for when you’re away.)
Earlier this week, Google announced another way that AI could help even seasoned smart home enthusiasts get more control. “Help me script” is a code automation tool that turns natural language—like, “When I arrive home and the garage door closes, turn on the downstairs lights”—into Google Home scripts. You might not have known that Google Home has a script editor or a Web interface, but it does, at least in a “Public Preview.”
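Google’s generated scripts use Google Home’s own format, but to give a concrete sense of what a sentence like that has to be translated into, here is roughly the same rule written as a Home Assistant automation (the platform I use later in this piece). The entity IDs (person.me, cover.garage_door, light.downstairs) are hypothetical stand-ins, and this is a sketch rather than Google’s actual output.

# A rough sketch of "when I arrive home and the garage door closes, turn on
# the downstairs lights," interpreted as: trigger on the garage door closing,
# but only act if I'm home. Entity IDs are hypothetical.
automation:
  - alias: "Arrive home, garage closes, downstairs lights on"
    trigger:
      - platform: state
        entity_id: cover.garage_door
        to: "closed"
    condition:
      - condition: state
        entity_id: person.me
        state: "home"
    action:
      - service: light.turn_on
        target:
          entity_id: light.downstairs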
“Help me script” is due to arrive “later this year in Public Preview,” while the app-based AI routine starters are an “experimental feature” that will be “rolling out” to (presumably Nest) subscribers next year. Google’s presentation, as is typical of Google generally, has fuzzy timing and availability details, so it’s hard to say whether the app-based automation AI will remain a subscriber-only feature.
It would be great to see Google—or any major hub maker in the smart home space—push automation and routine discovery forward, be it through generative AI or just smart code. Buying a light bulb that can be controlled by Bluetooth, Wi-Fi, Zigbee, or even Thread is something you can do at Home Depot. The same goes for motion sensors, sprinkler controllers, and many other gadgets. Hooking them up to Google Home, Alexa, Apple’s Home, or Home Assistant varies by device and system but should be achievable. Matter, which promised to make that last bit easier, hasn’t done so, but maybe give it more time.
Once you’ve got a bunch of things that you can toggle and control from a phone or a speaker, what then? What should these things do when you’re not looking? What would be the most helpful routine you might not have thought of—perhaps one that owners of similar devices have set up?
I thought of this recently when a few friends visited my house. I had set up a motion sensor in my entryway, installed a smart deadbolt in the door, and replaced the bulbs in two recessed fixtures with smart Wi-Fi bulbs. Using Home Assistant, I gave the area a few rules (sketched below):
When the door unlocks, turn on the lights for three minutes.
When motion is detected, turn on the lights until presence is no longer detected.
After 11 pm, don’t turn on the lights for motion; only for the door unlocking (the roaming-cat rule).
If the lights turn on three times within five minutes, keep them on for 10 minutes.
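Here, for the curious, is a minimal sketch of the first three of those rules in Home Assistant’s YAML. The entity IDs (lock.front_door, binary_sensor.entryway_motion, light.entryway) are hypothetical stand-ins for my actual gear, the 7 am start of the motion window is an arbitrary choice for the sketch, and the “three times in five minutes” rule is left out to keep things short.

# Entryway rules, sketched; entity IDs are hypothetical.
automation:
  - alias: "Entryway lights on unlock"
    trigger:
      - platform: state
        entity_id: lock.front_door
        to: "unlocked"
    action:
      - service: light.turn_on
        target:
          entity_id: light.entryway
      - delay: "00:03:00"  # keep the lights on for three minutes
      - service: light.turn_off
        target:
          entity_id: light.entryway

  - alias: "Entryway lights on motion, waking hours only"
    trigger:
      - platform: state
        entity_id: binary_sensor.entryway_motion
        to: "on"
    condition:
      # Roaming-cat rule: ignore motion outside of 7 am to 11 pm.
      - condition: time
        after: "07:00:00"
        before: "23:00:00"
    action:
      - service: light.turn_on
        target:
          entity_id: light.entryway
      # Wait until presence is no longer detected, then switch off.
      - wait_for_trigger:
          - platform: state
            entity_id: binary_sensor.entryway_motion
            to: "off"
      - service: light.turn_off
        target:
          entity_id: light.entryway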
One friend played right to my nerdy ego and expressed admiration for the work. The friend then asked how they could get a similar setup at their house, and perhaps even for their backyard. I listed the brands of gear I’d bought and the particular timings. “Okay, but how do I set all that up without flying you to my house?” my friend asked. I was, again, flattered, but at the same time, I realized how much easier acquisition is than setup these days.
Most home apps—including those from Google, Amazon, and Apple—are annoying to use for automations. Apple’s Home demands you have a HomePod or Apple TV on your network before you can even start messing with automations. Google and Alexa routines tend to lean on you saying things to their assistants and speakers, and they don’t reach into the deeper aspects of most devices for triggers and actions.
The first Automation prompt for Home Assistant.
What are all these things? How do they work? How much time do you have?
Here’s what an automation looks like when it’s (mostly) working. There’s a lot to unpack inside each bit.
Home Assistant, of course, gives you a blank slate for automations and routines, but it is likely a bit too blank for anyone not willing to do a lot of reading and experimentation. Even with years of experience using it, I regularly hit a wall with some of my ambitions or discover new ways of achieving things that are at once impressive and mystifying. Setting up a “Turn on my porch light at sunset” trigger led to the discovery that, actually, “sunset” is more of a concept involving sun angle, elevation, topography, and other variables, so you should set up that light based on an offset angle of the sun.
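In practice, that means triggering on the sun’s elevation rather than on a plain “sunset” event. A minimal sketch of that kind of trigger, assuming a hypothetical light.porch entity and an arbitrary cutoff angle:

# Turn on the porch light once the sun drops below -1.5 degrees of elevation.
# Both the entity ID and the angle are illustrative.
automation:
  - alias: "Porch light at dusk"
    trigger:
      - platform: numeric_state
        entity_id: sun.sun
        attribute: elevation
        below: -1.5
    action:
      - service: light.turn_on
        target:
          entity_id: light.porch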
There’s a community of blueprint submissions, but it’s a loose pile, provided as YAML code for your tinkering. I’ve read a lot of docs, tinkered with entity variables, played with Node-RED, and generally gotten my gear into some useful configurations. But there have to be many ways to make connecting your smart home gear far easier.
You can make home automation easier on yourself in the short term by buying into a customized total-home system, the kind installed by contractors and controlled with wall-mounted tablets. Or you can buy only devices from within one company’s ecosystem. Or you can stick entirely to things that happen to work with your preferred home app provider. But betting on one company to always be there for you is not something we generally recommend.
This is why the idea of Google—or any company—offering help with the deeper and more difficult parts of a smart home setup is so intriguing to me. There are a lot of variables involved in Google delivering this kind of technology, making it widely available, and sticking with it. But offering any kind of help with automation ideas, discovery, and deeper connections is better than what most people get today.
When Apple released its statement about iPhone 15 Pro overheating issues earlier this week, the company indicated that an iOS update would partially address the problem. That update arrived today in the form of iOS 17.0.3, which claims to address “an issue that may cause iPhone to run warmer than expected,” as well as patching a pair of security exploits.
Apple also said that specific apps like Instagram and Uber were also causing phones to heat up and that it was working with developers on fixes. The iPhonedo YouTube channel recently demonstrated that version 302.0 of the Instagram app running on iOS 17 could also make iPhone 14 Pro phones and even an iPad Pro run hot, confirming that the issue wasn’t unique to the new phones.
Initial reports claimed that the iPhone 15 Pro’s new Apple A17 Pro chip, its new 3 nm manufacturing process, and/or the phone’s new titanium frame could be causing or exacerbating the heat problems. Apple has denied these claims. Even after the fix, you can still expect a new iPhone to run a bit warm during and immediately after initial setup, as it downloads apps and data and performs other background tasks.
The security updates include one patch for a kernel flaw (CVE-2023-42824) that Apple says is being actively exploited but requires local access to your device. A WebRTC bug (CVE-2023-5217) was also fixed, but to Apple’s knowledge, the bug isn’t being actively exploited.
This is the third minor update Apple has released for iOS 17 in the last three weeks. Version 17.0.1 also patched security flaws, while version 17.0.2 resolved a bug that could cause problems for people transferring data from an older iPhone to a new iPhone 15 or iPhone 15 Pro. The 17.0.2 update was initially only released for the iPhone 15 models, but Apple released it for all iPhone and iPad users a few days later.
It’s common for new iPhones to get specific iOS fixes in rapid succession since the new phones and new OS ship around the same time every year. Older devices also get more thorough vetting during the months-long developer and public beta programs, which Apple has made even easier to use in recent releases.
The first major update to iOS 17, version 17.1, is currently in beta testing. So far, it mostly seems to refine a few of iOS 17’s new features, including the StandBy smart display mode—MacRumors has a good roundup of the changes. If Apple follows its usual schedule, the 17.1 update should roll out for all iPhone and iPad users within the next few weeks.