
Facebook confirms it’s building augmented reality glasses


“Yeah! Well of course we’re working on it,” Facebook’s head of augmented reality Ficus Kirkpatrick told me when I asked him at TechCrunch’s AR/VR event in LA if Facebook was building AR glasses. “We are building hardware products. We’re going forward on this . . . We want to see those glasses come into reality, and I think we want to play our part in helping to bring them there.”

This is the clearest confirmation we’ve received yet from Facebook about its plans for AR glasses. The product could be Facebook’s opportunity to own a mainstream computing device on which its software could run after a decade of being beholden to smartphones built, controlled and taxed by Apple and Google.

This month, Facebook launched its first self-branded gadget out of its Building 8 lab, the Portal smart display, and now it’s revving up hardware efforts. For AR, Kirkpatrick told me, “We have no product to announce right now. But we have a lot of very talented people doing really, really compelling cutting-edge research that we hope plays a part in the future of headsets.”

There’s a war brewing here. AR startups like Magic Leap and Thalmic Labs are starting to release their first headsets and glasses. Microsoft is considered a leader thanks to its early HoloLens product, while Google Glass is still being developed for the enterprise. And Apple has acquired AR hardware developers like Akonia Holographics and Vrvana to accelerate development of its own headsets.

Mark Zuckerberg said at F8 2017 that AR glasses were 5 to 7 years away

Technological progress and competition seem to have sped up Facebook’s timetable. Back in April 2017, CEO Mark Zuckerberg said, “We all know where we want this to get eventually, we want glasses,” but explained that “we do not have the science or technology today to build the AR glasses that we want. We may in five years, or seven years.” He added, “We can’t build the AR product that we want today, so building VR is the path to getting to those AR glasses.” The company’s Oculus division had talked extensively about the potential of AR glasses, yet similarly characterized them as far off.

But a few months later, Business Insider spotted a Facebook patent application for AR glasses that detailed using a “waveguide display with two-dimensional scanner” to project media onto the lenses. Cheddar’s Alex Heath reports that Facebook is working on Project Sequoia, which uses projectors to display AR experiences on top of physical objects, like a chess board on a table, or to project a person’s likeness onto an object for teleconferencing. These indicate Facebook was moving beyond mere AR research.

Facebook AR glasses patent application

Last month, The Information spotted four Facebook job listings seeking engineers with experience building custom AR computer chips to join the Facebook Reality Lab (formerly known as Oculus Research). And a week later, Oculus’ Chief Scientist Michael Abrash briefly mentioned, amidst a half-hour technical keynote at the company’s VR conference, that “no off-the-shelf display technology is good enough for AR, so we had no choice but to develop a new display system. And that system also has the potential to bring VR to a different level.”

But Kirkpatrick clarified that he sees Facebook’s AR efforts not just as a mixed reality feature of VR headsets. “I don’t think we converge to one single device . . . I don’t think we’re going to end up in a Ready Player One future where everyone is just hanging out in VR all the time,” he tells me. “I think we’re still going to have the lives that we have today where you stay at home and you have maybe an escapist, immersive experience or you use VR to transport yourself somewhere else. But I think those things like the people you connect with, the things you’re doing, the state of your apps and everything needs to be carried and portable on-the-go with you as well, and I think that’s going to look more like how we think about AR.”

Oculus Chief Scientist Michael Abrash makes predictions about the future of AR and VR at the Oculus Connect 5 conference

Oculus virtual reality headsets and Facebook augmented reality glasses could share an underlying software layer, though, which might speed up engineering efforts while making the interface more familiar for users. “I think that all this stuff will converge in some way maybe at the software level,” Kirkpatrick said.

The problem for Facebook AR is that it may run into the same privacy concerns that people had about putting a Portal camera inside their homes. While VR headsets generate a fictional world, AR must collect data about your real-world surroundings. That could raise fears about Facebook surveilling not just our homes but everything we do, and using that data to power ad targeting and content recommendations. This brand tax haunts Facebook’s every move.

Startups with a cleaner slate like Magic Leap and giants with a better track record on privacy like Apple could have an easier time getting users to put a camera on their heads. Facebook would likely need a best-in-class gadget that does much that others can’t in order to convince people it deserves to augment their reality.

You can watch our full interview with Facebook’s director of camera and head of augmented reality engineering, Ficus Kirkpatrick, from our TechCrunch Sessions: AR/VR event in LA.


Gmail unleashes “email emoji reactions” onto an unsuspecting world


Finally, the feature everyone has been asking for: Gmail 👏 emoji 👏 reactions 👏.

You can now reply to an email just like it’s an instant messaging chat, tacking a “crying laughing” emoji onto an email instead of replying. Google has a whole support article detailing the new feature, which allows you to “express yourself and quickly respond to emails with emojis.” As in a messaging app, a row of emoji reaction counts will now appear below your email, and other people on the thread can tap to add to the reaction count. Currently, it’s only in the Android Gmail app, but it’s presumably coming to other Gmail clients.

Of course, email is from the 1970s and does not natively support emoji reactions. That makes this a Gmail-proprietary feature, which is a problem for a federated system that’s expected to work across a million different clients and providers. If you send an emoji reaction and someone on the email chain is not using an official Gmail client, they will get a new, additional email containing your singular reactive emoji. Google is not messing with the email standard, so people not using Gmail will be the most affected.

Another weird quirk: because emoji reactions are just emails (which Gmail sends to other clients as regular messages and renders as reactions in its own interface), any reaction you send can’t be taken back. Your only recourse is Gmail’s “Undo send” feature, which delays outgoing messages for about 30 seconds so you can second-guess yourself. After that, you’re creating a permanent emoji reaction paper trail.

Thankfully, there are some limits on this. It won’t work on business or school accounts, so you can’t respond to your boss’s email with a poop emoji. Emoji reactions are only for casual emails that people apparently send to friends. (Do these people not have group chats?) Emoji reactions also aren’t available for group email lists, messages with more than 20 recipients, emails on which you’re BCC’d, encrypted emails, and emails where the sender has a custom reply-to address.

If the idea of emoji reactions to email has you reaching for the puke emoji, bad news: as far as we can tell, there’s no way to just turn this off.


Google might have a great idea for smart home automation—if it sticks to it


Claus Scholz is offered tea and moral encouragement by his robots, MM7 and MM8, also known as “Psychotrons,” in 1950 Vienna. This could be us, but many home automation platforms are only playing at being helpful. (Image: Gamma-Keystone via Getty Images)

Google today released a new Android OS with some modest improvements, a smartwatch with an old-but-still-newer chip, and a Pixel 8 whose biggest new feature is seven years of updates. But buried inside all the Google news this week is something that could be genuinely, actually helpful to the humans who get into this kind of gear—help for people setting up automations in their homes.

It’s easy to buy smart home gear, and it’s occasionally even easy to set up, but figuring out all the ways that devices can work with one another can be daunting. Even smart home systems with robust scripting abilities mostly leave it to users to come up with great ideas for connecting two or more devices. That’s where, according to Google, AI can help.

Google says it will use AI (by the company’s broad definition of AI, at least) at two different levels. At the app level, Google Home can start condensing all the notifications from cameras, sensors, and other devices into a streamlined summary, patched together by generative AI, which you can respond to in natural language.

Google’s Rick Osterloh describing an AI-flavored feature to help build home automation routines.

Screenshot from a Google Home demonstration, showing Google Home suggesting package delivery automations. (Image: Google)

What caught my attention was not the fact that your doorbell camera can recognize a package or that you can ask about it in English text—that’s a pretty standard Nest/Google/AI feat by now. What’s neat is that the Home app will now suggest automations that can follow from recognizing that package. In Google’s example, you could have certain lights in your home blink three times and have speakers play a chime—but only if somebody is home. (Presumably, you could set up an alternate notification solution for when you’re away.)

Earlier this week, Google announced another way that AI could help even seasoned smart home enthusiasts get more control. “Help me script” is a code automation tool that turns natural language—like, “When I arrive home and the garage door closes, turn on the downstairs lights”—into Google Home scripts. You might not have known that Google Home has a script editor or a Web interface, but it does, at least in a “Public Preview.”
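For a sense of what the script editor consumes, here’s a rough sketch of how that garage-door example might look as a Google Home script. The overall YAML shape (starters, condition, actions) follows the Public Preview documentation, but the device and room names are placeholders, and the exact field names here are my best recollection rather than verified “Help me script” output:

  # Hypothetical Google Home script for the garage-door example.
  # Device names ("Garage Door - Garage", "Downstairs Lights - Living Room")
  # are placeholders; field names approximate the Public Preview format.
  metadata:
    name: Arrive home, lights on
    description: Turn on the downstairs lights when I get home and the garage closes
  automations:
    starters:
      - type: device.state.OpenClose
        device: Garage Door - Garage
        state: openPercent
        is: 0
    condition:
      type: home.state.HomePresence
      state: homePresenceMode
      is: HOME
    actions:
      - type: device.command.OnOff
        on: true
        devices:
          - Downstairs Lights - Living Room

The pitch of “Help me script” is that you never have to write this by hand; the natural-language sentence gets translated into something like the above, which you can then tweak in the editor.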

“Help me script” is due to arrive “later this year in Public Preview,” while the app-based AI routine starters are an “experimental feature” that will be “rolling out” to (presumably Nest) subscribers next year. Google’s presentation, as is typical for the company, left timing and availability details fuzzy, so it’s hard to say whether the app-based automation AI will remain a subscriber-only feature.

It would be great to see Google—or any major hub maker in the smart home space—push automation and routine discovery forward, be it through generative AI or just smart code. Buying a light bulb that can be controlled by Bluetooth, Wi-Fi, Zigbee, or even Thread is something you can do at Home Depot. The same goes for motion sensors, sprinkler controllers, and many other gadgets. Hooking them up to Google Home, Alexa, Apple’s Home, or Home Assistant varies by device and system but should be achievable. Matter, which promised to make that last bit easier, hasn’t done so, but maybe give it more time.

Once you’ve got a bunch of things that you can toggle and control from a phone or a speaker, what then? What should these things do when you’re not looking? What would be the most helpful routine you might not have thought of—perhaps one that owners of similar devices have set up?

I thought of this recently when a few friends visited my house. I had set up a motion sensor in my entryway, installed a smart deadbolt in the door, and replaced the bulbs in two recessed fixtures with smart Wi-Fi bulbs. Using Home Assistant, I set up the area with a few rules (two of which are sketched in YAML after this list):

  • When the door unlocks, turn on the lights for three minutes.
  • When motion is detected, turn on the lights until presence is no longer detected.
  • Don’t turn on the lights for motion after 11 pm; only the door unlocking counts (the roaming-cat rule).
  • If the lights turn on three times within five minutes, keep them on for 10 minutes.
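For the curious, here’s roughly what the first and third rules look like as Home Assistant automations. This is a minimal sketch rather than my exact configuration, and the entity IDs (lock.front_door, binary_sensor.entryway_motion, light.entryway) are placeholders, not my actual device names:

  # Rule 1: a door unlock turns the lights on for three minutes.
  automation:
    - alias: "Entryway lights on unlock"
      trigger:
        - platform: state
          entity_id: lock.front_door   # placeholder entity ID
          to: "unlocked"
      action:
        - service: light.turn_on
          target:
            entity_id: light.entryway
        - delay: "00:03:00"
        - service: light.turn_off
          target:
            entity_id: light.entryway

    # Rule 3: motion only counts during waking hours (the roaming-cat rule).
    - alias: "Entryway lights on motion, daytime only"
      trigger:
        - platform: state
          entity_id: binary_sensor.entryway_motion   # placeholder entity ID
          to: "on"
      condition:
        - condition: time
          after: "06:00:00"
          before: "23:00:00"
      action:
        - service: light.turn_on
          target:
            entity_id: light.entryway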

One friend played right to my nerdy ego and expressed admiration for the work. The friend then asked how they could get a similar setup at their house, and perhaps even for their backyard. I listed the brands of gear I’d bought and the particular timings. “Okay, but how do I set all that up without flying you to my house?” my friend asked. I was, again, flattered, but at the same time, I realized how much easier acquisition is than setup these days.

Most home apps—including those from Google, Amazon, and Apple—are annoying to use for automations. Apple’s Home demands you have a HomePod or Apple TV on your network before you can even start messing with automations. Google and Alexa routines tend to lean on you saying things to their assistants and speakers, and they don’t reach into the deeper aspects of most devices for triggers and actions.

Home Assistant, of course, gives you a blank slate for automations and routines, but it is likely a bit too blank for anyone not willing to do a lot of reading and experimentation. Even with years of experience using it, I regularly hit a wall with some of my ambitions or discover new ways of achieving things that are at once impressive and mystifying. Setting up a “turn on my porch light at sunset” trigger led to the discovery that “sunset” is actually more of a concept, involving sun angle, elevation, topography, and other variables, so you’re better off triggering that light on an offset angle of the sun.
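In Home Assistant terms, that means replacing the nominal sunset trigger with one that watches the sun’s elevation. A minimal sketch, with an arbitrary threshold angle and a placeholder entity ID:

  # Fire when the sun drops below a chosen angle instead of at nominal
  # "sunset." The -4 degree threshold is arbitrary; tune it to your horizon.
  automation:
    - alias: "Porch light at real dusk"
      trigger:
        - platform: numeric_state
          entity_id: sun.sun
          attribute: elevation
          below: -4
      action:
        - service: light.turn_on
          target:
            entity_id: light.porch   # placeholder entity ID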

There’s a community of blueprint submissions, but it’s a loose pile, provided as YAML code for your own tinkering. I’ve read a lot of docs, tinkered with entity variables, played with Node-RED, and generally gotten my gear into some useful configurations. But there have to be many ways to make connecting your smart home gear far easier.

You can make home automation easier on yourself in the short term by buying into a customized total-home system, the kind installed by contractors and controlled with wall-mounted tablets. Or you can buy only devices from within one company’s ecosystem. Or you can stick entirely to things that happen to work with your preferred home app provider. But betting on one company to always be there for you is not something we generally recommend.

This is why the idea of Google—or any company—offering help with the deeper and more difficult parts of a smart home setup is so intriguing to me. There are a lot of variables involved in Google delivering this kind of technology, making it widely available, and sticking with it. But offering any kind of help with automation ideas, discovery, and deeper connections is better than what most people get today.


Apple fixes overheating problems and 0-day security flaw with iOS 17.0.3 update


iPhones running iOS 17. (Image: Apple)

When Apple released its statement about iPhone 15 Pro overheating issues earlier this week, the company indicated that an iOS update would be able to partially address the problem. That update arrived today in the form of iOS 17.0.3, which claims to address “an issue that may cause iPhone to run warmer than expected” and also patches a pair of security exploits.

Apple also said that specific apps like Instagram and Uber were causing phones to heat up and that it was working with developers on fixes. The iPhonedo YouTube channel recently demonstrated that version 302.0 of the Instagram app running on iOS 17 could also make iPhone 14 Pro phones and even an iPad Pro run hot, confirming that the issue wasn’t unique to the new phones.

Initial reports claimed that the iPhone 15 Pro’s new Apple A17 Pro chip, its new 3 nm manufacturing process, and/or the phone’s new titanium frame could be causing or exacerbating the heat problems. Apple has denied these claims. Even after the fix, you can still expect a new iPhone to run a bit warm during and immediately after initial setup, as it downloads apps and data and performs other background tasks.

The security updates include one patch for a kernel flaw (CVE-2023-42824) that Apple says is being actively exploited but requires local access to your device. A WebRTC bug (CVE-2023-5217) was also fixed, but to Apple’s knowledge, the bug isn’t being actively exploited.

This is the third minor update Apple has released for iOS 17 in the last three weeks. Version 17.0.1 also patched security flaws, while version 17.0.2 resolved a bug that could cause problems for people transferring data from an older iPhone to a new iPhone 15 or iPhone 15 Pro. The 17.0.2 update was initially only released for the iPhone 15 models, but Apple released it for all iPhone and iPad users a few days later.

It’s common for new iPhones to get specific iOS fixes in rapid succession since the new phones and new OS ship around the same time every year. Older devices also get more thorough vetting during the months-long developer and public beta programs, which Apple has made even easier to use in recent releases.

The first major update to iOS 17, version 17.1, is currently in beta testing. So far, it mostly seems to refine a few of iOS 17’s new features, including the StandBy smart display mode—MacRumors has a good roundup of the changes. If Apple follows its usual schedule, the 17.1 update should roll out for all iPhone and iPad users within the next few weeks.
