

Robots learn to grab and scramble with new levels of agility – TechCrunch


Robots are amazing things, but outside of their specific domains they are incredibly limited. So flexibility — not physical, but mental — is a constant area of research. A trio of new robotic setups demonstrate ways they can evolve to accommodate novel situations: using both “hands,” getting up after a fall, and understanding visual instructions they’ve never seen before.

The robots, all developed independently, are gathered together today in a special issue of the journal Science Robotics dedicated to learning. Each shows an interesting new way in which robots can improve their interactions with the real world.

On the other hand…

First there is the question of using the right tool for a job. As humans with multi-purpose grippers on the ends of our arms, we’re pretty experienced with this. We understand from a lifetime of touching stuff that we need this grip to pick this up and tools for that, that this will be light and that heavy, and so on.

Robots, of course, have no inherent knowledge of this, which can make things difficult; a robot may not understand that it can’t pick up something of a given size, shape, or texture. A new system from Berkeley roboticists acts as a rudimentary decision-making process, classifying objects as able to be grabbed either by an ordinary pincer grip or by a suction cup grip.

A robot, wielding both simultaneously, decides on the fly (using depth-based imagery) what items to grab and with which tool; the result is extremely high reliability even on piles of objects it’s never seen before.

It’s done with a neural network that consumed millions of data points on items, arrangements, and attempts to grab them. If you attempted to pick up a teddy bear with a suction cup and it didn’t work the first ten thousand times, would you keep on trying? This system learned to make that kind of determination, and as you can imagine such a thing is potentially very important for tasks like warehouse picking for which robots are being groomed.
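To make that determination concrete, here is a minimal sketch, in Python, of the selection step the article describes: score candidate grasps with both a learned suction model and a learned pincer (parallel-jaw) model, then execute whichever scores highest. The class names, placeholder random scores, and confidence threshold are invented for illustration; this is not the Berkeley team’s Dex-Net 4.0 code.

```python
import numpy as np

# Hypothetical stand-ins for the two learned grasp-quality models.
# The real networks consume depth images; these only illustrate the
# decision structure described in the article.
class SuctionQualityNet:
    def score(self, depth_image, point):
        # Placeholder score; flat surfaces away from edges tend to do well here.
        return float(np.random.uniform(0, 1))

class ParallelJawQualityNet:
    def score(self, depth_image, grasp_axis):
        # Placeholder score; antipodal point pairs tend to do well here.
        return float(np.random.uniform(0, 1))

def choose_grasp(depth_image, suction_candidates, jaw_candidates,
                 suction_net, jaw_net, min_confidence=0.5):
    """Pick the tool and grasp candidate with the highest predicted success."""
    best_tool, best_candidate, best_score = None, None, -1.0
    for c in suction_candidates:
        s = suction_net.score(depth_image, c)
        if s > best_score:
            best_tool, best_candidate, best_score = "suction", c, s
    for c in jaw_candidates:
        s = jaw_net.score(depth_image, c)
        if s > best_score:
            best_tool, best_candidate, best_score = "pincer", c, s
    if best_score < min_confidence:
        return None  # no grasp the models expect to succeed; skip or re-image
    return best_tool, best_candidate, best_score
```

In the article’s terms, the real models were trained on millions of data points about items, arrangements, and grab attempts; the threshold above just stands in for “don’t keep trying grasps the models expect to fail.”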

Interestingly, because of the “black box” nature of complex neural networks, it’s difficult to tell exactly what Dex-Net 4.0 is basing its choices on, though there are some obvious preferences, Berkeley’s Ken Goldberg explained in an email.

“We can try to infer some intuition but the two networks are inscrutable in that we can’t extract understandable ‘policies,’ ” he wrote. “We empirically find that smooth planar surfaces away from edges generally score well on the suction model and pairs of antipodal points generally score well for the gripper.”

Now that reliability and versatility are high, the next step is speed; Goldberg said that the team is “working on an exciting new approach” to reduce computation time for the network, to be documented, no doubt, in a future paper.

ANYmal’s new tricks

Quadrupedal robots are already flexible in that they can handle all kinds of terrain confidently, even recovering from slips (and of course cruel kicks). But when they fall, they fall hard. And generally speaking they don’t get up.

The way these robots have their legs configured makes it difficult to do things in anything other than an upright position. But ANYmal, a robot developed by ETH Zurich (and which you may recall from its little trip to the sewer recently), has a more versatile setup that gives its legs extra degrees of freedom.

What could you do with that extra movement? All kinds of things. But it’s incredibly difficult to figure out the exact best way for the robot to move in order to maximize speed or stability. So why not use a simulation to test thousands of ANYmals trying different things at once, and use the results from that in the real world?

This simulation-based learning doesn’t always work, because it isn’t possible right now to accurately simulate all the physics involved. But it can produce extremely novel behaviors or streamline ones humans thought were already optimal.

At any rate, that’s what the researchers did here, and not only did they arrive at a faster trot for the bot, but they also taught it an amazing new trick: getting up from a fall. Any fall.

It’s extraordinary that the robot has come up with essentially a single technique to get on its feet from nearly any likely fall position, as long as it has room and the use of all its legs. Remember, people didn’t design this — the simulation and evolutionary algorithms came up with it by trying thousands of different behaviors over and over and keeping the ones that worked.
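As a rough illustration of that keep-what-works loop, here is a toy Python sketch: evaluate a population of candidate recovery behaviors in a stand-in “simulator,” keep the ones that most often end with the robot upright, and mutate them for the next round. The scoring function below is a fake; the actual work rolls policies out in a full physics simulation before transferring them to the real ANYmal.

```python
import random

def simulate_recovery(policy, trials=10):
    """Toy stand-in for rolling a recovery behavior out in a physics simulator.

    Returns the fraction of trials that end with the robot back on its feet.
    """
    rng = random.Random(hash(tuple(policy)))
    p_success = max(0.0, min(1.0, 0.5 + 0.1 * sum(policy) / len(policy)))
    return sum(rng.random() < p_success for _ in range(trials)) / trials

def evolve_recovery_policy(pop_size=64, generations=50, dim=8, seed=0):
    """Evolutionary search: keep the behaviors that worked, mutate, repeat."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=simulate_recovery, reverse=True)
        elites = ranked[: pop_size // 4]  # keep the ones that worked
        population = elites + [
            [g + rng.gauss(0, 0.1) for g in rng.choice(elites)]  # mutated copies
            for _ in range(pop_size - len(elites))
        ]
    return max(population, key=simulate_recovery)

if __name__ == "__main__":
    best = evolve_recovery_policy()
    print("best simulated recovery rate:", simulate_recovery(best))
```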

Ikea assembly is the killer app

Let’s say you were given three bowls, with red and green balls in the center one. Then you’re handed a sheet of paper showing a simple diagram: a red circle with an arrow pointing left, and a green circle with an arrow pointing right.

As a human with a brain, you take this paper for instructions, and you understand that the green and red circles represent balls of those colors, and that red ones need to go to the left, while green ones go to the right.

This is one of those things where humans apply vast amounts of knowledge and intuitive understanding without even realizing it. How did you decide that the circles represent the balls? Because of the shape? Then why don’t the arrows refer to “real” arrows? How do you know how far to go to the right or left? How do you know the paper even refers to these items at all? All questions you would resolve in a fraction of a second, and any of which might stump a robot.

Researchers have taken some baby steps toward connecting abstract representations like the above with the real world, a task that involves what amounts to a sort of machine creativity or imagination.

Making the connection between a green dot on a white background in a diagram and a greenish roundish thing on a black background in the real world isn’t obvious, but the “visual cognitive computer” created by Miguel Lázaro-Gredilla and his colleagues at Vicarious AI seems to be doing pretty well at it.
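As a toy illustration of that grounding step, the sketch below pairs each colored symbol in a diagram with the closest-colored object detected in a scene and reads the arrow off as a move command. The data structures and color matching are invented for this example and are far cruder than Vicarious’s visual cognitive computer, but they show the shape of the problem: nothing in the pixels says that a flat red circle and a shiny red ball are “the same thing.”

```python
# Toy grounding step: map diagram symbols to real-world objects by color,
# then turn each symbol's arrow into a move command.
DIAGRAM = [
    {"color": (200, 30, 30), "arrow": "left"},   # red circle, arrow pointing left
    {"color": (30, 180, 40), "arrow": "right"},  # green circle, arrow pointing right
]

SCENE = [  # detected objects: (name, average RGB color)
    ("ball_1", (190, 45, 50)),
    ("ball_2", (40, 170, 60)),
]

def color_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ground_instructions(diagram, scene):
    """Pair each diagram symbol with the closest-colored real object."""
    plan = []
    for symbol in diagram:
        name, _ = min(scene, key=lambda obj: color_distance(obj[1], symbol["color"]))
        plan.append((name, "move_" + symbol["arrow"]))
    return plan

print(ground_instructions(DIAGRAM, SCENE))
# [('ball_1', 'move_left'), ('ball_2', 'move_right')]
```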

It’s still very primitive, of course, but in theory it’s the same toolset that one uses to, for example, assemble a piece of Ikea furniture: look at an abstract representation, connect it to real-world objects, then manipulate those objects according to the instructions. We’re years away from that, but it wasn’t long ago that we were years away from a robot getting up from a fall or deciding a suction cup or pincer would work better to pick something up.

The papers and videos demonstrating all the concepts above should be available at the Science Robotics site.




Gmail unleashes “email emoji reactions” onto an unsuspecting world


Finally, the feature everyone has been asking for: Gmail 👏 emoji 👏 reactions 👏.

You can now reply to an email just like it’s an instant messaging chat, tacking a “crying laughing” emoji onto an email instead of replying. Google has a whole support article detailing the new feature, which allows you to “express yourself and quickly respond to emails with emojis.” As in a messaging app, a row of emoji reaction counts will now appear below your email, and other people on the thread can tap to add to the reaction count. Currently, it’s only in the Android Gmail app, but it’s presumably coming to other Gmail clients.

Of course, email is from the 1970s and does not natively support emoji reactions. That makes this a Gmail-proprietary feature, which is a problem for federated email, which is expected to work with a million different clients and providers. If you send an emoji reaction and someone on the email chain is not using an official Gmail client, they will get a new, additional email containing your singular reactive emoji. Google is not messing with the email standard, so people not using Gmail will be the most affected.

Another weird quirk is that because emoji reactions are just emails (which Gmail sends to other clients and hides for itself), any emoji reaction you send can’t easily be taken back. Your only out is Gmail’s “Undo send” feature, which delays sending emails for about 30 seconds so you can second-guess yourself. After that, you’re creating a permanent emoji reaction paper trail.
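To make that compatibility fallback concrete, here is a hedged sketch assuming a reaction is just thread metadata plus a per-recipient capability check. Google hasn’t published how Gmail implements this, so the message shapes and the supports_reactions flag below are purely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Recipient:
    address: str
    supports_reactions: bool  # hypothetical capability flag, not a real Gmail field

def deliver_reaction(reactor, emoji, thread_id, recipients):
    """Illustrative only: how a proprietary reaction might degrade to plain email."""
    deliveries = []
    for r in recipients:
        if r.supports_reactions:
            # A native client could fold this into a reaction count under the message.
            deliveries.append({"to": r.address, "type": "reaction",
                               "thread": thread_id, "emoji": emoji, "from": reactor})
        else:
            # Everyone else gets a normal, additional email whose body is the emoji.
            deliveries.append({"to": r.address, "type": "email",
                               "subject": "Re: " + thread_id,
                               "body": reactor + " reacted: " + emoji})
    return deliveries
```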

Thankfully, there are some limits on this. It won’t work on business or school accounts, so you can’t respond to your boss’s email with a poop emoji. Emoji reactions are only for casual emails that people apparently send to friends. (Do these people not have group chats?) Emoji reactions also aren’t available for group email lists, messages with more than 20 recipients, emails on which you’re BCC’d, encrypted emails, and emails where the sender has a custom reply-to address.

If the idea of emoji reactions to email has you selecting the puke emoji, bad news: as far as we can tell, there’s no way to just turn this off.




Google might have a great idea for smart home automation—if it sticks to it


Claus Scholz is offered tea and moral encouragement by his robots, MM7 and MM8, also known as “Psychotrons,” in 1950 Vienna. This could be us, but many home automation platforms are only playing at being helpful. (Image: Gamma-Keystone via Getty Images)

Google today released a new Android OS with some modest improvements, a smartwatch with an old-but-still-newer chip, and a Pixel 8 whose biggest new feature is seven years of updates. But buried inside all the Google news this week is something that could be genuinely, actually helpful to the humans who get into this kind of gear—help for people setting up automations in their homes.

It’s easy to buy smart home gear, and it’s occasionally easy to set it up, but figuring out all the ways that devices can work with one another can be daunting. Even smart home systems with robust scripting abilities mostly leave it to users to come up with good ideas for connecting two or more devices. That’s where, according to Google, AI can help.

Google says it will use AI (the company’s broad definition of AI, at least) at two different levels. At the app level, Google Home can start condensing all the notifications from cameras, sensors, and other devices into a streamlined summary patched together by generative AI, which you can respond to in natural language.

Google’s Rick Osterloh describing an AI-flavored feature to help build home automation routines.

Screenshot from a Google Home demonstration, showing Google Home suggesting package delivery automations. (Image: Google)

What caught my attention was not the fact that your doorbell camera can recognize a package or that you can ask about it in English text—that’s a pretty standard Nest/Google/AI feat by now. What’s neat is that the Home app will now suggest automations that can follow from recognizing that package. In Google’s example, you could have certain lights in your home blink three times and have speakers play a chime—but only if somebody is home. (Presumably, you could set up an alternate notification solution for when you’re away.)

Earlier this week, Google announced another way that AI could help even seasoned smart home enthusiasts get more control. “Help me script” is a code automation tool that turns natural language—like, “When I arrive home and the garage door closes, turn on the downstairs lights”—into Google Home scripts. You might not have known that Google Home has a script editor or a Web interface, but it does, at least in a “Public Preview.”

“Help me script” is due to arrive “later this year in Public Preview,” while the app-based AI routine starters are an “experimental feature” that will be “rolling out” to (presumably Nest) subscribers next year. Google’s presentation, as is typical of Google generally, has fuzzy timing and availability details, so it’s hard to say whether the app-based automation AI will remain a subscriber-only feature.

It would be great to see Google—or any major hub maker in the smart home space—push automation and routine discovery forward, be it through generative AI or just smart code. Buying a light bulb that can be controlled by Bluetooth, Wi-Fi, Zigbee, or even Thread is something you can do at Home Depot. The same goes for motion sensors, sprinkler controllers, and many other gadgets. Hooking them up to Google Home, Alexa, Apple’s Home, or Home Assistant varies by device and system but should be achievable. Matter, which promised to make that last bit easier, hasn’t done so, but maybe give it more time.

Once you’ve got a bunch of things that you can toggle and control from a phone or a speaker, what then? What should these things do when you’re not looking? What would be the most helpful routine you might not have thought of—perhaps one that owners of similar devices have set up?

I thought of this recently when a few friends visited my house. I had set up a motion sensor in my entryway, installed a smart deadbolt in the door, and replaced the bulbs in two recessed fixtures with smart Wi-Fi bulbs. Using Home Assistant, I set up the area with a few rules, restated in code below the list:

  • When the door unlocks, turn on the lights for three minutes.
  • When motion is detected, turn on the lights until presence is no longer detected.
  • After 11 pm, don’t turn on the lights for motion; only for the door unlocking (the roaming-cat rule).
  • If the lights turn on three times within five minutes, keep them on for 10 minutes.
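For concreteness, here is that same logic restated as plain Python rather than actual Home Assistant configuration. The class, the event handlers, and the assumption that “after 11 pm” runs through the early morning are my own, not Home Assistant’s API.

```python
from datetime import datetime, timedelta

class EntrywayLights:
    """Plain-Python restatement of the four entryway rules; not Home Assistant syntax."""

    def __init__(self):
        self.recent_triggers = []  # timestamps of recent "lights on" events
        self.off_at = None         # when the lights should turn off

    def _extend(self, now, minutes):
        # Rule 4: three triggers within five minutes -> hold the lights for 10 minutes.
        self.recent_triggers = [t for t in self.recent_triggers
                                if now - t < timedelta(minutes=5)]
        self.recent_triggers.append(now)
        hold = 10 if len(self.recent_triggers) >= 3 else minutes
        self.off_at = max(self.off_at or now, now + timedelta(minutes=hold))

    def on_door_unlock(self, now: datetime):
        # Rule 1: door unlock -> lights on for three minutes (works at any hour).
        self._extend(now, 3)

    def on_motion(self, now: datetime, presence_detected: bool):
        # Rule 3: ignore motion late at night (the roaming-cat rule);
        # the 5 am cutoff is an assumption the original rule leaves unstated.
        if now.hour >= 23 or now.hour < 5:
            return
        # Rule 2: motion keeps the lights on while presence persists.
        if presence_detected:
            self._extend(now, 1)

    def lights_should_be_on(self, now: datetime) -> bool:
        return self.off_at is not None and now < self.off_at
```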

One friend played right to my nerdy ego and expressed admiration for the work. The friend then asked how they could get a similar setup at their house, and perhaps even for their backyard. I listed the brands of gear I’d bought and the particular timings. “Okay, but how do I set all that up without flying you to my house?” my friend asked. I was, again, flattered, but at the same time, I realized how much easier acquisition is than setup these days.

Most home apps—including those from Google, Amazon, and Apple—are annoying to use for automations. Apple’s Home demands you have a HomePod or Apple TV on your network before you can even start messing with automations. Google and Alexa routines tend to lean on you saying things to their assistants and speakers, and they don’t reach into the deeper aspects of most devices for triggers and actions.

Home Assistant, of course, gives you a blank slate for automations and routines, but it is likely a bit too blank for anyone not willing to do a lot of reading and experimentation. Even with years of experience using it, I regularly hit a wall with some of my ambitions or discover new ways of achieving things that are at once impressive and mystifying. Setting up a “Turn on my porch light at sunset” trigger led to the discovery that, actually, “sunset” is more of a concept involving sun angle, elevation, topography, and other variables, so you should set up that light based on an offset angle of the sun.
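Here is a small sketch of that sun-angle approach, assuming the third-party astral package and placeholder coordinates: rather than switching at the nominal sunset time, the light comes on once the sun drops below a chosen elevation offset.

```python
from datetime import datetime, timezone

from astral import LocationInfo
from astral.sun import elevation

# Placeholder location; substitute your own coordinates and timezone.
HOME = LocationInfo("Home", "US", "America/New_York", 40.44, -79.99)

def porch_light_should_be_on(now: datetime, offset_deg: float = -4.0) -> bool:
    """True once the sun sits below offset_deg degrees of elevation, which tracks
    actually getting dark better than the nominal sunset time."""
    return elevation(HOME.observer, now) <= offset_deg

print(porch_light_should_be_on(datetime.now(timezone.utc)))
```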

There’s a community of blueprint submissions, but these are a loose pile, provided as YAML code for your tinkering. I’ve read a lot of docs, tinkered with entity variables, played with Node-RED, and generally gotten my gear into some useful configurations. But there have to be many ways to make connecting your smart home gear far easier.

You can make home automation easier on yourself in the short term by buying into a customized total-home system, the kind installed by contractors and controlled with wall-mounted tablets. Or you can buy only devices from within one company’s ecosystem. Or you can stick entirely to things that happen to work with your preferred home app provider. But betting on one company to always be there for you is not something we generally recommend.

This is why the idea of Google—or any company—offering help with the deeper and more difficult parts of a smart home setup is so intriguing to me. There are a lot of variables involved in Google delivering this kind of technology, making it widely available, and sticking to it. But offering any kind of help with automation ideas, discovery, and deeper connections is better than what most people get today.




Apple fixes overheating problems and 0-day security flaw with iOS 17.0.3 update


iPhones running iOS 17. (Image: Apple)

When Apple released its statement about iPhone 15 Pro overheating issues earlier this week, the company indicated that an iOS update would be able to partially address the problem. That update arrived today in the form of iOS 17.0.3, which claims to fix “an issue that may cause iPhone to run warmer than expected” and also patches a pair of security exploits.

Apple also said that specific apps like Instagram and Uber were causing phones to heat up and that it was working with developers on fixes. The iPhonedo YouTube channel recently demonstrated that version 302.0 of the Instagram app running on iOS 17 could also make iPhone 14 Pro phones and even an iPad Pro run hot, confirming that the issue wasn’t unique to the new phones.

Initial reports claimed that the iPhone 15 Pro’s new Apple A17 Pro chip, its new 3 nm manufacturing process, and/or the phone’s new titanium frame could be causing or exacerbating the heat problems. Apple has denied these claims. Even after the fix, you can still expect a new iPhone to run a bit warm during and immediately after initial setup, as it downloads apps and data and performs other background tasks.

The security updates include one patch for a kernel flaw (CVE-2023-42824) that Apple says is being actively exploited but requires local access to your device. A WebRTC bug (CVE-2023-5217) was also fixed, but to Apple’s knowledge, the bug isn’t being actively exploited.

This is the third minor update Apple has released for iOS 17 in the last three weeks. Version 17.0.1 also patched security flaws, while version 17.0.2 resolved a bug that could cause problems for people transferring data from an older iPhone to a new iPhone 15 or iPhone 15 Pro. The 17.0.2 update was initially only released for the iPhone 15 models, but Apple released it for all iPhone and iPad users a few days later.

It’s common for new iPhones to get specific iOS fixes in rapid succession since the new phones and new OS ship around the same time every year. Older devices also get more thorough vetting during the months-long developer and public beta programs, which Apple has made even easier to use in recent releases.

The first major update to iOS 17, version 17.1, is currently in beta testing. So far, it mostly seems to refine a few of iOS 17’s new features, including the StandBy smart display mode—MacRumors has a good roundup of the changes. If Apple follows its usual schedule, the 17.1 update should roll out for all iPhone and iPad users within the next few weeks.

