Robots learn to grab and scramble with new levels of agility – TechCrunch

Robots are amazing things, but outside of their specific domains they are incredibly limited. So flexibility — not physical, but mental — is a constant area of research. A trio of new robotic setups demonstrates ways robots can adapt to novel situations: using both “hands,” getting up after a fall, and understanding visual instructions they’ve never seen before.

The robots, all developed independently, are gathered together today in a special issue of the journal Science Robotics dedicated to learning. Each shows an interesting new way in which robots can improve their interactions with the real world.

On the other hand…

First there is the question of using the right tool for a job. As humans with multi-purpose grippers on the ends of our arms, we’re pretty experienced with this. We understand from a lifetime of touching stuff that we need to use this grip to pick this up, we need to use tools for that, this will be light, that heavy, and so on.

Robots, of course, have no inherent knowledge of this, which can make things difficult; a robot may not understand that it can’t pick up something of a given size, shape, or texture. A new system from Berkeley roboticists acts as a rudimentary decision-making process, classifying objects as able to be grabbed either with an ordinary pincer grip or with a suction-cup grip.

A robot, wielding both simultaneously, decides on the fly (using depth-based imagery) what items to grab and with which tool; the result is extremely high reliability even on piles of objects it’s never seen before.

It’s done with a neural network that consumed millions of data points on items, arrangements, and attempts to grab them. If you attempted to pick up a teddy bear with a suction cup and it didn’t work the first ten thousand times, would you keep on trying? This system learned to make that kind of determination, and as you can imagine, such a determination is potentially very important for tasks like warehouse picking, for which robots are being groomed.
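
To make that decision process concrete, here is a minimal sketch of the choice being made: score every candidate grasp with a suction model and a gripper model, then execute whichever scores highest. The two quality functions are toy heuristics standing in for the learned networks, and the candidate format and threshold are invented for illustration.

```python
import numpy as np

# Toy stand-ins for the two learned grasp-quality models. The real Dex-Net 4.0
# networks are trained on millions of synthetic grasp attempts; these heuristics
# only mimic the preferences Goldberg describes below.
def suction_quality(depth_image, point):
    """Flat neighborhoods (low local depth variance) score higher for suction."""
    y, x = point
    patch = depth_image[max(y - 5, 0):y + 5, max(x - 5, 0):x + 5]
    return 1.0 / (1.0 + 100.0 * float(np.var(patch)))

def gripper_quality(depth_image, pair):
    """Point pairs at similar depth (roughly antipodal) score higher for the pincer."""
    (y1, x1), (y2, x2) = pair
    return 1.0 / (1.0 + abs(float(depth_image[y1, x1]) - float(depth_image[y2, x2])))

def pick_best_grasp(depth_image, suction_points, gripper_pairs, min_quality=0.5):
    """Score every candidate with both models and pick the tool/grasp that scores highest."""
    scored = [("suction", p, suction_quality(depth_image, p)) for p in suction_points]
    scored += [("gripper", p, gripper_quality(depth_image, p)) for p in gripper_pairs]
    tool, grasp, quality = max(scored, key=lambda s: s[2])
    if quality < min_quality:
        return None  # nothing looks graspable: re-image or stir the pile
    return tool, grasp

# Example with a synthetic depth image and a few candidate grasps.
depth = np.random.rand(100, 100).astype(np.float32)
print(pick_best_grasp(depth, suction_points=[(50, 50), (20, 80)],
                      gripper_pairs=[((30, 30), (30, 40))]))
```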

Interestingly, because of the “black box” nature of complex neural networks, it’s difficult to tell exactly what the system, dubbed Dex-Net 4.0, is basing its choices on, though there are some obvious preferences, Berkeley’s Ken Goldberg explained in an email.

“We can try to infer some intuition but the two networks are inscrutable in that we can’t extract understandable ‘policies,’ ” he wrote. “We empirically find that smooth planar surfaces away from edges generally score well on the suction model and pairs of antipodal points generally score well for the gripper.”

Now that reliability and versatility are high, the next step is speed; Goldberg said that the team is “working on an exciting new approach” to reduce computation time for the network, to be documented, no doubt, in a future paper.

ANYmal’s new tricks

Quadrupedal robots are already flexible in that they can handle all kinds of terrain confidently, even recovering from slips (and of course cruel kicks). But when they fall, they fall hard. And generally speaking they don’t get up.

The way these robots have their legs configured makes it difficult to do things in anything other than an upright position. But ANYmal, a robot developed by ETH Zurich (and which you may recall from its little trip to the sewer recently), has a more versatile setup that gives its legs extra degrees of freedom.

What could you do with that extra movement? All kinds of things. But it’s incredibly difficult to figure out the exact best way for the robot to move in order to maximize speed or stability. So why not use a simulation to test thousands of ANYmals trying different things at once, and use the results from that in the real world?

This simulation-based learning doesn’t always work, because it isn’t possible right now to accurately simulate all the physics involved. But it can produce extremely novel behaviors or streamline ones humans thought were already optimal.

At any rate, that’s what the researchers did here, and not only did they arrive at a faster trot for the bot, but they also taught it an amazing new trick: getting up from a fall. Any fall.

It’s extraordinary that the robot has come up with essentially a single technique to get on its feet from nearly any likely fall position, as long as it has room and the use of all its legs. Remember, people didn’t design this — the simulation and evolutionary algorithms came up with it by trying thousands of different behaviors over and over and keeping the ones that worked.
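
To give a flavor of “trying thousands of behaviors and keeping the ones that worked,” here is a minimal evolution-strategy loop in Python. The simulated rollout is a stand-in fitness function and the controller is just a parameter vector; the actual ANYmal work uses a full physics simulator and learned neural-network control policies, so treat this strictly as a sketch of the search idea.

```python
import numpy as np

def simulate_recovery(params, rng):
    """Stand-in for a physics-simulator rollout: return a score for how well a
    controller with these parameters rights the robot. Here it's a toy objective
    (a noisy quadratic bowl), not a real dynamics model."""
    target = np.linspace(-1.0, 1.0, params.size)
    return -float(np.sum((params - target) ** 2)) + 0.01 * rng.standard_normal()

def evolve_controller(dim=16, population=64, generations=200, sigma=0.1, seed=0):
    """(1+lambda) evolution strategy: mutate the best controller found so far,
    keep whichever variant scores best in simulation, and repeat."""
    rng = np.random.default_rng(seed)
    best = 0.5 * rng.standard_normal(dim)
    best_score = simulate_recovery(best, rng)
    for _ in range(generations):
        candidates = best + sigma * rng.standard_normal((population, dim))
        scores = np.array([simulate_recovery(c, rng) for c in candidates])
        if scores.max() > best_score:              # keep the behaviors that worked
            best, best_score = candidates[scores.argmax()], float(scores.max())
    return best, best_score

params, score = evolve_controller()
print(f"best simulated recovery score: {score:.3f}")
```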

Ikea assembly is the killer app

Let’s say you were given three bowls, with red and green balls in the center one. Then you’re handed a sheet of paper showing a red circle with an arrow pointing left and a green circle with an arrow pointing right.

As a human with a brain, you take this paper as instructions, and you understand that the green and red circles represent balls of those colors, and that red ones need to go to the left, while green ones go to the right.

This is one of those things where humans apply vast amounts of knowledge and intuitive understanding without even realizing it. How did you decide that the circles represent the balls? Because of the shape? Then why don’t the arrows refer to “real” arrows? How do you know how far to go to the right or left? How do you know the paper even refers to these items at all? All questions you would resolve in a fraction of a second, and any of which might stump a robot.

Researchers have taken some baby steps towards being able to connect abstract representations like the above with the real world, a task that involves a significant amount of what amounts to a sort of machine creativity or imagination.

Making the connection between a green dot on a white background in a diagram and a greenish roundish thing on a black background in the real world isn’t obvious, but the “visual cognitive computer” created by Miguel Lázaro-Gredilla and his colleagues at Vicarious AI seems to be doing pretty well at it.
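
As a very rough illustration of that grounding step (and nothing like Vicarious’ actual architecture), the sketch below matches each detected object to the nearest-colored diagram symbol and turns the symbol’s arrow into a pick-and-place command. All of the names, colors, and positions are invented for the example.

```python
import math

# Symbols read off the instruction sheet: a color swatch plus an arrow direction.
diagram = [
    {"color": (255, 0, 0), "arrow": "left"},   # red circle -> move left
    {"color": (0, 255, 0), "arrow": "right"},  # green circle -> move right
]

# Objects detected in the real scene: rough RGB color and current location.
scene = [
    {"name": "ball_1", "color": (200, 40, 30), "bowl": "center"},
    {"name": "ball_2", "color": (30, 180, 60), "bowl": "center"},
]

def ground_instructions(diagram, scene):
    """Pair each scene object with the closest-colored diagram symbol and
    translate the symbol's arrow into a concrete pick-and-place command."""
    plan = []
    for obj in scene:
        symbol = min(diagram, key=lambda s: math.dist(s["color"], obj["color"]))
        target = {"left": "left bowl", "right": "right bowl"}[symbol["arrow"]]
        plan.append((obj["name"], obj["bowl"], target))
    return plan

for name, src, dst in ground_instructions(diagram, scene):
    print(f"move {name} from the {src} bowl to the {dst}")
```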

It’s still very primitive, of course, but in theory it’s the same toolset that one uses to, for example, assemble a piece of Ikea furniture: look at an abstract representation, connect it to real-world objects, then manipulate those objects according to the instructions. We’re years away from that, but it wasn’t long ago that we were years away from a robot getting up from a fall or deciding a suction cup or pincer would work better to pick something up.

The papers and videos demonstrating all the concepts above should be available at the Science Robotics site.

Google will soon default to blurring explicit image search results

Google’s new “Blur” setting for SafeSearch will soon be the default, blurring explicit images unless you’re logged in and over 18. (Image credit: Aurich Lawson)

Google has debuted a new default SafeSearch setting, somewhere between “on” and “off,” that automatically blurs explicit images in search results for most people.

In a blog post timed to Safer Internet Day, Google outlined a number of measures it plans to implement to “protect democracies worldwide,” secure high-risk individuals, improve password management, and protect credit card numbers. Tucked into a series of small-to-medium announcements is a notable change to search results, Google’s second core product after advertising.

A new setting, rolling out “in the coming months,” “will blur explicit imagery if it appears in Search results when SafeSearch filtering isn’t turned on,” writes Google’s Jen Fitzpatrick, senior vice president of Core Systems & Experiences. “This setting will be the new default for people who don’t already have the SafeSearch filter turned on, with the option to adjust settings at any time.”

Google’s explanatory image (seen above) shows someone logged in and searching for images of “Injury.” A notice shows that “Google turned on SafeSearch blurring,” which “blurs explicit images in your search results.” One of the example image results—“Dismounted Complex Blast Injury (DCBI)” from ResearchGate—is indeed quite explicit, as far as human viscera and musculature go. Google provides one last check if you click on that blurred image: “This image may contain explicit content. SafeSearch blurring is on.”

Explicit images, such as the “blast injury” shown in Google’s example, will be blurred by default in Google image search results unless a user is over 18, signs in, and turns the setting off.

If you click “View image,” you see life’s frail nature. If you click “Manage setting,” you can choose between three settings: Filter (where explicit results don’t show up at all), Blur (where both blurring and are-you-sure clicks occur), and Off (where you see “all relevant results, even if they’re explicit”).

Signed-in users under the age of 18 automatically have SafeSearch enabled, blocking content including “pornography, violence, and gore.” With this change, Google will automatically be blurring explicit content for everybody using Google who doesn’t log in, stay logged in, and specifically ask to show it instead. It’s a way to prevent children from getting access to explicit images, but also, notably, a means of ensuring people are logged in to Google if they’re looking for something… very specific. An incognito window, it seems, just won’t do.
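
Pieced together from the blog post, the default logic works out to roughly the following. The function and names here are invented for illustration; this is not Google’s implementation.

```python
from enum import Enum
from typing import Optional

class SafeSearch(Enum):
    FILTER = "filter"  # explicit results don't show up at all
    BLUR = "blur"      # explicit images blurred, with a click-through to view
    OFF = "off"        # all relevant results, even explicit ones

def effective_safesearch(signed_in: bool, age: Optional[int],
                         user_choice: Optional[SafeSearch]) -> SafeSearch:
    """Rough reconstruction of the defaults described in the post (illustrative only)."""
    if signed_in and age is not None and age < 18:
        return SafeSearch.FILTER   # forced on for under-18 accounts since August 2021
    if user_choice is not None:
        return user_choice         # people who already picked a setting keep it
    return SafeSearch.BLUR         # the new default for everyone else, including logged-out users

# A logged-out (or incognito) searcher now gets blurring by default:
print(effective_safesearch(signed_in=False, age=None, user_choice=None))  # SafeSearch.BLUR
```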

Google turned on SafeSearch as its default for under-18 users in August 2021, having been pressured by Congress to better protect children across its services, including search and YouTube.

OnePlus takes on the iPad with the OnePlus Pad

Android tablets are on their way back, and one of Android’s biggest manufacturers (we’re talking about OnePlus parent company BBK) is bringing an Android tablet to the US for the first time. Say hello to the OnePlus Pad, an 11.61-inch tablet with an optional keyboard and stylus. We don’t know how much it costs, so don’t ask. There’s also no hard release date, but preorders start in April.

What we do know are the specs. The 11.61-inch display is a 144 Hz LCD, with a resolution of 2800×2000. That’s an aspect ratio of 7:5, or a bit wider than a 4:3 display, which OnePlus claims is a “book-like” aspect ratio. The SoC is a MediaTek Dimensity 9000. That’s a rarity in the US, but it’s basically a generic ARM design for 2022 flagship phones, with one 3.05 GHz ARM Cortex X2 CPU, three A710 CPUs, and four A510 CPUs. It’s a 4 nm chip with an ARM Mali-G710 MC10 GPU. You also get 8GB of RAM (there’s an option for 12GB), 128GB of UFS 3.1 storage, and a 9510 mAh battery. This is not super-flagship tablet territory and should (hopefully) come with an affordable price tag.

As always, OnePlus’ trademark quick-charging is here, and it’s 67 W. On a tiny phone battery, that kind of charging will usually take a phone from 0-100 in around a half hour, but with a big tablet battery, a full charge still takes “just over 60 minutes.” In the fine print, OnePlus actually gives a warning against any repair attempts, saying, “The battery has been especially encrypted for safety purposes. Please go to an official OnePlus service center to repair your battery or get a genuine replacement battery.” I’ve never heard of a battery being “encrypted” before, but I think they mean there is a serial number check in the firmware somewhere and that it will presumably refuse to work if you replace it. As for the possibility of an “official OnePlus service center” actually existing, there is a business finder on the OnePlus India website, but not one in the US, so it’s looking like mail-in service only.
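
A quick back-of-envelope check on that charge-time claim, assuming a typical ~3.85 V nominal cell voltage (OnePlus doesn’t state one):

```python
# 9,510 mAh at an assumed 3.85 V nominal works out to roughly 36.6 Wh.
capacity_wh = 9.510 * 3.85
ideal_minutes = capacity_wh / 67 * 60   # if the charger could hold 67 W the whole time
print(f"{capacity_wh:.1f} Wh / 67 W ≈ {ideal_minutes:.0f} minutes at full power")
# ≈ 33 minutes in the ideal case; real charging tapers off well below peak power,
# so a full charge in "just over 60 minutes" is plausible.
```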

The tablet is built around an aluminum unibody that weighs 555 g. The sides are rounded over, which should make it feel comfortable to hold. It comes with four speakers, a USB-C port on the right side, and a set of three pogo pins on the bottom for the keyboard. The back has a circular camera bump that makes it look like a close cousin of the OnePlus 11, and it holds a single 13 MP camera. We also hope you like green, because that appears to be the only color.

There’s no fingerprint sensor at all. There is a cutout that looks like it might be a fingerprint sensor, but we guess that’s just a radio signal window. There’s also no GPS listed on the spec sheet. We know next to nothing about the “OnePlus Magnetic Keyboard” and “OnePlus Stylo” pen. The keyboard has a small trackpad that supports swiping. The pen has a 2 ms response time, which sounds pretty good. That’s about it. Presumably we’ll know more in April.

Listing image by OnePlus

Report: Sonos’ next flagship speaker will be the spatial audio-focused Era 300

Sonos One smart speaker.

Sonos will release a new flagship speaker “in the coming months,” according to a report Monday from The Verge. The publication said this will be called the Era 300 and that Sonos is prioritizing the device’s spatial audio capabilities.

The Verge claimed that Sonos is aiming for the Era 300 to be its most accurate speaker yet. It pointed to a heightened focus on making Dolby Atmos content shine, as well as improving music using spatial audio. According to The Verge, the Era 300 will be a “multidirectional speaker built to get the most from spatial audio” by way of a “completely re-architected acoustic design.”

We don’t have deeper details, like specs or pricing. However, Wi-Fi 6 and a USB-C port are apparently “likely,” and Bluetooth support is also possible. According to The Verge, Sonos has at least looked into including all these features on the Era 300.

The Verge first started reporting about the Era 300, codenamed Optimo 2, in August. This week, it identified more evidence of the speaker’s development in the form of two recent documents from TV mount-maker Sanus that name the Era 300.

In August, The Verge, citing “early, work-in-progress images” it reportedly viewed, said that Sonos’ upcoming flagship speaker would include “an arsenal of drivers, including several that fire in different directions from beneath the shell between the front speaker grille and backplate.” It also suggested a more beefed-up product, with twice the RAM and eight times the flash memory of the highest-specced Sonos speaker available today.

The Verge also claimed this week that Sonos is working on a lower-priced Era 100, suggesting that it could include Dolby Atmos support and serve as a follow-up to the Sonos One, which has a $179 MSRP as of writing.

Should the Era 300 truly debut soon, it will face competition from Apple’s recent $299, full-sized HomePod revival, which supports spatial audio with Dolby Atmos through Apple apps and Apple TV 4K. Besides superior audio quality, a new Sonos flagship could score points with shoppers by playing better with non-Apple devices, such as by including Bluetooth and by besting the Apple speaker’s Wi-Fi 4 support.
