Connect with us

Tech News

This clever AI hid data from its creators to cheat at its appointed task – TechCrunch

Published

on

Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!

This occurrence reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do.

The intention of the researchers was, as you might guess, to accelerate and improve the process of turning satellite imagery into Google’s famously accurate maps. To that end the team was working with what’s called a CycleGAN — a neural network that learns to transform images of type X and Y into one another, as efficiently yet accurately as possible, through a great deal of experimentation.

In some early results, the agent was doing well — suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn’t seem to be on the latter at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to do the reverse process:

The original map, left; the street map generated from the original, center; and the aerial map generated only from the street map. Note the presence of dots on both aerial maps not represented on the street map.

Although it is very difficult to peer into the inner workings of a neural network’s processes, the team could easily audit the data it was generating. And with a little experimentation, they found that the CycleGAN had indeed pulled a fast one.

The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.

So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.

In fact, the computer is so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map! It doesn’t even have to pay attention to the “real” street map — all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map, as the researchers confirmed:

The map at right was encoded into the maps at left with no significant visual changes.

The colorful maps in (c) are a visualization of the slight differences the computer systematically introduced. You can see that they form the general shape of the aerial map, but you’d never notice it unless it was carefully highlighted and exaggerated like this.

This practice of encoding data into images isn’t new; it’s an established science called steganography, and it’s used all the time to, say, watermark images or add metadata like camera settings. But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new. (Well, the research came out last year, so it isn’t new new, but it’s pretty novel.)

One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.

As always, computers do exactly what they are asked, so you have to be very specific in what you ask them. In this case the computer’s solution was an interesting one that shed light on a possible weakness of this type of neural network — that the computer, if not explicitly prevented from doing so, will essentially find a way to transmit details to itself in the interest of solving a given problem quickly and easily.

This is really just a lesson in the oldest adage in computing: PEBKAC. “Problem exists between keyboard and computer.” Or as HAL put it: “It can only be attributable to human error.”

The paper, “CycleGAN, a Master of Steganography,” was presented at the Neural Information Processing Systems conference in 2017. Thanks to Fiora Esoterica and Reddit for bringing this old but interesting paper to my attention.



Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Tech News

Here’s Why The Cantilever Aero Bullet Is Considered The Worst Planes Ever Built

Published

on

The Wrights were engineers all over the world trading notes and testing prototypes with the shared goal of powered flight. Alberto Santos-Dumont flew a manned airship in a neat circle around the Eiffel Tower in 1901. Wilhelm Kress’s Drachenflieger might have etched its name in the Austrian sky in the same year, had its power-to-weight ratio not been thrown off by errors at a fledgling engine builder called Daimler.

All that seems to have sounded too much like work for Christmas. He did not study aerial flight. He carried out no experiments. He decided to skip to the part where people would pay him and a flying machine would appear. To that end, he founded the Christmas Aeroplane Company in 1909. In 1918, it would be known as the Cantilever Aero Company.

Christmas had nothing to sell but a story to the Continental Aircraft Corporation and New York Senator James Wolcott Wadsworth when World War I broke out.

[Featured image by Flight Archive at FlightGlobal via Wikimedia Commons | Cropped and scaled | CC BY-SA 3.0 ]

Continue Reading

Tech News

Samsung SmartThings Station Review: One-Button Connected Home Control

Published

on

The SmartThings Station looks very similar in size and shape to Samsung’s Galaxy 15W Wireless Charger, with a couple of key extras. First, the “Smart Button” on the top panel lets you trigger up to three automated sequences involving any of your connected smart home devices. And two indicator lights on the front face of the unit show the status of the wireless charger and the status of the Station as a smart hub, such as: working normally, restarting, can’t connect to the Internet, or scanning for new devices to add to SmartThings.

The unit I tested came with a USB-C to USB-C cable, and an AC power adapter. There is also a lower-priced SKU that does not include the power adapter, but be wary of that, as many online commenters complained that it did not work with their third-party power adapters. 

Once I plugged in the SmartThings Station, and it booted up for the first time, a pop-up on my Samsung Galaxy S22 Ultra phone prompted me to go to the SmartThings app, where I connected the Station to the same Wi-Fi network as the phone. You can opt to save the Station’s network connectivity info to Samsung’s SmartThings cloud while you’re at it.

After setup, the app shows the Station device info, such as its location (My home, My office, etc.) and room (living room, bedroom, kitchen, and so forth).

Continue Reading

Tech News

Reasons To Like An Affordable Electric Pony

Published

on

All of the settings are accessed through Ford’s oversized infotainment screen, a 15.5-inch portrait aspect touchscreen floating within easy reach of the driver. Ford has trimmed physical controls to a minimum, though there’s a volume knob integrated into the touchscreen — and which can also adjust temperature and other settings, depending on mode — plus a drive mode selector knob, and steering wheel controls.

SYNC 4A, Ford’s infotainment system, generally makes good use of that screen real estate, though it can take a little familiarizing as there are a lot of menus, slide-down trays, and different views. The core HVAC controls are persistent across the bottom, while buttons at the top jump into the settings, a wireless Apple CarPlay or Android Auto connection, pull up the cameras, or trigger Amazon Alexa.

It’s all fast and reasonably slick — Ford has pushed out a number of updates to the UI since the EV first launched — and the rest of the Mustang Mach-E’s cabin holds up, too. Select models do without some of the fancier trim and materials, but it still feels sturdy and spacious. Even this base model gets a wireless phone charger and multiple USB ports in both A and C flavors, and while the color scheme may not be exactly colorful, it feels like it could hold up to family use.

The same goes for the storage. Alongside plenty of cabin cubbies, there’s a 29.7 cu-ft trunk, which expands to 59.7 cu-ft with the rear split seats folded. Under the hood is a further 4.7 cu-ft of space, both waterproof and with a useful drainage plug if you need to hose it down after storing muddy boots there.

Continue Reading

Trending