Google ponders the shortcomings of machine learning

Critics of the current mode of artificial intelligence technology have grown louder in the last couple of years, and this week Google, one of the biggest commercial beneficiaries of the current vogue, offered a response, if perhaps not an answer, to those critics.

In a paper published by Google's Brain and DeepMind units, researchers address shortcomings of the field and offer some techniques they hope will move machine learning further along the path toward "artificial general intelligence," something more like human reasoning.

The research acknowledges that current "deep learning" approaches to AI have so far failed to approach human cognitive skills. Without discarding what has been achieved with "convolutional neural networks," or CNNs, the shining success of machine learning, the authors propose ways to impart broader reasoning skills.

The paper, "Relational inductive biases, deep learning, and graph networks," posted on the arXiv pre-print server, is authored by Peter W. Battaglia of Google's DeepMind unit, along with colleagues from Google Brain, MIT, and the University of Edinburgh. It proposes the use of "graph networks" as a means to better generalize from one instance of a problem to another.

Battaglia and colleagues, calling their work “part position paper, part review, and part unification,” observe that AI “has undergone a renaissance recently,” thanks to “cheap data and cheap compute resources.”

However, “many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches,” especially “generalizing beyond one’s experiences.”

Hence, “A vast gap between human and machine intelligence remains, especially with respect to efficient, generalizable learning.”

The authors cite some prominent critics of AI, such as NYU professor Gary Marcus.

In response, they argue for “blending powerful deep learning approaches with structured representations,” and their solution is something called a “graph network.” These are models of collections of objects, or entities, whose relationships are explicitly mapped out as “edges” connecting the objects.

“Human cognition makes the strong assumption that the world is composed of objects and relations,” they write, “and because GNs [graph networks] make a similar assumption, their behavior tends to be more interpretable.”
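To make that concrete, the paper formalizes a graph as a 3-tuple of global attributes, nodes (the entities), and edges (the relations), with each edge recording which node sends it and which receives it. The following sketch is illustrative only; the entities and attribute values are invented for the example, not drawn from the paper:

```python
# Illustrative only: a physical system of balls joined by springs,
# expressed in the paper's graph form G = (u, V, E). The attribute
# names and values here are invented for the example.

graph = {
    # u: global attributes describing the system as a whole
    "globals": {"gravity": -9.8},
    # V: one attribute set per entity (node)
    "nodes": [
        {"mass": 1.0, "position": (0.0, 0.0)},
        {"mass": 2.0, "position": (1.0, 0.0)},
        {"mass": 1.5, "position": (2.0, 0.0)},
    ],
    # E: relations (edges); sender/receiver are indices into nodes
    "edges": [
        {"spring_constant": 50.0, "sender": 0, "receiver": 1},
        {"spring_constant": 50.0, "sender": 1, "receiver": 2},
    ],
}
```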

The paper explicitly draws on more than a decade of work on "graph neural networks." It also echoes some of the Google Brain team's recent interest in using neural nets to figure out network structure.

But unlike that prior work, the authors make the surprising assertion that their work doesn’t need to use neural networks, per se.

Rather, modeling the relationships of objects is something that not only spans all the various machine learning models — CNNs, recurrent neural networks (RNNs), long short-term memory (LSTM) systems, etc. — but also approaches that are not neural nets, such as set theory.

The Google AI researchers argue that many of the things one would like to reason about broadly — particles, sentences, objects in an image — come down to graphs of relationships among entities.


(Image: Google Brain, DeepMind, MIT, University of Edinburgh)

The idea is that graph networks are bigger than any one machine-learning approach. Graphs bring an ability to generalize about structure that the individual neural nets don’t have.

The authors write, “Graphs, generally, are a representation which supports arbitrary (pairwise) relational structure, and computations over graphs afford a strong relational inductive bias beyond that which convolutional and recurrent layers can provide.”
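Concretely, the paper's core computational unit, the "GN block," applies that inductive bias by updating edges from their endpoints, aggregating incoming edges at each node, and then updating nodes and global attributes in turn. Here is a minimal, non-learned sketch of that data flow in plain Python; the `phi` update functions, which would be trained neural networks (or anything else) in practice, are replaced with toy arithmetic:

```python
# A schematic of one graph-network (GN) block: edge update, per-node
# aggregation of incoming edges, node update, then global update.
# The phi functions below are toy stand-ins for the paper's learned
# update functions; sum() plays the role of the aggregation rho.

def gn_block(nodes, edges, senders, receivers, u, phi_e, phi_v, phi_u):
    # 1. Update every edge from its attribute, both endpoints, and globals.
    new_edges = [phi_e(e, nodes[s], nodes[r], u)
                 for e, s, r in zip(edges, senders, receivers)]
    # 2. Aggregate updated edges at each receiving node (rho = sum here).
    agg = [0.0] * len(nodes)
    for e, r in zip(new_edges, receivers):
        agg[r] += e
    # 3. Update every node from its aggregate, attribute, and globals.
    new_nodes = [phi_v(a, v, u) for a, v in zip(agg, nodes)]
    # 4. Update the globals from the aggregated edges and nodes.
    new_u = phi_u(sum(new_edges), sum(new_nodes), u)
    return new_nodes, new_edges, new_u

# Tiny example: 3 nodes, 2 directed edges, scalar attributes throughout.
nodes, u = [1.0, 2.0, 3.0], 0.5
edges, senders, receivers = [0.1, 0.2], [0, 1], [1, 2]
print(gn_block(nodes, edges, senders, receivers, u,
               phi_e=lambda e, vs, vr, g: e + vs + vr + g,
               phi_v=lambda a, v, g: a + v + g,
               phi_u=lambda se, sn, g: se + sn + g))
```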

Another apparent benefit of graphs is that they are potentially more "sample efficient," meaning they don't require as much raw data as strict neural-net approaches.

To let you try it out at home, the authors this week offered a software toolkit for graph networks, built to work with Google's TensorFlow AI framework and posted on GitHub.
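The toolkit in question appears to be DeepMind's `graph_nets` library. A minimal sketch of its documented usage pattern follows; it assumes the `graph_nets` and `sonnet` packages are installed, and `get_graphs()` is a placeholder standing in for your own code that produces graph-structured input:

```python
import graph_nets as gn
import sonnet as snt

# get_graphs() is a placeholder for your own code that builds
# graph-structured input data (nodes, edges, globals).
input_graphs = get_graphs()

# A full GN block: separate learned update functions for edges,
# nodes, and global attributes, each a small MLP here.
graph_net_module = gn.modules.GraphNetwork(
    edge_model_fn=lambda: snt.nets.MLP([32, 32]),
    node_model_fn=lambda: snt.nets.MLP([32, 32]),
    global_model_fn=lambda: snt.nets.MLP([32, 32]))

# Pass input graphs through the block to get output graphs with
# updated edge, node, and global features.
output_graphs = graph_net_module(input_graphs)
```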

Lest you think the authors believe they've got it all figured out, the paper lists some lingering shortcomings. Battaglia & Co. pose the big question: "Where do the graphs come from that graph networks operate over?"

Deep learning, they note, just absorbs lots of unstructured data, such as raw pixel information. That data may not correspond to any particular entities in the world. So they conclude that it’s going to be an “exciting challenge” to find a method that “can reliably extract discrete entities from sensory data.”

They also concede that graphs are not able to express everything: “notions like recursion, control flow, and conditional iteration are not straightforward to represent with graphs, and, minimally, require additional assumptions.”

Other structural forms might be needed, such as imitations of computer-based structures, including "registers, memory I/O controllers, stacks, queues," and others.


Check out the 2+2 Chevrolet Corvette that never was

The '60s were an iconic era in the American automotive realm, with some incredibly popular cars getting their start then: vehicles like the Ford Mustang, Chevrolet Camaro, Chevrolet Corvette, and Dodge Charger, to name a few. Sometimes it takes one vehicle to change the industry and spawn many similar products from other automakers. Case in point: Ford and its Mustang, which kicked off the pony car era, eliciting responses in the form of other iconic vehicles.

Another iconic Ford of the era that sold extremely well was the Thunderbird, which routinely outsold the Chevrolet Corvette. Early in its production, the Thunderbird was a two-seat sports car very similar to the Corvette. It grew in later generations, becoming a 2+2 and offering a back seat to carry more passengers. The vehicle in the image above looks like the iconic '60s split-window Corvettes that are so valuable today, but there's a key difference.

The difference is readily apparent in the side-view image in the Instagram post below, where General Motors Design shared photos of a one-off design buck. A design buck is essentially the shell of a vehicle, used by the designers of the day to get its design just right. This particular example was never powered and never cruised the streets.

The car was a response to the Thunderbird, adding back seats to the Corvette in 1962. Sadly, the 2+2 Corvette was never built, and reports indicate the design buck was later crushed. Another interesting tidbit is that GM reportedly brought in a Ferrari to help with the styling and proportions of the car.

As for what finally became of the project, a GM executive named Bunkie Knudsen, who was part of the styling team but wasn't a fan, reportedly worked to get it scrapped. He believed the car would taint the Corvette brand and wouldn't sell in large enough numbers to justify building it. Every Corvette GM has ever sold has been a two-seat sports car.


Alpha Motors Superwolf is a completely decked-out electric pickup

Alpha Motors unveiled a new version of its all-electric pickup called the Superwolf. What sets this version of the truck apart from those shown before is that the Superwolf is completely decked out with the sorts of accessories you might expect to find only in the aftermarket. One of the more interesting accessories on the truck is a set of tube doors, similar to what you commonly see on Jeeps.

The Superwolf also has custom KMC wheels with large off-road tires, a custom front bumper with tow rings and skid plates, and a complete roof rack featuring an LED light bar and a large locking case. In the bed of the truck is a rack that adds more style and supports the roof basket.

Under the doors are compact step rails that look like they are intended to protect the vehicle's body while off-roading. The truck also features wide fender flares and looks fantastic in general. Other interesting features include a bed cover that appears to be made of aluminum and a rack that spans the bed, allowing items to be attached on top of the bed itself.

Several other accessories are available for the truck, including a bed extension and more. Beyond the accessories, the Superwolf offers a driving range of up to 300 miles per charge. It has two motors for four-wheel drive and can reach 60 mph in 6.5 seconds. The truck has a tow rating of 6,724 pounds and features a rapid charger with battery cooling and heating.

The truck's interior holds four passengers and has a digital display for the driver along with a wide-format center display. Bluetooth connectivity and premium sound are also featured. The Superwolf can be reserved now, with a starting MSRP listed at between $48,000 and $56,000.


Classic 1967 Chevrolet Camaro Z/28 Trans Am racer heads to auction

When it comes to muscle cars of the '60s, one of the most iconic is the Chevrolet Camaro. Even a standard Camaro from the era often commands a very high price; this 1967 Chevrolet Camaro Z/28 Trans Am is worth even more, as it's an actual, successful racing car from the period. It is the first of six Sunoco Trans Am Camaros that Penske Racing built.

This particular car has an extensive racing history, with drivers Mark Donohue and George Follmer behind the wheel. It has been completely restored by Kevin McKay in its iconic Sunoco racing livery and is said to be one of the most significant Chevrolet-powered racing cars ever built. Because of its rarity and racing pedigree, the car is expected to bring as much as $2 million at auction in Pebble Beach.

The car features a 302-cubic-inch overhead-valve V-8 engine with a single four-barrel carburetor, estimated to produce 450 horsepower, along with a four-speed manual gearbox and four-wheel hydraulic disc brakes. The front suspension is an independent wishbone setup with coil springs, while the rear has a live axle with leaf springs, a configuration common in the era.

The racing series the car was built for required a 302-cubic-inch engine, and the Z/28 was born of the need to produce examples for homologation. The Z/28 became the Camaro performance production model, with 602 examples built in 1967. The first 25 cars off the assembly line were sent to racers; this particular car was the 14th produced and was sent to Roger Penske.

This car is the first of only six Penske Camaros built between 1967 and 1969. The auction house says that over $330,000 was spent to completely restore the iconic car. It comes with a file documenting its extensive racing history, along with photos of the car as it was discovered and during its restoration.
