Google ponders the shortcomings of machine learning

Critics of the current mode of artificial intelligence technology have grown louder in the last couple of years, and this week, Google, one of the biggest commercial beneficiaries of the current vogue, offered a response, if perhaps not an answer, to the critics.

In a paper published by Google's Brain and DeepMind units, researchers address shortcomings of the field and offer some techniques they hope will bring machine learning further along the path to what would be “artificial general intelligence,” something more like human reasoning.

The research acknowledges that current “deep learning” approaches to AI have so far failed to even approach human cognitive skills. Without discarding what's been achieved with things such as “convolutional neural networks,” or CNNs, the shining success of machine learning, the authors propose ways to impart broader reasoning skills.

The paper, “Relational inductive biases, deep learning, and graph networks,” posted on the arXiv pre-print service, is authored by Peter W. Battaglia of Google’s DeepMind unit, along with colleagues from Google Brain, MIT, and the University of Edinburgh. It proposes the use of network “graphs” as a means to better generalize from one instance of a problem to another.

Battaglia and colleagues, calling their work “part position paper, part review, and part unification,” observe that AI “has undergone a renaissance recently,” thanks to “cheap data and cheap compute resources.”

However, “many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches,” especially “generalizing beyond one’s experiences.”

Hence, “A vast gap between human and machine intelligence remains, especially with respect to efficient, generalizable learning.”

The authors cite some prominent critics of AI, such as NYU professor Gary Marcus.

In response, they argue for “blending powerful deep learning approaches with structured representations,” and their solution is something called a “graph network.” These are models of collections of objects, or entities, whose relationships are explicitly mapped out as “edges” connecting the objects.

“Human cognition makes the strong assumption that the world is composed of objects and relations,” they write, “and because GNs [graph networks] make a similar assumption, their behavior tends to be more interpretable.”
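
To make that concrete, here is a minimal, purely illustrative sketch in Python of entities connected by relation “edges,” with a single round of relational message passing. The entity names, feature values, and update rules are hypothetical stand-ins, not anything taken from the paper or from DeepMind's code:

```python
# Illustrative toy example only: two entities ("ball" and "floor") joined by
# one relation, plus a single round of message passing over that edge. The
# update functions below are hypothetical stand-ins for the learned functions
# a real graph network would use.

nodes = {
    "ball":  [1.0, 0.0],
    "floor": [0.0, 1.0],
}
edges = [
    ("ball", "floor", [0.5]),  # (sender, receiver, edge features)
]

def message(sender_feats, edge_feats):
    # Stand-in for a learned edge function: scale the sender's features.
    return [s * edge_feats[0] for s in sender_feats]

def update_node(node_feats, incoming):
    # Stand-in for a learned node function: add the summed incoming messages.
    if not incoming:
        return node_feats
    summed = [sum(vals) for vals in zip(*incoming)]
    return [n + m for n, m in zip(node_feats, summed)]

# One propagation step: every edge emits a message; every node aggregates.
inbox = {name: [] for name in nodes}
for sender, receiver, edge_feats in edges:
    inbox[receiver].append(message(nodes[sender], edge_feats))

nodes = {name: update_node(feats, inbox[name]) for name, feats in nodes.items()}
print(nodes)  # {'ball': [1.0, 0.0], 'floor': [0.5, 1.0]}
```

The authors' point is that this explicit relational structure, rather than any particular update function, is what carries the inductive bias.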

The paper explicitly draws on more than a decade of work on “graph neural networks.” It also echoes the Google Brain team's recent interest in using neural nets to figure out network structure.

But unlike that prior work, the authors make the surprising assertion that their work doesn’t need to use neural networks, per se.

Rather, modeling the relationships of objects is something that not only spans all the various machine learning models — CNNs, recurrent neural networks (RNNs), long short-term memory (LSTM) systems, etc. — but also other approaches that are not neural nets, such as set theory.

The Google AI researchers reason that many things one would like to be able to reason about broadly — particles, sentences, objects in an image — come down to graphs of relationships among entities.


(Image: Google Brain, DeepMind, MIT, University of Edinburgh)

The idea is that graph networks are bigger than any one machine-learning approach. Graphs bring an ability to generalize about structure that the individual neural nets don’t have.

The authors write, “Graphs, generally, are a representation which supports arbitrary (pairwise) relational structure, and computations over graphs afford a strong relational inductive bias beyond that which convolutional and recurrent layers can provide.”

A benefit of the graphs would also appear to be that they're potentially more “sample efficient,” meaning they don't require as much raw data as strict neural-net approaches.

To let you try it out at home, the authors this week offered up a software toolkit for graph networks, posted on GitHub, to be used with Google's TensorFlow AI framework.
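
Based on the usage pattern documented with that toolkit (DeepMind's graph_nets library, which builds on TensorFlow and Sonnet), a typical call looks roughly like the sketch below; exact module names and signatures may differ between releases, and get_graphs is a placeholder for your own data-loading code, not a library function:

```python
import graph_nets as gn
import sonnet as snt

# Placeholder: supply your own graph-structured data here (nodes, edges,
# and globals). get_graphs is hypothetical, not part of the library.
input_graphs = get_graphs()

# A graph network block with small MLPs as the learned edge, node,
# and global update functions.
graph_net_module = gn.modules.GraphNetwork(
    edge_model_fn=lambda: snt.nets.MLP([32, 32]),
    node_model_fn=lambda: snt.nets.MLP([32, 32]),
    global_model_fn=lambda: snt.nets.MLP([32, 32]))

# Running the module returns graphs with updated node, edge, and global features.
output_graphs = graph_net_module(input_graphs)
```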

Lest you think the authors believe they've got it all figured out, the paper lists some lingering shortcomings. Battaglia & Co. pose the big question, “Where do the graphs come from that graph networks operate over?”

Deep learning, they note, just absorbs lots of unstructured data, such as raw pixel information. That data may not correspond to any particular entities in the world. So they conclude that it’s going to be an “exciting challenge” to find a method that “can reliably extract discrete entities from sensory data.”

They also concede that graphs are not able to express everything: “notions like recursion, control flow, and conditional iteration are not straightforward to represent with graphs, and, minimally, require additional assumptions.”

Other structural forms might be needed, such as, perhaps, imitations of computer-based structures, including “registers, memory I/O controllers, stacks, queues” and others.

Waymo recreated fatal crashes putting its software at the wheel – Here’s how it did

Waymo is tackling the safety issue of autonomous vehicles head-on, using simulations to replay fatal crashes but replacing the human driver involved with the Alphabet company’s software, to show what the Waymo Driver would’ve done differently. The research looked at every fatal accident recorded in Chandler, Arizona – where the Waymo One driverless car-hailing service currently operates – between 2008 and 2017.

“We excluded crashes that didn’t match situations that the Waymo Driver would face in the real world today, such as when crashes occurred outside of our current operating domain,” Trent Victor, Director of Safety Research and Best Practices at Waymo, explains. “Then, the data was used to carefully reconstruct each crash using best-practice methods. Once we had the reconstructions, we simulated how the Waymo Driver might have performed in each scenario.”

In total, there were 72 reconstructed crash scenarios for the system to handle. In those where two cars were involved, Waymo modeled each crash in two ways: first with the Waymo Driver in control of the “initiator” vehicle, which initiated the crash, and then again with it as the “responder” vehicle, which reacts to the initiator's actions. That took the total to 91 simulations.

The Waymo Driver avoided every crash as initiator – a total of 52 simulations – Waymo says. That was mainly down to the computer following rules of the road that the human drivers in the actual crashes did not: not speeding, maintaining a safe gap with other traffic, not running red lights, and yielding appropriately.

On the flip side, where the Waymo Driver was the responder, it managed to avoid 82-percent of the crashes in the simulations. According to Waymo’s Victor, “in the vast majority of events, it did so with smooth, consistent driving – without the need to brake hard or make an urgent evasive response.”

In a further 10-percent of the simulations, the Waymo Driver was able to take action to mitigate the crash’s severity. There, the driver was 1.3-15x less likely to sustain a serious injury, Waymo calculates.

Finally, in the remaining 8-percent of crashes simulated, the Waymo Driver was unable to mitigate or avoid the impact. They were all situations where a human-operated vehicle struck the back of a Waymo vehicle that was stationary or moving at a constant speed, giving the Waymo Driver “little opportunity to respond,” Victor explains.
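
Reading those figures together (a back-of-the-envelope tally, which assumes the 52 initiator simulations and the responder simulations account for all 91 runs; Waymo doesn't break the responder results out as raw counts):

```python
# Back-of-the-envelope tally of the reported figures. The per-category
# responder counts below are inferred from the percentages in the article,
# not stated explicitly by Waymo.
total_sims = 91
initiator_sims = 52                           # all avoided, per Waymo
responder_sims = total_sims - initiator_sims  # 39, if the two roles cover all runs

avoided   = round(0.82 * responder_sims)  # ~32 crashes avoided outright
mitigated = round(0.10 * responder_sims)  # ~4 crashes with reduced severity
unavoided = round(0.08 * responder_sims)  # ~3 rear-end impacts with no time to react

print(responder_sims, avoided, mitigated, unavoided)  # 39 32 4 3
```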

That is equally important, Waymo argues, because when they finally launch in any significant number, autonomous vehicles are going to have to coexist with human drivers on the road for some time to come. Those human drivers can’t be counted on to follow the same rules as stringently as Waymo’s software demands.

Waymo has released a paper detailing its findings. Part of the challenge for assessing autonomous vehicles, it argues, is that high-severity collisions are thankfully relatively rare in the real world. As such, “evaluating effectiveness in these scenarios through public road driving alone is not practical given the gradual nature of ADS deployments.”

2022 Genesis G70 Launch Edition previews sport sedan refresh

Genesis has revealed the new 2022 G70 Launch Edition, the first of the refreshed versions of its compact sports sedan to land in the US, looking handsome with the automaker’s striking new design language. Announced last October, Genesis’ smallest sedan will debut initially in the form of the limited-production 2022 G70 Launch Edition, with only 500 expected to be offered.

Where the old G70 had a squared-off fascia, this updated version is a lot softer in its angles. The bottom edge of the oversized shield-shaped front grille now comes to a point in the lower fascia, rather than being flat, while that lower grille section is more muscular and contoured.

It’s the headlamps, though, that are the biggest departure. They get Genesis’ new signature quad-LED element, with dual horizontal daytime running lamp lines on each side. It’s something we’ve seen the automaker put to good use on its larger sedans, and on SUVs like the new GV80.

Genesis says the new G70 is lower and wider at the front end, while the profile of the sedan is sharper, too. At the rear, the trunk lid has been smoothed out, with a more distinctive integrated spoiler. The taillamp clusters, meanwhile, have a more angular appearance, echoing the quad LED light signature at the front. Altogether it looks tidier and more focused than the outgoing car.

Inside, meanwhile, the changes are more subtle. The dashboard shape in general has been carried over, with dedicated HVAC control knobs, a physical transmission shifter, and a multifunction steering wheel. However, there’s now a new 10.25-inch HD display atop the dashboard, replacing the old 8-inch version.

That gets the graphics from Genesis’ more recent models, a huge improvement compared to the Hyundai-donated software UI in the last-gen G70. There’s both Apple CarPlay and Android Auto, and the driver gets an 8-inch HD digital gauge cluster flanked by analog dials.

As for what’s under the hood, don’t expect a departure from the existing engines. That includes the optional 3.3-liter twin-turbo V6, with 365 horsepower. The entry engine is a carry-over of the 2.0-liter turbocharged inline-4, with 252 horsepower. An 8-speed automatic is likely to be standard; the six-speed manual gearbox Genesis once offered won’t be making an appearance.

Genesis will keep the options simple for the Launch Edition: it’ll only offer the sedan in Verbier White or Melbourne Grey matte paint. 19-inch black wheels will be standard, as will a red leather interior. Although you’ll be able to pick RWD or AWD, the G70 Launch Edition will only be offered with the more potent V6 engine, Car & Driver reports.

Pricing is yet to be confirmed, though the current G70 starts at just north of $37k. Reservations for the Launch Edition are open now, with the first cars set to arrive in the US come the spring.

GMC Hummer EV SUV reveal dated: Watch the electric pickup go sideways on ice

GMC will reveal its second Hummer EV variant in just a few weeks’ time, with the SUV version of the all-electric super truck promising an alternative body style to the original pickup. The GMC Hummer EV SUV will be unveiled on April 3, the automaker confirmed today, though this isn’t the first time we’ve heard about the new version.

Back in July 2020, in fact, GMC teased what we could expect from the SUV body. As you might expect, it’s the same bold lines and chunky styling from the front back to roughly the C-pillars.

However, unlike the pickup’s roughly 5-foot-long bed, the SUV will have an enclosed cargo area. That will allow for a spare wheel to be mounted on the tailgate. We’re still expecting to see removable roof panels, allowing most of the top of the electric truck to be opened up, though final cargo capacity will have to wait until the official reveal.

As for what’s underneath the sheet metal, there we’re unlikely to see GMC straying too far from the architecture of the Hummer EV pickup. Based on GM’s Ultium platform for electric vehicles, that includes up to three motors and 1,000 horsepower in total, depending on trim. Torque vectoring – where power is individually controlled in its delivery to each rear wheel – and a “CrabWalk” mode that allows the truck to track diagonally at low speeds in off-road or tight parking lot conditions are also supported.

0-60 mph should come in around 3 seconds for the most potent Hummer EV, GMC has said, while range will be up to around 350 miles on a charge. 800V DC fast charging with support for up to 350 kW should mean 100 miles of range added in just 10 minutes.
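
As a rough, hypothetical sanity check of those charging numbers (it assumes the 350 kW peak rate is held for the full 10 minutes, which real-world charging curves rarely allow):

```python
# Rough sanity check of GMC's charging claim. The implied efficiency is an
# inference from the quoted figures, not an official specification.
peak_power_kw = 350
minutes = 10
energy_added_kwh = peak_power_kw * minutes / 60      # ~58 kWh if the peak rate held

miles_added = 100
implied_efficiency = miles_added / energy_added_kwh  # ~1.7 miles per kWh

print(round(energy_added_kwh, 1), round(implied_efficiency, 2))  # 58.3 1.71
```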

While GMC is launching the pickup version with the limited-availability 2022 Hummer EV Edition 1 first, it has more affordable versions planned for 2022 and beyond. That’s likely to be the same strategy the automaker takes with the electric SUV, with premium pricing and a heavily constrained supply to begin with. Reservations for the SUV will open on April 3, GMC has said.

As for progress on the electric pickup, GMC says it has been undertaking winter testing in Michigan’s Upper Peninsula, making ample use of the snow and ice to see how the all-wheel drive holds up. That also includes testing of the electronic stability control and traction control.

Production of the 2022 Hummer EV pickup is expected to begin in the fall, GMC says, with initial deliveries before the end of the year.
