
Google ponders the shortcomings of machine learning


Critics of the current mode of artificial intelligence technology have grown louder in the last couple of years, and this week Google, one of the biggest commercial beneficiaries of the current vogue, offered a response, if perhaps not an answer, to the critics.

In a paper published by the Google Brain and DeepMind units of Google, researchers address shortcomings of the field and offer some techniques they hope will bring machine learning farther along the path to what would be “artificial general intelligence,” something more like human reasoning.

The research acknowledges that current “deep learning” approaches to AI have so far failed to even approach human cognitive skills. Without discarding what has been achieved with things such as “convolutional neural networks,” or CNNs, the shining success of machine learning, the authors propose ways to impart broader reasoning skills.

Also: Google Brain, Microsoft plumb the mysteries of networks with AI

The paper, “Relational inductive biases, deep learning, and graph networks,” posted on the arXiv pre-print service, is authored by Peter W. Battaglia of Google’s DeepMind unit, along with colleagues from Google Brain, MIT, and the University of Edinburgh. It proposes the use of network “graphs” as a means to better generalize from one instance of a problem to another.

Battaglia and colleagues, calling their work “part position paper, part review, and part unification,” observe that AI “has undergone a renaissance recently,” thanks to “cheap data and cheap compute resources.”

However, “many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches,” especially “generalizing beyond one’s experiences.”

Hence, “A vast gap between human and machine intelligence remains, especially with respect to efficient, generalizable learning.”

The authors cite some prominent critics of AI, such as NYU professor Gary Marcus.

In response, they argue for “blending powerful deep learning approaches with structured representations,” and their solution is something called a “graph network.” These are models of collections of objects, or entities, whose relationships are explicitly mapped out as “edges” connecting the objects.

“Human cognition makes the strong assumption that the world is composed of objects and relations,” they write, “and because GNs [graph networks] make a similar assumption, their behavior tends to be more interpretable.”
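
To make that concrete, here is a rough sketch of how a tiny scene might be written down as entities and relations. The scene, the feature values, and the field names (nodes, edges, senders, receivers, globals) are illustrative assumptions on our part; they follow common graph-network conventions rather than anything mandated by the paper.

```python
import numpy as np

# Hypothetical example: a tiny scene ("ball on table, table in room")
# written down as entities (nodes) and relations (edges).
graph_dict = {
    # One feature vector per entity: ball, table, room.
    "nodes": np.array([[1.0, 0.2],
                       [0.0, 1.5],
                       [0.0, 9.0]], dtype=np.float32),
    # One feature vector per relation: "on", "in".
    "edges": np.array([[1.0],
                       [1.0]], dtype=np.float32),
    # Edge k points from senders[k] to receivers[k]:
    # ball -> table ("on"), table -> room ("in").
    "senders": np.array([0, 1], dtype=np.int32),
    "receivers": np.array([1, 2], dtype=np.int32),
    # Optional graph-level ("global") attribute.
    "globals": np.array([0.0], dtype=np.float32),
}
```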

Also: Google Next 2018: A deeper dive on AI and machine learning advances

The paper explicitly draws upon more than a decade of work on “graph neural networks.” It also echoes some of the recent interest by the Google Brain folks in using neural nets to figure out network structure.

But unlike that prior work, the authors make the surprising assertion that their work doesn’t need to use neural networks, per se.

Rather, modeling the relationships of objects is something that not only spans all the various machine learning models — CNNs, recurrent neural networks (RNNs), long-short-term memory (LSTM) systems, etc. — but also other approaches that are not neural nets, such as set theory.

The Google AI researchers reason that many things one would like to be able to reason about broadly — particles, sentences, objects in an image — come down to graphs of relationships among entities.


(Image: Google Brain, DeepMind, MIT, University of Edinburgh)

The idea is that graph networks are bigger than any one machine-learning approach. Graphs bring an ability to generalize about structure that the individual neural nets don’t have.

The authors write, “Graphs, generally, are a representation which supports arbitrary (pairwise) relational structure, and computations over graphs afford a strong relational inductive bias beyond that which convolutional and recurrent layers can provide.”
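
In practice, “computations over graphs” boils down to a repeated three-step update: revise each edge from its endpoints, pool the incoming edges at each node and revise the node, then revise a graph-level summary. Here is a simplified NumPy sketch of one such block, with the learned update functions left as placeholders the caller supplies; it is an illustration of the idea described in the paper, not its reference implementation.

```python
import numpy as np

def gn_block(nodes, edges, senders, receivers, globals_,
             edge_fn, node_fn, global_fn):
    """One simplified graph-network block.

    edge_fn, node_fn and global_fn stand in for learned update functions
    (in practice, small neural networks); here they are plain callables.
    """
    # 1. Update every edge from its own features plus its sender and
    #    receiver node features (and the global attribute).
    new_edges = np.stack([
        edge_fn(edges[k], nodes[senders[k]], nodes[receivers[k]], globals_)
        for k in range(len(edges))
    ])
    # 2. Sum the incoming edge messages at each node, then update the node.
    incoming = np.zeros((len(nodes), new_edges.shape[1]), dtype=new_edges.dtype)
    for k, r in enumerate(receivers):
        incoming[r] += new_edges[k]
    new_nodes = np.stack([
        node_fn(nodes[i], incoming[i], globals_) for i in range(len(nodes))
    ])
    # 3. Update the graph-level attribute from pooled node and edge features.
    new_globals = global_fn(new_nodes.sum(axis=0), new_edges.sum(axis=0), globals_)
    return new_nodes, new_edges, new_globals
```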

A benefit of the graphs would also appear to be that they’re potentially more “sample efficient,” meaning they don’t require as much raw data as strict neural net approaches.

To let you try it out at home, the authors this week offered up a software toolkit for graph networks, to be used with Google’s TensorFlow AI framework, posted on GitHub.
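
Going by the examples published with the toolkit (the deepmind/graph_nets repository on GitHub), wiring up a full graph-network block looks roughly like the snippet below. Treat it as a sketch: module names and signatures should be checked against the library’s own README.

```python
import graph_nets as gn
import sonnet as snt

# A full graph-network block whose edge, node, and global update functions
# are small multilayer perceptrons, per the toolkit's published examples.
graph_network = gn.modules.GraphNetwork(
    edge_model_fn=lambda: snt.nets.MLP([32, 32]),
    node_model_fn=lambda: snt.nets.MLP([32, 32]),
    global_model_fn=lambda: snt.nets.MLP([32, 32]))

# Pack a dict of node/edge/sender/receiver arrays, such as the scene
# sketched earlier, into the library's GraphsTuple and run one pass.
input_graphs = gn.utils_tf.data_dicts_to_graphs_tuple([graph_dict])
output_graphs = graph_network(input_graphs)
```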

Also: Google preps TPU 3.0 for AI, machine learning, model training

Lest you think the authors believe they’ve got it all figured out, the paper lists some lingering shortcomings. Battaglia & Co. pose the big question, “Where do the graphs come from that graph networks operate over?”

Deep learning, they note, just absorbs lots of unstructured data, such as raw pixel information. That data may not correspond to any particular entities in the world. So they conclude that it’s going to be an “exciting challenge” to find a method that “can reliably extract discrete entities from sensory data.”

They also concede that graphs are not able to express everything: “notions like recursion, control flow, and conditional iteration are not straightforward to represent with graphs, and, minimally, require additional assumptions.”

Other structural forms might be needed, such as, perhaps, imitations of computer-based structures, including “registers, memory I/O controllers, stacks, queues” and others.

Previous and related coverage:

What is AI? Everything you need to know

An executive guide to artificial intelligence, from machine learning and general AI to neural networks.

What is deep learning? Everything you need to know

The lowdown on deep learning: from how it relates to the wider field of machine learning through to how to get started with it.

What is machine learning? Everything you need to know

This guide explains what machine learning is, how it is related to artificial intelligence, how it works and why it matters.

What is cloud computing? Everything you need to know

An introduction to cloud computing right from the basics up to IaaS and PaaS, hybrid, public, and private cloud.


Galaxy Z Fold 4 Under-Display Camera May Get A Stealthy Makeover


According to a tweet from the account @SamsungRydah, which was first spied by SamMobile and has since been removed by Twitter over a copyright claim (seemingly lending credibility to the leak), the Galaxy Z Fold 4 will address how conspicuous the under-display camera (UDC) is. The model will reportedly use a denser arrangement of pixels over the camera, providing a 132ppi circle, up from the Galaxy Z Fold 3 model’s measly 94ppi. The result is that the hole will hopefully be less visible, and text should be less distorted in that area. Unfortunately, it’s not completely invisible, at least not based on the leaked slide.

What isn’t clear, however, is whether Samsung is also upgrading the camera sensor itself to something more than just 4MP. Increasing the sensor’s own pixel count could help offset whatever side effects the UDC panel might have in terms of quality. While the Galaxy Z Fold 3 foldable’s internal camera was moderately usable for video calls, it just didn’t sit well with buyers considering how much they’d paid for the premium phone.

An upgraded internal camera would be in line with upgrades to the other cameras expected for the Galaxy Z Fold 4. These include a 50MP main sensor and a 10MP telephoto with 3x optical zoom. These are moderate upgrades, of course, but Samsung seems to be taking a page from Apple’s book here by improving quality through software and other minor tweaks rather than going all out on what would be a bulky sensor that wouldn’t fit the Galaxy Z Fold 4 model’s slim profile.


Today’s Wordle Answer #416 – August 9, 2022 Solution And Hints


The answer to today’s Wordle puzzle (#416 – August 9, 2022) is patty. Its meaning varies across cultural contexts — to the British, it’s a small pie or pastry; to North Americans, it’s a small, round, and flat chocolate-covered peppermint sweet. More generally to Americans, it’s a small flat cake of minced or finely chopped food, especially meat (via Merriam-Webster). To Mr. Krabs of SpongeBob, it’s a veggie burger (and a moneymaker). Seeing as the word patty has roots in the French word “pâte,” which means dough, Mr. Krabs obviously knew what he was doing.

We solved the puzzle in four tries today, just like yesterday and the day before. We began guessing with the word roate, which is an uncommon but excellent first guess (even the WordleBot thought so). After following up with fluid, we hit a lucky strike with catty — only one letter short of the correct answer.


The Reason Ford Won’t Build A Mustang GT500 Convertible


Ford won’t be making a convertible Mustang GT500 because… it’s too powerful.

Hau Thai-Tang, Ford’s chief product platform and operations officer, confirmed the S550 platform on which the Mustang was built had reached “the top end of the capabilities” (via Muscle Cars & Trucks).

Dave Pericack, former director of Enterprise Product Line Management for Ford Icons, backs up those comments even more bluntly. The “real reason” Ford isn’t making a convertible model is that, by removing the roof, the car would lose all its structure and stiffness in the chassis and body. The power of the GT500 is simply too much for a convertible car to handle.

The only way it could make a convertible model would be to “spend a lot of money in exotic material” to compensate for the loss of the roof and the structural integrity it provides (via Ford Authority). Ford is not prepared to do that, considering the S550 platform is nearing the end of its road. The S650 platform — the seventh generation of Ford Mustangs — is on its way and will, in all likelihood, be the last Mustang with an internal combustion engine.

Fear not, Ford faithful. The Blue Oval is already looking to the future and has built a 900hp electric Mustang to show the world that an EV can also be a muscle car.
