
Google ponders the shortcomings of machine learning

Critics of the current mode of artificial intelligence technology have grown louder in the last couple of years, and this week Google, one of the biggest commercial beneficiaries of the current vogue, offered a response, if perhaps not an answer, to those critics.

In a paper published by Google's Brain and DeepMind units, researchers address shortcomings of the field and offer some techniques they hope will bring machine learning farther along the path toward "artificial general intelligence," something more like human reasoning.

The research acknowledges that current "deep learning" approaches to AI have failed even to approach human cognitive skills. Without discarding all that has been achieved with approaches such as "convolutional neural networks," or CNNs, the shining success of machine learning, the authors propose ways to impart broader reasoning skills.


The paper, “Relational inductive biases, deep learning, and graph networks,” posted on the arXiv pre-print service, is authored by Peter W. Battaglia of Google’s DeepMind unit, along with colleagues from Google Brain, MIT, and the University of Edinburgh. It proposes the use of network “graphs” as a means to better generalize from one instance of a problem to another.

Battaglia and colleagues, calling their work “part position paper, part review, and part unification,” observe that AI “has undergone a renaissance recently,” thanks to “cheap data and cheap compute resources.”

However, “many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches,” especially “generalizing beyond one’s experiences.”

Hence, “A vast gap between human and machine intelligence remains, especially with respect to efficient, generalizable learning.”

The authors cite some prominent critics of AI, such as NYU professor Gary Marcus.

In response, they argue for “blending powerful deep learning approaches with structured representations,” and their solution is something called a “graph network.” These are models of collections of objects, or entities, whose relationships are explicitly mapped out as “edges” connecting the objects.
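Concretely, such a graph can be written down as a set of entities plus an explicit list of the relations between them. Here is a minimal sketch in plain Python; the entities, features, and relation are invented for illustration, and a real graph network would attach learned vector features to each element rather than hand-picked numbers:

```python
# A graph network's input: entities (nodes), relations (edges),
# and an optional graph-level ("global") attribute.
# Illustrative sketch only -- not the paper's code.

graph = {
    # Node features: one vector per entity (e.g., particles, words, objects).
    "nodes": {"ball": [1.0, 0.0], "table": [0.0, 1.0]},
    # Each edge is (sender, receiver, edge features) -- an explicit relation.
    "edges": [("ball", "table", [0.5])],   # e.g., "rests on", with a weight
    # A global attribute summarizing the whole system.
    "globals": [0.0],
}

# Because relations are explicit, questions like "which entities act on
# others?" are direct lookups rather than something a net must infer.
senders = [s for s, _, _ in graph["edges"]]
print(senders)  # ['ball']
```

Making the relational structure explicit like this, rather than leaving it implicit in pixel grids or token sequences, is what the authors mean by behavior that "tends to be more interpretable."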

“Human cognition makes the strong assumption that the world is composed of objects and relations,” they write, “and because GNs [graph networks] make a similar assumption, their behavior tends to be more interpretable.”


The paper explicitly draws on more than a decade of work on "graph neural networks." It also echoes the Google Brain team's recent interest in using neural nets to figure out network structure.

But unlike that prior work, the authors make the surprising assertion that their work doesn’t need to use neural networks, per se.

Rather, modeling the relationships of objects is something that not only spans all the various machine learning models — CNNs, recurrent neural networks (RNNs), long short-term memory (LSTM) systems, etc. — but also other approaches that are not neural nets, such as set theory.

The Google AI researchers argue that many of the things one would like to reason about broadly — particles, sentences, objects in an image — come down to graphs of relationships among entities.


(Image: Google Brain, DeepMind, MIT, University of Edinburgh)

The idea is that graph networks are bigger than any one machine-learning approach. Graphs bring an ability to generalize about structure that the individual neural nets don’t have.

The authors write, “Graphs, generally, are a representation which supports arbitrary (pairwise) relational structure, and computations over graphs afford a strong relational inductive bias beyond that which convolutional and recurrent layers can provide.”
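That "relational inductive bias" amounts to computing over the graph's edges. Below is a toy, non-learned version of one round of graph-network-style propagation; sum aggregation and simple arithmetic updates stand in for the trained neural networks the paper uses:

```python
# One round of graph-network-style message passing, in plain Python.
# Sketch under simple assumptions: scalar node features, sum aggregation,
# and additive update functions standing in for learned networks.

nodes = {"a": 1.0, "b": 2.0, "c": 3.0}
edges = [("a", "b"), ("a", "c"), ("b", "c")]  # (sender, receiver) pairs

# Edge update: each edge computes a "message" from its two endpoints.
messages = {(s, r): nodes[s] + nodes[r] for s, r in edges}

# Aggregation: each node sums the messages arriving on its incoming edges.
incoming = {n: sum(m for (s, r), m in messages.items() if r == n)
            for n in nodes}

# Node update: combine the old state with the aggregated messages.
new_nodes = {n: nodes[n] + incoming[n] for n in nodes}
print(new_nodes)  # {'a': 1.0, 'b': 5.0, 'c': 12.0}
```

The key point is that the computation follows the edges, whatever the graph's shape, which is how the same machinery can generalize across problem instances with different numbers of entities and relations.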

A benefit of graphs also appears to be that they're potentially more "sample efficient," meaning they don't require as much raw data as strict neural-net approaches.

To let you try it out at home, the authors this week offered up a software toolkit for graph networks, to be used with Google's TensorFlow AI framework, posted on GitHub.


Lest you think the authors think they’ve got it all figured out, the paper lists some lingering shortcomings. Battaglia & Co. pose the big question, “Where do the graphs come from that graph networks operate over?”

Deep learning, they note, just absorbs lots of unstructured data, such as raw pixel information. That data may not correspond to any particular entities in the world. So they conclude that it’s going to be an “exciting challenge” to find a method that “can reliably extract discrete entities from sensory data.”

They also concede that graphs are not able to express everything: “notions like recursion, control flow, and conditional iteration are not straightforward to represent with graphs, and, minimally, require additional assumptions.”

Other structural forms might be needed, such as, perhaps, imitations of computer-based structures, including “registers, memory I/O controllers, stacks, queues” and others.
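To make the iteration limitation concrete: a common workaround is to unroll a loop into a fixed number of repeated blocks, which a chain-structured graph can represent. A toy sketch, where a trivial doubling function (my invention, not anything from the paper) stands in for one graph-network block:

```python
# Graphs describe fixed structure, so "repeat until done" has no direct
# graph form. A standard workaround: unroll the loop into k fixed steps,
# i.e., a chain of k identical blocks. Toy illustration.

def step(x):
    return 2 * x  # stand-in for one graph-network block

def unrolled(x, k):
    # k applications form a fixed chain: x -> step -> step -> ... -> result.
    for _ in range(k):
        x = step(x)
    return x

print(unrolled(1, 4))  # 16
```

The cost of this trick is exactly the "additional assumption" the authors flag: the depth k must be chosen in advance, unlike a genuine conditional loop that runs until some condition is met.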



Hertz's massive Tesla Model 3 EV order sends interest rocketing


Hertz today made a whopper of an announcement, revealing that it will be ordering 100,000 Teslas over the next year and adding those EVs to its fleet. Not only that, but Hertz will also install several thousand chargers at its network of locations around the US and Europe. With this move, Hertz says that it will “offer the largest EV rental fleet in North America and one of the largest in the world.”

Specifically, Hertz said today that it would purchase 100,000 Tesla Model 3s and those cars will start being available at airport and neighborhood locations across the US and Europe in November. At first, Hertz says its EV fleet will only be available in major markets within the US and “select cities” in Europe, but it seems the company is looking to expand availability rather quickly.

By the end of 2022, Hertz says it plans to have these Tesla Model 3s available in 65 markets, expanding further to 100 markets by the end of 2023. In all, Hertz plans to install 3,000 Tesla Superchargers at various locations in the US and Europe as well. Perhaps unsurprisingly, this announcement sent Tesla's stock on something of a surge this morning, but even more important is the impact this could have on the public perception of electric vehicles.

Hertz says that EVs will comprise more than 20% of its global rental fleet when the order is complete. In addition to placing this order, Hertz has also hired NFL star Tom Brady for an ad campaign centered around renting, driving, and charging these Tesla Model 3s. On Hertz’s website, we see that charging will be included in EV rentals, but only for a limited time. Beyond February 1st, 2022, it sounds like renters will have to foot the bill for charging.

It’s an ambitious plan, but in today’s announcement, Hertz cautions that the order of 100,000 Tesla Model 3s may be impacted by factors outside of its control. Specifically, the company is talking about the global semiconductor shortage, which could impact the production of computing hardware for some time to come. In any case, look for the option to rent a Tesla Model 3 to start popping up at Hertz locations next month.


Bentley Mulliner to unveil new bespoke collections at the Fort Lauderdale Boat Show


Bentley Mulliner has curated three new bespoke design collections for its American clientele. The Mulliner Nauticus Collection will debut at the Fort Lauderdale International Boat Show, October 27 to 31, 2021. The Nauticus Collection includes four Continental GT V8 Convertible models dressed in a fancy yachting theme.

The Continental GT Nauticus Collection has Aegean Blue and Ghost White paint with custom 22-inch Aegean Blue polished wheels. Wearing Bentley Mulliner’s carbon fiber Styling Specification, each Nauticus Bentley also gets a carbon front splitter, side skirts, rear spoiler, and rear diffuser. Also, the interior is resplendent in Brunel, Linen, and Portland leather in bespoke color splits, while the center console is home to open-pore chevron light veneers.

“This timeless classic further highlights the infinite possibilities and abilities in which to blend art and technology into a unique emotion and experience,” said Peter Brandt, Holmann Automotive Vice President and General Manager of Bentley Fort Lauderdale. “The individuality and distinctiveness of the Mulliner design aesthetics perfectly align with our pursuit of creating unique customer experiences.”

Meanwhile, Bentley of Manhattan commissioned the Mulliner Skyline Collection for the Flying Spur, Continental GT Convertible, and Bentayga luxury SUV. As expected, the Skyline Collection draws inspiration from the breathtaking skyscrapers of the Big Apple.

All members of the Mulliner Skyline Collection will wear a unique Onyx black paint applied by hand and robots, said Bentley. Other noteworthy features include 22-inch black and silver alloy wheels, darker exterior trim, LED welcome lighting, and premium silver interior trim.

Last but not least is the Mulliner Miami Collection, inspired by Miami’s colorful lifeguard stations and pulsating art scene. The collection includes the Flying Spur, Bentayga, and Continental GT finished in bright Orange, Blue, and Lime Green paint. Inside, Bentley’s Miami Collection gets two-tone painted piano veneers, Klein Blue leather upholstery, and bespoke quilting, among many others.

“Our dealers are very involved in each of their local markets and communities with appreciation to maximize the ability to promote Bentley’s craftsmanship,” said Mike Rocco, Vice President of Sales & Operations for Bentley Americas. “The opportunity to expand inventory offering to customers and present a truly unique experience through the Personal Commissioning Guide is remarkable.”


Vazirani Ekonk is possibly the world’s lightest electric car


Indian electric automaker Vazirani is not as mainstream as Tesla and other legacy EV makers, but that may soon change with the launch of the Ekonk performance EV. The Ekonk is a low-drag performance concept that marks the start of Vazirani's assault on the burgeoning EV category. The name "Ekonk" derives from Indian scriptures, representing the divine light, "where for the first time design and innovation comes together," said the company.

From the looks of it, the Ekonk is a dedicated performance contender, and it's doing it the old-school way: by shedding weight. Vazirani claims the Ekonk is the world's lightest electric car, and losing pounds in an EV is hard to do without losing a wheel or two. Tipping the scales at around 1,627 pounds (738 kg), the Ekonk is almost as lightweight as three-wheeled EVs like the Arcimoto (590 kg) and Daymak Spiritus (620 kg).

In addition, the Ekonk is lighter than the Aptera solar EV (816 kg) despite having four wheels instead of three, so Vazirani may be onto something here. How did they do it? "Biomimicry – studying how animals and humans use breathing to regulate their body temperatures – combined with some ancient Indian manufacturing techniques resulted in the invention of the DiCo technology," said Chunky Vazirani, CEO of Vazirani.

Vazirani’s proprietary DiCo battery cooling technology “allows the batteries to cool directly with the air, as opposed to needing liquid cooling,” adds Vazirani. The carmaker adds DiCo is the first solid-state direct cooling system that utilizes nanoparticle technology to enhance the range and performance of any EV.

The automaker did not mention the battery capacity, but it did say the Ekonk can rush from a dead stop to 60 mph in 2.5 seconds and has a top speed of 192 mph. Also, we have no idea how many motors are hiding underneath the Ekonk's aerodynamically optimized body shell, but Vazirani claims 722 horsepower and mountains of torque.

Additionally, the Ekonk is a proper driver's car. Vazirani says it has no driver aids, with just the electric motors and the wheels separating the driver from the road. It's not a bad-looking thing, either, but the absence of a fixed or foldable roof means this car is a genuine track toy. If a speedster isn't your thing, Vazirani also offers the Shul hyper EV (with two doors and a roof) featuring the same DiCo battery cooling technology.
