This self-driving AI faced off against a champion racer (kind of) – TechCrunch

Developments in the self-driving car world can sometimes be a bit dry: a million miles without an accident, a 10 percent increase in pedestrian detection range, and so on. But this research has both an interesting idea behind it and a surprisingly hands-on method of testing: pitting the vehicle against a real racing driver on a course.

To set expectations here, this isn’t some stunt; it’s actually warranted given the nature of the research. And it’s not like they were trading positions, jockeying for entry lines, and generally rubbing bumpers. They went separately, and the researcher, whom I contacted, politely declined to provide the actual lap times. This is science, people. Please!

The question that Nathan Spielberg and his colleagues at Stanford were interested in answering has to do with an autonomous vehicle operating under extreme conditions. The simple fact is that a huge proportion of the miles driven by these systems are at normal speeds and in good conditions. And most obstacle encounters are similarly ordinary.

If the worst should happen and a car needs to exceed these ordinary bounds of handling — specifically friction limits — can it be trusted to do so? And how would you build an AI agent that can do so?

The researchers’ paper, published today in the journal Science Robotics, begins with the assumption that a physics-based model just isn’t adequate for the job. These are computer models that simulate the car’s motion in terms of weight, speed, road surface, and other conditions. But they are necessarily simplified, and their assumptions produce increasingly inaccurate results as conditions exceed ordinary limits.

Imagine a simulator that reduces each wheel to a point or a line, when during a slide it matters greatly which part of the tire is experiencing the most friction. Simulations detailed enough to capture that are beyond the ability of current hardware to run quickly or accurately enough. But the results of such simulations can be summarized into inputs and outputs, and that data can be fed into a neural network — one that turns out to be remarkably good at taking turns.
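
To make the idea concrete, here is a minimal sketch, entirely my own illustration rather than the Stanford team's code: a small feedforward network is trained on input/output pairs summarized from a physics simulation, learning how the car's state changes for a given state and control command. The state variables, control inputs, network size, and training data below are all placeholders.

```python
# Minimal sketch (not the paper's model): learn vehicle dynamics from
# simulator-generated input/output pairs with a small feedforward network.
import torch
import torch.nn as nn

class DynamicsNet(nn.Module):
    def __init__(self, state_dim=4, control_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + control_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),  # predicted change in state
        )

    def forward(self, state, control):
        return self.net(torch.cat([state, control], dim=-1))

# Placeholder data standing in for simulator output: states (e.g. speed,
# yaw rate, sideslip, heading error) and controls (e.g. steering, throttle),
# paired with the simulated change in state over one time step.
states = torch.randn(1024, 4)
controls = torch.randn(1024, 2)
deltas = torch.randn(1024, 4)  # would come from the physics simulation

model = DynamicsNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(states, controls), deltas)
    loss.backward()
    optimizer.step()
```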

The simulation provides the basics of how a car of this make and weight should move when it is going at speed X and needs to turn at angle Y — obviously it’s more complicated than that, but you get the idea. The model then consults its training, but it is also informed by real-world results, which may differ from theory.

So the car goes into a turn knowing that, theoretically, it should have to move the wheel this much to the left, then this much more at this point, and so on. But the sensors in the car report that despite this, the car is drifting a bit off the intended line — and this input is taken into account, causing the agent to turn the wheel a bit more, or less, or whatever the case may be.
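
As a toy illustration of that feedback idea (a deliberate simplification of my own, not the controller described in the paper), the steering command suggested by the learned model can be nudged by a term proportional to how far the car has drifted from the intended line. The sign convention and gain below are invented for the example.

```python
def corrected_steering(planned_angle, lateral_error, k_feedback=0.5):
    """Nudge the model's planned steering back toward the intended line.

    planned_angle: steering angle (radians) suggested by the learned model,
                   positive meaning a turn to the left (assumed convention)
    lateral_error: measured offset from the intended line in meters,
                   positive meaning the car has drifted to the right
    k_feedback:    illustrative proportional gain, not tuned for any real car
    """
    return planned_angle + k_feedback * lateral_error


# The model plans 0.10 rad of left steering, but the car has slid 0.3 m to
# the right of the line, so the command becomes 0.10 + 0.5 * 0.3 = 0.25 rad.
print(corrected_steering(0.10, 0.3))
```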

And where does the racing driver come into it, you ask? Well, the researchers needed to compare the car’s performance with a human driver who knows from experience how to control a car at its friction limits, and that’s pretty much the definition of a racer. If your tires aren’t hot, you’re probably going too slow.

The team had the racer (a “champion amateur race car driver,” as they put it) drive around the Thunderhill Raceway Park in California, then sent Shelley — their modified, self-driving 2009 Audi TTS — around as well, ten times each. And it wasn’t a relaxing Sunday ramble. As the paper reads:

Both the automated vehicle and human participant attempted to complete the course in the minimum amount of time. This consisted of driving at accelerations nearing 0.95g while tracking a minimum time racing trajectory at the physical limits of tire adhesion. At this combined level of longitudinal and lateral acceleration, the vehicle was able to approach speeds of 95 miles per hour (mph) on portions of the track.

Even under these extreme driving conditions, the controller was able to consistently track the racing line with the mean path tracking error below 40 cm everywhere on the track.

In other words, while pulling a G and hitting 95, the self-driving Audi was never more than a foot and a half off its ideal racing line. The human driver had much wider variation, but this is by no means considered an error — they were changing the line for their own reasons.

“We focused on a segment of the track with a variety of turns that provided the comparison we needed and allowed us to gather more data sets,” wrote Spielberg in an email to TechCrunch. “We have done full lap comparisons and the same trends hold. Shelley has an advantage of consistency while the human drivers have the advantage of changing their line as the car changes, something we are currently implementing.”

Shelley showed far lower variation in its times than the racer, but the racer also posted considerably lower times on several laps. The averages for the segments evaluated were about comparable, with a slight edge going to the human.

This is pretty impressive considering the simplicity of the self-driving model. It had very little real-world knowledge going into its systems, mostly the results of a simulation giving it an approximate idea of how it ought to be handling moment by moment. And its feedback was very limited — it didn’t have access to all the advanced telemetry that self-driving systems often use to flesh out the scene.

The conclusion is that this type of approach, with a relatively simple model controlling the car beyond ordinary handling conditions, is promising. It would need to be tweaked for each surface and setup — obviously a rear-wheel-drive car on a dirt road would behave differently from a front-wheel-drive car on tarmac. How best to create and test such models is a matter for future investigation, though the team seemed confident it was a mere engineering challenge.

The experiment was undertaken to pursue the still-distant goal of self-driving cars that outperform humans on all driving tasks. The results from these early tests are promising, but there’s still a long way to go before an AV can take on a pro head-to-head. Still, I look forward to the occasion.

Nvidia’s RTX 3050 brings ray tracing and DLSS to $800 laptops

Nvidia has added two entry-level GPUs—the GeForce RTX 3050 Ti and RTX 3050—to the RTX 30 laptop line. Nvidia says the chips will be available “this summer” in laptops starting at $799.

Like every other product in the RTX 30 line, these cards are based on the Ampere architecture and are capable of ray tracing and Nvidia’s proprietary “Deep Learning Super Sampling” (DLSS) upscaling tech. As you can probably guess from their names, the cards slot in below the existing RTX 3060 GPU, with cuts across the board. You can dive into Nvidia’s comparison table below, but the short version is that these cheaper GPUs have less memory (4GB) and fewer CUDA, Tensor, and ray-tracing cores.

Nvidia’s comparison of its laptop GPU lineup. (Image: Nvidia)

DLSS lets your GPU render a game at a lower resolution and then uses AI to upscale everything to a higher resolution, helping you hit a higher frame rate than you could at your native resolution. It sounds like AI hocus-pocus, but it actually works—you just need the right Nvidia card and a game that supports it. On a lower-powered laptop, anything that helps boost gaming performance without sacrificing graphical fidelity is welcome.
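
As a rough conceptual sketch of that pipeline: DLSS itself is proprietary and runs a trained network on the GPU's Tensor cores, so the snippet below stands in a plain bicubic resize just to show the render-low-then-upscale flow, with made-up resolutions.

```python
# Conceptual sketch only, not DLSS: render at a lower internal resolution,
# then upscale for display. DLSS replaces the simple filter used here with
# a neural network that reconstructs detail; the frame-rate win comes from
# rendering far fewer pixels in the first place.
from PIL import Image

def upscale_frame(frame, target=(1920, 1080)):
    """Upscale a frame rendered at a lower internal resolution."""
    return frame.resize(target)  # Pillow's default bicubic filter

# Render internally at 1280x720, display at 1920x1080.
low_res_frame = Image.new("RGB", (1280, 720))
high_res_frame = upscale_frame(low_res_frame)
```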

Intel’s Tiger Lake-H processors were also announced today, and we should see a lot of devices launching with both chips, assuming there is sufficient supply to go around. Nvidia is already facing serious video card shortages, and Intel is being hit by the global chip shortage, too. Maybe partner laptops are getting a higher allocation?

Intel claims its new Tiger Lake-H CPUs for laptops beat AMD’s Ryzen 5000

Intel’s new Core i9-11980HK leads the 11th-gen laptop CPU lineup.

Intel today announced 10 new 11th-generation CPUs for high-performance laptops like those made for gamers or content creators. Built on the 10nm SuperFin process, the new chips are in the Core i9, Core i7, Core i5, and Xeon families, and they carry the label “Tiger Lake-H.”

New consumer laptop CPUs include the Core i9-11980HK, Core i9-11900H, Core i7-11800H—all of which have eight cores—plus the Core i5-11400H and Core i5-11260H, which each have six cores.

Naturally, Intel today put the spotlight on the fastest chip, the Core i9-11980HK. The company claims this CPU beats its predecessor in games like Hitman 3 and Rainbow Six: Siege by anywhere from 5 percent to 21 percent, depending on the game, according to Intel’s own testing.

Intel also claims that the Core i9-11980HK beats AMD’s Ryzen 9 5900HX by anywhere from 11 to 26 percent. Obviously, reviewers will have to put these claims to the test in the coming weeks.

Other features in the new Tiger Lake-H chips include support for Thunderbolt 4 and Wi-Fi 6E.

As is the custom with new Intel CPU launches, numerous OEMs refreshed their laptop lineups with the new chips, including Dell, HP, Lenovo, MSI, Acer, Asus, and others. You can just about bet that if an OEM offered a portable gaming laptop for which these chips are suitable—like the Dell XPS 15, for example—a new version of that laptop was announced today.

Today was a big day for laptop hardware. By no coincidence at all, Nvidia also announced the new GeForce RTX 3050 Ti GPU, which is offered as a configuration option in some of the same laptops that now feature the new Tiger Lake-H CPUs.

If you’re curious about Intel’s new laptop chips, the company has more details on its website. The chips obviously won’t be sold to consumers on their own, but you’ll likely see them in numerous laptops on the market throughout the next year.

Samsung and AMD will reportedly take on Apple’s M1 SoC later this year

Samsung is planning big things for the next release of its Exynos system on a chip. The company has already promised that the “next generation” of its Exynos SoC will feature a GPU from AMD, which inked a partnership with Samsung in June 2019. A new report from The Korea Economic Daily provides more details.

The report says that “the South Korean tech giant will unveil a premium Exynos chip that can be used in laptops as well as smartphones in the second half of this year” and that “the new Exynos chip for laptops will use the graphics processing unit (GPU) jointly developed with US semiconductor company Advanced Micro Devices Inc.”

There’s a bit to unpack here. First, a launch this year would be an acceleration of the normal Samsung schedule. The last Exynos flagship was announced in January 2021, so you would normally pencil in the new Exynos for early next year. Second, the report goes out of its way to specify that the laptop chip will have an AMD GPU, so… not the smartphone chip?

It was always questionable whether Samsung was planning to beef up its Exynos smartphone chips, since the company splits its flagship smartphone lineup between Exynos and Qualcomm, depending on the region. Exynos chips have consistently trailed Qualcomm’s, but Samsung considers the two close enough to sell the Exynos- and Qualcomm-based phones as the same product. If Samsung knocked it out of the park with an AMD GPU, where would that leave the Qualcomm phones? Would Samsung ditch Qualcomm? That’s hard to believe, and it sounds like the easy answer is for the company to just not dramatically change the Exynos smartphone chips.

For laptops, Samsung has to chase down its favorite rival, Apple, which is jumping into ARM laptops with its M1 chip. If Samsung wants its products to have any hope of being competitive with Apple’s laptops, it will have to launch its own ARM laptop SoC. Getting AMD on board for this move makes the most sense (it already makes Windows GPUs), but while that would be a good first step, it still doesn’t seem like it would lead to a complete, competitive product.

What about the CPU?

Even if we suppose everything goes right with Samsung’s AMD partnership and the company gets a top-tier SoC GPU, the kind of chip Samsung seems to be producing is not what you would draw up for use in a great laptop. The three big components in an SoC are the CPU, GPU, and modem. It seems like everyone is investing in SoC design, and some companies are better positioned to produce a competitive chip than others.

Of course, everybody is chasing Apple’s M1 SoC, but Apple’s expertise lines up well with what you would want from a laptop. Apple has a world-beating CPU team thanks to years of iPhone work based on the company’s acquisition of PA Semi. Apple started making its own GPUs with the iPhone X in 2017, and the M1 GPU is pretty good. Apple doesn’t have a modem solution on the market yet (its phones use Qualcomm modems), but it bought Intel’s 5G smartphone business in 2019, and it’s working on in-house modem chips. This is a great situation for a laptop chip. You want a strong, efficient CPU and a decent GPU—and you don’t really need a modem.

An AMD GPU is a start for Samsung, but the company does not have a great ARM CPU solution. ARM licenses the ARM CPU instruction set and ARM CPU designs, a bit like if Intel both licensed the x86 architecture and sold Pentium blueprints. Apple goes the more advanced route of licensing the ARM instruction set and designing its own CPUs, while Samsung licenses ARM’s CPU designs. ARM is a generalist and needs to support many different form factors and companies with its CPU designs, so it will never make a chip design that can compete with Apple’s focused designs. By all accounts, Samsung’s Exynos chip will have an inferior CPU. It will also be pretty hard to make a gaming pitch with the AMD GPU since there aren’t any Windows-on-ARM laptop games.

Qualcomm is trying to get into the ARM laptop game, too. Qualcomm’s biggest strength is its modems, which aren’t really relevant in the laptop space. Qualcomm has been in a similar position to Samsung; the company had a decent GPU division thanks to acquiring ATI’s old mobile GPU division, but it was always behind Apple because it used ARM’s CPU designs. Qualcomm’s current laptop chip is the Snapdragon 8cx gen 2, but that chip is not even a best-effort design from the company. The 8cx gen 2 doesn’t just use an ARM CPU design; it uses one that is two generations old: a Cortex A76-based design instead of the Cortex X1 design that a modern phone would use. It’s also a generation behind when it comes to the manufacturing process—7 nm instead of the 5 nm the Snapdragon 888 uses.

Qualcomm seems like it will get serious about laptop chips soon, as it bought CPU design firm Nuvia in January 2021. Nuvia has never made a product, but it was founded by defectors from Apple’s CPU division, including the chief CPU architect. Qualcomm says that with Nuvia, it will be able to ship internally designed CPUs by 2H 2022.

And then there’s Google, which wants to ship its own phone SoC, called “Whitechapel,” in the Pixel 6. Google does not have CPU, GPU, or modem expertise, so we don’t expect much from the company other than a longer OS support window.

And what about Windows?

With no great ARM laptop CPUs out there for non-Apple companies, there isn’t a huge incentive to break up the Wintel (or maybe Winx64?) monopoly. Getting a non-Apple ARM laptop most likely means running Windows for ARM, with whatever questionable app support that system has. Microsoft has been working on x86 and x64 emulation on ARM for a bit. The project entered its “first preview” in December in the Windows dev channel, but it doesn’t sound like it will be a great option for many apps. Microsoft has already said that games are “outside the target” of the company’s first attempt at x64 emulation.

Native apps are also a possibility, though developers don’t seem as interested in Windows-on-ARM support as they are in macOS ARM support. Google was quickly ready with an ARM-native build of Chrome for macOS, but there still isn’t a Windows-on-ARM build of Chrome. Adobe took a few months, but Photoshop for M1 Macs arrived in March, while the Windows-on-ARM build of Photoshop is still in beta. You can, of course, run Microsoft Office. You’ll probably be stuck with OneDrive for cloud folders, since Dropbox and Google Drive don’t support Windows on ARM.
