This year’s Computex was a wild ride with dueling chip releases, new laptops and 467 startups – TechCrunch

After a relatively quiet show last year, Computex picked up the pace this year, with dueling chip launches by rivals AMD and Intel and a slew of laptop releases from Asus, Qualcomm, Nvidia, Lenovo and other companies.

Founded in 1981, the trade show, which took place last week from May 28 to June 1, is one of the ICT industry’s largest gatherings of OEMs and ODMs. In recent years, the show’s purview has widened, thanks to efforts by its organizers, the Taiwan External Trade Development Council and Taipei Computer Association, to attract two groups: high-end computer customers, such as hardcore gamers, and startups looking for investors and business partners. This makes for a larger, more diverse and livelier show. Computex’s organizers said this year’s event attracted 42,000 international visitors, a new record.

Though the worldwide PC market continues to see slow growth, demand for high-performance computers is still being driven by gamers and the popularity of esports and live-streaming sites like Twitch. Computex, with its large, elaborate booths run by brands like Asus’ Republic of Gamers, is a popular destination for many gamers (the show is open to the public, with tickets costing NT$200, or about $6.40), and it began hosting esports competitions a few years ago.

The timing of the show, formally known as the Taipei International Information Technology Show, at the end of May or beginning of June each year, also gives companies a chance to debut products they teased at CES or preview releases for other shows later in the year, including E3 and IFA.

One difference between Computex now and ten (or maybe even just five) years ago is that the increasing accessibility of high-end PCs means many customers keep a close eye on major announcements by companies like AMD, Intel and Nvidia, not only to see when more powerful processors will be available but also because of potential pricing wars. For example, many gamers hope competition from new graphics processing units from AMD will force Nvidia to bring down prices on its popular but expensive GPUs.

The Battle of the Chips

The biggest news at this year’s Computex was the intense rivalry between AMD and Intel, whose keynote presentations came after a very different twelve months for the two competitors.

Shipping times for Apple’s $19 Polishing Cloth slip to late November

If you wanted to polish your Apple products, bad news: you’ll need to wait at least a month to get Apple’s Polishing Cloth. (Image: Apple)

Between ongoing supply chain issues, chip shortages, and pent-up demand, Apple’s new MacBook Pros were always going to be hard to get. They’ve been up for preorder for less than 24 hours, and if you order one now, you probably won’t get it before November or December.

But the new laptops aren’t Apple’s only in-demand product: The shipping times for Apple’s $19 microfiber Polishing Cloth have also slipped to mid-to-late November. Unfortunately, this means that your compatible iPhones, iPads, Macs, Apple Watches, and iPods will need to remain unpolished for at least a month. It’s unclear whether the delays are being caused by low supply, overwhelming demand, or some combination of the two.

The Polishing Cloth, folded over in a visually appealing manner. Without testing, we can’t say whether the Apple logo is cosmetic or if it meaningfully improves the polishing experience. (Image: Apple)

The Polishing Cloth boasts support for an impressive range of Apple products, which Apple lists out in detail on the Cloth’s product page. The list includes iPhones as old as 2014’s iPhone 6, every generation of Apple Watch, and even the old iPod nano and iPod shuffle. Without testing, however, we can’t confirm whether the Polishing Cloth will adequately polish older unsupported devices or non-Apple gadgets like Android phones or the Nintendo Switch.

The Polishing Cloth isn’t a new Apple product—it has shipped with the company’s $5,000 Pro Display XDR since that monitor was released back in 2019. But this is the first time that Apple has offered its best, most premium polishing experience to the users of its other devices.

Note: Ars Technica may earn compensation for sales from links on this post through affiliate programs.

Listing image by Apple

The new MacBook Pro seems to have an HDMI 2.0 port, not 2.1

Farthest right: The HDMI port on the MacBook Pro. (Image: Lee Hutchinson)

The newly announced 14-inch and 16-inch MacBook Pro models have HDMI ports, but they have a limitation that could be frustrating for many users over the long term, according to Apple’s specs page for both machines and as noted by Paul Haddad on Twitter.

The page says the HDMI port has “support for one display with up to 4K resolution at 60 Hz.” That means users with 4K displays at 120 Hz (or less likely, 8K displays at 60 Hz) won’t be able to tap the full capability of those displays through this port. It implies limited throughput associated with an HDMI 2.0 port instead of the most recent HDMI 2.1 standard, though there are other possible explanations for the limitation besides the port itself, and we don’t yet know which best describes the situation.
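
A rough back-of-the-envelope calculation shows why the distinction matters. HDMI 2.0 carries roughly 14.4 Gbit/s of video data (18 Gbit/s of raw link bandwidth minus encoding overhead), while HDMI 2.1 tops out around 42.6 Gbit/s. The sketch below counts only active pixels at standard 8-bit color and ignores blanking intervals, so it slightly understates the real requirement, but the trend is clear: uncompressed 4K@60 fits comfortably within HDMI 2.0, and 4K@120 does not.

```python
# Back-of-the-envelope check of why 4K@120 Hz doesn't fit through an HDMI 2.0 port.
# Active pixels only, 8 bits per channel RGB; blanking intervals push the real
# requirement somewhat higher than these figures.

def data_rate_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Uncompressed video data rate for the active pixels, in Gbit/s."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

HDMI_2_0_LIMIT = 14.4  # Gbit/s of video data (18 Gbit/s TMDS minus 8b/10b overhead)
HDMI_2_1_LIMIT = 42.6  # Gbit/s of video data (48 Gbit/s FRL minus 16b/18b overhead)

for hz in (60, 120):
    rate = data_rate_gbps(3840, 2160, hz)
    verdict = "fits HDMI 2.0" if rate <= HDMI_2_0_LIMIT else "needs HDMI 2.1"
    print(f"4K@{hz} Hz: ~{rate:.1f} Gbit/s -> {verdict}")
```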

There aren’t many monitors and TVs that do 4K at 120 frames per second, and those that do are expensive. But they do exist, and they’re only going to get more common. In fact, it seems a safe bet that within a few years, 4K@120 Hz will become the industry standard.

So while this is an edge-case problem for only certain users with ultra-high-end displays right now, that won’t always be the case. The limitation could become frustrating for a much broader range of users sometime in the lifetime of a new MacBook Pro purchased today.

Of course, 4K@120 Hz is still achievable via the Thunderbolt port, and there are Thunderbolt-to-HDMI and Thunderbolt-to-DisplayPort adapters that will help users sidestep the issue. And the new MacBook Pro itself has a variable refresh rate screen that often refreshes at 120 Hz.

So if you want to connect the new MacBook Pro to a high-end display, no one’s stopping you. It just might cost more money to achieve, and the HDMI port might feel vestigial and useless to a lot of people in four or five years.

Before this week’s update to the MacBook Pro line, Apple went several years without offering HDMI ports on MacBook Pro computers at all, instead using only Thunderbolt. This redesign also saw Apple reintroduce the SD card slot, which was omitted in the last major MacBook Pro redesign in 2016.

Note: Ars Technica may earn compensation for sales from links on this post through affiliate programs.

The “Google Silicon” team gives us a tour of the Pixel 6’s Tensor SoC

A promo image for the Google Tensor SoC. (Image: Google)

The Pixel 6 is official, with a wild new camera design, incredible pricing, and the new Android 12 OS. The headline component of the device has to be the Google Tensor “system on chip” (SoC), however. This is Google’s first main SoC in a smartphone, and the chip has a unique CPU core configuration and a strong focus on AI capabilities.

Since when is Google a chip manufacturer, though? What are the goals of the Tensor SoC? Why was it designed the way it was? To get some answers, we sat down with members of the “Google Silicon” team—a name I don’t think we’ve heard before.

Google Silicon is the group responsible for Google’s mobile chips. The team designed the Titan M security chips in the Pixel 3 and later, along with the Pixel Visual Core in the Pixel 2 and 3. The group has been working on main SoC development for three or four years, but it remains separate from the Cloud team’s silicon work on things like YouTube transcoding chips and Cloud TPUs.

Phil Carmack is the vice president and general manager of Google Silicon, and Monika Gupta is the senior director on the team. Both were nice enough to tell us a bit more about Google’s secretive chip.

Most mobile SoC vendors license their CPU cores from Arm, which also offers some (optional) guidelines on how to design a chip using those cores. Apart from Apple, which designs its own cores, most vendors stick pretty closely to these guidelines. This year, the most common design is a chip with one big Arm Cortex-X1 core, three medium A78 cores, and four slower, lower-power A55 cores for background processing.

Now wrap your mind around what Google is doing with the Google Tensor: the chip still has four A55s for the small cores, but it has two Arm Cortex-X1 CPUs at 2.8 GHz to handle foreground processing duties.

For “medium” cores, we get two 2.25 GHz A76 CPUs. (That’s A76, not the A78 everyone else is using—these A76s are the “big” CPU cores from last year.) When Arm introduced the A78 design, it said that the core—on a 5nm process—offered 20 percent more sustained performance in the same thermal envelope compared to the 7nm A76. Google is now using the A76 design but on a 5nm chip, so, going by Arm’s description, Google’s A76 should put out less heat than an A78 chip. Google is basically spending more thermal budget on having two big cores and less on the medium cores.
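
For quick reference, here is that layout difference laid out as data. This is only a summary of the figures quoted above; clock speeds the article doesn’t mention (the typical flagship’s cores and Tensor’s A55s) are left out rather than guessed.

```python
# Core layouts side by side, as described in the article. Only the clock speeds
# quoted above are included; the rest are omitted rather than guessed.

typical_2021_flagship = {
    "big (Cortex-X1)":     {"count": 1},
    "medium (Cortex-A78)": {"count": 3},
    "small (Cortex-A55)":  {"count": 4},
}

google_tensor = {
    "big (Cortex-X1)":     {"count": 2, "clock_ghz": 2.8},
    "medium (Cortex-A76)": {"count": 2, "clock_ghz": 2.25},
    "small (Cortex-A55)":  {"count": 4},
}

for name, layout in (("Typical 2021 flagship", typical_2021_flagship),
                     ("Google Tensor", google_tensor)):
    print(name)
    for cluster, info in layout.items():
        clock = f" @ {info['clock_ghz']} GHz" if "clock_ghz" in info else ""
        print(f"  {info['count']}x {cluster}{clock}")
```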

So the first question for the Google Silicon team is: what’s up with this core layout?

Carmack’s explanation is that the dual-X1 architecture is a play for efficiency at “medium” workloads. “We focused a lot of our design effort on how the workload is allocated, how the energy is distributed across the chip, and how the processors come into play at various points in time,” Carmack said. “When a heavy workload comes in, Android tends to hit it hard, and that’s how we get responsiveness.”

This is referring to the “rush to sleep” behavior most mobile chipsets exhibit, where something like loading a webpage has everything thrown at it so the task can be done quickly and the device can return to a lower-power state quickly.

“When it’s a steady-state problem where, say, the CPU has a lighter load but it’s still modestly significant, you’ll have the dual X1s running, and at that performance level, that will be the most efficient,” Carmack said.

He gave a camera view as an example of a “medium” workload, saying that you “open up your camera and you have a live view and a lot of really interesting things are happening all at once. You’ve got imaging calculations. You’ve got rendering calculations. You’ve got ML [machine learning] calculations, because maybe Lens is on detecting images or whatever. During situations like that, you have a lot of computation, but it’s heterogeneous.”

A quick aside: “heterogeneous” here means using more bits of the SoC for compute than just the CPU, so in the case of Lens, that means CPU, GPU, ISP (the camera co-processor), and Google’s ML co-processor.

Carmack continued, “You might use the two X1s dialed down in frequency so they’re ultra-efficient, but they’re still at a workload that’s pretty heavy. A workload that you normally would have done with dual A76s, maxed out, is now barely tapping the gas with dual X1s.”
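
The physics behind that claim is the standard dynamic-power argument: switching power scales roughly with C·V²·f, and supply voltage has to rise with clock speed, so a core that can finish the same work at a lower frequency also gets to run at a lower voltage. The toy model below illustrates the trend; the core parameters and the voltage curve are made-up placeholders, not Google’s numbers.

```python
# A toy model (not Google's data) of why a big core dialed down in frequency can
# beat a smaller core running flat out. Energy per unit of work is roughly
# (capacitance / work-per-cycle) * V^2, and V drops as the clock drops.

def energy_per_unit_work(cap, work_per_ghz, freq_ghz):
    """Relative energy to finish a fixed workload; all values are illustrative."""
    voltage = 0.75 + 0.15 * freq_ghz        # toy DVFS curve: voltage rises with clock
    power = cap * voltage**2 * freq_ghz     # dynamic power ~ C * V^2 * f
    throughput = work_per_ghz * freq_ghz    # work completed per second
    return power / throughput               # energy per unit of work

# Placeholder parameters: assume an X1-class core does ~1.5x the work per clock
# of an A76-class core, at ~1.6x the switching capacitance.
a76_maxed  = energy_per_unit_work(cap=1.0, work_per_ghz=1.0, freq_ghz=2.25)
x1_relaxed = energy_per_unit_work(cap=1.6, work_per_ghz=1.5, freq_ghz=1.50)

print(f"A76-class core maxed out at 2.25 GHz: {a76_maxed:.2f} energy units")
print(f"X1-class core dialed down to 1.5 GHz: {x1_relaxed:.2f} energy units")
```

Both configurations deliver the same throughput in this model (1.0 × 2.25 = 1.5 × 1.5), but the dialed-down X1-class core finishes the work at a lower voltage and so spends less energy, which is the shape of the trade-off Carmack is describing.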

The camera is a great case study, since previous Pixel phones have failed at exactly this kind of task. The Pixel 5 and 5a both regularly overheat after three minutes of 4K recording. I’m not allowed to talk too much about this right now, but I did record a 20-minute, 4K, 60 FPS video on a Pixel 6 with no overheating issues. (I got bored after 20 minutes.)

This is what the phone looks like, if you’re wondering. (Image: Google)

So, is Google pushing back on the idea that one big core is a good design? The idea of using one big core has only recently popped up in Arm chips, after all. We used to have four “big” cores and four “little” cores without any of this super-sized, single-core “prime” stuff.

“It all comes down to what you’re trying to accomplish,” Carmack said. “I’ll tell you where one big core versus two wins: when your goal is to win a single-threaded benchmark. You throw as many gates as possible at the one big core to win a single-threaded benchmark… If you want responsiveness, the quickest way to get that, and the most efficient way to get high-performance, is probably two big cores.”

Carmack warned that this “could evolve depending on how efficiency is mapped from one generation to the next,” but for the X1, Google claims that this design is better.

“The single-core performance is 80 percent faster than our previous generation; the GPU performance is 370 percent faster than our previous generation. I say that because people are going to ask that question, but to me, that’s not really the story,” Carmack explained. “I think the one thing you can take away from this part of the story is that although we’re a brand-new entry into the SoC space, we know how to make high-frequency, high-performance circuits that are dense, fast, and capable… Our implementation is rock solid in terms of frequencies, in terms of frequency per watt, all of that stuff. That’s not a reason to build an all-new Tensor SoC.”
