Back in March came the surprising news that a satellite communications company still more or less in stealth mode had launched several tiny craft into orbit — against the explicit instructions of the FCC. The company, Swarm Technologies, now faces a $900,000 penalty from the agency, as well as extra oversight of its continuing operations.
Swarm’s SpaceBEEs are the beginning of a planned constellation of small satellites with which the company intends to provide low-cost global connectivity.
Unfortunately, the units are so small — about a quarter the size of a standard cubesat, which is already quite tiny — that the FCC felt they would be too difficult to track, and did not approve the launch.
SpaceBEEs are small, as you can see. Credit: Swarm Technologies
Swarm, perhaps thinking it better to ask forgiveness than file the paperwork for permission, launched anyway in January aboard India’s PSLV-C40, which carried more than a dozen other passengers to space as well. (I asked Swarm and the launch provider, Spaceflight, for comment at the time but never heard back.)
The FCC obviously didn’t like this, and began an investigation shortly afterwards. According to an FCC press release:
The investigation found that Swarm had launched the four BEEs using an unaffiliated launch company in India and had unlawfully transmitted signals between earth stations in Georgia and the satellites for over a week. In addition, during the course of its investigation, the FCC discovered that Swarm had also performed unauthorized weather balloon-to-ground station tests and other unauthorized equipment tests prior to the small satellites’ launch. All these activities require FCC authorization and the company had not received such authorization before the activities occurred.
Not good! As penance, Swarm Technologies will have to pay the aforementioned $900,000, and now has to submit pre-launch reports to the FCC within five days of signing an agreement to launch, and at least 45 days before takeoff.
The company hasn’t been sitting on its hands this whole time. The unauthorized launch was a mistake to be sure, but it has continued its pursuit of a global constellation and launched three more SpaceBEEs into orbit just a few weeks ago aboard a SpaceX Falcon 9.
Swarm has worked to put the concerns about tracking to bed; in fact, the company claims its devices are more trackable than ordinary cubesats, with a larger radar cross section and extra reflectivity thanks to a Van Atta array (ask them). SpaceBEE-1 is about to pass over Italy as I write this — you can check its location live here.
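For the curious, a live tracker like that boils down to propagating the satellite’s public orbital elements (TLEs) to the current moment. Here’s a minimal sketch in Python using the open source Skyfield library; the Celestrak query URL and the catalog name below are assumptions, so adjust them to whatever entry SpaceBEE-1 is actually filed under.

```python
# Minimal satellite-position lookup: fetch public TLEs, propagate to "now,"
# and report the point on Earth directly beneath the spacecraft.
from skyfield.api import load, wgs84

# Celestrak's public catalog of active satellites (assumed source; SpaceBEE-1
# may live under a different group or name in the real catalog).
url = "https://celestrak.org/NORAD/elements/gp.php?GROUP=active&FORMAT=tle"
satellites = load.tle_file(url)
by_name = {sat.name: sat for sat in satellites}
spacebee = by_name["SPACEBEE-1"]  # hypothetical catalog name

ts = load.timescale()
geocentric = spacebee.at(ts.now())      # propagate the orbit to this instant
subpoint = wgs84.subpoint(geocentric)   # ground point directly below
print(f"lat {subpoint.latitude.degrees:+.2f}, "
      f"lon {subpoint.longitude.degrees:+.2f}, "
      f"alt {subpoint.elevation.km:.0f} km")
```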
Between ongoing supply chain issues, chip shortages, and pent-up demand, Apple’s new MacBook Pros were always going to be hard to get. They’ve been up for preorder for less than 24 hours, and if you order one now, you probably won’t get it before November or December.
But the new laptops aren’t Apple’s only in-demand product: shipping times for Apple’s $19 microfiber Polishing Cloth have also slipped to mid-to-late November. Unfortunately, this means that your compatible iPhones, iPads, Macs, Apple Watches, and iPods will need to remain unpolished for at least a month. It’s unclear whether the delays stem from low supply, overwhelming demand, or a combination of the two.
The Polishing Cloth boasts support for an impressive range of Apple products, which Apple lists out in detail on the Cloth’s product page. The list includes iPhones as old as 2014’s iPhone 6, every generation of Apple Watch, and even the old iPod nano and iPod shuffle. Without testing, however, we can’t confirm whether the Polishing Cloth will adequately polish older unsupported devices or non-Apple gadgets like Android phones or the Nintendo Switch.
The Polishing Cloth isn’t a new Apple product—it has shipped with the company’s $5,000 Pro Display XDR since that monitor was released back in 2019. But this is the first time that Apple has offered its best, most premium polishing experience to the users of its other devices.
The newly announced 14-inch and 16-inch MacBook Pro models have HDMI ports, but they have a limitation that could be frustrating for many users over the long term, according to Apple’s specs page for both machines and as noted by Paul Haddad on Twitter.
The page says the HDMI port has “support for one display with up to 4K resolution at 60 Hz.” That means users with 4K displays at 120 Hz (or, less likely, 8K displays at 60 Hz) won’t be able to tap the full capability of those displays through this port. The limit suggests the throughput of an HDMI 2.0 port rather than the newer HDMI 2.1 standard, though there are other possible explanations besides the port itself, and we don’t yet know which best describes the situation.
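A back-of-the-envelope calculation shows why 4K@120 Hz is out of reach for HDMI 2.0 while 4K@60 Hz squeaks through. This sketch assumes the standard CTA 4K transmission timing (4400 x 2250 total pixels, blanking included) and uncompressed 8-bit RGB; real links can also negotiate chroma subsampling or, on HDMI 2.1, Display Stream Compression to fit under a cap.

```python
# Rough video-bandwidth math for a 4K signal over HDMI.
H_TOTAL, V_TOTAL = 4400, 2250   # CTA 4K timing: 3840x2160 active + blanking
BITS_PER_PIXEL = 24             # 8 bits per channel, uncompressed RGB

def data_rate_gbps(refresh_hz: int) -> float:
    """Uncompressed video data rate for this timing, in Gbit/s."""
    return H_TOTAL * V_TOTAL * refresh_hz * BITS_PER_PIXEL / 1e9

HDMI_2_0_CAP = 18.0 * 8 / 10    # 18 Gbit/s TMDS, 8b/10b coding -> 14.4 usable
HDMI_2_1_CAP = 48.0 * 16 / 18   # 48 Gbit/s FRL, 16b/18b coding -> ~42.7 usable

for hz in (60, 120):
    rate = data_rate_gbps(hz)
    print(f"4K@{hz:>3} Hz needs {rate:5.1f} Gbit/s | "
          f"HDMI 2.0 OK: {rate <= HDMI_2_0_CAP} | "
          f"HDMI 2.1 OK: {rate <= HDMI_2_1_CAP}")
```

By this math, 4K@60 needs about 14.3 Gbit/s, just under HDMI 2.0’s usable capacity; doubling the refresh rate doubles the requirement and blows well past it.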
There aren’t many monitors and TVs that can do 4K at 120 Hz, and those that do are expensive. But they exist, and they’re only going to get more common; it seems a safe bet that 4K@120 Hz will become an industry standard within a few years.
So while this is an edge-case problem for only certain users with ultra-high-end displays right now, that won’t always be the case. The limitation could become frustrating for a much broader range of users sometime in the lifetime of a new MacBook Pro purchased today.
Of course, 4K@120 Hz is still achievable via the Thunderbolt port, and there are Thunderbolt-to-HDMI and Thunderbolt-to-DisplayPort adapters that will help users sidestep the issue. And the new MacBook Pro itself has a variable refresh rate screen that often refreshes at 120 Hz.
So if you want to connect the new MacBook Pro to a high-end display, no one’s stopping you. It just might cost more money to achieve, and the HDMI port might feel vestigial and useless to a lot of people in four or five years.
Before this week’s update to the MacBook Pro line, Apple went several years without offering HDMI ports on MacBook Pro computers at all, instead using only Thunderbolt. This redesign also saw Apple reintroduce the SD card slot, which was omitted in the last major MacBook Pro redesign in 2016.
The Pixel 6 is official, with a wild new camera design, incredible pricing, and the new Android 12 OS. The headline component of the device has to be the Google Tensor “system on chip” (SoC), however. This is Google’s first main SoC in a smartphone, and the chip has a unique CPU core configuration and a strong focus on AI capabilities.
Since when is Google a chip manufacturer, though? What are the goals of the Tensor SoC? Why does it have such an unusual design? To get some answers, we sat down with members of the “Google Silicon” team—a name I don’t think we’ve heard before.
Google Silicon is the group responsible for Google’s mobile chips. That means the team designed the Titan M security chips in the Pixel 3 and up, along with the Pixel Visual Core in the Pixel 2 and 3. The group has been working on main SoC development for three or four years, but it remains separate from the Cloud team’s silicon work on things like YouTube transcoding chips and Cloud TPUs.
Phil Carmack is the vice president and general manager of Google Silicon, and Monika Gupta is the senior director on the team. Both were nice enough to tell us a bit more about Google’s secretive chip.
Most mobile SoC vendors license their chip architecture from Arm, which also offers some (optional) guidelines on how to design a chip using its cores. And, apart from Apple, most vendors stick pretty closely to those guidelines. This year, the most common design is a chip with one big Arm Cortex-X1 core, three medium A78 cores, and four slower, lower-power A55 cores for background processing.
Now wrap your mind around what Google is doing with the Google Tensor: the chip still has four A55s for the small cores, but it has two Arm Cortex-X1 CPUs at 2.8 GHz to handle foreground processing duties.
For “medium” cores, we get two 2.25 GHz A76 CPUs. (That’s A76, not the A78 everyone else is using—these A76s are the “big” CPU cores from last year.) When Arm introduced the A78 design, it said that the core—on a 5nm process—offered 20 percent more sustained performance in the same thermal envelope compared to the 7nm A76. Google is now using the A76 design but on a 5nm chip, so, going by Arm’s description, Google’s A76 should put out less heat than an A78 chip. Google is basically spending more thermal budget on having two big cores and less on the medium cores.
So the first question for the Google Silicon team is: what’s up with this core layout?
Carmack’s explanation is that the dual-X1 architecture is a play for efficiency at “medium” workloads. “We focused a lot of our design effort on how the workload is allocated, how the energy is distributed across the chip, and how the processors come into play at various points in time,” Carmack said. “When a heavy workload comes in, Android tends to hit it hard, and that’s how we get responsiveness.”
This refers to the “rush to sleep” behavior most mobile chipsets exhibit, where something like loading a webpage has everything thrown at it so the task finishes quickly and the device can drop back into a lower-power state sooner.
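To make the trade-off concrete, here’s a toy energy model; every constant in it is an illustrative assumption, not a measurement of any real chip or phone.

```python
# Toy "rush to sleep" model. While a task runs, the whole platform (rails,
# memory, display) burns power on top of the CPU's dynamic power, which grows
# roughly with f^3 (P ~ C*V^2*f, and voltage rises alongside frequency).
# Finishing sooner lets everything drop into a deep-sleep state.

def job_energy_mj(freq_ghz, cycles=2e9, dyn_coeff_mw=100.0,
                  platform_mw=1500.0, sleep_mw=10.0, window_s=2.0):
    """Energy (mJ) to run a fixed job, then sleep out the rest of the window."""
    busy_s = cycles / (freq_ghz * 1e9)                 # time to finish the job
    active_mw = dyn_coeff_mw * freq_ghz ** 3 + platform_mw
    return active_mw * busy_s + sleep_mw * max(window_s - busy_s, 0.0)

for f in (1.0, 1.8, 2.8):
    print(f"{f:.1f} GHz -> {job_energy_mj(f):5.0f} mJ")
# 1.0 GHz costs the most here because the platform stays awake longest; the
# minimum sits below peak frequency, which is why schedulers race to an
# *efficient* high clock rather than the absolute maximum.
```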
“When it’s a steady-state problem where, say, the CPU has a lighter load but it’s still modestly significant, you’ll have the dual X1s running, and at that performance level, that will be the most efficient,” Carmack said.
He gave a camera view as an example of a “medium” workload, saying that you “open up your camera and you have a live view and a lot of really interesting things are happening all at once. You’ve got imaging calculations. You’ve got rendering calculations. You’ve got ML [machine learning] calculations, because maybe Lens is on detecting images or whatever. During situations like that, you have a lot of computation, but it’s heterogeneous.”
A quick aside: “heterogeneous” here means using more bits of the SoC for compute than just the CPU, so in the case of Lens, that means CPU, GPU, ISP (the camera co-processor), and Google’s ML co-processor.
Carmack continued, “You might use the two X1s dialed down in frequency so they’re ultra-efficient, but they’re still at a workload that’s pretty heavy. A workload that you normally would have done with dual A76s, maxed out, is now barely tapping the gas with dual X1s.”
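That “barely tapping the gas” claim can be sketched numerically. The ratios below are illustrative assumptions rather than Arm or Google figures: the X1 is modeled with about 1.3 times the per-clock throughput (IPC) of an A76 and 1.6 times its switching capacitance, since it is a physically larger core.

```python
# Toy width-vs-frequency comparison: a wider core run slower vs. a narrower
# core run flat-out, doing the same amount of work.

def perf(ipc: float, freq_ghz: float) -> float:
    return ipc * freq_ghz            # relative throughput

def power(cap: float, freq_ghz: float) -> float:
    return cap * freq_ghz ** 3       # P ~ C*V^2*f, with voltage tracking f

A76 = {"ipc": 1.0, "cap": 1.0}       # baseline (assumed ratios)
X1 = {"ipc": 1.3, "cap": 1.6}        # wider, hungrier core (assumed ratios)

# Match an A76 maxed out at 2.25 GHz with an X1 dialed down to equal work.
target = perf(A76["ipc"], 2.25)
x1_freq = target / X1["ipc"]         # ~1.73 GHz delivers the same throughput
print(f"A76 @ 2.25 GHz: relative power {power(A76['cap'], 2.25):.1f}")
print(f"X1  @ {x1_freq:.2f} GHz: relative power {power(X1['cap'], x1_freq):.1f}")
# The cubic frequency term wins: in this toy model the dialed-down X1 does
# the same work for roughly a quarter less power.
```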
The camera is a great case study, since previous Pixel phones have failed at exactly this kind of task. The Pixel 5 and 5a both regularly overheat after three minutes of 4K recording. I’m not allowed to talk too much about this right now, but I did record a 20-minute, 4K, 60 FPS video on a Pixel 6 with no overheating issues. (I got bored after 20 minutes.)
So, is Google pushing back on the idea that one big core is a good design? The idea of using one big core has only recently popped up in Arm chips, after all. We used to have four “big” cores and four “little” cores without any of this super-sized, single-core “prime” stuff.
“It all comes down to what you’re trying to accomplish,” Carmack said. “I’ll tell you where one big core versus two wins: when your goal is to win a single-threaded benchmark. You throw as many gates as possible at the one big core to win a single-threaded benchmark… If you want responsiveness, the quickest way to get that, and the most efficient way to get high-performance, is probably two big cores.”
Carmack warned that this “could evolve depending on how efficiency is mapped from one generation to the next,” but for the X1, Google claims that this design is better.
“The single-core performance is 80 percent faster than our previous generation; the GPU performance is 370 percent faster than our previous generation. I say that because people are going to ask that question, but to me, that’s not really the story,” Carmack explained. “I think the one thing you can take away from this part of the story is that although we’re a brand-new entry into the SoC space, we know how to make high-frequency, high-performance circuits that are dense, fast, and capable… Our implementation is rock solid in terms of frequencies, in terms of frequency per watt, all of that stuff. That’s not a reason to build an all-new Tensor SoC.”