
Opportunity Mars rover goes to its last rest after extraordinary 14-year mission – TechCrunch

Opportunity, one of two rovers sent to Mars in 2004, is officially offline for good, NASA and JPL officials announced today at a special press conference. “I declare the Opportunity mission as complete, and with it the Mars Exploration Rover mission as complete,” said NASA’s Thomas Zurbuchen.

The cause of Opportunity’s demise was a planet-scale sandstorm that obscured its solar panels too completely, and for too long, for its onboard power supply to survive and keep even its most elementary components running. It last communicated on June 10, 2018, but could easily have lasted a few months more as its batteries ran down — a sad picture to be sure. Even a rover designed for the harsh Martian climate can’t handle being trapped under a cake of dust at -100 degrees Celsius for long.

The team has been trying to reach it for months, employing a variety of increasingly desperate techniques to get the rover to at least respond; even if its memory had been wiped clean or instruments knocked out, it could be reprogrammed and refreshed to continue service if only they could set up a bit of radio rapport. But every attempt, from ordinary contact methods to “sweep and beep” ploys, was met with silence. The final transmission from mission control was last night.

Spirit and Opportunity, known together as the Mars Exploration Rover mission, were launched individually in the summer of 2003 and touched down in January of 2004 — 15 years ago! — in different regions of the planet.

Each was equipped with a panoramic camera, a macro camera, spectrometers for identifying rocks and minerals and a little drill for taking samples. The goal was to operate for 90 days, traveling about 40 meters each day and ultimately covering about a kilometer. Both exceeded those goals by incredible amounts.

Spirit ended up traveling about 7.7 kilometers and lasting about 7 years. But Opportunity outshone its twin, going some 45 kilometers over 14 years — well over a marathon.

And of course both rovers contributed immensely to our knowledge of the Red Planet. It was experiments by these guys that established that Mars once had not just water, but bio-friendly liquid water that might have supported life.

Opportunity did a lot of science but always had time for a selfie, such as this one at the edge of Erebus Crater.

It’s always sad when a hard-working craft or robot finally shuts down for good, especially when it’s one that’s been as successful as “Oppy.” The Cassini probe went out in a blaze of glory, and Kepler has quietly gone to sleep. But ultimately these platforms are instruments of science and we should celebrate their extraordinary success as well as mourn their inevitable final days.

“Spirit and Opportunity may be gone, but they leave us a legacy — a new paradigm for solar system exploration,” said JPL head Michael Watkins. “That legacy continues not just in the Curiosity rover, which is currently operating healthily after about 2,300 days on the surface of Mars. But also in our new 2020 rover, which is under construction here at the Jet Propulsion Laboratory.”

“But Spirit and Opportunity did something more than that,” he continued. “They energized the public about the spirit of robotic Mars exploration. The infectious energy and electricity that this mission created was obvious to the public.”

Mars of course is not suddenly without a tenant. The InSight lander touched down last year and has been meticulously setting up its little laboratory and testing its systems. And the Mars 2020 rover is well on its way to launch. It’s a popular planet.

Perhaps some day we’ll scoop up these faithful servants and put them in a Martian museum. For now, let’s look forward to the next mission.


Shipping times for Apple’s $19 Polishing Cloth slip to late November


If you wanted to polish your Apple products, bad news: you’ll need to wait at least a month to get Apple’s Polishing Cloth. (Image: Apple)

Between ongoing supply chain issues, chip shortages, and pent-up demand, Apple’s new MacBook Pros were always going to be hard to get. They’ve been up for preorder for less than 24 hours, and if you order one now, you probably won’t get it before November or December.

But the new laptops aren’t Apple’s only in-demand product: The shipping times for Apple’s $19 microfiber Polishing Cloth have also already slipped back into mid to late November. Unfortunately, this means that your compatible iPhones, iPads, Macs, Apple Watches, and iPods will need to remain unpolished for at least a month. It’s unclear whether the delays are being caused by low supply, overwhelming demand, or some combination of both.

The Polishing Cloth, folded over in a visually appealing manner. Without testing, we can’t say whether the Apple logo is cosmetic or if it meaningfully improves the polishing experience. (Image: Apple)

The Polishing Cloth boasts support for an impressive range of Apple products, which Apple lists out in detail on the Cloth’s product page. The list includes iPhones as old as 2014’s iPhone 6, every generation of Apple Watch, and even the old iPod nano and iPod shuffle. Without testing, however, we can’t confirm whether the Polishing Cloth will adequately polish older unsupported devices or non-Apple gadgets like Android phones or the Nintendo Switch.

The Polishing Cloth isn’t a new Apple product—it has shipped with the company’s $5,000 Pro Display XDR since that monitor was released back in 2019. But this is the first time that Apple has offered its best, most premium polishing experience to the users of its other devices.

Note: Ars Technica may earn compensation for sales from links on this post through affiliate programs.

Listing image by Apple


The new MacBook Pro seems to have an HDMI 2.0 port, not 2.1


Farthest right: The HDMI port on the MacBook Pro. (Image: Lee Hutchinson)

The newly announced 14-inch and 16-inch MacBook Pro models have HDMI ports, but they have a limitation that could be frustrating for many users over the long term, according to Apple’s specs page for both machines and as noted by Paul Haddad on Twitter.

The page says the HDMI port has “support for one display with up to 4K resolution at 60 Hz.” That means users with 4K displays at 120 Hz (or less likely, 8K displays at 60 Hz) won’t be able to tap the full capability of those displays through this port. It implies limited throughput associated with an HDMI 2.0 port instead of the most recent HDMI 2.1 standard, though there are other possible explanations for the limitation besides the port itself, and we don’t yet know which best describes the situation.
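The arithmetic behind that inference is easy to check. A back-of-the-envelope bandwidth estimate — assuming 8-bit-per-channel RGB and the standard CTA-861 4K timing of 4400×2250 total pixels, which are conventional figures rather than anything Apple has confirmed about this port — shows why 4K at 120 Hz doesn’t fit through an HDMI 2.0 link, whose TMDS signaling tops out at 18 Gbps:

```python
# Rough HDMI bandwidth check. Assumptions: 8-bit RGB (24 bits/pixel) and
# the CTA-861 4K timing (4400 x 2250 total, 3840 x 2160 active). HDMI
# carries blanking intervals too, and HDMI 2.0's TMDS signaling uses
# 8b/10b encoding, so 10 line bits move every 8 data bits.

def tmds_bandwidth_gbps(h_total, v_total, refresh_hz, bits_per_pixel=24):
    """Required TMDS line rate in Gbit/s for a given total video timing."""
    raw_bits_per_s = h_total * v_total * refresh_hz * bits_per_pixel
    return raw_bits_per_s * 10 / 8 / 1e9  # apply 8b/10b encoding overhead

print(round(tmds_bandwidth_gbps(4400, 2250, 60), 1))   # 4K@60: just fits in 18 Gbps
print(round(tmds_bandwidth_gbps(4400, 2250, 120), 1))  # 4K@120: far exceeds HDMI 2.0
```

4K@60 lands at roughly 17.8 Gbps, squeaking under HDMI 2.0’s 18 Gbps ceiling, while 4K@120 needs about 35.6 Gbps — comfortably inside HDMI 2.1’s 48 Gbps FRL budget but impossible over a 2.0 link without compression or chroma subsampling.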

There aren’t many monitors and TVs that do 4K at 120 frames per second, and those that do are expensive. But they do exist, and they’re only going to get more common. In fact, it seems a safe bet that after a few years, 4K@120 Hz may become the industry standard.

So while this is an edge-case problem for only certain users with ultra-high-end displays right now, that won’t always be the case. The limitation could become frustrating for a much broader range of users sometime in the lifetime of a new MacBook Pro purchased today.

Of course, 4K@120 Hz is still achievable via the Thunderbolt port, and there are Thunderbolt-to-HDMI and Thunderbolt-to-DisplayPort adapters that will help users sidestep the issue. And the new MacBook Pro itself has a variable refresh rate screen that often refreshes at 120 Hz.

So if you want to connect the new MacBook Pro to a high-end display, no one’s stopping you. It just might cost more money to achieve, and the HDMI port might feel vestigial and useless to a lot of people in four or five years.

Before this week’s update to the MacBook Pro line, Apple went several years without offering HDMI ports on MacBook Pro computers at all, instead using only Thunderbolt. This redesign also saw Apple reintroduce the SD card slot, which was omitted in the last major MacBook Pro redesign in 2016.


The “Google Silicon” team gives us a tour of the Pixel 6’s Tensor SoC


A promo image for the Google Tensor SoC. (Image: Google)

The Pixel 6 is official, with a wild new camera design, incredible pricing, and the new Android 12 OS. The headline component of the device has to be the Google Tensor “system on chip” (SoC), however. This is Google’s first main SoC in a smartphone, and the chip has a unique CPU core configuration and a strong focus on AI capabilities.

Since when is Google a chip manufacturer, though? What are the goals of the Tensor SoC? Why was it designed in its unique way? To get some answers, we sat down with members of the “Google Silicon” team—a name I don’t think we’ve heard before.

Google Silicon is a group responsible for mobile chips from Google. That means the team designed previous Titan M security chips in the Pixel 3 and up, along with the Pixel Visual Core in the Pixel 2 and 3. The group has been working on main SoC development for three or four years, but it remains separate from the Cloud team’s silicon work on things like YouTube transcoding chips and Cloud TPUs.

Phil Carmack is the vice president and general manager of Google Silicon, and Monika Gupta is the senior director on the team. Both were nice enough to tell us a bit more about Google’s secretive chip.

Most mobile SoC vendors license their chip architecture from Arm, which also offers some (optional) guidelines on how to design a chip using its cores. And, apart from Apple, most of these custom designs stick pretty closely to these guidelines. This year, the most common design is a chip with one big Arm Cortex-X1 core, three medium A78 cores, and four slower, lower-power A55 cores for background processing.

Now wrap your mind around what Google is doing with the Google Tensor: the chip still has four A55s for the small cores, but it has two Arm Cortex-X1 CPUs at 2.8 GHz to handle foreground processing duties.

For “medium” cores, we get two 2.25 GHz A76 CPUs. (That’s A76, not the A78 everyone else is using—these A76s are the “big” CPU cores from last year.) When Arm introduced the A78 design, it said that the core—on a 5nm process—offered 20 percent more sustained performance in the same thermal envelope compared to the 7nm A76. Google is now using the A76 design but on a 5nm chip, so, going by Arm’s description, Google’s A76 should put out less heat than an A78 chip. Google is basically spending more thermal budget on having two big cores and less on the medium cores.

So the first question for the Google Silicon team is: what’s up with this core layout?

Carmack’s explanation is that the dual-X1 architecture is a play for efficiency at “medium” workloads. “We focused a lot of our design effort on how the workload is allocated, how the energy is distributed across the chip, and how the processors come into play at various points in time,” Carmack said. “When a heavy workload comes in, Android tends to hit it hard, and that’s how we get responsiveness.”

This is referring to the “rush to sleep” behavior most mobile chipsets exhibit, where something like loading a webpage has everything thrown at it so the task can be done quickly and the device can return to a lower-power state quickly.
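The energy trade-off behind “rush to sleep” (also called race-to-idle) can be sketched with a toy calculation. Every number below is invented for illustration — these are not measured figures for any real core:

```python
# Toy "race to idle" comparison over a fixed 1-second window: finish the
# work fast at high power and then sleep, vs. run slowly the whole time.
# All figures are made up for the sketch, not real silicon measurements.

WORK_CYCLES = 1.0e9   # hypothetical cycles needed to, say, load a web page
IDLE_POWER_W = 0.05   # hypothetical power draw while the core sleeps

def window_energy_j(freq_hz, active_power_w, window_s=1.0):
    """Total energy over the window: run until done, then idle the rest."""
    active_s = WORK_CYCLES / freq_hz
    return active_power_w * active_s + IDLE_POWER_W * (window_s - active_s)

fast_big_core = window_energy_j(2.8e9, 2.0)    # done in ~0.36 s, then sleeps
slow_small_core = window_energy_j(1.0e9, 0.9)  # busy for the full second
```

With these invented figures, the big core wins (about 0.75 J versus 0.90 J) despite drawing more than twice the power while active, because it spends most of the window asleep. In practice the outcome depends on how power scales with frequency and voltage, which is exactly the balance a scheduler and a core layout like Tensor’s are tuned around.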

“When it’s a steady-state problem where, say, the CPU has a lighter load but it’s still modestly significant, you’ll have the dual X1s running, and at that performance level, that will be the most efficient,” Carmack said.

He gave a camera view as an example of a “medium” workload, saying that you “open up your camera and you have a live view and a lot of really interesting things are happening all at once. You’ve got imaging calculations. You’ve got rendering calculations. You’ve got ML [machine learning] calculations, because maybe Lens is on detecting images or whatever. During situations like that, you have a lot of computation, but it’s heterogeneous.”

A quick aside: “heterogeneous” here means using more bits of the SoC for compute than just the CPU, so in the case of Lens, that means CPU, GPU, ISP (the camera co-processor), and Google’s ML co-processor.

Carmack continued, “You might use the two X1s dialed down in frequency so they’re ultra-efficient, but they’re still at a workload that’s pretty heavy. A workload that you normally would have done with dual A76s, maxed out, is now barely tapping the gas with dual X1s.”

The camera is a great case study, since previous Pixel phones have failed at exactly this kind of task. The Pixel 5 and 5a both regularly overheat after three minutes of 4K recording. I’m not allowed to talk too much about this right now, but I did record a 20-minute, 4K, 60 fps video on a Pixel 6 with no overheating issues. (I got bored after 20 minutes.)

This is what the phone looks like, if you’re wondering. (Image: Google)

So, is Google pushing back on the idea that one big core is a good design? The idea of using one big core has only recently popped up in Arm chips, after all. We used to have four “big” cores and four “little” cores without any of this super-sized, single-core “prime” stuff.

“It all comes down to what you’re trying to accomplish,” Carmack said. “I’ll tell you where one big core versus two wins: when your goal is to win a single-threaded benchmark. You throw as many gates as possible at the one big core to win a single-threaded benchmark… If you want responsiveness, the quickest way to get that, and the most efficient way to get high-performance, is probably two big cores.”

Carmack warned that this “could evolve depending on how efficiency is mapped from one generation to the next,” but for the X1, Google claims that this design is better.

“The single-core performance is 80 percent faster than our previous generation; the GPU performance is 370 percent faster than our previous generation. I say that because people are going to ask that question, but to me, that’s not really the story,” Carmack explained. “I think the one thing you can take away from this part of the story is that although we’re a brand-new entry into the SoC space, we know how to make high-frequency, high-performance circuits that are dense, fast, and capable… Our implementation is rock solid in terms of frequencies, in terms of frequency per watt, all of that stuff. That’s not a reason to build an all-new Tensor SoC.”
