
After a decade, NASA’s big rocket fails its first real test


STENNIS SPACE CENTER, Miss.—For a few moments, it seemed like the Space Launch System saga might have a happy ending. Beneath brilliant blue skies late on Saturday afternoon, NASA’s huge rocket roared to life for the very first time. As its four engines lit and thrummed, thunder rumbled across these Mississippi lowlands. A giant, beautiful plume of white exhaust billowed away from the test stand.

It was all pretty damn glorious until it stopped suddenly.

About 50 seconds into what was supposed to be an 8-minute test firing, the flight control center called out, “We did get an MCF on Engine 4.” This means there was a “major component failure” with the fourth engine on the vehicle. After a total of about 67 seconds, the hot fire test ended.

During a post-test news conference, held outside near the test stand, officials offered few details about what had gone wrong. “We don’t know what we don’t know,” said NASA Administrator Jim Bridenstine. “It’s not everything we hoped it would be.”

He and NASA’s program manager for the SLS rocket, John Honeycutt, sought to put a positive spin on the day. They explained that this is why spaceflight hardware is tested. They expressed confidence that this was still the rocket that would launch the Orion spacecraft around the Moon.

And yet it is difficult to call what happened Saturday anything but a bitter disappointment. The rocket’s core stage was moved to Stennis from its factory in nearby Louisiana more than a year ago, and months of preparation went into this critical test firing.

Honeycutt said before the test, and then again afterward, that NASA had been hoping to get 250 seconds’ worth of data, if not fire the rocket for the entire duration of its nominal ascent to space. Instead, it got about a quarter of that.

So what happened?

Perhaps most intriguing, Honeycutt said the engine problem cropped up about 60 seconds into the test, at one of its most dynamic moments. This was when the engines were throttling down from 109 percent of nominal thrust to 95 percent, Honeycutt said. And it is also when they began to gimbal, or move their axis of thrust.

At approximately 60 seconds, engineers noted a “flash” in the area of a thermal protection blanket around Engine 4, Honeycutt said. The engine section is one of the most complex parts of the core stage, and each of the four main engines has thermal protection to limit heating from the other engines.

Now, engineers from NASA, Boeing and the engine manufacturer, Aerojet Rocketdyne, will study data from the test and determine what exactly went wrong. It is not clear how long this will take, or what problems will need to be fixed.

A drone image of NASA’s hot fire test on Saturday.

If there is a serious problem with Engine 4, it could be swapped out. NASA has spare RS-25 engines at Stennis, including backups that are tested and ready, Honeycutt said. Such an engine swap could occur on the test stand itself, over the course of a week or 10 days.

A key question is whether another hot fire test will be required. Bridenstine, whose tenure as NASA Administrator will end next Wednesday, said it was too early to determine what will happen. He expressed hope that some straightforward problem might be found. Even so, it seems unlikely NASA has enough data from this test to avoid conducting another hot fire test, which would likely require weeks to months of setup time.

All of this casts very serious doubt on NASA’s plans to launch its Artemis I mission—an uncrewed precursor mission to sending humans to the Moon—before the end of this year. Already, the program was on a tight deadline, needing to ship the core stage from Stennis Space Center to Kennedy Space Center in Florida in February to retain any possibility of launching in 2021.

That now seems all but impossible.

What the future holds

The future of NASA’s Space Launch System rocket is not clear. The incoming Biden administration has not released any detailed plans for the space agency. The big rocket’s support has always come from Congress, however, and not the White House. Congress created the booster a decade ago when the Obama Administration wanted to rely more on private companies to provide launch vehicles.

The original deal was cut between two senators, Bill Nelson of Florida and Kay Bailey Hutchison of Texas, but they are both now out of office. In recent years, Alabama Senator Richard Shelby—who chairs the powerful Appropriations Committee—has emerged as the rocket’s most potent backer. This is not a surprise given that the rocket is designed and managed at Marshall Space Flight Center in northern Alabama.

However, with Democrats taking a narrow majority in the Senate, Shelby will lose his chairmanship in the upcoming session of Congress. Although he will retain considerable say, he will no longer be able to effectively dictate NASA’s budget.

The SLS has also enjoyed ample support from the Alabama delegation in the House, but they too have recently lost some of their clout. Perhaps the most outspoken House backer of the rocket was Alabama Congressman Mo Brooks. But he has gained a measure of infamy for speaking at the pro-Trump rally on January 6, helping to incite rioters to march on the U.S. Capitol. As Brooks spread misinformation about the election, he said, “Today is the day Americans start taking down names and kicking ass.” Of Alabama’s seven U.S. representatives, six are Republicans. All of them, including Brooks, voted to overturn the election results after the Capitol insurrection.

Given these recent events, the SLS program still seemed likely to have a future if it began to execute, delivering on milestones such as Saturday’s test. But the weakening political clout of the Alabama delegation may mean that the program has less of a firewall in Congress should it continue to face delays and cost overruns.

Heritage hardware

Congress created the SLS rocket in its 2010 Authorization Act, declaring it to be a “follow-on launch vehicle to the space shuttle.” The law said NASA must extend or modify existing contracts to build the rocket, and ensure the “retention” of critical skills. The legislative intent was clear: keep the shuttle workforce employed.

This led to a design that used modified solid rocket boosters, like those that gave the space shuttle a kick off the launch pad. The SLS rocket would also use the space shuttle main engines, although controversially the expendable rocket would fly the reusable engines just a single time. Eventually, each of the main shuttle contractors got a piece of the SLS rocket.

At the time, proponents of this design argued that relying on space shuttle hardware would keep costs and technical issues to a minimum.

This seemed to make some sense. After all, these engines had flown for three decades. The solid rocket boosters had flown for just as long. This was proven technology. The hardest work would be designing and building large liquid oxygen and hydrogen fuel tanks in the rocket’s core stage. However, liquid hydrogen was hardly a novel fuel to work with. NASA had decades of experience building the shuttle’s large external fuel tank, and U.S. rocket scientists starting with Robert Goddard had been studying the use of liquid hydrogen since before the dawn of the space age.

It has since all gone sideways. By the time Saturday’s test took place, NASA had spent about $17.5 billion developing the rocket, and many billions more on ground systems to launch it. The original launch date was 2016, and now the rocket will likely not fly before 2022. And although much of the hardware has a long heritage, NASA and its contractors have still struggled to integrate it.

Last year, when NASA’s inspector general studied why it had taken so long to develop the SLS rocket, he found that the core stage, booster, and RS-25 engine programs had all experienced technical challenges and performance issues that led to delays and cost overruns.

“We and other oversight entities have consistently identified contractor performance as a primary cause for the SLS Program’s increased costs and schedule delays, and quality control issues continue to plague Boeing as it pushes to complete the rocket’s core stage,” Paul Martin wrote. “Both NASA and contractor officials explained that nearly 50 years have passed since development of the last major space flight program—the Space Shuttle—and the learning curve for new development has been steep as many experienced engineers have retired or moved to other industries.”

So what had been viewed as a strength of the program, using heritage hardware, has instead become a liability. Saturday was only the first real hardware test for the rocket. The program cannot afford many more setbacks like the one on display.



Report: Tesla is secretly building a giant 100 MW battery in Texas


Tesla is best known as an electric car company, but the firm also has a thriving business in battery storage—including utility-scale battery installations to support the electric grid. Bloomberg reports that Tesla is currently building a battery installation in Tesla CEO Elon Musk’s new home state of Texas. The project is in Angleton, about an hour south of Houston.

Tesla hasn’t publicized the project, which is operating under the name of an obscure Tesla subsidiary called Gambit Energy Storage LLC. When a Bloomberg photographer visited, a worker discouraged picture-taking and said the project was “secretive.” The project appears to consist of 20 large banks of batteries that have been covered by white sheets.

A document on the city of Angleton’s website provides some details about the project. It’s listed as being a project of Plus Power but includes a photo of a Tesla battery cabinet. Plus Power counts two former Tesla employees among its executives. The company confirmed to Bloomberg that it had started the project, then sold it to an undisclosed party.

The installation will use lithium iron phosphate batteries that are expected to last 10 to 20 years. The document says that it will generate around $1 million in property tax revenue for the city of Angleton. The site will be unmanned but will be remotely monitored at all times, according to the document.

Texas has its own electric grid overseen by the Electric Reliability Council of Texas. “Angleton forms an especially volatile ‘node’ on the ERCOT energy grid and the greater system will benefit from the energy balancing properties that the battery can provide,” the document says.

The Texas grid got a lot of attention in February after unusually cold weather left much of the state without power for several days. In a sarcastic tweet last month, Musk wrote that “ERCOT is not earning that R.”

And Musk has a lot of reason to be concerned about the quality of the electric grid in Texas. Not only did Musk recently relocate to the state, but both of his companies—SpaceX and Tesla—are expanding their footprint there. Tesla is building a car factory in the Austin area. SpaceX has long had a testing facility in McGregor, Texas, about halfway between Austin and Dallas. More recently, SpaceX has been pouring resources into its Boca Chica launch facility at the very southern tip of the state.

An ERCOT official told Bloomberg that the secretive installation has a proposed commercial operation date of June 1, so it may be nearing completion.

Back in 2017, we covered Tesla’s construction of a massive battery installation in South Australia. At the time, the 100 MW system was the largest in the world. According to Bloomberg, the new Texas battery system will be at least as big.

In the long run, massive battery facilities will be needed to shift intermittent solar and wind power in time. But a lot of battery installations today don’t have enough capacity to do much of this. Tesla’s South Australia battery, for example, only had enough capacity to supply power for a little over an hour at its full 100 MW power level.
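For a rough sense of the arithmetic, the sketch below converts a battery’s energy capacity into run time at its rated power. The roughly 129 MWh capacity commonly cited for the original South Australia installation is an assumption here, not a figure reported in this article.

```python
# Back-of-the-envelope sketch: how long a grid battery can sustain its rated power.
# The ~129 MWh capacity for the original South Australia battery is a commonly
# cited spec and an assumption here, not a figure from this article.

def hours_at_full_power(capacity_mwh: float, power_mw: float) -> float:
    """Hours the battery can discharge while delivering its full rated power."""
    return capacity_mwh / power_mw

print(hours_at_full_power(capacity_mwh=129, power_mw=100))  # ~1.3 hours
```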

Rather, early utility-scale batteries are being used to smooth out shorter-term fluctuations and keep supply of power perfectly balanced with demand. If a power plant unexpectedly fails or demand suddenly spikes, a utility-scale battery can provide a few minutes of power while electric utilities make necessary adjustments.

Utilities traditionally deal with this by having plants powered by natural gas on standby 24/7. Because these “peaker plants” might only be used for a few hours per year, the energy they produce is extremely expensive on a per-kilowatt-hour basis. Batteries can soak up excess power at times when it’s plentiful, then release it at times of peak demand, allowing electric utilities to retire some of their gas-fired peaker plants without compromising grid reliability.

As batteries get cheaper, it will become economical to install even larger batteries to balance out supply and demand over a 24-hour cycle, enabling utilities to rely more heavily on solar and wind power. This is why analysts expect utility-scale batteries to be a massive growth market over the next decade or two; it’s going to take a lot more storage capacity to de-carbonize the electric grid.



Egyptologists translate the oldest-known mummification manual


Egyptologists have recently translated the oldest-known mummification manual. Translating it required solving a literal puzzle; the medical text that includes the manual is currently in pieces, with half of what remains in the Louvre Museum in France and half at the University of Copenhagen in Denmark. A few sections are completely missing, but what’s left is a treatise on medicinal herbs and skin diseases, especially the ones that cause swelling. Surprisingly, one section of that text includes a short manual on embalming.

For the text’s ancient audience, that combination might have made sense. The manual includes recipes for resins and unguents used to dry and preserve the body after death, along with explanations for how and when to use bandages of different shapes and materials. Those recipes probably used some of the same ingredients as ointments for living skin, because plants with antimicrobial compounds would have been useful for preventing both infection and decay.

New Kingdom embalming: More complicated than it used to be

The Papyrus Louvre-Carlsberg, as the ancient medical text is now called, is the oldest mummification manual known so far, and it’s one of just three that Egyptologists have ever found. Based on the style of the characters used to write the text, it probably dates to about 1450 BCE, which makes it more than 1,000 years older than the other two known mummification texts. But the embalming compounds it describes are remarkably similar to the ones embalmers used 2,000 years earlier in pre-Dynastic Egypt: a mixture of plant oil, an aromatic plant extract, a gum or sugar, and heated conifer resin.

Although the basic principles of embalming survived for thousands of years in Egypt, the details varied over time. By the New Kingdom, when the Papyrus Louvre-Carlsberg was written, the art of mummification had evolved into an extremely complicated 70-day-long process that might have bemused or even shocked its pre-Dynastic practitioners. And this short manual seems to be written for people who already had a working knowledge of embalming and just needed a handy reference.

“The text reads like a memory aid, so the intended readers must have been specialists who needed to be reminded of these details,” said University of Copenhagen Egyptologist Sofie Schiødt, who recently translated and edited the manual. Some of the most basic steps—like using natron to dry out the body—were skipped entirely, maybe because they would have been so obvious to working embalmers.

On the other hand, the manual includes detailed instructions for embalming techniques that aren’t included in the other two known texts. It lists ingredients for a liquid mixture—mostly aromatic plant substances like resin, along with some binding agents—which is supposed to coat a piece of red linen placed on the dead person’s face. Mummified remains from the same time period have cloth and resin covering their faces in a way that seems to match the description.

Royal treatment

“This process was repeated at four-day intervals,” said Schiødt. In fact, the manual divides the whole embalming process into four-day intervals, with two extra days for rituals afterward. After the first flurry of activity, when embalmers spent a solid four days of work cleaning the body and removing the organs, most of the actual work of embalming happened only every fourth day, with lots of waiting in between. The deceased spent most of that time lying covered in cloth piled with layers of straw and aromatic, insect-repelling plants.

For the first half of the process, the embalmers’ goal was to dry the body with natron, which would have been packed around the outside of the corpse and inside the body cavities. The second half included wrapping the body in bandages, resins, and unguents meant to help prevent decay.

The manual calls for a ritual procession of the mummy every four days to celebrate “restoring the deceased’s corporeal integrity,” as Schiødt put it. That’s a total of 17 processions spread over 68 days, with two solid days of rituals at the end. Of course, most Egyptians didn’t get such elaborate preparation for the afterlife. The full 70-day process described in the Papyrus Louvre-Carlsberg would have been mostly reserved for royalty or extremely wealthy nobles and officials.

A full translation of the papyrus is scheduled for publication in 2022.


Programmable optical quantum computer arrives late, steals the show


Excuse me a moment—I am going to be bombastic, overexcited, and possibly annoying. The race is run, and we have a winner in the future of quantum computing. IBM, Google, and everyone else can turn in their quantum computing cards and take up knitting.

OK, the situation isn’t that cut and dried yet, but a recent paper has described a fully programmable chip-based optical quantum computer. That idea presses all my buttons, and until someone restarts me, I will talk of nothing else.

Love the light

There is no question that quantum computing has come a long way in 20 years. Two decades ago, optical quantum technology looked to be the way forward. Storing information in a photon’s quantum states (as an optical qubit) was easy. Manipulating those states with standard optical elements was also easy, and measuring the outcome was relatively trivial. Quantum computing was just a new application of existing quantum experiments, and those experiments had demonstrated how easy the systems were to work with, giving optical technologies an early advantage.

But one key to quantum computing (or any computation, really) is the ability to change a qubit’s state depending on the state of another qubit. This turned out to be doable but cumbersome in optical quantum computing. Typically, a two- (or more) qubit operation is a nonlinear operation, and optical nonlinear processes are very inefficient. Linear two-qubit operations are possible, but they are probabilistic, so you need to repeat your calculation many times to be sure you know which answer is correct.

A second critical feature is programmability. It is not desirable to have to create a new computer for every computation you wish to perform. Here, optical quantum computers really seemed to fall down. An optical quantum computer could be easy to set up and measure, or it could be programmable—but not both.

In the meantime, private companies bet on being able to overcome the challenges faced by superconducting transmon qubits and trapped ion qubits. In the first case, engineers could make use of all their experience from printed circuit board layout and radio-frequency engineering to scale the number and quality of the qubits. In the second, engineers banked on being able to scale the number of qubits, already knowing that the qubits were high-quality and long-lived.

Optical quantum computers seemed doomed.

Future’s so bright

So, what has changed to suddenly make optical quantum computers viable? The last decade has seen a number of developments. One is the appearance of detectors that can resolve the number of photons they receive. All the original work relied on single-photon detectors, which could detect light/not light. It was up to you to ensure that what you were detecting was a single photon and not a whole stream of them.

Because single-photon detectors can’t distinguish between one, two, three, or more photons, quantum computers were limited to single-photon states. Complicated computations would require many single photons that all need to be controlled, set, and read. As the number of operations goes up, the chance of success goes down dramatically. Thus, the same computation would have to be run many, many times before you could be sure of the right answer.
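A tiny illustration of that scaling, using a made-up per-operation success probability, shows how quickly the required number of repetitions blows up:

```python
# Sketch: if each probabilistic operation succeeds independently with probability p,
# a computation needing n of them succeeds with p**n, so the expected number of
# repetitions grows as 1/p**n. The value of p here is purely illustrative.

p = 0.5  # hypothetical per-operation success probability
for n_ops in (1, 5, 10, 20):
    p_total = p ** n_ops
    print(f"{n_ops:2d} operations: success probability {p_total:.2e}, "
          f"expected repeats ~{1 / p_total:,.0f}")
```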

By using photon-number-resolving detectors, scientists are no longer limited to states encoded in a single photon. Now, they can use states encoded in the photon number itself. In other words, a single qubit can be in a superposition of states containing different numbers of photons: zero, one, two, and so on, up to some maximum. Hence, fewer qubits can be used for a computation.

A second key development was integrated optical circuits. Integrated optics have been around for a while, but they have not exactly had the precision and reliability of their electronic counterparts. That has changed. As engineers got more experience with the fabrication techniques and with the design requirements for optical circuits, performance has gotten much, much better. Integrated optics are now commonly used in the telecommunications industry, with all the scale and reliability that implies.

As a result of these developments, the researchers were simply able to design their quantum optical chip and order it from a fab, something unthinkable less than a decade ago. So, in a sense, this is a story 20 years in the making, built on the steady maturation of the underlying technology.

Putting the puzzle together

The researchers, from a startup called Xanadu and the National Institute of Standards and Technology, have pulled together these technology developments to produce a single integrated optical chip that generates eight qubits. Calculations are performed by passing the photons through a complex circuit made up of Mach-Zehnder interferometers. In the circuit, each qubit interferes with itself and some of the other qubits at each interferometer.

As each qubit exits an interferometer, the direction it takes is determined by its state and the internal setting of the interferometer. That direction determines which interferometer it moves to next and, ultimately, where it exits the device.

The internal setting of the interferometer is the knob that the programmer uses to control the computation. In practice, the knob just changes the temperature of individual waveguide segments. But the programmer doesn’t have to worry about these details. Instead, they have an application programming interface (Strawberry Fields Python Library) that takes very normal-looking Python code. This code is then translated by a control system that maintains the correct temperature differentials on the chip.
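To give a flavor of what that looks like in practice, here is a minimal Strawberry Fields sketch run on the library’s local Gaussian simulator rather than Xanadu’s actual chip; the gates and parameter values are illustrative, not the circuits reported in the paper.

```python
# Minimal sketch of a Strawberry Fields program, run on the local "gaussian"
# simulator rather than the photonic chip itself. The specific gates and
# parameter values are illustrative, not the circuits from the paper.
import strawberryfields as sf
from strawberryfields import ops

prog = sf.Program(2)  # two optical modes

with prog.context as q:
    ops.Sgate(0.5) | q[0]                # inject squeezed light into mode 0
    ops.Sgate(0.5) | q[1]                # inject squeezed light into mode 1
    ops.BSgate(0.4, 0.0) | (q[0], q[1])  # beamsplitter: interfere the two modes
    ops.MeasureFock() | q                # photon-number-resolving measurement

eng = sf.Engine("gaussian")
result = eng.run(prog)
print(result.samples)  # detected photon counts in each mode
```

On the actual hardware, a program of this kind would be submitted through the library’s remote engine interface, with the compiler mapping the gate parameters onto the chip’s on-board phase settings.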

The company’s description of its technology.

To demonstrate that their chip was flexible, the researchers performed a series of different calculations. The first calculation basically let the computer simulate itself—how many different states can we generate in a given time? (This is the sort of calculation that causes me to grind my teeth because any quantum device can efficiently calculate itself.) However, after that, the researchers got down to business. They successfully calculated the vibrational states of ethylene—two carbon atoms and four hydrogen atoms—and of the more complicated phenylvinylacetylene—the favorite child’s name for 2021. These carefully chosen examples fit beautifully within the eight-qubit space of the quantum computer.

The third computation involved computing graph similarity. I must admit to not understanding graph similarity, but I think it is a pattern-matching exercise, like facial recognition. These graphs were, of course, quite simple, but again, the machine performed well. According to the authors, this was the first such demonstration of graph similarity on a quantum computer.

Is it really done and dusted?

All right, as I warned you, my introduction was exaggerated. However, this is a big step. There are no large barriers to scaling this same computer to a bigger number of qubits. The researchers will have to reduce photon losses in their waveguides, and they will have to reduce the amount of leakage from the laser that drives everything (currently it leaks some light into the computation circuit, which is very undesirable). The thermal management will also have to be scaled. But, unlike previous examples of optical quantum computers, none of these are “new technology goes here” barriers.

What is more, the scaling does not present huge amounts of increased complexity. In superconducting qubits, each qubit is a current loop in a magnetic field. Each qubit generates a field that talks to all the other qubits all the time. Engineers have to take a great deal of trouble to decouple and couple qubits from each other at the right moment. The larger the system, the trickier that task becomes. Ion qubit computers face an analogous problem in their trap modes. There isn’t really an analogous problem in optical systems, and that is their key advantage.

Nature, 2021. DOI: 10.1038/s41586-021-03202-1 (About DOIs).
