Honeywell releases details of its ion trap quantum computer

The line down the middle is where the trapped ions reside.

About a year ago, Honeywell announced that it had entered the quantum computing race with a technology that was different from anything else on the market. The company claimed that because its qubits performed so much better than its competitors', its computer could do better on a key quantum computing benchmark than machines with far more qubits.

Now, roughly a year later, the company finally released a paper describing the feat in detail. But in the meantime, the competitive landscape has shifted considerably.

It’s a trap!

In contrast to companies like IBM and Google, Honeywell has decided against superconducting circuitry in favor of a technology called “trapped ions.” In general, these machines use a single ion as a qubit and manipulate its state using lasers. There are different ways to build ion trap computers, however, and Honeywell’s version is distinct from another on the market, made by a competitor called IonQ (which we’ll come back to).

IonQ uses lasers to perform its operations, and by carefully preparing the light, its computer can perform operations on multiple qubits at the same time. This essentially allows any two qubits in the system to take part in a single operation and lets IonQ build up a complicated entangled system. That’s a contrast to quantum computers built from superconducting circuits, where each qubit is typically connected directly only to its nearest neighbors.

Honeywell’s approach also allows any two qubits to be connected with each other. But it does so by physically moving ions next to each other, allowing a single pulse of light to strike both of them simultaneously.

This works because Honeywell’s ion traps aren’t built from a static arrangement of fields. Instead, the fields are generated by 192 electrodes that can all be controlled independently. This allows the device to create spots where the electric field varies in strength, producing locations where an ion prefers to reside, technically termed “potential wells.” By changing the voltages on these electrodes, the potential wells can be moved up and down the linear device, and the ions simply move with them.

By merging two potential wells, the ions they contain can be brought together, allowing one operation to affect both of them simultaneously. When that is done, the well can be split, taking the ions back to their original locations.
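
None of this is Honeywell's actual control code, but a toy one-dimensional model can make the idea concrete. In the sketch below, every name and number is an illustrative assumption (ten electrodes stand in for the real 192): each electrode contributes a dip to the potential whose depth tracks its voltage, so ramping one electrode down while ramping its neighbor up slides the well, and the ion sitting in it, along the trap axis.

```python
import numpy as np

# Toy model of the trap: a 1D axis with a handful of electrodes. Each electrode
# contributes a Gaussian dip to the potential, with a depth set by its voltage;
# an ion settles wherever the combined potential has its minimum (the "well").
x = np.linspace(0.0, 10.0, 1001)                 # positions along the trap axis (arbitrary units)
electrode_positions = np.linspace(0.5, 9.5, 10)  # 10 electrodes stand in for the real 192

def potential(voltages, width=0.8):
    """Combined potential along the axis for a given set of electrode voltages."""
    return sum(-v * np.exp(-((x - p) ** 2) / (2 * width ** 2))
               for v, p in zip(voltages, electrode_positions))

def well_position(voltages):
    """Where an ion would sit: the location of the potential minimum."""
    return x[np.argmin(potential(voltages))]

# Transport: ramp one electrode down while ramping its neighbor up, and the
# well (with its ion) slides smoothly from one electrode to the next.
for step in np.linspace(0.0, 1.0, 5):
    voltages = np.zeros(10)
    voltages[2] = 1.0 - step   # source electrode fades out
    voltages[3] = step         # destination electrode fades in
    print(f"ramp {step:.2f}: well at x = {well_position(voltages):.2f}")

# Merging works the same way: bring two wells toward each other until they
# collapse into one, perform the shared operation, then reverse the ramp to split.
```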

What’s new in the paper are some hard performance numbers on how well this all works. Honeywell says that the maximum time needed to transport an ion from one end of the trap to the other is 300 microseconds. Errors in transport, such as sending a qubit to the wrong location, are detected automatically by the system, allowing the whole thing to be reset and calculations to be picked up from the last point where the machine’s state was read. These errors are also extremely rare: in a series of 10,000,000 operations, a transport failure was detected only three times.

Competition at volume

But that isn’t the last of the performance figures documented here. Honeywell also turned to quantum volume, a measure originally defined by IBM that takes into account the number of qubits, how connected they are, and how often operations produce the intended outcome rather than an error. If the system can perform operations involving random pairs of its qubits without error two-thirds of the time, its quantum volume is two raised to the power of the qubit count. Higher error rates lower the quantum volume; more qubits raise it.
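
To make the arithmetic behind the figures that follow easy to check, here is a minimal sketch of that relationship. The real benchmark involves running randomized model circuits and analyzing their output statistics, which is not modeled here; this only captures the exponential, which is why each extra qubit that clears the test doubles the quantum volume.

```python
def quantum_volume(qubits_passing_test: int) -> int:
    """Quantum volume as described above: two raised to the number of qubits
    for which the randomized test succeeds at least two-thirds of the time."""
    return 2 ** qubits_passing_test

print(quantum_volume(6))   # 64, the record figure reported in the Honeywell paper
print(quantum_volume(9))   # 512, the company's current figure
# "Over 4 million" implies passing the test with at least 22 qubits,
# since 2**21 is only about 2.1 million while 2**22 is 4,194,304.
print(quantum_volume(22))  # 4194304
```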

In this case, the Honeywell team ran tests with two, three, four, and six of the device’s qubits. All of them successfully cleared the hurdle, with error-free operation typically around 75 percent across the different qubit counts. With six qubits, that results in a quantum volume of 64, which, at the time the manuscript was submitted for review, was a record high.

But again, that was at the time. There is some good news from Honeywell’s perspective: the company has since added more qubits without increasing the error rate, bringing itself up to a quantum volume of 512. By comparison, IBM only reached Honeywell’s earlier mark of 64 this past summer, using a machine with 27 qubits but a higher error rate.

But there’s also the other ion trap computing company, IonQ. Previously, it had been in a similar place to IBM: more qubits, but more errors. However, it managed to roughly triple its qubit count while raising its qubit quality to be comparable to Honeywell’s. With low errors and the large boost in qubit count, its quantum volume comes in at over 4 million, quite a bit higher than 512. And while it took about a year for Honeywell to add two qubits, IonQ said at the time of its announcement that it expected to double its qubit count to 64 within eight months, a deadline that is now less than three months away.

Room for improvement

That said, Honeywell has clearly identified where the bottlenecks reside. One problem is the noise in the voltage generators that feed power into the electrodes that control the ions. Another is spontaneous noise in the system. Clean up either of those and the performance goes up.

In addition, moving the ions around imparts some energy to them, requiring them to be constantly cooled down again while the machine is in operation. To prevent the cooling process from disturbing the qubits, Honeywell traps a second ion from a different element at the same time and cools that, turning it into an energy sponge for its partner. This is a major time sink while the machine is in operation, so boosting its efficiency would speed up operations.

Beyond that, the basic control system scales up linearly, quite literally, but only up to a point. Add more electrodes in line with the rest, and you can simply trap more ions. The scaling ends at the point where it takes too long to move an ion from one end of the row to the other when needed. It’s not clear when that point will be reached, but Honeywell is already considering ideas like two-dimensional arrays of traps and transferring ions between devices.

In any case, the publication itself is informative in two ways. It takes what was an excited corporate announcement a year ago and finally provides the details needed to fully appreciate what was done, now with the validation of peer review. And the fact that the system used to generate the results became badly obsolete in the time it took the paper to get through peer review gives us a real sense of how exciting the field has become.

Nature, 2021. DOI: 10.1038/s41586-021-03318-4

Rare, flesh-eating “black fungus” rides COVID’s coattails in India

A health worker exits an ambulance outside a quarantine center in the Goregaon suburb of Mumbai, India, on Tuesday, April 27, 2021.

As the pandemic coronavirus continues to ravage India, doctors are reporting a disturbing uptick in cases of a rare, potentially fatal fungal infection among people recovered or recovering from COVID-19.

The infection is called mucormycosis, or sometimes “black fungus” in media reports, and it appears to be attacking COVID-19 patients through the nose and sinuses, where it can aggressively spread to facial bones, the eyes, and even the brain (rhinocerebral mucormycosis). In other cases, the fungus attacks the lungs, breaks in the skin, or the gastrointestinal system, or spreads throughout the body in the bloodstream.

A classic feature of mucormycosis is tissue necrosis—the death of flesh, essentially—which in the rhinocerebral form of the disease can lead to black, discolored lesions on the face, particularly on the bridge of the nose and the roof of the mouth. Mucormycosis is fatal in around 50 percent of cases.

If the fungus is able to spread to the eyes, patients may develop blurred vision, drooping eyelids, swelling, and vision loss. Patients may even need to have their eyes surgically removed to prevent the infection from spreading further, according to doctors who spoke to the BBC.

Dr. Akshay Nair, a Mumbai-based eye surgeon, told the BBC that he treated 40 patients with mucormycosis in April. Eleven of them needed to have an eye surgically removed.

The total number of mucormycosis cases in India is unclear, but media reports have tallied dozens to hundreds of cases. Dr. Renuka Bradoo, head of the ear, nose, and throat wing of Sion Hospital in Mumbai, told the BBC that doctors there have seen 24 cases of mucormycosis in the past two months. Usually, they see only about six cases in a whole year.

Worse for diabetics

A report in The New York Times out of New Delhi relayed that local news media in the western state of Maharashtra, which includes Mumbai, had tallied around 200 cases. In the western state of Gujarat, state officials have reportedly ordered 5,000 doses of amphotericin B, an antifungal medicine used to treat mucormycosis.

The startling increase in cases may be explained by India’s high number of people with diabetes, coupled with poor hygiene amid the critical COVID-19 surge, doctors speculate. Mucormycosis is known to strike people who have compromised immune systems, especially people with diabetes—and those with poorly controlled diabetes in particular.

Not only does diabetes dampen immune responses, welcoming invasive fungi, it also provides a comfortable environment for the infections. Mucormycosis is caused by mucormycetes, a ubiquitous group of molds that live in soil and decaying organic matter, like wood, leaves, and compost. These molds love iron-rich, acidic environments, and diabetic ketoacidosis—a complication of diabetes that causes the blood to become acidic—is a key risk factor for developing mucormycosis. A literature review published in the New England Journal of Medicine in 1999 estimated that about 50 percent of all cases of rhinocerebral mucormycosis occur in people with diabetes.

India doesn’t have exceptionally high rates of diabetes compared with other countries. But because of its population of over 1.36 billion people, the country has one of the highest raw totals of diabetes cases in the world, estimated to be around 77 million people, second only to China. India also has some of the highest estimated levels of death and disability from diabetes, according to a study published in the journal Scientific Reports last year.

“Triple whammy”

Adding to this problem is the current COVID-19 crisis crippling India’s healthcare system. With hospitals overwhelmed, experts who spoke with the Times noted that many COVID-19 patients are being treated with oxygen at home without proper hygiene. Moreover, many COVID-19 patients are given powerful steroids—which further tamps down the immune system.

“You’ve got a high rate of mucormycosis, you’ve got a lot of steroids—maybe too much—being used, and then you’ve got diabetes, which is not being well controlled or managed,” David Denning, an expert in fungal infections at Manchester University, told the Associated Press. It’s a “triple whammy,” he said.

After many delays, Massachusetts’ Vineyard Wind is finally approved

An offshore wind farm in the UK.

After years of delays, the federal government has approved what will be the third offshore wind project in the US—and the largest by far. Vineyard Wind, situated off the coast of Massachusetts, will have a generating capacity of 800 megawatts, dwarfing Block Island Wind’s 30 MW and the output from two test turbines installed in Virginia.

Vineyard Wind has been approved a number of times but continued to experience delays during the Trump administration, which was openly hostile to renewable energy. But the Biden administration wrapped up an environmental review shortly before announcing a major push to accelerate offshore wind development.

The final hurdle, passed late Monday, was getting the Bureau of Ocean Energy Management to issue an approval for Vineyard Wind’s construction and operating plan. With that complete, the Departments of Commerce and Interior announced what they term the “final federal approval” to install 84 offshore turbines. Vineyard Wind will still have to submit paperwork showing that its construction and operation will be consistent with the approved plan; assuming that the operators can manage that, construction can begin.

Vineyard Wind was controversial from the start, as it’s located less than 15 miles from two islands, Martha’s Vineyard and Nantucket, notable for their expensive vacation homes. But offshore wind will need to play a critical role in US plans to decarbonize electricity production. The densely populated states along the Eastern Seaboard often don’t have good renewable resources, and the shallow continental shelf offshore provides an excellent site for wind turbines.

The approval delays have essentially meant that the US outsourced research and development on offshore wind to Europe, where experience building and operating giant wind farms—some substantially larger than Vineyard Wind—has helped drop the cost of this source of electricity considerably. European companies are now poised to take advantage of the opening US wind market, with several planning to help the Biden administration reach its goal of having 30 GW of capacity installed offshore by 2030.

The many delays faced by Vineyard Wind will hopefully provide lessons that will help ease the approval of future offshore projects. Thanks to the developments in Europe, there is now extensive construction and operational experience with projects of this scale. What’s currently lacking is the infrastructure to build turbines, blades, and support hardware, get them on ships, and support their operations. Vineyard Wind’s construction will help drive the development of this infrastructure, easing the way for future projects.

Programming a robot to teach itself how to move

The robotic train. (Oliveri et al.)

One of the most impressive developments in recent years has been the production of AI systems that can teach themselves to master the rules of a larger system. Notable successes have included experiments with chess and StarCraft. Given that self-teaching capability, it’s tempting to think that computer-controlled systems should be able to teach themselves everything they need to know to operate. Obviously, for a complex system like a self-driving car, we’re not there yet. But it should be much easier with a simpler system, right?

Maybe not. A group of researchers in Amsterdam attempted to take a very simple mobile robot and create a system that would learn to optimize its movement through a learn-by-doing process. While the system the researchers developed was flexible and could be effective, it ran into trouble due to some basic features of the real world, like friction.

Roving robots

The robots in the study were incredibly simple and were formed from a varying number of identical units. Each had an on-board controller, battery, and motion sensor. A pump controlled a piece of inflatable tubing that connected a unit to a neighboring unit. When inflated, the tubing generated a force that pushed the two units apart. When deflated, the tubing would pull the units back together.

Linking these units together created a self-propelled train. Given the proper series of inflation and deflation, individual units could drag and push each other in a coordinated manner, providing a directional movement that pushed the system along like an inchworm. It would be relatively simple to figure out the optimal series of commands sent to the pump that controls the inflation—simple, but not especially interesting. So the researchers behind the new work decided to see if the system could optimize its own movement.

Each unit was allowed to act independently and was given a simple set of rules. Inflation/deflation was set to cycle every two seconds, with the only adjustable parameter being when, within that two-second window, the pump would turn on (it would stay on for less than a second). During a learning period, each unit in the chain chose a start time at random, used it for a few cycles, and then used its on-board sensor to determine how far the robot had moved. A refinement period followed, during which start times near the best-performing ones were sampled.

Critically, each unit in the chain operated completely independently, without knowing what the other units were up to. The coordination needed for forward motion emerged spontaneously.
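
The paper's own controller is more elaborate, but the per-unit logic described in the last two paragraphs can be sketched roughly as follows. Everything here is a simplified assumption: the function and parameter names are made up, the trial counts are arbitrary, and the on-board sensor is replaced by a toy reward function rather than a real distance measurement.

```python
import random

CYCLE_PERIOD = 2.0   # seconds: inflation/deflation repeats on this fixed cycle
PUMP_ON_TIME = 0.8   # seconds the pump stays on (under a second, per the paper)

def measured_distance(start_time, optimum=0.7, noise=0.05):
    """Stand-in for the on-board motion sensor. Here it simply rewards start
    times near an arbitrary `optimum`; a real unit would instead measure how
    far the whole train moved over a few inflation/deflation cycles."""
    return -abs(start_time - optimum) + random.uniform(-noise, noise)

def learn_start_time(n_random_trials=20, n_refine_trials=10, refine_window=0.2):
    """One unit's loop: random search over start times, then refinement near the best."""
    latest = CYCLE_PERIOD - PUMP_ON_TIME  # latest start time that still fits in the cycle
    best_time, best_distance = None, -float("inf")

    # Learning period: try pump start times chosen completely at random.
    for _ in range(n_random_trials):
        t = random.uniform(0.0, latest)
        d = measured_distance(t)
        if d > best_distance:
            best_time, best_distance = t, d

    # Refinement period: sample start times close to the best one found so far.
    for _ in range(n_refine_trials):
        t = min(max(best_time + random.uniform(-refine_window, refine_window), 0.0), latest)
        d = measured_distance(t)
        if d > best_distance:
            best_time, best_distance = t, d

    return best_time

# Each unit in the train runs this loop on its own, with no knowledge of the
# others; in the real robot, the coupling comes only through the measured
# motion of the shared train.
print(learn_start_time())
```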

The researchers started by linking two robots and an inert block into a train and placing the system on a circular track. It only took about 80 seconds for some of the trains to reach the maximum speed possible, a stately pace of just over two millimeters per second. There’s no way for this hardware to go faster, as confirmed by simulations in a model system.

Not so fast

But problems were immediately apparent. Some of the systems got stuck in a local optimum, settling on a speed that was only a quarter of the maximum possible. Things went poorly in a different way when the team added a third robot to the train.

Here again, the systems took only a few minutes to approach the maximum speed seen in simulations. But once they reached that speed, most of them seemed to start slowing down. That shouldn’t be possible, as the units always saved the cycle start time associated with the maximum velocity they had reached. Since they should never intentionally choose a lower velocity, there’s no reason they should slow down, right?

Fortunately, someone on the team noticed that the systems weren’t experiencing a uniform slowdown. Instead, they came to a near-halt at specific locations on the track, suggesting that they were running into issues with friction at those points. Even though the robots kept performing the actions associated with the maximum speed elsewhere on the track, they were doing so in a location where a different series of actions might power through the friction more effectively.

To fix this issue, the researchers did some reprogramming. Originally, the system just looked for the maximum velocity and stored that, along with the inflation cycle start time associated with it. After the switch, the system always saved the most recent velocity but only updated the start time if the stored velocity was slower than the more recent one. If the system hit a rough spot and slowed down dramatically, it could find a way to power through and then re-optimize its speed afterward.
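
That reprogramming amounts to a small change in the update rule. The sketch below is a hedged reconstruction with hypothetical variable names, not the authors' code, but it captures the behavioral difference just described.

```python
# Original rule: remember the fastest velocity ever seen and the start time
# that produced it. A train stalled by friction keeps replaying a start time
# that no longer works at its current spot on the track.
def update_original(state, new_time, new_velocity):
    if new_velocity > state["stored_velocity"]:
        state["stored_velocity"] = new_velocity
        state["best_time"] = new_time

# Revised rule: always overwrite the stored velocity with the most recent
# measurement, but only adopt a new start time when it beats that stored value.
# A rough patch drags the stored velocity down, which frees the unit to adopt
# whatever start time powers through, then re-optimize afterward.
def update_revised(state, new_time, new_velocity):
    if new_velocity > state["stored_velocity"]:
        state["best_time"] = new_time
    state["stored_velocity"] = new_velocity

state = {"stored_velocity": 0.0, "best_time": None}
update_revised(state, new_time=0.4, new_velocity=2.1)  # fast: start time adopted
update_revised(state, new_time=0.9, new_velocity=0.3)  # friction: velocity drops, start time kept
update_revised(state, new_time=0.6, new_velocity=0.8)  # 0.8 beats the stored 0.3, so a new start time can win
```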

This adjustment got the four-car system to move at an average speed of two millimeters per second. Not quite as good as the three-car train, but quite close to it.

More twists

The mismatches between expectations and reality did not end there. To test whether the system could learn to recover from failure, the researchers blocked the release valve in one of the units, forcing it into an always-inflated state. The algorithm re-optimized, but the researchers found that it worked even better when the pump still turned on and off, even though the pump wasn’t pushing any air. Apparently, the vibrations helped limit the friction that might otherwise bog the whole system down.

The refinement system, which tried start times close to the maximum, also turned out to be problematic once a train got long enough. With a seven-car example, the system would regularly reach the maximum speed but quickly slow back down. Apparently, the slight variations tested during refinement could be tolerated when a train was small, but they put too many cars out of sync once the train got long enough.

Still, the overall approach was pretty effective, even when applied to such simple hardware. It took two simple properties and turned them into a self-learning system that could respond to environmental changes like friction. The system was scalable, in that it worked well for trains of various lengths. And it was robust to damage, such as when the researchers blocked a valve. In a different experiment, the researchers cut a train in half, and both halves re-optimized their speeds.

While simple, the system provides some insights into how we might think about self-teaching systems. And the experiment reminds us that the real world will throw even the best self-teaching system a few curves.

PNAS, 2021. DOI: 10.1073/pnas.2017015118
