KDE launches updated Slimbook II Linux laptops with faster Intel Core processors

A little more than a year ago, Linux developers KDE and a Spanish hardware manufacturer joined forces to offer the KDE Slimbook, a 13.3-inch laptop running an Ubuntu-based OS with mid-range specs and a mid-range price. Now KDE is back with the Slimbook II, which, like many notebook sequels, is a little bit faster, a little bit thinner, and a little bit lighter than its predecessor.

The original Slimbook wasn’t a performance powerhouse, but it wasn’t a slouch, either. It used sixth-generation (a.k.a. Skylake) Intel Core i5 or i7 processors and offered up to 16 gigs of RAM, 500GB of solid-state storage, and a 1080p display. Its successor jumps to seventh-generation Core i5 and i7 chips, which also brings a move to DDR4 RAM, yielding a moderate performance gain over the first Slimbook.

Other hardware upgrades include a 1TB SSD option, a more powerful Wi-Fi antenna, and a trackpad with improved tactile feedback. The Slimbook II is also about an ounce lighter and a tenth of an inch thinner than the 3-pound, 0.6-inch thick original Slimbook.

But the biggest advantage of the Slimbook II (as with its predecessor) is that the hardware meshes with the pre-installed Linux build, rather than a user taking a Windows machine and converting it to Linux. That means no driver hunting or compatibility issues, among other potential headaches. The pre-installed OS, KDE neon, is built on Ubuntu, and the Slimbook II includes KDE’s productivity apps such as Kontact (email and calendar), digiKam (image processing), and Kdenlive (video editing).

Despite the open-source ethos of the Slimbook II, it’s not exactly a budget-friendly system. Like the original Slimbook, the new Core i5 edition is priced at 699 euros ($856), while the Core i7 model costs 799 euros ($978). But compared to Dell’s Ubuntu-powered XPS 13 Developer Edition, with a $1,400 starting price, it might seem like a bargain to a Linux laptop lover.

[Via Liliputing]

Lone high-energy neutrino likely came from shredded star in distant galaxy

The remains of a shredded star formed an accretion disk around the black hole whose powerful tidal forces ripped it apart. This created a cosmic particle accelerator spewing out fast subatomic particles.

Roughly 700 million years ago, a tiny subatomic particle was born in a galaxy far, far away and began its journey across the vast expanses of our universe. That neutrino finally reached the Earth’s South Pole in October 2019, setting off detectors buried deep beneath the Antarctic ice. A few months earlier, a telescope in California had recorded a bright glow emanating from that same distant galaxy: evidence of a so-called “tidal disruption event” (TDE), most likely the result of a star being shredded by a supermassive black hole.

According to two new papers published in the journal Nature Astronomy (see the DOIs below), that lone neutrino was likely born from the TDE, which acted as a cosmic-scale particle accelerator near the center of the distant galaxy, spewing out high-energy subatomic particles as the star’s matter was consumed by the black hole. This finding also sheds light on the origin of ultrahigh-energy cosmic rays, a question that has puzzled astronomers for decades.

“The origin of cosmic high-energy neutrinos is unknown, primarily because they are notoriously hard to pin down,” said co-author Sjoert van Velzen, a postdoc at New York University at the time of the discovery. “This result would be only the second time high-energy neutrinos have been traced back to their source.”

Neutrinos travel very near the speed of light. John Updike’s 1959 poem, “Cosmic Gall,” pays tribute to two defining features of neutrinos: they have no charge and, for decades, physicists believed they had no mass (they actually have a teeny bit of mass). Neutrinos are the most abundant subatomic particle in the universe, but they very rarely interact with any type of matter. We are bombarded every second by millions of these tiny particles, yet they pass right through us without our even noticing. That’s why Isaac Asimov dubbed them “ghost particles.”

That low rate of interaction makes neutrinos extremely difficult to detect, but because they are so light, they can escape unimpeded (and thus largely unchanged) by collisions with other particles of matter. This means they can provide valuable clues to astronomers about distant systems, further augmented by what can be learned with telescopes across the electromagnetic spectrum, as well as gravitational waves. Together, these different sources of information have been dubbed “multi-messenger” astronomy.

The majority of neutrinos that reach the Earth come from our own Sun, but every now and then, neutrino detectors pick up the rare neutrino that hails from farther afield. Such is the case with this latest detection: a neutrino that began its journey in a faraway, as-yet-unnamed galaxy in the constellation Delphinus, born from the death throes of a shredded star.

A view of the accretion disc around the supermassive black hole, with jet-like structures flowing away from the disc. The extreme mass of the black hole bends spacetime, allowing the far side of the accretion disc to be seen as an image above and below the black hole.

DESY, Science Communication Lab

As we’ve reported previously, it’s a popular misconception that black holes behave like cosmic vacuum cleaners, ravenously sucking up any matter in their surroundings. In reality, only stuff that passes beyond the event horizon—including light—is swallowed up and can’t escape, although black holes are also messy eaters. That means that part of an object’s matter is actually ejected out in a powerful jet. If that object is a star, the process of being shredded (or “spaghettified”) by the powerful gravitational forces of a black hole occurs outside the event horizon, and part of the star’s original mass is ejected violently outward. This in turn can form a rotating ring of matter (aka an accretion disk) around the black hole that emits powerful X-rays and visible light. 

Tidal disruption describes the large forces created when a small body passes very close to a much larger one, like a star that strays too close to a supermassive black hole. “The force of gravity gets stronger and stronger, the closer you get to something. That means the black hole’s gravity pulls the star’s near side more strongly than the star’s far side, leading to a stretching effect,” said co-author Robert Stein of DESY in Germany. “As the star gets closer, this stretching becomes more extreme. Eventually it rips the star apart, and then we call it a tidal disruption event. It’s the same process that leads to ocean tides on Earth, but luckily for us the moon doesn’t pull hard enough to shred the Earth.”
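To put rough numbers on that stretching, astronomers often estimate the tidal disruption radius: the distance inside of which the black hole’s tide overwhelms the star’s own gravity. The sketch below is a back-of-the-envelope illustration only; the black hole mass is an assumed round number, not a value taken from the papers.

```python
# Back-of-the-envelope tidal disruption radius: a star is torn apart
# roughly where the black hole's tide exceeds the star's self-gravity,
# R_t ~ R_star * (M_bh / M_star)**(1/3).
# The black hole mass below is an illustrative assumption, not a value
# from the Nature Astronomy papers.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
R_sun = 6.957e8        # solar radius, m
c = 2.998e8            # speed of light, m/s

M_bh = 1e7 * M_sun               # assumed supermassive black hole mass
M_star, R_star = M_sun, R_sun    # a Sun-like star

r_tidal = R_star * (M_bh / M_star) ** (1 / 3)   # tidal disruption radius
r_horizon = 2 * G * M_bh / c**2                 # Schwarzschild (horizon) radius

print(f"Tidal radius:   {r_tidal:.2e} m")
print(f"Horizon radius: {r_horizon:.2e} m")
# For these numbers the tidal radius lies well outside the horizon,
# so the shredding happens in view, as described above.
```

For much heavier black holes (very roughly 10^8 solar masses and up, for a Sun-like star), the tidal radius slips inside the horizon and the star is swallowed without a visible flare.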

TDEs are likely quite common in our universe, even though only a few have been detected to date. For instance, in 2018, astronomers announced the first direct image of the aftermath of a star being shredded by a black hole 20 million times more massive than our Sun, in a pair of colliding galaxies called Arp 299 about 150 million light years from Earth. And last fall, astronomers recorded the final death throes of a star being shredded by a supermassive black hole, publishing the discovery in Nature Astronomy.

The glow from this most recent TDE was first detected on April 9, 2019 by the Zwicky Transient Facility (ZTF) at California’s Palomar Observatory, which has spotted more than 30 such events since it came online in 2018. Nearly six months later, on October 1, 2019, the IceCube neutrino observatory at the South Pole recorded the signal from a highly energetic neutrino originating from the same direction as the TDE. Just how energetic was it? Co-author Anna Franckowiak of DESY pegged the energy at over 100 teraelectronvolts (TeV), 10 times the maximum energy for subatomic particles that can be produced by the Large Hadron Collider.

Artistic rendering of the IceCube lab at the South Pole. A distant source emits neutrinos that are then detected below the ice by IceCube sensors.

IceCube/NSF

The likelihood of detecting this solitary high-energy neutrino was just 1 in 500. “This is the first neutrino linked to a tidal disruption event, and it brings us valuable evidence,” said Stein. “Tidal disruption events are not well understood. The detection of the neutrino points to the existence of a central, powerful engine near the accretion disc, spewing out fast particles. And the combined analysis of data from radio, optical and ultraviolet telescopes gives us additional evidence that the TDE acts as a gigantic particle accelerator.”

It’s yet one more example of all the new knowledge to be gained by combining multiple data sources to get different perspectives on the same celestial event. “The combined observations demonstrate the power of multi-messenger astronomy,” said co-author Marek Kowalski of DESY and Humboldt University in Berlin. “Without the detection of the tidal disruption event, the neutrino would be just one of many. And without the neutrino, the observation of the tidal disruption event would be just one of many. Only through the combination could we find the accelerator and learn something new about the processes inside.”

As for the future, “We might only be seeing the tip of the iceberg here. In the future, we expect to find many more associations between high-energy neutrinos and their sources,” said Francis Halzen of the University of Wisconsin-Madison, who was not directly involved in the study. “There is a new generation of telescopes being built that will provide greater sensitivity to TDEs and other prospective neutrino sources. Even more essential is the planned extension of the IceCube neutrino detector, that would increase the number of cosmic neutrino detections at least tenfold.”

DOI: Nature Astronomy, 2021. 10.1038/s41550-020-01295-8

DOI: Nature Astronomy, 2021. 10.1038/s41550-021-01305-3

Johnson & Johnson’s vaccine safe and effective, FDA review concludes

A sign at the Johnson & Johnson campus on August 26, 2019 in Irvine, California.

Johnson & Johnson’s single-shot COVID-19 vaccine is effective and has a “favorable safety profile,” according to scientists at the Food and Drug Administration.

The endorsement comes out of a review released by the regulatory agency Wednesday. The FDA has been looking over data on Johnson & Johnson’s vaccine since February 4, when the company applied for Emergency Use Authorization. The favorable review is a positive sign ahead of this Friday, February 26, when the FDA will convene an advisory committee to recommend whether the agency should grant the EUA. The FDA isn’t obligated to follow the committee’s recommendation, but it usually does.

If Johnson & Johnson’s vaccine is granted an EUA, it will become the third COVID-19 vaccine available for use in the US. The other two vaccines are both two-dose, mRNA-based vaccines, one made by Pfizer and its German partner BioNTech and the other from Moderna, which developed its vaccine in collaboration with researchers at the US National Institutes of Health.

According to data from a Phase III clinical trial involving more than 44,000 participants, Johnson & Johnson’s vaccine is less effective than the two mRNA vaccines, which were both around 95 percent effective at preventing symptomatic COVID-19. Johnson & Johnson’s vaccine was found to be 66 percent effective overall at preventing moderate to severe COVID-19. However, efficacy differed across trial locations: 72 percent in the United States, 66 percent in Latin America, and 57 percent in South Africa. The differences may be partly explained by the coronavirus variants circulating in Latin America and South Africa, which have been found to reduce the efficacy of vaccines.

Favorable review

But overall, Johnson & Johnson’s vaccine was 85 percent effective against severe COVID-19. Even in South Africa, the vaccine was 82 percent effective against severe and critical COVID-19, according to the FDA’s review.

Counting from the time of the shot, six vaccinated participants and 42 placebo recipients were hospitalized. Looking only at hospitalizations that began at least 28 days after vaccination, there were zero in the vaccinated group, compared with 16 in the placebo group. There were seven deaths in the trial, all of them in the placebo group.
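For readers curious how counts like these turn into an efficacy figure, the standard calculation is one minus the relative risk. The sketch below assumes the roughly 44,000 participants split about evenly between the vaccine and placebo arms; the exact arm sizes are in the FDA briefing document, not this article, so treat the output as illustrative.

```python
# Illustrative efficacy calculation: efficacy = 1 - relative risk,
# where relative risk = (risk in vaccinated) / (risk in placebo).
# Arm sizes are an assumption (an even split of ~44,000 participants).

def efficacy(cases_vaccine, cases_placebo, n_vaccine, n_placebo):
    risk_vaccine = cases_vaccine / n_vaccine
    risk_placebo = cases_placebo / n_placebo
    return 1 - risk_vaccine / risk_placebo

n_per_arm = 44_000 // 2   # assumed even split, see note above

# Hospitalizations counted from the time of the shot: 6 vaccinated vs. 42 placebo
print(f"vs. hospitalization (any time): {efficacy(6, 42, n_per_arm, n_per_arm):.0%}")

# Hospitalizations beginning 28+ days after vaccination: 0 vs. 16
print(f"vs. hospitalization (28+ days): {efficacy(0, 16, n_per_arm, n_per_arm):.0%}")
```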

Though the efficacy numbers are lower than the mRNA vaccines, experts spotlight the high efficacy against severe disease and death—the most critical functions of any vaccine. Moreover, Johnson & Johnson’s vaccine has clear logistical advantages over the other vaccines. It is only one shot, rather than two, and it also doesn’t require freezer temperatures during shipping.

In terms of side effects, the FDA found that the vaccine has a favorable safety profile, with no specific safety concerns and the most common effects being mild to moderate pain at the injection site, headache, fatigue, and myalgia.

The vaccine’s fate now moves to the FDA advisory committee, which will dive deeper into all the data. A Johnson & Johnson executive said in congressional testimony this week that, if the FDA grants the EUA, the company would provide 4 million doses right away, with a total of 20 million ready by the end of March and 100 million by the end of June.

D-Wave’s hardware outperforms a classical computer

Early on in D-Wave’s history, the company made bold claims about its quantum annealer outperforming algorithms run on traditional CPUs. Those claims turned out to be premature, as improvements to these algorithms pulled the traditional hardware back in front. Since then, the company has been far more circumspect about its performance claims, even as it brought out newer generations of hardware.

But in the run-up to the latest hardware, the company apparently became a bit more interested in performance again. It recently got together with Google scientists to demonstrate a significant boost in performance compared to a classical algorithm, with the gap growing as the problem became more complex, although the company’s scientists were very upfront about the prospect that classical performance could be boosted further. Still, there are a lot of caveats even beyond that, so it’s worth taking a detailed look at what the company did.

Magnets, how do they flip?

D-Wave’s system is based on a large collection of quantum devices that are connected to some of their neighbors. Each device can have its state set separately, and the devices are then given the chance to influence their neighbors as the system moves through different states and individual devices change their behavior. These transitions are the equivalent of performing operations. And because of the quantum nature of these devices, the hardware seems to be able to “tunnel” to new states, even if the only route between them passes through high-energy states that would otherwise be out of reach.

In the end, if the system is operated properly, the final state of the devices can be read out as an answer to the calculation performed by the operations. And because of the quantum effects, it can potentially provide solutions that a classical computer might find difficult to reach.

Validating that idea, however, has proven challenging, as noted above. Where the system has done best is in modeling quantum systems that look a lot like the quantum annealing hardware itself. And that’s what the D-Wave/Google team has done here. The problem can be described as an array of quantum magnets, with the orientation of each magnet influencing that of its neighbors. The system is in the lowest energy state when all of a magnet’s neighbors have the opposite orientation. Depending on the precise configuration of the array, however, that might not be possible to satisfy.

Now, imagine that you start the system in a configuration where the magnets aren’t in a stable state—there are too many cases where neighboring magnets have the same orientation. Magnets will start flipping to get there, but in the process, they may cause their neighbors to flip. The whole thing may work through a variety of intermediate configurations to make its way toward stability. Because of the quantum nature of the device’s components, the progression through different states may involve some steps that are, to our non-quantum brains, difficult to understand.
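A crude classical stand-in makes the “can’t satisfy everyone” situation concrete. The sketch below strips out all of the quantum behavior and treats each magnet as a spin of +1 or -1 that wants to disagree with its neighbors; wiring just three of them into a triangle already makes a fully satisfied arrangement impossible. The energy function is the textbook antiferromagnet one, not D-Wave’s actual hardware model.

```python
# Minimal classical stand-in for the array of magnets described above:
# each "magnet" is a spin of +1 or -1, and the energy adds +1 for every
# neighboring pair that points the same way and -1 for every pair that
# points opposite ways (lower is better). Quantum effects are ignored.

from itertools import product

def energy(spins, edges):
    """Sum of spin products over connected pairs: aligned pairs cost energy."""
    return sum(spins[i] * spins[j] for i, j in edges)

# A triangle of mutually connected magnets: no assignment can make every
# neighbor pair anti-aligned, so the lowest-energy state is "frustrated".
triangle = [(0, 1), (1, 2), (0, 2)]

best = min(product([-1, 1], repeat=3), key=lambda s: energy(s, triangle))
print(best, energy(best, triangle))   # best energy is -1, not the ideal -3
```

On a triangle, the best any assignment can do is satisfy two of the three pairs; that built-in frustration is part of what makes larger versions of these arrays hard to analyze.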

Quantum Monte Carlo

This system is interesting for a couple of reasons: it’s an approachable way to examine complicated quantum behaviors, and other interesting problems can be mapped onto it. So researchers have figured out how to study its behavior using computer algorithms. The one the research team says shows the highest performance is what’s called path-integral Monte Carlo. “Path-integral” simply indicates that there are multiple valid paths between a starting state and a low-energy state, and the software looks at a subset of them, since there are so many. “Monte Carlo” is an indication that the paths it does sample are chosen randomly.

But the D-Wave hardware itself looks a lot like an array of quantum magnets, so it’s possible to configure it to behave like the system being modeled. Done properly, that configuration could let the D-Wave machine recapitulate the system’s behavior very efficiently.

This is what the team tried for the paper, but it ran into a little problem. With the traditional computing algorithm, it’s easy to essentially stop the system and look at how it’s evolving. With the D-Wave system, things moved so quickly that the hardware ended up reaching the final state before it could be sampled. Instead, the researchers had to arrange some fairly tortured configurations to slow the D-Wave hardware down long enough to have a look at what was going on.

The performance measurement the team cared about isn’t the final state; instead, it’s how long a given configuration of magnets takes to reach a stable, equilibrium state.
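As a toy illustration of what “time to relax” means, the sketch below starts a simple chain of classical spins fully aligned (maximally unstable for neighbors that prefer to disagree) and flips randomly chosen spins whenever a flip lowers the energy, counting attempts until no single flip helps. This is plain classical relaxation, not the path-integral Monte Carlo benchmarked in the paper and not the quantum dynamics the D-Wave chip runs; it only shows the kind of quantity being measured.

```python
# Toy relaxation-time measurement: greedily flip +/-1 spins on a 1D chain
# whose neighbors prefer to anti-align, and count how many flip attempts
# it takes before no single flip can lower the energy any further.

import random

def relax(n_spins=64, seed=0):
    rng = random.Random(seed)
    spins = [1] * n_spins                       # fully aligned: unstable start
    neighbors = [[j for j in (i - 1, i + 1) if 0 <= j < n_spins]
                 for i in range(n_spins)]

    attempts = 0
    while True:
        improved = False
        for i in rng.sample(range(n_spins), n_spins):    # one random sweep
            attempts += 1
            local = sum(spins[i] * spins[j] for j in neighbors[i])
            if local > 0:                   # flipping spin i lowers the energy
                spins[i] = -spins[i]
                improved = True
        if not improved:                    # no flip helped: locally stable
            return attempts

print(f"Reached a stable state after {relax()} flip attempts")
```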

For generating this measure, the researchers found that the D-Wave hardware could outperform the x86 CPU they were using (a hyperthreaded Xeon with 26 cores). The advantage grew larger as the research team increased the complexity of the magnets’ arrangement, reaching up to 3 million times faster. And while the entire D-Wave system didn’t behave as a single quantum object, there were quantum interactions that extended beyond the smallest groups of magnets in the D-Wave hardware (linked groups of four).

The caveats

To start with, the gap in performance is between a single Xeon and a chip that requires a cabinet-sized cooling system with some pretty hefty energy use. Should the classical algorithm scale with additional processors, it should be relatively simple to put this on a cluster and take a big chunk out of D-Wave’s speed advantage. But Ars’ own Chris Lee notes that even on the simpler problems, the 26-core Xeon was already struggling with any increase in complexity. This might be a sign that there are only limited gains we can expect from throwing more processors at the issue.

That said, D-Wave was also not operating at its full advantage. While it recently introduced a new generation of processors, the work was done on an experimental processor that was part of the development of that new generation. It had the same hardware layout (the same number of quantum devices and the same connections among them) as the previous generation of hardware. But it was made with a new manufacturing process that lowered the noise in the system, a process that has since been put into full use in the latest generation of chips.

In addition, the new generation more than doubles the number of quantum devices on the chip and boosts the connectivity among them. These advances should allow the system to model larger and more complicated magnet arrays, extending D-Wave’s advantage.

Finally, the team behind the work emphasizes that there may be ways to optimize the performance of the classical algorithm as well, saying, “Our study does not constitute a demonstration of superiority over all possible classical methods.” How this all shakes out will only become clear with additional work, so we may not have an update on where performance stands for a couple of years.

Still, it’s interesting that D-Wave has become so interested in performance again. The company recently announced that it had adapted its control software so that a specific type of problem (a quadratic unconstrained binary optimization, or QUBO) could both be run on a D-Wave machine and be sent to the Qiskit software package, which lets it run on IBM’s quantum computers. This makes sense for the company’s user base; a large percentage of its customers are companies that are simply trying to make sure they’re ready for any disruptive computing technologies, so they are looking at all the quantum hardware on the market. But in the press release announcing the news, the company says this “opens the door to performance comparisons.”
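For context, a QUBO asks for the assignment of 0/1 variables that minimizes a quadratic cost function. The toy below defines a three-variable instance with made-up weights and solves it by brute force; real workloads would go through D-Wave’s or IBM’s own tooling rather than enumeration.

```python
# A tiny QUBO (quadratic unconstrained binary optimization) instance,
# solved by brute force. The weights are made up for illustration.

from itertools import product

# Q maps variable pairs to weights; diagonal entries are the linear terms.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -2.0,   # linear terms
    (0, 1):  2.0, (1, 2):  0.5,                 # quadratic couplings
}

def qubo_value(x, Q):
    """Cost = sum over (i, j) of Q[(i, j)] * x[i] * x[j], with each x[i] in {0, 1}."""
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

best = min(product([0, 1], repeat=3), key=lambda x: qubo_value(x, Q))
print(best, qubo_value(best, Q))   # prints (1, 0, 1) -3.0 for these weights
```

Brute force works here only because there are 2^3 = 8 candidate assignments; annealers and other quantum hardware are aimed at instances where that kind of enumeration is hopeless.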

DOI: Nature Communications, 2021. 10.1038/s41467-021-20901-5
