Security

Netflix to Linux users: Patch SACK Panic kernel bug now to stop remote attacks

Organizations running large fleets of production Linux computers are being urged to apply new patches to stop remote attackers from crashing the machines. Three flaws affect how the Linux kernel handles TCP networking and one affects the FreeBSD TCP stack. 

The most serious of the four flaws, CVE-2019-11477, is called SACK Panic, a name referring to the Linux kernel’s TCP Selective Acknowledgement (SACK) capabilities.

Remote attackers can exploit this flaw to trigger a kernel ‘panic’ that could crash a machine, leading to a denial of service. It affects Linux kernel versions 2.6.29 and above.

Netflix detailed the bugs in an advisory posted on GitHub and has collectively rated them as critical-severity flaws. However, Red Hat individually rates SACK Panic as having ‘important’ severity, while it considers the remaining bugs ‘moderate’.

But Netflix’s critical rating would make sense if remote attackers could down the video-streaming giant’s Linux machines, which are likely hosted on Amazon Web Services (AWS) infrastructure. 

On that note, AWS has released updates for the three Linux bugs, which affected AWS Elastic Beanstalk, Amazon Linux, Linux-based EC2 instances, Amazon Linux WorkSpaces, and Amazon’s Kubernetes container service.

Some services, such as Amazon ElastiCache, are not vulnerable in their default settings but could become vulnerable if customers have changed the configuration.

The other bugs are CVE-2019-11478, or SACK Slowness, which affects Linux 4.15 and below; CVE-2019-5599, another SACK Slowness bug, which affects FreeBSD 12; and CVE-2019-11479, which causes excess resource consumption.

The three Linux flaws are related and affect how the kernel handles TCP SACK packets with a low Maximum Segment Size (MSS). Red Hat notes in its advisory that the impact is limited to denial of service “at this time” and that the flaws can’t be used for privilege escalation or to leak information.
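Netflix’s advisory also lists interim workarounds for hosts that cannot be patched immediately, chief among them disabling SACK processing via the net.ipv4.tcp_sack sysctl. As a minimal sketch (our own illustration, not tooling from the advisory; it assumes the standard Linux /proc/sys interface), an administrator could check where a host currently stands:

from pathlib import Path

# Standard sysctl path for TCP SACK on Linux (the path is real;
# this check itself is a sketch, not code from the advisory).
TCP_SACK = Path("/proc/sys/net/ipv4/tcp_sack")

try:
    enabled = TCP_SACK.read_text().strip() == "1"
    print(f"TCP SACK enabled: {enabled}")
except FileNotFoundError:
    print("tcp_sack sysctl not found; not a Linux host with the expected /proc interface")

Disabling SACK (writing 0 to that path as root) closes off the attack at the cost of degraded performance on lossy links, so it is a stopgap, not a substitute for the kernel patches.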

SACK is a mechanism designed to mitigate the network inefficiencies caused by TCP packet loss between sender and receiver.

The engineers who drew up SACK in an IETF standard (RFC 2018) explain: “TCP may experience poor performance when multiple packets are lost from one window of data. With the limited information available from cumulative acknowledgments, a TCP sender can only learn about a single lost packet per round trip time. An aggressive sender could choose to retransmit packets early, but such retransmitted segments may have already been successfully received.

“A Selective Acknowledgment (SACK) mechanism, combined with a selective repeat retransmission policy, can help to overcome these limitations.  The receiving TCP sends back SACK packets to the sender informing the sender of data that has been received. The sender can then retransmit only the missing data segments.”   
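To make that concrete, here is a toy Python sketch (our illustration, not code from any advisory; the ten-segment transfer and the loss pattern are invented) showing how much more a sender learns from SACK blocks than from a cumulative acknowledgment alone:

# Toy illustration (not a TCP implementation): the sender transmits
# segments 0..9 and the network drops segments 3 and 7.
sent = list(range(10))
lost = {3, 7}
received = [s for s in sent if s not in lost]

# Cumulative ACK: the receiver can only acknowledge the highest in-order
# segment, so the sender sees a single hole per round trip.
cumulative_ack = 0
for s in received:
    if s == cumulative_ack:
        cumulative_ack += 1
print("cumulative ACK covers segments below", cumulative_ack)  # -> 3

# SACK: the receiver also reports the contiguous blocks it has received,
# exposing every hole at once.
sack_blocks = []
for s in received:
    if sack_blocks and s == sack_blocks[-1][1] + 1:
        sack_blocks[-1][1] = s
    else:
        sack_blocks.append([s, s])
print("SACK blocks:", sack_blocks)  # -> [[0, 2], [4, 6], [8, 9]]

covered = {s for lo, hi in sack_blocks for s in range(lo, hi + 1)}
print("retransmit only:", [s for s in sent if s not in covered])  # -> [3, 7]

With only the cumulative ACK the sender discovers the hole at segment 3 and nothing beyond it; the SACK blocks reveal both holes in a single round trip.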

The crash can happen because of a data structure used in the Linux TCP implementation called the socket buffer (SKB), which is capable of holding up to 17 fragments of packet data, according to Red Hat.

Once that limit is reached, the result can be a kernel panic. The other factor is the MSS, the maximum segment size parameter, which specifies the total amount of data contained in a reconstructed TCP segment.

“A remote user can trigger this issue by setting the Maximum Segment Size (MSS) of a TCP connection to its lowest limit of 48 bytes and sending a sequence of specially crafted SACK packets. Lowest MSS leaves merely eight bytes of data per segment, thus increasing the number of TCP segments required to send all data,” explains Red Hat.
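Plugging Red Hat’s figures into a quick back-of-the-envelope script (a simplified sketch of our own; the 64 KB transfer is an arbitrary example and the kernel’s real SKB bookkeeping is more involved) shows the amplification a 48-byte MSS creates:

# Back-of-the-envelope math from the figures quoted above (simplified model).
PAYLOAD_AT_MIN_MSS = 8         # data bytes per segment at the 48-byte MSS floor, per Red Hat
PAYLOAD_AT_TYPICAL_MSS = 1448  # rough payload at a typical Ethernet MSS (illustrative)
SKB_FRAG_LIMIT = 17            # fragments one socket buffer can hold, per Red Hat

data = 64 * 1024               # example: 64 KB of queued send data (illustrative)

tiny = -(-data // PAYLOAD_AT_MIN_MSS)          # ceiling division
normal = -(-data // PAYLOAD_AT_TYPICAL_MSS)
print(f"{data} bytes -> {tiny} segments at minimum MSS vs {normal} at a typical MSS")
# 65536 bytes -> 8192 segments at minimum MSS vs 46 at a typical MSS

# Crafted SACKs force the kernel to merge and re-split this flood of tiny
# segments; pushing a socket buffer past its fragment budget is the
# condition, described above, that can trigger the panic.
print(f"fragment budget per SKB: {SKB_FRAG_LIMIT}")

Roughly 8,192 tiny segments instead of 46 ordinary ones gives an attacker plenty of material with which to drive a socket buffer past its 17-fragment limit.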

Security

The Five Pillars of (Azure) Cloud-based Application Security

This one-hour webinar from GigaOm brings together experts in Azure cloud application migration and security, featuring GigaOm analyst Jon Collins and special guests from Fortinet: Daniel Schrader, Director of Product Marketing for Public Cloud, and Aidan Walden, Global Director of Public Cloud Architecture and Engineering.

These interesting times have accelerated the drive toward digital transformation, application rationalization, and migration to cloud-based architectures. Enterprise organizations are looking to increase efficiency without impacting performance or increasing risk, whether from infrastructure resilience or end-user behaviors.

Success requires a combination of best practice and appropriate use of technology, depending on where the organization is on its cloud journey. Elements such as zero-trust access and security-driven networking need to be deployed in parallel with security-first operations, breach prevention and response.

If you are looking to migrate applications to the cloud and want to be sure your approach maximizes delivery whilst minimizing risk, this webinar is for you.

Security

Data Management and Secure Data Storage for the Enterprise

This free one-hour webinar from GigaOm Research brings together experts in data management and security, featuring GigaOm analyst Enrico Signoretti and special guest Jonathan Halstuch of RackTop Systems. The discussion will focus on data storage and how to protect data against cyberattacks.

Most of the recent news coverage and analysis of cyberattacks focuses on hackers gaining access to and control of critical systems. Yet it is rarely mentioned that the most valuable asset for the organizations under attack is the data contained in those systems.

In this webinar, you will learn about the risks and costs of a poor data security management approach, and how to improve your data storage to prevent and mitigate the consequences of a compromised infrastructure.

Security

CISO Podcast: Talking Anti-Phishing Solutions

Simon Gibson earlier this year published the report, “GigaOm Radar for Phishing Prevention and Detection,” which assessed more than a dozen security solutions focused on detecting and mitigating email-borne threats and vulnerabilities. As Gibson noted in his report, email remains a prime vector for attack, reflecting the strategic role it plays in corporate communications.

Earlier this week, Gibson’s report was a featured topic of discussion on David Spark’s popular CISO Security Vendor Relationship Podcast. In it, Spark interviewed a pair of chief information security officers (Mike Johnson, CISO for Salesforce, and James Dolph, CISO for Guidewire Software) to get their take on the role of anti-phishing solutions.

“I want to first give GigaOm some credit here for really pointing out the need to decide what to do with detections,” Johnson said when asked for his thoughts about selecting an anti-phishing tool. “I think a lot of companies charge into a solution for anti-phishing without thinking about what they are going to do when the thing triggers.”

As Johnson noted, the needs and vulnerabilities of a large organization aligned on Microsoft 365 are very different from those of a smaller outfit working with G Suite. A malicious Excel macro-laden file, for example, poses a credible threat to a Microsoft shop and therefore argues for a detonation solution to detect and neutralize malicious payloads before they can spread and morph. A smaller company, on the other hand, is more exposed to business email compromise (BEC) attacks, since spending authority in such businesses is often spread among many employees.

Gibson’s radar report describes both in-line and out-of-band solutions, but Johnson said cloud-aligned infrastructures argue against traditional in-line schemes.

“If you put an in-line solution in front of [Microsoft] 365 or in front of G Suite, you are likely decreasing your reliability, because you’ve now introduced this single point of failure. Google and Microsoft have this massive amount of reliability that is built in,” Johnson said.

So how should IT decision makers go about selecting an anti-phishing solution? Dolph answered that question with a series of questions of his own:

“Does it nail the basics? Does it fit with the technologies we have in place? And then secondarily, is it reliable, is it tunable, is it manageable?” he asked. “Because it can add a lot of overhead, especially if you have a small team, if these tools are really disruptive to the email flow.”

Dolph concluded by noting that it’s important for solutions to provide insight that can help organizations target their protections, as well as support both training and awareness around threats. Finally, he urged organizations to consider how they can measure the effectiveness of solutions.

“I may look at other solutions in the future and how do I compare those solutions to the benchmark of what we have in place?”

Listen to the podcast: CISO Security Vendor Relationship Podcast
