Netflix to Linux users: Patch SACK Panic kernel bug now to stop remote attacks


Organizations running large fleets of production Linux computers are being urged to apply new patches to stop remote attackers from crashing the machines. Three flaws affect how the Linux kernel handles TCP networking and one affects the FreeBSD TCP stack. 

The most serious of the four flaws, CVE-2019-11477, is called SACK Panic, referring to the Linux kernel’s TCP Selective Acknowledgement (SACK) capabilities. 

Remote attackers can exploit this flaw to trigger a kernel ‘panic’ that could crash a machine, leading to a denial of service. It affects Linux kernel versions 2.6.29 and above.

Netflix detailed the bugs in an advisory posted on GitHub and collectively rated them as critical-severity flaws. Red Hat, however, individually rates SACK Panic as having ‘important’ severity, while it considers the remaining bugs ‘moderate’.

But Netflix’s critical rating would make sense if remote attackers could down the video-streaming giant’s Linux machines, which are likely hosted on Amazon Web Services (AWS) infrastructure. 

On that note, AWS has released updates for the three Linux bugs, which affected AWS Elastic Beanstalk, Amazon Linux, Linux-based EC2 instances, Amazon Linux WorkSpaces, and Amazon’s Kubernetes container service.

Some services, such as Amazon ElastiCache, are not vulnerable in their default configuration but could become so if customers have changed settings.

The other bugs are CVE-2019-11478, or SACK Slowness, which affects Linux 4.15 and below; CVE-2019-5599, another SACK Slowness bug, which affects FreeBSD 12; and CVE-2019-11479, which causes excess resource consumption.

The three Linux flaws are related and affect how the kernel handles TCP SACK packets with a low Maximum Segment Size (MSS). Red Hat notes in its advisory that the impact is limited to denial of service “at this time” and that the flaws cannot be used for privilege escalation or to leak information.

SACK is a mechanism designed to mitigate the network inefficiencies caused by TCP packet loss between sender and receiver.

The engineers who drew up SACK in an IETF standard explain: “TCP may experience poor performance when multiple packets are lost from one window of data. With the limited information available from cumulative acknowledgments, a TCP sender can only learn about a single lost packet per round trip time. An aggressive sender could choose to retransmit packets early, but such retransmitted segments may have already been successfully received.

“A Selective Acknowledgment (SACK) mechanism, combined with a selective repeat retransmission policy, can help to overcome these limitations.  The receiving TCP sends back SACK packets to the sender informing the sender of data that has been received. The sender can then retransmit only the missing data segments.”   
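To make the quoted mechanism concrete, here is a minimal Python sketch, with illustrative helper functions only (not drawn from any real TCP implementation), contrasting what a sender can infer from a cumulative ACK with what SACK blocks tell it:

```python
# Illustrative sketch only, not kernel code: how selective acknowledgment
# narrows retransmission compared with cumulative acknowledgments.

def cumulative_retransmit(sent, received):
    """With only a cumulative ACK, the sender learns about the first gap
    per round trip and must consider resending everything from there."""
    highest_contiguous = 0
    for seq in sent:
        if seq in received and seq == highest_contiguous:
            highest_contiguous += 1
        else:
            break
    return [seq for seq in sent if seq >= highest_contiguous]

def sack_retransmit(sent, sack_blocks):
    """With SACK, the receiver reports the ranges it already holds,
    so the sender retransmits only the actual holes."""
    acked = set()
    for start, end in sack_blocks:  # inclusive ranges of received segments
        acked.update(range(start, end + 1))
    return [seq for seq in sent if seq not in acked]

sent = list(range(10))                        # segments 0..9 in one window
received = {0, 1, 2, 5, 6, 7, 8, 9}           # segments 3 and 4 were lost
print(cumulative_retransmit(sent, received))  # [3, 4, 5, 6, 7, 8, 9] -- wasteful
print(sack_retransmit(sent, [(0, 2), (5, 9)]))  # [3, 4] -- only the holes
```

In the example, a cumulative ACK forces segments 3 through 9 to be considered for retransmission, while SACK narrows the work to the two segments that were actually lost.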

The crash can happen due to a data structure used in the Linux TCP implementation called the Socket Buffer (SKB), which is capable of holding up to 17 fragments of packet data, according to Red Hat.

Once that limit is reached, the result can be a kernel panic. The other factor is the MSS, a parameter that specifies the maximum amount of data a single reconstructed TCP segment can contain.

“A remote user can trigger this issue by setting the Maximum Segment Size (MSS) of a TCP connection to its lowest limit of 48 bytes and sending a sequence of specially crafted SACK packets. Lowest MSS leaves merely eight bytes of data per segment, thus increasing the number of TCP segments required to send all data,” explains Red Hat.
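A quick back-of-the-envelope sketch in Python, using only the figures quoted above (a 48-byte minimum MSS leaving eight data bytes per segment, and the 17-fragment SKB limit), shows the amplification at work:

```python
# Back-of-the-envelope sketch of the amplification Red Hat describes.
# The constants come from the advisories quoted above; the helper
# itself is illustrative, not kernel code.

MIN_MSS_PAYLOAD = 8      # MSS forced to 48 bytes leaves ~8 data bytes/segment
TYPICAL_PAYLOAD = 1460   # common Ethernet MSS, for comparison
SKB_MAX_FRAGS = 17       # an SKB holds at most 17 fragments (per Red Hat)

def segments_needed(total_bytes, payload_per_segment):
    # ceiling division: TCP segments required to carry the data
    return -(-total_bytes // payload_per_segment)

queued = 64 * 1024  # 64 KB sitting in a sender's retransmission queue
print(segments_needed(queued, TYPICAL_PAYLOAD))  # 45 segments
print(segments_needed(queued, MIN_MSS_PAYLOAD))  # 8192 segments

# Crafted SACKs that force the kernel to merge this flood of tiny
# segments can push an SKB past its 17-fragment limit, tripping the
# kernel sanity check that causes the panic.
```

Beyond patching, Netflix’s advisory also lists workarounds, such as disabling SACK processing (setting net.ipv4.tcp_sack to 0) or blocking connections with a low MSS, both at some cost to performance.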


Phish Fight: Securing Enterprise Communications

Yes, much of the world may have moved on from email to social media and culturally dubious TikTok dances, yet traditional electronic mail remains a foundation of business communication. And sadly, it remains a prime vector for malware, data leakage, and phishing attacks that can undermine enterprise protections. It doesn’t have to be that way.

In a just-released report titled “GigaOm Radar for Phishing Prevention and Detection,” GigaOm analyst Simon Gibson surveyed more than a dozen enterprise-focused email security solutions. He found a range of approaches to securing communications that can often be fitted together to provide critical, defense-in-depth protection against even determined attackers.

Figure 1. GigaOm Radar for Email Phishing Prevention and Detection

“When evaluating these vendors and their solutions, it is important to consider your own business and workflow,” Gibson writes in the report, stressing the need to deploy solutions that best address your organization’s business workflow and email traffic. “For some it may be preferable to settle on one comprehensive solution, while for others building a best-of-breed architecture from multiple vendors may be preferable.”

In a field of competent solutions, Gibson found that Forcepoint, purchased recently by Raytheon, stood apart thanks to the layered protections provided by its Advanced Classification Engine. Area 1 and Zimperium, meanwhile, are both leaders that exhibit significant momentum, with Area 1 boosted by its recent solution partnership with Virtru, and Zimperium excelling in its deep commitment to mobile message security.

A mobile focus is timely, Gibson says in a video interview for GigaOm. Companies are “turning the spigot on,” he says, enabling unprecedented access to and reliance on mobile devices, which is creating an urgent need to get ahead of threats.

Gibson’s conclusion in the report? He singles out three things: Defense in depth, awareness of existing patterns and infrastructure, and a healthy respect for the “human factor” that can make security so hard to lock down.


When Is a DevSecOps Vendor Not a DevSecOps Vendor?

DevOps’ general aim is to enable a more efficient process for producing software and technology solutions, bringing stakeholders together to speed up delivery. But we know from experience that this inherently creative, outcome-driven approach often forgets about one thing until too late in the process: security. Too often, security is brought into the timeline just before deployment, risking last-minute headaches and major delays. The security team is pushed into being the Greek chorus of the process, “ruining everyone’s fun” by demanding changes and slowing things down.

But as we know, in the complex, multi-cloud, containerized environments we find ourselves in, security is becoming more important, and more challenging, than ever. And the costs of security failure are measured not only in slower deployment, but in compliance breaches and reputational damage.

The term “DevSecOps” has been coined to characterize how security needs to be at the heart of the DevOps process. This is part principle and part tooling. As a principle, DevSecOps fits with the concept of “shifting left”: ensuring that security is addressed as early as possible in the development process. So far, so simple.

From a tooling perspective, however, things get more complicated, not least because the market has seen a number of platforms marketing themselves as DevSecOps. As we have been writing our Key Criteria report on the subject, we have learned that not all DevSecOps vendors are necessarily DevSecOps vendors. Specifically, we have learned to distinguish capabilities that directly enable the goals of DevSecOps from a process perspective, from those designed to support DevSecOps practices. We could define them as: “Those that do, and those that help.”

This is how to tell the two types of vendor apart and how to use them.

Vendors Enabling DevSecOps: “Tools That Do”

A number of tools work to facilitate the DevSecOps process, so let’s bite the bullet and call them DevSecOps tools. They help teams set out each stage of software development, bringing siloed teams together behind a unified vision that allows fast, high-quality development, with security considerations at its core. DevSecOps tools work across the development process, for example:

  • Create: Help to set and implement policy
  • Develop: Apply guidance to the process and aid its implementation
  • Test: Facilitate and guide security testing procedures
  • Deploy: Provide reports to assure confidence to deploy the application

The key element that sets these toolsets apart is their ability to automate and reduce friction within the development process. They will prompt action, stop a team from moving from one stage to the next if security concerns have not been adequately addressed, and guide the development roadmap from start to finish.
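As a rough illustration of that gating behavior, consider this minimal Python sketch; the stage names, checks, and function are hypothetical, standing in for what a real DevSecOps platform automates:

```python
# Hypothetical sketch of the stage-gating a "tool that does" automates:
# each stage declares security checks, and the pipeline refuses to
# advance until every check for the current stage has passed.

STAGES = ["create", "develop", "test", "deploy"]

# Placeholder checks standing in for real policy, guidance,
# testing, and reporting integrations.
SECURITY_CHECKS = {
    "create":  ["policy_defined"],
    "develop": ["secure_coding_guidance_applied"],
    "test":    ["sast_passed", "dast_passed"],
    "deploy":  ["risk_report_approved"],
}

def advance(current_stage: str, results: dict) -> str:
    """Return the next stage, or raise if security gates are unmet."""
    failed = [c for c in SECURITY_CHECKS[current_stage] if not results.get(c)]
    if failed:
        raise RuntimeError(f"Gate blocked at '{current_stage}': {failed} unmet")
    nxt = STAGES.index(current_stage) + 1
    return STAGES[nxt] if nxt < len(STAGES) else "released"

print(advance("test", {"sast_passed": True, "dast_passed": True}))  # "deploy"
try:
    advance("test", {"sast_passed": True, "dast_passed": False})
except RuntimeError as err:
    print(err)  # Gate blocked at 'test': ['dast_passed'] unmet
```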

Supporting DevSecOps: “Tools That Help”

In this category we place tools that aid the execution and monitoring of good DevSecOps principles. Security scanning and application/infrastructure hardening tools are a key element of these processes: software composition analysis (SCA) forms part of the develop stage, static and dynamic application security testing (SAST/DAST) are integral to the test stage, and runtime application self-protection (RASP) is key to the deploy stage.

Tools like these are a vital layer of security tooling, especially just before deployment, and they often come with APIs so they can be plugged into the CI/CD process. However, while these capabilities are very important to DevSecOps, they play more of a supporting role rather than being DevSecOps tools per se.
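As a sketch of that supporting role, here is a hypothetical CI step in Python that submits a build artifact to a scanning service over an API and fails the pipeline on serious findings; the endpoint, URLs, and response format are invented for illustration:

```python
# Hypothetical sketch of a "tool that helps" wired into CI: call a
# scanner's API before deployment and fail the build on findings.
# The scanner endpoint and response shape are invented for illustration.

import json
import sys
import urllib.request

def scan_artifact(artifact_url: str) -> list:
    """Submit an artifact to a (fictional) scanning service and
    return its reported findings."""
    req = urllib.request.Request(
        "https://scanner.example.com/api/v1/scan",  # placeholder endpoint
        data=json.dumps({"artifact": artifact_url}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("findings", [])

if __name__ == "__main__":
    findings = scan_artifact("https://builds.example.com/app-1.2.3.tar.gz")
    blocking = [f for f in findings if f.get("severity") in ("high", "critical")]
    if blocking:
        print(f"{len(blocking)} blocking findings; failing the pipeline")
        sys.exit(1)  # non-zero exit fails the CI step
    print("scan clean; continuing to deploy")
```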

DevSecOps-washing is not a good idea for the enterprise

While one might argue that security should never have been shifted right in the first place, DevSecOps exists to ensure that security best practices take place across the development lifecycle. A corollary to the idea of “tools that help” is that organizations implementing these tools are not thereby “doing DevSecOps,” any more than the vendors providing them are DevSecOps vendors.

The only way to “do” DevSecOps is to fully embrace security at a process management and governance level: This means assessing risk, defining policy, setting review gates, and disallowing progress for insecure deliverables. Organizations that embrace DevSecOps can get help from what we are calling DevSecOps tools, as well as from scanning and hardening tools that help support its goals.

At the end of the day, all security and governance boils down to risk: If you buy a scanning tool so you can check a box that says “DevSecOps,” you are potentially adding to your risk posture, rather than mitigating it. So, get your DevSecOps strategy fixed first, then consider how you can add automation, visibility, and control using “tools that do,” as well as benefit from “tools that help.”


High Performance Application Security Testing

This free one-hour webinar from GigaOm Research is hosted by Jake Dolezal, a GigaOm analyst and expert in application and API testing. His presentation focuses on the results of high-performance testing we completed against two security mechanisms: ModSecurity on NGINX and NGINX App Protect. Additionally, we tested the AWS Web Application Firewall (WAF) as a fully managed security offering.

While performance is important, it is only one criterion in selecting a web application firewall. The report’s results are revealing about these platforms, and the methodology is laid out with clarity and transparency so you can replicate the tests to mimic your own workloads and requirements.

Register now to join GigaOm and webinar sponsor NGINX for this free expert webinar.
