Security

Thunderclap flaws impact how Windows, Mac, Linux handle Thunderbolt peripherals


Windows, Mac, Linux, and FreeBSD systems are all impacted by a new vulnerability that was disclosed this week at the NDSS 2019 security conference.

The vulnerability, named Thunderclap, affects the way Thunderbolt-based peripherals are allowed to connect and interact with these operating systems, allowing a malicious device to steal data directly from the operating system's memory, including highly sensitive information.

The research team behind this vulnerability says that “all Apple laptops and desktops produced since 2011 are vulnerable, with the exception of the 12-inch MacBook.”

Similarly, “many laptops, and some desktops, designed to run Windows or Linux produced since 2016 are also affected,” as long as they support Thunderbolt interfacing.

What is Thunderbolt?

Thunderbolt is the name of a hardware interface designed by Apple and Intel to allow the connection of external peripherals (keyboards, chargers, video projectors, network cards, etc.) to a computer.

These interfaces became wildly popular because they combined different technologies into one single cable, such as the ability to transmit DC power (for charging purposes), serial data (via PCI Express), and video output (via DisplayPort).

The technology was initially available only on Apple devices but was later opened up to all hardware vendors, and it is now ubiquitous, especially thanks to the standard's latest version, Thunderbolt 3.

But according to the research team, all Thunderbolt versions are affected by Thunderclap. This means Thunderbolt 1 and 2 (the interface versions that use a Mini DisplayPort [MDP] connector) and Thunderbolt 3 (the one that works via USB-C ports).

What is Thunderclap?

Thunderclap is a collection of flaws in the way the Thunderbolt hardware interface has been implemented on operating systems.

At the core of this vulnerability, researchers say they are exploiting an OS design issue in which the operating system automatically trusts any newly connected peripheral, granting it access to all of its memory, a capability known as Direct Memory Access (DMA).

Thunderclap flaws allow attackers to create malicious but fully working peripherals that, when connected via a Thunderbolt-capable port, perform their normal operations but also run malicious code in the background, without any restriction from the operating system.

This makes the Thunderclap attack highly dangerous, as it can be easily hidden inside any peripheral.

The Thunderclap vulnerabilities are even capable of bypassing a security feature known as the Input-Output Memory Management Unit (IOMMU), which hardware and OS makers created in the early 2000s to counter malicious peripherals that abuse their access to the entire OS memory (in what's known as a DMA attack).

Thunderclap vulnerabilities work against the IOMMU either because operating systems disable this feature by default or because, in cases where the user has enabled it, the OS leaves user data in the same memory space where the malicious peripheral runs its exploit code, rendering the IOMMU useless.
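
For readers who want a quick sense of where their own machine stands, the sketch below shows one way to check this on Linux. It is a minimal illustration, assuming mainline-kernel sysfs paths (/sys/kernel/iommu_groups and the per-domain iommu_dma_protection attribute); these paths are assumptions that may vary by kernel version and distribution, and an empty result does not by itself prove a system is vulnerable.

```python
#!/usr/bin/env python3
"""Minimal sketch: report whether the Linux kernel exposes an active IOMMU
and whether Thunderbolt DMA protection is advertised. The sysfs paths used
here are assumptions based on mainline kernel behaviour and may differ
between kernel versions and distributions."""
from pathlib import Path


def iommu_active():
    # /sys/kernel/iommu_groups is only populated when an IOMMU is in use.
    groups = Path("/sys/kernel/iommu_groups")
    return groups.is_dir() and any(groups.iterdir())


def thunderbolt_dma_protection():
    # Newer kernels expose an iommu_dma_protection flag per Thunderbolt domain.
    base = Path("/sys/bus/thunderbolt/devices")
    return [
        f"{flag.parent.name}: iommu_dma_protection={flag.read_text().strip()}"
        for flag in base.glob("*/iommu_dma_protection")
    ]


if __name__ == "__main__":
    print("IOMMU groups present:", iommu_active())
    for line in thunderbolt_dma_protection() or ["no Thunderbolt domains found"]:
        print(line)
```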

What’s being done about it?

Researchers from the University of Cambridge, Rice University, and SRI International discovered the Thunderclap issues back in 2016 and have been working with hardware and OS vendors for three years, in complete silence, to have them fixed.

However, despite the almost three-year warning, OS makers have been slow to react, with most of the Thunderclap attack variations described in a research paper published today still working. Here’s the current state of patches, according to researchers:

Windows – “Microsoft have enabled support for the IOMMU for Thunderbolt devices in Windows 10 version 1803, which shipped in 2018. Earlier hardware upgraded to 1803 requires a firmware update from the vendor. This brings them into line with the baseline for our work, however the more complex vulnerabilities we describe remain relevant.”

macOS – “In macOS 10.12.4 and later, Apple addressed the specific network card vulnerability we used to achieve a root shell. However the general scope of our work still applies; in particular that Thunderbolt devices have access to all network traffic and sometimes keystrokes and framebuffer data.”

Linux – “Recently, Intel have contributed patches to version 5.0 of the Linux kernel (shortly to be released) that enable the IOMMU for Thunderbolt and prevent the protection-bypass vulnerability that uses the ATS feature of PCI Express.”

FreeBSD – “The FreeBSD Project indicated that malicious peripheral devices are not currently within their threat model for security response. However, FreeBSD does not currently support Thunderbolt hotplugging.”

As the table below shows, most Thunderclap flaws are still unpatched.

Table: Thunderclap flaws still working (Image: Markettos et al.)

In the meantime, users are advised to disable Thunderbolt ports via BIOS/UEFI firmware settings and to avoid plugging in peripherals from untrusted sources.
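
Along the same lines, Linux's Thunderbolt driver exposes each connected device's authorization state through sysfs, which can help verify that unknown peripherals are not being silently accepted. The sketch below is illustrative only; the device_name and authorized attribute names are assumptions that may differ across kernel versions, and the script simply reports what the kernel exposes.

```python
#!/usr/bin/env python3
"""Minimal sketch (assumes Linux with the thunderbolt driver loaded):
list connected Thunderbolt devices and whether the user has authorized them.
Attribute names are assumptions and may vary between kernel versions."""
from pathlib import Path

TB_DEVICES = Path("/sys/bus/thunderbolt/devices")

for dev in sorted(TB_DEVICES.glob("*")):
    authorized = dev / "authorized"
    if not authorized.is_file():
        continue  # skip entries (e.g. host controllers) without the attribute
    name_file = dev / "device_name"
    label = name_file.read_text().strip() if name_file.is_file() else dev.name
    print(f"{label}: authorized={authorized.read_text().strip()}")
```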

Technical details about the Thunderclap flaws are available in a research paper entitled “Thunderclap: Exploring Vulnerabilities in Operating System IOMMU Protection via DMA from Untrustworthy Peripherals,” available for download in PDF format from here and here, with more details here.

The research team also released the “Thunderclap platform” on GitHub, which is a collection of ready-made proof-of-concept code to create malicious Thunderclap peripherals.

Extra details are also available on a dedicated website and in this blog post.

As a closing note, Thunderclap vulnerabilities can also be exploited by compromised PCI Express (PCIe) peripherals, such as plug-in cards or chips soldered onto the motherboard, but such attacks require compromising the peripheral's firmware first, making them much harder to pull off than simply plugging in a malicious charger or video projector via a Thunderbolt port.


Security

CISO Podcast: Talking Anti-Phishing Solutions

Simon Gibson earlier this year published the report, “GigaOm Radar for Phishing Prevention and Detection,” which assessed more than a dozen security solutions focused on detecting and mitigating email-borne threats and vulnerabilities. As Gibson noted in his report, email remains a prime vector for attack, reflecting the strategic role it plays in corporate communications.

Earlier this week, Gibson’s report was a featured topic of discussion on David Spark’s popular CISO Security Vendor Relationship Podcast. In it, Spark interviewed a pair of chief information security officers—Mike Johnson, CISO for Salesforce, and James Dolph, CISO for Guidewire Software—to get their take on the role of anti-phishing solutions.

“I want to first give GigaOm some credit here for really pointing out the need to decide what to do with detections,” Johnson said when asked for his thoughts about selecting an anti-phishing tool. “I think a lot of companies charge into a solution for anti-phishing without thinking about what they are going to do when the thing triggers.”

As Johnson noted, the needs and vulnerabilities of a large organization aligned on Microsoft 365 are very different from those of a smaller outfit working with GSuite. A malicious Excel macro-laden file, for example, poses a credible threat to a Microsoft shop and therefore argues for a detonation solution to detect and neutralize malicious payloads before they can spread and morph. On the other hand, a smaller company is more exposed to business email compromise (BEC) attacks, since spending authority is often spread among many employees in these businesses.

Gibson’s radar report describes both in-line and out-of-band solutions, but Johnson said cloud-aligned infrastructures argue against traditional in-line schemes.

“If you put an in-line solution in front of [Microsoft] 365 or in front of GSuite, you are likely decreasing your reliability, because you’ve now introduced this single point of failure. Google and Microsoft have this massive amount of reliability that is built in,” Johnson said.

So how should IT decision makers go about selecting an anti-phishing solution? Dolph answered that question with a series of questions of his own:

“Does it nail the basics? Does it fit with the technologies we have in place? And then secondarily, is it reliable, is it tunable, is it manageable?” he asked. “Because it can add a lot overhead, especially if you have a small team if these tools are really disruptive to the email flow.”

Dolph concluded by noting that it’s important for solutions to provide insight that can help organizations target their protections, as well as support both training and awareness around threats. Finally, he urged organizations to consider how they can measure the effectiveness of solutions.

“I may look at other solutions in the future and how do I compare those solutions to the benchmark of what we have in place?”

Listen to the Podcast: CISO Podcast


Security

Phish Fight: Securing Enterprise Communications


Yes, much of the world may have moved on from email to social media and culturally dubious TikTok dances, yet traditional electronic mail remains a foundation of business communication. And sadly, it remains a prime vector for malware, data leakage, and phishing attacks that can undermine enterprise protections. It doesn’t have to be that way.

In a just-released report titled “GigaOm Radar for Phishing Prevention and Detection,” GigaOm analyst Simon Gibson surveyed more than a dozen enterprise-focused email security solutions. He found a range of approaches to securing communications that often can be fitted together to provide critical, defense-in-depth protection against even determined attackers.

Figure 1. GigaOm Radar for Email Phishing Prevention and Detection

“When evaluating these vendors and their solutions, it is important to consider your own business and workflow,” Gibson writes in the report, stressing the need to deploy solutions that best address your organization’s business workflow and email traffic. “For some it may be preferable to settle on one comprehensive solution, while for others building a best-of-breed architecture from multiple vendors may be preferable.”

In a field of competent solutions, Gibson found that Forcepoint, purchased recently by Raytheon, stood apart thanks to the layered protections provided by its Advanced Classification Engine. Area 1 and Zimperium, meanwhile, are both leaders that exhibit significant momentum, with Area 1 boosted by its recent solution partnership with Virtru, and Zimperium excelling in its deep commitment to mobile message security.

A mobile focus is timely, Gibson says in a video interview for GigaOm, noting that companies are “tuning the spigot on” and enabling unprecedented access to and reliance on mobile devices, which is creating an urgent need to get ahead of threats.

Gibson’s conclusion in the report? He singles out three things: Defense in depth, awareness of existing patterns and infrastructure, and a healthy respect for the “human factor” that can make security so hard to lock down.


Security

When Is a DevSecOps Vendor Not a DevSecOps Vendor?


DevOps’ general aim is to enable a more efficient process for producing software and technology solutions and bringing stakeholders together to speed up delivery. But we know from experience that this inherently creative, outcome-driven approach often forgets about one thing until too late in the process—security. Too often, security is brought into the timeline just before deployment, risking last minute headaches and major delays. The security team is pushed into being the Greek chorus of the process, “ruining everyone’s fun” by demanding changes and slowing things down.

But as we know, in the complex, multi-cloud and containerized environment we find ourselves in, security is becoming more important and challenging than ever. And the costs of security failure are not only measured in slower deployment, but in compliance breaches and reputational damage.

The term “DevSecOps” has been coined to characterize how security needs to be at the heart of the DevOps process. This is part principle and part tooling. As a principle, DevSecOps fits with the concept of “shifting left,” that is, ensuring that security is treated as early as possible in the development process. So far, so simple.

From a tooling perspective, however, things get more complicated, not least because the market has seen a number of platforms marketing themselves as DevSecOps. As we have been writing our Key Criteria report on the subject, we have learned that not all DevSecOps vendors are necessarily DevSecOps vendors. Specifically, we have learned to distinguish capabilities that directly enable the goals of DevSecOps from a process perspective, from those designed to support DevSecOps practices. We could define them as: “Those that do, and those that help.”

This is how to tell the two types of vendor apart and how to use them.

Vendors Enabling DevSecOps: “Tools That Do”

A number of tools work to facilitate the DevSecOps process; let’s bite the bullet and call them DevSecOps tools. They help teams set out each stage of software development, bringing siloed teams together behind a unified vision that allows fast, high-quality development, with security considerations at its core. DevSecOps tools work across the development process, for example:

  • Create: Help to set and implement policy
  • Develop: Apply guidance to the process and aid its implementation
  • Test: Facilitate and guide security testing procedures
  • Deploy: Provide reports to assure confidence to deploy the application

The key element that sets these tool sets apart is the ability to automate and reduce friction within the development process. They will prompt action, stop a team from moving from one stage to another if the process has not adequately addressed security concerns, and guide the roadmap for the development from start to finish.
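
As a rough illustration of that gating behaviour, here is a minimal sketch of a pipeline step that refuses to promote a build while unresolved high-severity findings remain. The JSON report format, the scan-report.json filename, and the zero-tolerance threshold are invented for illustration and do not correspond to any particular vendor's tool.

```python
"""Minimal sketch of a DevSecOps-style promotion gate. The scan-report
format and policy threshold are illustrative assumptions, not any specific
vendor's output."""
import json
import sys
from pathlib import Path

MAX_HIGH_SEVERITY = 0  # policy: no unresolved high-severity findings allowed


def gate(report_path):
    # The report is assumed to be a JSON list of findings with
    # "severity" and "resolved" fields.
    findings = json.loads(Path(report_path).read_text())
    unresolved_high = [
        f for f in findings
        if f.get("severity") == "high" and not f.get("resolved", False)
    ]
    if len(unresolved_high) > MAX_HIGH_SEVERITY:
        print(f"Gate FAILED: {len(unresolved_high)} high-severity findings block promotion.")
        return 1
    print("Gate passed: build may move to the next stage.")
    return 0


if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```

In a real pipeline, a check of this kind would typically run automatically after the test stage and fail the job, rather than relying on a reviewer to read the scan report.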

Supporting DevSecOps: “Tools That Help”

In this category we place tools that aid the execution and monitoring of good DevSecOps principles. Security scanning and application/infrastructure hardening tools are a key element of these processes: software composition analysis (SCA) forms part of the develop stage, static/dynamic application security testing (SAST/DAST) is integral to the test stage, and runtime application self-protection (RASP) is key to the deploy stage.

Tools like these are a vital part of the security tooling layer, especially just before deployment, and they often come with APIs so they can be plugged into the CI/CD process. However, while these capabilities are very important to DevSecOps, they play more of a supporting role rather than being DevSecOps tools per se.

DevSecOps-washing is not a good idea for the enterprise

While one might argue that security should never have been shifted right, DevSecOps exists to ensure that security best practices take place across the development lifecycle. A corollary exists to the idea of “tools that help,” namely that organizations implementing these tools are not “doing DevSecOps,” any more than vendors providing these tools are DevSecOps vendors.

The only way to “do” DevSecOps is to fully embrace security at a process management and governance level: This means assessing risk, defining policy, setting review gates, and disallowing progress for insecure deliverables. Organizations that embrace DevSecOps can get help from what we are calling DevSecOps tools, as well as from scanning and hardening tools that help support its goals.

At the end of the day, all security and governance boils down to risk: If you buy a scanning tool so you can check a box that says “DevSecOps,” you are potentially adding to your risk posture, rather than mitigating it. So, get your DevSecOps strategy fixed first, then consider how you can add automation, visibility, and control using “tools that do,” as well as benefit from “tools that help.”
