Yes, there are security ramifications to serverless computing

At least one in five organizations (21%) has implemented serverless computing as part of its cloud-based infrastructure, according to a recent survey of 108 IT managers conducted by Datamation. Another 39% are planning or considering serverless resources.



The question is, will serverless computing soon gain critical mass and come into use by a majority of enterprises? And if so, what are the ramifications for security?

Existing on-premises systems and applications — you can call some of them “legacy” — still require more traditional care and feeding. Even existing cloud-based applications are still structured around the more serverful mode of development and delivery. 

That’s what many enterprises are dealing with now — loads of traditional applications to manage even as they begin a transition to serverless mode. Again, even if applications or systems are in the cloud, that still is closer to traditional IT than to serverless on the continuum, says Marc Feghali, founder and VP of product management for Attivo Networks. “Traditional IT architectures use a server infrastructure that requires managing the systems and services required for an application to function,” he says. It doesn’t matter whether the servers happen to be on-premises or cloud-based. “The application must always be running, and the organization must spin up other instances of the application to handle more load, which tends to be resource-intensive.”

Serverless architecture goes much deeper than traditional cloud arrangements, which still follow the serverful model. Serverless, Feghali says, is more granular, “focusing instead on having the infrastructure provided by a third party, with the organization only providing the code for the applications broken down into functions that are hosted by the third party. This allows the application to scale based on function usage. It’s more cost-effective since the third party charges for how often the application uses the function, instead of having the application running all the time.”
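
A minimal sketch can make this concrete. Assuming the AWS Lambda Python handler convention purely for illustration (neither the provider nor the event shape comes from Feghali's description), "providing only the code, broken down into functions" looks roughly like this:

```python
import json

def handler(event, context):
    """One function, deployed on its own. The provider runs, scales, and
    bills it per invocation; there is no always-on server process for
    the organization to keep running or patch."""
    # Illustrative event shape: an API Gateway-style HTTP request.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

The provider invokes the function only when a request arrives and charges per invocation, which is the cost model Feghali describes.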

How should the existing or legacy architecture be phased out? Is it an instant cutover, or should it be a more gradual migration? Feghali urges a gradual migration, with close attention paid to security requirements. “There are specific use cases that will still require existing legacy architecture,” and serverless computing “is constrained by performance requirements, resource limits, and security concerns,” Feghali points out. The advantage serverless offers is that it “excels at reducing costs for compute. That being said, where feasible, one should gradually migrate over to serverless infrastructure to make sure it can handle the application requirements before phasing out the legacy infrastructure.”

Importantly, a serverless architecture calls for looking at security in new ways, says Feghali. “With the new service or solution, security frameworks need to be evaluated to see what new gaps and risks will present themselves. [Organizations] will then need to reassess their controls and processes to refine them to address these new risk models.”

Security protocols and processes differ in a serverless environment. Namely, with the use of serverless computing, an enterprise’s attack surface widens. “The attack surface is much larger as attackers can leverage every component of the application as an entry point,” Feghali says, which includes “the application layer, code, dependencies, configurations and any cloud resources their application requires to run properly. There is no OS to worry about securing, but there is no way to install endpoint or network-level detection solutions such as antivirus or [intrusion protection or prevention systems]. This lack of visibility allows attackers to remain undetected as they leverage vulnerable functions for their attacks, whether to steal data or compromise certificates, keys, and credentials to access the organization.”
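
To illustrate why “every component of the application” becomes an entry point, consider a hypothetical handler that passes event data into a database query. The table, field, and event names below are invented; the point is that, with no network appliance or endpoint agent in the path, validation inside the function is the only control:

```python
import sqlite3

def get_order(event, context):
    """Hypothetical serverless handler reading attacker-reachable input."""
    order_id = event["queryStringParameters"]["id"]

    conn = sqlite3.connect(":memory:")  # stand-in for a real datastore
    conn.execute("CREATE TABLE orders (id INTEGER, item TEXT)")
    conn.execute("INSERT INTO orders VALUES (1, 'widget')")

    # Unsafe: f"SELECT * FROM orders WHERE id = {order_id}" would let a
    # crafted query string rewrite the SQL, and no IPS sits in front of
    # the function to catch it. The parameterized form below is the
    # control the function itself must supply.
    rows = conn.execute("SELECT * FROM orders WHERE id = ?", (order_id,))
    return {"statusCode": 200, "body": str(rows.fetchall())}
```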

At this point, introducing the security measures needed to better protect serverless environments may add more cost and overhead, according to a study out of the University of California, Berkeley, led by Eric Jonas. “Serverless computing reshuffles security responsibilities, shifting many of them from the cloud user to the cloud provider without fundamentally changing them,” their report states. “However, serverless computing must also grapple with the risks inherent in both application disaggregation and multi-tenant resource sharing.”

One approach to securing serverless is “oblivious algorithms,” the UC Berkeley team continues. “The tendency to decompose serverless applications into many small functions exacerbates this security exposure. While the primary security concern is from external attackers, the network patterns can be protected from employees by adopting oblivious algorithms. Unfortunately, these tend to have high overhead.” 
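
The overhead is easy to see even in a toy version of the idea. To hide which record it wants, an oblivious read touches every record, turning a constant-time lookup into a linear scan; this simplified sketch is illustrative and not one of the specific algorithms the report surveys:

```python
def oblivious_read(records: list[bytes], secret_index: int) -> bytes:
    """Read every fixed-length record so the access pattern reveals
    nothing about which one was actually wanted."""
    selected = bytes(len(records[0]))
    for i, record in enumerate(records):
        keep = int(i == secret_index)  # 1 for the wanted record, else 0
        # Combine byte-by-byte instead of branching on the index, so
        # identical work is done for every record.
        selected = bytes(s * (1 - keep) + r * keep
                         for s, r in zip(selected, record))
    return selected
```

A direct records[secret_index] lookup is constant time; the oblivious version’s cost grows with the size of the dataset, which is exactly the overhead the researchers flag.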

Physical isolation of serverless resources and functions is another approach — but this, of course, comes with premium pricing from cloud providers. Jonas and his team also see possibilities with generating very rapid instances of serverless functions. “The challenge in providing function-level sandboxing is to maintain a short startup time without caching the execution environments in a way that shares state between repeated function invocations. One possibility would be to locally snapshot the instances so that each function can start from clean state.” 

Feghali’s firm, Attivo Networks, focuses on adoption of “deception technologies” intended to provide greater visibility across the various components in a serverless stack, “as a way to understand when security controls are not working as they should, detect attacks that have bypassed them, and for notification of policy violations by insiders, suppliers, or external threat actors.”

The bottom line: handing over the keys to the server stack to a third-party cloud provider doesn’t mean outsourcing security as well. Security needs to remain the enterprise customer’s responsibility, because it is the customer who will have to answer in the event of a breach.


Phish Fight: Securing Enterprise Communications

Yes, much of the world may have moved on from email to social media and culturally dubious TikTok dances, yet traditional electronic mail remains a foundation of business communication. And sadly, it remains a prime vector for malware, data leakage, and phishing attacks that can undermine enterprise protections. It doesn’t have to be that way.

In a just-released report titled “GigaOm Radar for Phishing Prevention and Detection,” GigaOm analyst Simon Gibson surveyed more than a dozen enterprise-focused email security solutions. He found a range of approaches to securing communications that often can be fitted together to provide critical, defense-in-depth protection against even determined attackers.

Figure 1. GigaOm Radar for Email Phishing Prevention and Detection

“When evaluating these vendors and their solutions, it is important to consider your own business and workflow,” Gibson writes in the report, stressing the need to deploy solutions that best address your organization’s business workflow and email traffic. “For some it may be preferable to settle on one comprehensive solution, while for others building a best-of-breed architecture from multiple vendors may be preferable.”

In a field of competent solutions, Gibson found that Forcepoint, purchased recently by Raytheon, stood apart thanks to the layered protections provided by its Advanced Classification Engine. Area 1 and Zimperium, meanwhile, are both leaders that exhibit significant momentum, with Area 1 boosted by its recent solution partnership with Virtru, and Zimperium excelling in its deep commitment to mobile message security.

A mobile focus is timely, Gibson says in a video interview for GigaOm. He says companies are “turning the spigot on,” enabling unprecedented access to and reliance on mobile devices, which is creating an urgent need to get ahead of threats.

Gibson’s conclusion in the report? He singles out three things: Defense in depth, awareness of existing patterns and infrastructure, and a healthy respect for the “human factor” that can make security so hard to lock down.


When Is a DevSecOps Vendor Not a DevSecOps Vendor?

DevOps’ general aim is to enable a more efficient process for producing software and technology solutions, bringing stakeholders together to speed up delivery. But we know from experience that this inherently creative, outcome-driven approach often forgets about one thing until too late in the process: security. Too often, security is brought into the timeline just before deployment, risking last-minute headaches and major delays. The security team is pushed into being the Greek chorus of the process, “ruining everyone’s fun” by demanding changes and slowing things down.

But as we know, in the complex, multi-cloud and containerized environment we find ourselves in, security is becoming more important and challenging than ever. And the costs of security failure are not only measured in slower deployment, but in compliance breaches and reputational damage.

The term “DevSecOps” has been coined to characterize how security needs to be at the heart of the DevOps process. This is part principle and part tooling. As a principle, DevSecOps fits with the concept of “shifting left,” that is, ensuring that security is addressed as early as possible in the development process. So far, so simple.

From a tooling perspective, however, things get more complicated, not least because the market has seen a number of platforms marketing themselves as DevSecOps. As we have been writing our Key Criteria report on the subject, we have learned that not all DevSecOps vendors are necessarily DevSecOps vendors. Specifically, we have learned to distinguish between capabilities that directly enable the goals of DevSecOps from a process perspective and those designed to support DevSecOps practices. We could define them as: “Those that do, and those that help.”

This is how to tell the two types of vendor apart and how to use them.

Vendors Enabling DevSecOps: “Tools That Do”

A number of tools work to facilitate the DevSecOps process; let’s bite the bullet and call them DevSecOps tools. They help teams set out each stage of software development, bringing siloed teams together behind a unified vision that allows fast, high-quality development, with security considerations at its core. DevSecOps tools work across the development process, for example:

  • Create: Help to set and implement policy
  • Develop: Apply guidance to the process and aid its implementation
  • Test: Facilitate and guide security testing procedures
  • Deploy: Provide reports to assure confidence to deploy the application

The key element that sets these tool sets apart is the ability to automate and reduce friction within the development process. They will prompt action, stop a team from moving from one stage to another if the process has not adequately addressed security concerns, and guide the roadmap for the development from start to finish.
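
As a hedged sketch of that gating behavior, here is what a hand-rolled version might look like; the stage names and check functions are invented stand-ins for what a DevSecOps platform provides as product features:

```python
import sys

# Illustrative stand-ins for checks a DevSecOps platform would run.
def check_policy_defined() -> bool:
    return True   # e.g., confirm a security policy is set for the project

def check_guidance_applied() -> bool:
    return True   # e.g., confirm secure-coding guidance was applied

def run_security_tests() -> bool:
    return True   # e.g., trigger and evaluate security test suites

def produce_assurance_report() -> bool:
    return True   # e.g., emit the report that justifies deployment

# Map each stage to the gates that must pass before the pipeline advances.
STAGE_GATES = {
    "create":  [check_policy_defined],
    "develop": [check_guidance_applied],
    "test":    [run_security_tests],
    "deploy":  [produce_assurance_report],
}

def run_pipeline() -> None:
    for stage, gates in STAGE_GATES.items():
        for gate in gates:
            if not gate():
                # Block progression: no later stage starts until this
                # stage's security concerns are addressed.
                sys.exit(f"stage '{stage}' blocked: {gate.__name__} failed")
        print(f"stage '{stage}' cleared its security gate")

if __name__ == "__main__":
    run_pipeline()
```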

Supporting DevSecOps: “Tools That Help”

In this category we place tools that aid the execution and monitoring of good DevSecOps principles. Security scanning and application/infrastructure hardening tools are a key element of these processes: software composition analysis (SCA) forms part of the develop stage, static/dynamic application security testing (SAST/DAST) is integral to the test stage, and runtime application self-protection (RASP) is key to the deploy stage.

Tools like these are a vital layer of security tooling, especially just before deployment, and they often come with APIs so they can be plugged into the CI/CD process. However, while these capabilities are very important to DevSecOps, they play more of a supporting role rather than being DevSecOps tools per se.
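
As an illustration of the “plugged into the CI/CD process” point, a build step might shell out to a scanner and fail the job on findings. The scanner command and flags below are placeholders, not any vendor’s real interface:

```python
import subprocess
import sys

def run_scan(target_dir: str) -> None:
    """Run a hypothetical SAST scanner as one step of a CI job.
    'sast-scanner' is a placeholder command, not a real tool."""
    result = subprocess.run(
        ["sast-scanner", "--format", "json", target_dir],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # The supporting tool reports; the pipeline decides to stop.
        print(result.stdout)
        sys.exit("build failed: static analysis reported findings")

if __name__ == "__main__":
    run_scan(".")
```

This is the supporting role in miniature: the scanner supplies the signal, while the “tools that do” own the decision about whether the pipeline proceeds.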

DevSecOps-Washing Is Not a Good Idea for the Enterprise

While one might argue that security should never have been shifted right, DevSecOps exists to ensure that security best practices take place across the development lifecycle. A corollary exists to the idea of “tools that help,” namely that organizations implementing these tools are not “doing DevSecOps,” any more than vendors providing these tools are DevSecOps vendors.

The only way to “do” DevSecOps is to fully embrace security at a process management and governance level: This means assessing risk, defining policy, setting review gates, and disallowing progress for insecure deliverables. Organizations that embrace DevSecOps can get help from what we are calling DevSecOps tools, as well as from scanning and hardening tools that help support its goals.

At the end of the day, all security and governance boils down to risk: If you buy a scanning tool so you can check a box that says “DevSecOps,” you are potentially adding to your risk posture, rather than mitigating it. So, get your DevSecOps strategy fixed first, then consider how you can add automation, visibility, and control using “tools that do,” as well as benefit from “tools that help.”


High Performance Application Security Testing

This free one-hour webinar from GigaOm Research is hosted by Jake Dolezal, a GigaOm analyst and expert in application and API testing. His presentation will focus on the results of high-performance testing we completed against two security mechanisms: ModSecurity on NGINX and NGINX App Protect. Additionally, we tested the AWS Web Application Firewall (WAF) as a fully managed security offering.

While performance is important, it is only one criterion in selecting a web application firewall. The report’s results are revealing about these platforms, and the methodology is presented with enough clarity and transparency for you to replicate the tests against your own workloads and requirements.

Register now to join GigaOm and webinar sponsor NGINX for this free expert webinar.
