
Yes, there are security ramifications to serverless computing


At least one in five organizations (21%) has implemented serverless computing as part of its cloud-based infrastructure, according to a recent survey of 108 IT managers conducted by Datamation. Another 39% are planning or considering serverless resources.


Photo: Joe McKendrick

The question is, will serverless computing soon gain critical mass, used by a majority of enterprises? Along with this, what are the ramifications for security? 

Existing on-premises systems and applications — you can call some of them “legacy” — still require more traditional care and feeding. Even existing cloud-based applications are still structured around the more serverful mode of development and delivery. 

That’s what many enterprises are dealing with now — loads of traditional applications to manage even while they begin a transition to serverless mode. Again, even if applications or systems are in the cloud, they are still closer to traditional IT than to serverless on the continuum, says Marc Feghali, founder and VP of product management at Attivo Networks. “Traditional IT architectures use a server infrastructure that requires managing the systems and services required for an application to function,” he says. It doesn’t matter if the servers happen to be on-premises or cloud-based. “The application must always be running, and the organization must spin up other instances of the application to handle more load, which tends to be resource-intensive.”

Serverless architecture goes much further than traditional cloud arrangements, which are still built on the serverful model. Serverless, Feghali says, is more granular, “focusing instead on having the infrastructure provided by a third party, with the organization only providing the code for the applications, broken down into functions that are hosted by the third party. This allows the application to scale based on function usage. It’s more cost-effective since the third party charges for how often the application uses the function, instead of having the application running all the time.”
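
To make the model concrete, a serverless “function” is typically just a small handler that the provider runs on demand and bills per invocation. The sketch below is a minimal, hypothetical Python example in the style of an AWS Lambda handler; the event fields and response shape are illustrative assumptions, not details from Feghali or the article.

```python
import json

# Minimal sketch of a serverless function in the style of an AWS Lambda handler.
# There is no always-running server process: the provider invokes this handler
# only when a request arrives and charges per invocation. The "order_id" field
# and the response shape are illustrative assumptions.
def handler(event, context):
    order_id = event.get("order_id")
    if order_id is None:
        return {"statusCode": 400, "body": json.dumps({"error": "order_id is required"})}

    # Business logic goes here; the provider handles scaling by running more
    # copies of this handler as invocations increase.
    return {"statusCode": 200, "body": json.dumps({"order_id": order_id, "status": "received"})}
```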

How should the existing or legacy architecture be phased out? Is it an instant cutover, or should it be a more gradual migration? Feghali urges a gradual migration, paying close attention to security requirements. “There are specific use cases that will still require existing legacy architecture,” and serverless computing “is constrained by performance requirements, resource limits, and security concerns,” Feghali points out. The advantage serverless offers is that it “excels at reducing costs for compute. That being said, where feasible, one should gradually migrate over to serverless infrastructure to make sure it can handle the application requirements before phasing out the legacy infrastructure.”

Importantly, a serverless architecture calls for looking at security in new ways, says Feghali: “With the new service or solution, security frameworks need to be evaluated to see what new gaps and risks will present themselves. They will then need to reassess their controls and processes to refine them to address these new risk models.”

Security protocols and processes differ in a serverless environment. Namely, with the use of serverless computing, an enterprise’s attack surface widens. “The attack surface is much larger as attackers can leverage every component of the application as an entry point,” Feghali says, which includes “the application layer, code, dependencies, configurations and any cloud resources their application requires to run properly. There is no OS to worry about securing, but there is also no way to install endpoint or network-level detection solutions such as antivirus or [intrusion protection or prevention systems]. This lack of visibility allows attackers to remain undetected as they leverage vulnerable functions for their attacks, whether to steal data or compromise certificates, keys, and credentials to access the organization.”
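
Since every function can be an entry point and there is no host on which to install an agent, defenses have to move into the function code and its configuration. The snippet below is a hypothetical illustration of that point: treating the incoming event as untrusted and validating it before it reaches any cloud resource (the field name and allowed format are assumptions made for the example).

```python
import json
import re

# Hypothetical illustration: with no network- or host-level controls in front of
# a serverless function, each handler must treat its event as untrusted input.
ALLOWED_ID = re.compile(r"^[A-Za-z0-9_-]{1,64}$")  # assumed format for the identifier

def handler(event, context):
    user_id = event.get("user_id", "")
    if not ALLOWED_ID.fullmatch(user_id):
        # Reject early rather than passing attacker-controlled data on to
        # downstream resources (databases, queues, other functions).
        return {"statusCode": 400, "body": json.dumps({"error": "invalid user_id"})}

    # Only validated input reaches the rest of the application from here on.
    return {"statusCode": 200, "body": json.dumps({"user_id": user_id})}
```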

At this point, introducing the security measures needed to better protect serverless environments may add more cost and overhead, according to a study out of the University of California at Berkeley led by Eric Jonas. “Serverless computing reshuffles security responsibilities, shifting many of them from the cloud user to the cloud provider without fundamentally changing them,” their report states. “However, serverless computing must also grapple with the risks inherent in both application disaggregation and multi-tenant resource sharing.”

One approach to securing serverless is “oblivious algorithms,” the UC Berkeley team continues. “The tendency to decompose serverless applications into many small functions exacerbates this security exposure. While the primary security concern is from external attackers, the network patterns can be protected from employees by adopting oblivious algorithms. Unfortunately, these tend to have high overhead.” 
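
As a toy illustration of why oblivious techniques are expensive (this example is not from the Berkeley report), an oblivious lookup reads every element on every query so that an observer of the access pattern cannot tell which item was requested; the cost becomes proportional to the size of the data instead of constant.

```python
# Toy example of an oblivious lookup: every element is read on every query, so
# the memory access pattern reveals nothing about which index was requested.
# The price is O(n) work per lookup instead of O(1), which is the "high overhead"
# mentioned above. A production implementation would also use constant-time
# selection to avoid timing leaks; this sketch only hides the access pattern.
def oblivious_lookup(items, secret_index):
    result = None
    for i, value in enumerate(items):
        result = value if i == secret_index else result
    return result

# A direct lookup, items[secret_index], would be constant time but would leak
# which element was of interest through its access pattern.
```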

Physical isolation of serverless resources and functions is another approach — but this, of course, comes with premium pricing from cloud providers. Jonas and his team also see possibilities with generating very rapid instances of serverless functions. “The challenge in providing function-level sandboxing is to maintain a short startup time without caching the execution environments in a way that shares state between repeated function invocations. One possibility would be to locally snapshot the instances so that each function can start from clean state.” 

Feghali’s firm, Attivo Networks, focuses on “deception technologies” intended to provide greater visibility across the various components in a serverless stack, “as a way to understand when security controls are not working as they should, detect attacks that have bypassed them, and for notification of policy violations by insiders, suppliers, or external threat actors.”

The bottom line: handing over the keys to the server stack to a third-party cloud provider doesn’t mean outsourcing security as well. Security needs to remain the enterprise customer’s responsibility, because it is the enterprise that will have to answer in the event of a breach.


Work from Home Security


Spin Master is a leading global children’s entertainment company that invents toys and games, produces dozens of television and studio series that are distributed in 160 countries, and creates a variety of digital games played by more than 30 million children. What was once a small private company founded by childhood friends is now a public global supply chain with over 1,500 employees and 28 offices around the world.

Like most organizations in 2020, Spin Master had to adapt quickly to the new normal of remote work, shifting most of its production from cubicles in regional and head offices to hundreds of employees working from home and other remote locations.

This dramatic shift created potential security risks, as most employees were no longer behind the firewall on the corporate network. Without the implementation of hardened endpoint security, the door would be open for bad actors to infiltrate the organization, acquire intellectual property, and ransom customer information. Additionally, the potential downtime caused by a security breach could harm the global supply chain. With that in mind, Spin Master created a self-imposed 30-day deadline to extend its network protection capabilities to the edge.

Key Findings:

  • Think Long Term: The initial goal of establishing a stop-gap work-from-home (WFH) and work-from-anywhere (WFA) strategy has since morphed into a permanent strategy, requiring long-term solutions.
  • Gather Skills: The real urgency posed by the global pandemic made forging partnerships with providers that could fill all the required skill sets a top priority.
  • Build Momentum: The compressed timeline left no room for delay or error. The Board of Directors threw its support behind the implementation team and gave it broad budget authority to ensure rapid action, while providing active guidance to align strategy with action.
  • Deliver Value: The team established two key requirements that the selected partner must deliver: implementation support and establishing an ongoing managed security operations center (SOC).


Key Criteria for Evaluating Privileged Access Management


Privileged Access Management (PAM) enables administrative access to critical IT systems while minimizing the chances of security compromises through monitoring, policy enforcement, and credential management.

A key operating principle of all PAM systems is the separation of user credentials for individual staff members from the system administration credentials they are permitted to use. PAM solutions store and manage all of the privileged credentials, providing system access without requiring users to remember, or even know, the privileged password. Of course, all staff have their own unique user ID and password that they use to complete everyday tasks such as accessing email and writing documents. Users who are permitted to handle system administration tasks that require privileged credentials log into the PAM solution, which provides and controls such access according to predefined security policies. These policies control who is allowed to use which privileged credentials when, where, and for what tasks. An organization’s policy may also require logging and recording of the actions undertaken with the privileged credentials.
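
As a minimal sketch of that operating principle (the policy fields, vault contents, and function names below are illustrative assumptions, not any particular product’s API), the flow is: the user authenticates with personal credentials, the request is checked against policy, the action is logged, and only then is the privileged credential released.

```python
from datetime import datetime, timezone

# Minimal sketch of a PAM-style policy check and credential release. All names,
# fields, and values here are illustrative assumptions, not a vendor's API.
POLICIES = [
    # who may use which privileged account, on which system, for which task
    {"user": "jsmith", "account": "db-admin", "system": "orders-db", "task": "patching"},
]
VAULT = {"db-admin": "s3cr3t-managed-by-the-pam-system"}  # never shown to end users
AUDIT_LOG = []

def request_privileged_session(user, account, system, task):
    allowed = any(
        p["user"] == user and p["account"] == account
        and p["system"] == system and p["task"] == task
        for p in POLICIES
    )
    # Every request, granted or denied, is logged for preventative and forensic audits.
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user, "account": account, "system": system,
        "task": task, "granted": allowed,
    })
    if not allowed:
        raise PermissionError(f"{user} may not use {account} on {system} for {task}")
    # The PAM system injects the credential into the session; the user never sees it.
    return VAULT[account]
```

In a real deployment the policy store, vault, audit trail, and any session recording live inside the PAM product, and privileged credentials are typically rotated automatically after use.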

Once implemented, PAM will improve your security posture in several ways. First, it segregates day-to-day duties from duties that require elevated access, reducing the risk of accidental privileged actions. Second, automated password management reduces the possibility that credentials will be shared, while also lowering the risk if credentials are accidentally exposed. Finally, the extensive logging and activity recording in PAM solutions aid audits of critical system access for both preventative and forensic security.

How to Read this Report

This GigaOm report is one of a series of documents that help IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Vendor Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.



Adventist Risk Management Data Protection Infrastructure


Companies always want to enhance their ability to quickly address pressing business needs. Toward that end, they look for new ways to make their IT infrastructures more efficient—and more cost effective. Today, those pressing needs often center around data protection and regulatory compliance, which was certainly the case for Adventist Risk Management. What they wanted was an end-to-end, best-in-class solution to meet their needs. After trying several others, they found the perfect combination with HYCU and Nutanix, which provided:

  • Ease of deployment
  • Outstanding ROI
  • Overall TCO improvement

Nutanix Cloud Platform provides a software-defined hyperconverged infrastructure, while HYCU offers purpose-built backup and recovery for Nutanix. Compared to the traditional infrastructure and data protection solutions previously in use at Adventist Risk Management, Nutanix and HYCU simplified processes, speeding day-to-day operations by up to 75%. Migration and update activities typically scheduled for weekends can now be performed during working hours, improving quality of life for IT staff and management. HYCU further increased savings by providing faster and more frequent recovery points, improving disaster recovery Recovery Point Objective (RPO) and Recovery Time Objective (RTO) by increasing backups from one to four per day.
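
To put the backup change in perspective: with one backup per day, the worst-case recovery point is roughly 24 hours of lost data, while four backups per day bring it down to roughly six hours, assuming the backups are spread evenly across the day.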

Furthermore, the recent adoption of Nutanix Objects, which provides secure and performant S3 storage capabilities, enhanced the infrastructure by:

    • Improving overall performance for backups
    • Adding security against potential ransomware attacks
    • Replacing components difficult to manage and support

In the end, Nutanix and HYCU enabled their customer to save money, improve the existing environment, and, above all, meet regulatory compliance requirements without any struggle.
