Key Criteria for Evaluating Hybrid Cloud Data Protection

Cloud and edge computing have changed the way enterprises do IT. To best meet business and user needs, many enterprises are embracing hybrid cloud solutions, with multi-cloud around the corner. Data is now created, stored, and consumed everywhere, on several types of devices concurrently, and at any time of the day. At the same time, machine-generated data is growing dramatically faster than human-generated data. However, this is only the tip of the iceberg: data quantity and diversity are just two of many aspects to consider when evaluating a new data protection solution. Challenges now faced by enterprises include:

  • Exponential data growth: Machine-generated data now takes the lion’s share, and backup and restore processes must change accordingly. For example, it is far less likely that a restore will target a single, accidentally deleted file.
  • Disparate data types: Structured and unstructured data created by traditional applications are now joined by container and SaaS data, metadata, and blobs. These newer, more complex data types are usually self-consistent, capable of reproducing an entire application or recreating the state of a cloud service if necessary.
  • Pressing SLAs: Digital transformation initiatives have transformed every process, and it is becoming harder and harder to stop any part of the infrastructure or to tolerate a long RTO (recovery time objective) or RPO (recovery point objective). Instant and continuous backups, as well as fast recovery speeds, are mandatory for an ever-growing number of applications (see the sketch after this list).
  • Data dispersion and consolidation: Data is now created, stored, and consumed on mobile devices, PCs, data centers, and many other places. To make data protection effective, and instrumental to a broader data management strategy, it is necessary to consolidate backup repositories into a single physical or virtual domain.
  • New threats: Traditional risks and threats, such as natural disasters or human error, are now joined by cyberattacks like ransomware, which are even more dangerous and harder to detect.
  • Regulatory compliance: A growing number of demanding regulations (like GDPR or CCPA) require strong data protection and tools able to find information quickly and remove or mask it properly, as in the case of the right to be forgotten.
  • Data management and reusability: Backups can either be considered a liability or be transformed into an asset for the organization. By indexing data and making it searchable, the data’s real value is revealed, making it reusable for other users or applications across the entire organization.

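A quick way to reason about RPO in particular: the longest gap between consecutive recovery points is the data you stand to lose in the worst case. Below is a minimal sketch in Python; the timestamps, the four-hour target, and the helper name are illustrative assumptions, not taken from any specific product.

```python
from datetime import datetime, timedelta

# Hypothetical recovery-point timestamps, e.g., as reported by a backup catalog.
recovery_points = [
    datetime(2021, 6, 1, 0, 0),
    datetime(2021, 6, 1, 6, 0),
    datetime(2021, 6, 1, 12, 0),
]

RPO_TARGET = timedelta(hours=4)  # maximum tolerable data loss

def worst_case_data_loss(points):
    """Largest gap between consecutive recovery points: the data at risk
    if a failure strikes just before the next backup completes."""
    gaps = [later - earlier for earlier, later in zip(points, points[1:])]
    return max(gaps)

loss = worst_case_data_loss(sorted(recovery_points))
print(f"Worst-case data loss: {loss}")
print("RPO met" if loss <= RPO_TARGET else "RPO violated: backups too infrequent")
```

In practice the recovery points would come from a backup catalog, and continuous data protection shrinks these gaps toward zero.
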
For these reasons, data protection operations are more difficult, and more critical, than in the past.

Report Methodology

A Key Criteria report analyzes the most important features of a technology category to understand how it impacts an enterprise and its IT organization. Features are grouped into four categories:

  1. Table Stakes
  2. Key Criteria
  3. Critical Impact of Features on the Metrics
  4. Near-term Game-changing Technology

The goal is to help organizations assess capabilities and build a mid-to-long-term infrastructure strategy. In a mature market, solutions fall into three target categories: enterprise, high-performance, and specialized. These differ in their characteristics and in how they can be integrated with existing infrastructures. That said, the assessment depends more on the specific user’s needs than on the organization’s vertical alone.

Table Stakes

Table stakes are system characteristics and features that are important when choosing the right solution. They include architectural choices that depend on the size of the organization, its requirements, its expected growth over time, and the types of workloads it runs. Table stakes are mature features, and implementing them will not add any business advantage nor significantly change the TCO or ROI of the infrastructure.

Key Criteria

Key Criteria features really differentiate one solution from another. Depending on real user needs, they have a positive impact on one or more of the metrics described below. Implementation details are therefore essential to understanding the benefits relative to the infrastructure, processes, or business.
Once table stakes are accounted for, aspects like architectural design and implementation regain importance and need to be analyzed in great detail. In some cases, the features described in the Key Criteria section are the core of the solution, and the rest of the system is designed around them. This can be an important benefit for organizations that see them as a real practical advantage, but it also poses some risks in the long term. Over time, the differentiation introduced by a feature becomes less relevant and the feature falls into the “table stakes” group, while new system capabilities introduce new benefits or address new needs, with a positive impact on metrics like efficiency, manageability, and flexibility.

The Key Criteria section brings several benefits to organizations of all sizes and with different business needs. It is organized to give the reader a brief description of each functionality or technology, its benefits in general terms, and what to expect from a good implementation. To give a complete picture, we also include examples of the most interesting implementations currently available in the market.

Critical Impact of Features on the Metrics

Technology, functionality, and architecture designs that have demonstrated their value are adopted by other vendors, become a standard, and lose their status as differentiators. Initially, though, the implementation of these key criteria is crucial for delivering real value, perhaps with some trade-offs. The most important metrics for evaluating a technology solution include:

  • Architecture
  • Scalability
  • Flexibility
  • Efficiency
  • Manageability and ease of use
  • Partner ecosystem

This section assesses the impact individual features have on the metrics at the time of report publication. Each feature is scored from one to five, with a score of five having the most impact on an enterprise. These scores are not absolute and should always be verified against the organization’s requirements and use case. Strategic decisions can then be based on the impact each metric has on the infrastructure, system management, and IT processes already in place, with particular emphasis on ROI and TCO.
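
As a hedged illustration of how such scores might feed a decision, the sketch below weights per-metric impact scores by one organization’s priorities. Both the scores and the weights are invented for the example; they do not come from this report.

```python
# Illustrative impact scores (1-5) for a single feature across the metrics.
feature_impact = {
    "Architecture": 4,
    "Scalability": 5,
    "Flexibility": 3,
    "Efficiency": 4,
    "Manageability and ease of use": 2,
    "Partner ecosystem": 1,
}

# Hypothetical weights expressing one organization's priorities (sum to 1.0).
org_weights = {
    "Architecture": 0.15,
    "Scalability": 0.30,
    "Flexibility": 0.10,
    "Efficiency": 0.20,
    "Manageability and ease of use": 0.20,
    "Partner ecosystem": 0.05,
}

# Weighted sum: how much this feature matters to *this* organization.
weighted_score = sum(feature_impact[m] * org_weights[m] for m in feature_impact)
print(f"Weighted impact: {weighted_score:.2f} / 5")  # -> 3.65 / 5
```

Two organizations looking at the same report scores can thus reach different, equally valid conclusions simply by weighting the metrics differently.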

Near-term Game-changing Technology

In this report section, we analyze the most interesting technologies on the horizon over the next 12 to 18 months. Some are already present in some form, but usually as part of niche products or for addressing very specific use cases. In either case, at this stage, the implementations available are not mature enough to be grouped with the key criteria. Yet when implemented correctly and efficiently, these technologies can make a real difference to the metrics.

Over time, game-changing features become key criteria, and the cycle repeats. Therefore, to get the best ROI, it is important to check what vendors are offering today and what they plan to release in the near future.

Key Criteria for Evaluating Security Information and Event Management Solutions (SIEM)

Security Information and Event Management (SIEM) solutions consolidate multiple security data streams under a single roof. Initially, SIEM supported early detection of cyberattacks and data breaches by collecting and correlating security event logs. Over time, it evolved into sophisticated systems capable of ingesting huge volumes of data from disparate sources, analyzing data in real time, and gathering additional context from threat intelligence feeds and new sources of security-related data. Next-generation SIEM solutions deliver tight integrations with other security products, advanced analytics, and semi-autonomous incident response.
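
To make the collect-and-correlate idea concrete, here is a minimal sketch of one classic correlation rule: several failed logins followed by a success from the same source within a short window. The event format, threshold, and window are illustrative assumptions; real SIEM rules are written in each product’s own query or rule language.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical, pre-parsed authentication events: (timestamp, source_ip, outcome).
events = [
    (datetime(2021, 6, 1, 9, 0, 0), "203.0.113.7", "failure"),
    (datetime(2021, 6, 1, 9, 0, 5), "203.0.113.7", "failure"),
    (datetime(2021, 6, 1, 9, 0, 9), "203.0.113.7", "failure"),
    (datetime(2021, 6, 1, 9, 0, 14), "203.0.113.7", "success"),
]

WINDOW = timedelta(minutes=5)
THRESHOLD = 3  # failures before a success that we treat as suspicious

def brute_force_alerts(events):
    """Flag sources with THRESHOLD or more failures followed by a success in WINDOW."""
    failures = defaultdict(list)
    alerts = []
    for ts, src, outcome in sorted(events):
        if outcome == "failure":
            failures[src].append(ts)
        else:
            recent = [t for t in failures[src] if ts - t <= WINDOW]
            if len(recent) >= THRESHOLD:
                alerts.append((src, ts))
            failures[src].clear()  # reset after a successful login
    return alerts

for src, ts in brute_force_alerts(events):
    print(f"ALERT: possible brute-force success from {src} at {ts}")
```

A production SIEM applies thousands of such rules across far richer data, enriched with threat intelligence and asset context.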

SIEM solutions can be deployed on-premises, in the cloud, or a mix of the two. Deployment models must be weighed with regard to the environments the SIEM solution will protect. With more and more digital infrastructure and services becoming mission critical to every enterprise, SIEMs must handle higher volumes of data. Vendors and customers are increasingly focused on cloud-based solutions, whether SaaS or cloud-hosted models, for their scalability and flexibility.

The latest developments for SIEM solutions include machine learning capabilities for incident detection, advanced analytics features such as user behavior analytics (UBA), and integrations with other security solutions, such as security orchestration, automation, and response (SOAR) and endpoint detection and response (EDR) systems. Even though additional capabilities within the SIEM environment are a natural progression, customers are finding it increasingly difficult to deploy, customize, and operate SIEM solutions.

Other improvements include better user experience and lower time-to-value for new deployments. To achieve this, vendors are working on:

  • Streamlining data onboarding
  • Preloading customizable content—use cases, rulesets, and playbooks
  • Standardizing data formats and labels
  • Mapping incident alerts to common frameworks, such as MITRE ATT&CK (illustrated in the sketch after this list)
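
As a rough illustration of that last point, the sketch below attaches ATT&CK technique IDs to alerts via a lookup table. The rule names and the mapping are hypothetical; real products ship curated mappings as vendor content packs.

```python
# Hypothetical lookup from a product's internal rule names to MITRE ATT&CK
# technique IDs and names.
ATTACK_MAP = {
    "brute_force_login": ("T1110", "Brute Force"),
    "powershell_download": ("T1059.001", "Command and Scripting Interpreter: PowerShell"),
    "archive_staged_data": ("T1560", "Archive Collected Data"),
}

def enrich_alert(alert: dict) -> dict:
    """Attach the ATT&CK technique ID and name so analysts and dashboards
    can group alerts under a common framework."""
    technique = ATTACK_MAP.get(alert["rule"])
    if technique:
        alert["attack_id"], alert["attack_name"] = technique
    return alert

print(enrich_alert({"rule": "brute_force_login", "src": "203.0.113.7"}))
```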

Vendors and service providers are also expanding their offerings beyond managed SIEM solutions to à la carte services, such as content development services and threat hunting-as-a-service.

There is no one-size-fits-all SIEM solution. Each organization will have to evaluate its own requirements and resource constraints to find the right solution. Organizations will weigh factors such as deployment models or integrations with existing applications and security solutions. However, the main decision factor for most customers will revolve around usability, affordability, and return on investment. Fortunately, a wide range of solutions available in the market can almost guarantee a good fit for every customer.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.

Key Criteria for Evaluating Secure Service Access

Since the inception of large-scale computing, enterprises, organizations, and service providers have protected their digital assets by securing the perimeter of their on-premises data centers. With the advent of cloud computing, the perimeter has dissolved, but—in most cases—the legacy approach to security has not. Many corporations still manage the expanded enterprise and remote workforce as an extension of the old headquarters office/branch model serviced by LANs and WANs.

Bolting new security products onto their aging networks increased costs and complexity exponentially, while at the same time severely limiting their ability to meet regulatory compliance mandates, scale elastically, or secure the threat surface of the new any place/any user/any device perimeter.

The result? Patchwork security ill-suited to the demands of the post-COVID distributed enterprise.

Converging networking and security, secure service access (SSA) represents a significant shift in the way organizations consume network security, enabling them to replace multiple security vendors with a single, integrated platform offering full interoperability and end-to-end redundancy. Encompassing secure access service edge (SASE), zero-trust network access (ZTNA), and extended detection and response (XDR), SSA shifts the focus of security consumption from being either data center or edge-centric to being ubiquitous, with an emphasis on securing services irrespective of user identity or resources accessed.

This GigaOm Key Criteria report outlines critical criteria and evaluation metrics for selecting an SSA solution. The corresponding GigaOm Radar Report provides an overview of notable SSA vendors and their offerings available today. Together, these reports are designed to help educate decision-makers, making them aware of various approaches and vendors that are meeting the challenges of the distributed enterprise in the post-pandemic era.

Key Criteria for Evaluating Edge Platforms

Edge platforms leverage distributed infrastructure to deliver content, computing, and security closer to end devices, offloading networks and improving performance. We define edge platforms as the solutions capable of providing end users with millisecond access to processing power, media files, storage, secure connectivity, and related “cloud-like” services.

The key benefit of edge platforms is that they bring websites, applications, media, security, and a multitude of virtual infrastructures and services closer to end devices than public or private cloud locations can.

The need for content proximity started to become more evident in the early 2000s as the web evolved from a read-only service to a read-write experience, and users worldwide began both consuming and creating content. Today, this is even more important, as live and on-demand video streaming at very high resolutions cannot be sustained from a single central location. Content delivery networks (CDNs) helped host these types of media at the edge, and the associated network optimization methods allowed them to provide these new demanding services.

As we moved into the early 2010s, we experienced the rapid cloudification of traditional infrastructure. Roughly speaking, cloud computing takes a server from a user’s office, puts it in a faraway data center, and allows it to be used across the internet. Cloud providers manage the underlying hardware and provide it as a service, allowing users to provision their own virtual infrastructure. There are many operational benefits, but at least one unavoidable downside: the increase in latency. This is especially true in this dawning age of distributed enterprises for which there is not just a single office to optimize. Instead, “the office” is now anywhere and everywhere employees happen to be.

Even so, this centralized, cloud-based compute methodology works very well for most enterprise applications, as long as there is no critical sensitivity to delay. But what about use cases that cannot tolerate latency? Think industrial monitoring and control, real-time machine learning, autonomous vehicles, augmented reality, and gaming. If a cloud data center is a few hundred or even thousands of miles away, the physical limitations of sending an optical or electrical pulse through a cable mean there are no options to lower the latency. The answer to this is leveraging a distributed infrastructure model, which has traditionally been used by content delivery networks.
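
The physics is easy to sanity-check: light in optical fiber propagates at roughly 200,000 km/s (about two-thirds of its speed in a vacuum), so distance alone sets a floor on round-trip time no matter how fast the remote servers are. A back-of-the-envelope sketch, with illustrative distances:

```python
# Propagation speed of light in optical fiber, roughly 2/3 of c.
SPEED_IN_FIBER_KM_S = 200_000  # approximate

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds over a fiber path of this length."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

for label, km in [("Distant cloud region", 2000), ("Nearby edge PoP", 100)]:
    print(f"{label} ({km} km): at least {round_trip_ms(km):.1f} ms round trip")
# Distant cloud region (2000 km): at least 20.0 ms round trip
# Nearby edge PoP (100 km): at least 1.0 ms round trip
```

Routing detours, queuing, and protocol handshakes only add to this floor, which is why proximity is the sole remaining lever for latency-critical workloads.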

As CDNs have brought the internet’s content closer to everyone, CDN providers have positioned themselves in the unique space of owning much of the infrastructure required to bring computing and security closer to users and end devices. With servers close to the topological edge of the network, CDN providers can offer processing power and other “cloud-like” services to end devices with only a few milliseconds of latency.

While CDN operators are in the right place at the right time to develop edge platforms, we’ve observed four types of vendors building out relevant—and potentially competing—edge infrastructure: traditional CDNs, hyperscale cloud providers, telecommunications companies, and new dedicated edge platform operators purpose-built for this emerging requirement.
