

Microsoft: Defender ATP is coming to Linux in 2020

Why is Windows Defender so successful?
Microsoft claims it rules the Windows antivirus world, with Defender on over half a billion PCs.

Microsoft is planning to bring its Defender antivirus to Linux systems next year and will be giving a demo of how security specialists can use Microsoft Defender at the Ignite Conference this week. 

Microsoft announced the brand change from Windows Defender to Microsoft Defender in March after giving security analysts the tools to inspect enterprise Mac computers for malware via the Microsoft Defender console.    

Rob Lefferts, corporate vice president for Microsoft’s M365 Security, told ZDNet that Microsoft Defender for Linux systems will be available for customers in 2020. 

Application Guard is also coming to all Office 365 documents. Previously, this security feature was only available in Edge and allowed users to safely open a webpage in an isolated virtual machine to protect them from malware. Now, users who open Office 365 apps, like Word or Excel, will have the same protection. 

“It’s coming in preview first, but when you get an untrusted document with potentially malicious macros via email, it will open in a container,” he said.  

In practice, if an attacker’s document tries to download additional code from the internet and install malware, it does so inside the virtual machine, so the malware never reaches the victim’s actual machine.

The move should help protect against phishing and other attacks that attempt to trick users into exiting from Protected View, which prevents users from running macros by default.  
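
To make that flow concrete, here is a minimal sketch, in Python and purely illustrative rather than Microsoft’s implementation, of the decision described above: untrusted documents carrying macros open in an isolated container, other untrusted documents stay in Protected View, and only trusted local files open normally. The Document fields and the choose_open_mode function are invented for illustration.

    # Conceptual sketch only; not Microsoft's implementation. Models the
    # behavior described above: untrusted documents with macros open inside
    # an isolated container so any malware stays off the host.
    from dataclasses import dataclass

    @dataclass
    class Document:
        source: str        # e.g. "email_attachment", "internet_download", "local"
        has_macros: bool

    def choose_open_mode(doc: Document) -> str:
        trusted = doc.source == "local"
        if not trusted and doc.has_macros:
            return "isolated_container"   # macros run inside the VM, not on the host
        if not trusted:
            return "protected_view"       # read-only; macros blocked by default
        return "normal"

    print(choose_open_mode(Document(source="email_attachment", has_macros=True)))
    # -> isolated_container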

Lefferts will also discuss how Microsoft is protecting organizations from sophisticated malware attackers who exploit the ‘information parity problem’ – a highbrow term for the lopsided contest between what defenders must get right and what attackers need to know.

“Defenders have to know everything perfectly and attackers only need to know one thing kind of well. The point is, it’s not a level playing field and it’s getting worse,” said Lefferts. 

Key to this ability is the Microsoft Intelligent Security Graph, which Microsoft is selling to enterprise customers. But what exactly is the Intelligent Security Graph?

“It’s built into Defender ATP, Office 365, and Azure. We have signals built into events, behaviors, and things as simple as a user logged on to a machine or as complicated as the behavior of the memory layout in Word on this device is different to what it normally looks like,” explained Lefferts. 

“Essentially we have sensors across all the identities, endpoints, cloud apps, and infrastructure and they’re sending all of this to a central place inside Microsoft’s cloud.”

Microsoft doesn’t mean physical sensors in the context of its Intelligent Security Graph but rather pieces of code sitting inside its various applications that feed into the Intelligent Security Graph.

The idea is to help security teams solve challenges in ways that humans alone could not.

“Humans aren’t great at huge numbers, but this is the place where machines can provide new insight.”
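
To picture what such a signal can amount to, the sketch below is illustrative Python only; the device names, counts, and threshold are invented and are not the Intelligent Security Graph’s actual schema. It compares each device’s event volume today against that device’s own baseline and flags outliers, the kind of large-scale arithmetic machines handle better than people.

    # Hypothetical sketch: signals from many sensors land in one place, and a
    # simple statistical check surfaces machines behaving far outside their
    # own baseline. All names and numbers are invented for illustration.
    from statistics import mean, pstdev

    # Past daily event counts per device (the "baseline").
    baseline = {"host-01": [2, 3, 2, 3, 2], "host-02": [5, 4, 6, 5, 5]}

    # Today's counts as reported by sensors on endpoints, identities, and apps.
    today = {"host-01": 14, "host-02": 5}

    for device, count in today.items():
        history = baseline[device]
        mu, sigma = mean(history), pstdev(history) or 1.0
        if count > mu + 3 * sigma:
            print(f"{device}: {count} events today vs typical {mu:.1f}, flag for review")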

Microsoft’s evidence that it is making a difference is that it has blocked 13.5 billion malicious emails so far in 2019, and Lefferts expects the total to reach 14 billion by the end of the year. The company has highlighted its work defending US and European political organizations against cyberattacks ahead of the 2020 US presidential election.

“Defending democracy is a big point for us because we’re making sure we take all the capabilities we’re building here and use it to help organizations and governments around the world,” he said.

“The goal is to help defenders cut through the noise and prioritize important work and be ready to help protect and respond, both smarter and faster using signals from Windows, Office, and Azure.”

The key tool Microsoft is introducing now is automated remediation for Office 365 customers that have Microsoft Threat Protection. 

“There’s a kill chain that represents every step an attacker takes as they move through the organization. When you find that going on, you want to ensure that you clean up the whole thing,” said Lefferts. 

For example, a hacker breaches a network through a phishing email, installs malware on the device, and then moves laterally to critical infrastructure, such as an email server or domain controller. The hacker can maintain a presence on the network for potentially years.

“The whole point about automation is finding all the compromised accounts and resetting those passwords, finding all the users who got malicious emails and scrubbing them out of inboxes, and finding all the devices that were impacted and isolating them, quarantining them, and cleaning them.”  
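
A minimal sketch of what that automated sweep could look like follows, assuming a hypothetical incident record and stand-in helper functions; reset_password, purge_message, and isolate_device are invented placeholders, not Microsoft Threat Protection APIs.

    # Illustrative sketch only: the incident fields and helper functions below
    # are hypothetical stand-ins, not Microsoft Threat Protection APIs.
    incident = {
        "compromised_accounts": ["alice@example.com", "bob@example.com"],
        "malicious_message_ids": ["msg-1042", "msg-1077"],
        "impacted_devices": ["host-07", "host-12"],
    }

    def reset_password(account: str) -> None:
        print(f"reset password for {account}")          # placeholder action

    def purge_message(message_id: str) -> None:
        print(f"remove {message_id} from all inboxes")  # placeholder action

    def isolate_device(device: str) -> None:
        print(f"isolate and quarantine {device}")       # placeholder action

    # The sweep mirrors the clean-up described above: accounts, mail, devices.
    for account in incident["compromised_accounts"]:
        reset_password(account)
    for message_id in incident["malicious_message_ids"]:
        purge_message(message_id)
    for device in incident["impacted_devices"]:
        isolate_device(device)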

Lefferts was careful not to use the term “artificial intelligence” and stressed that Microsoft’s technologies are aimed at “augmentation of people” in security teams, or “exoskeletons” for people, rather than robots.

So how would it help enterprise organizations respond to the next NotPetya ransomware outbreak? 

NotPetya spread initially through a poisoned update from a Ukraine-based accounting software firm, crippling several global firms, including Maersk and Mondelez. 

“The first thing is that it happens faster than the vendors can respond, which is a huge issue. [Responders] really need the augmentation that we’re talking about so that they can go faster. There are also so many opportunities for defenders to intermediate and break the kill chain and fix everything. And we want to make sure we can work across that kill chain.”

Microsoft will also roll out new features for customers using Office 365 Advanced Threat Protection, offering admins a better overview of targeted phishing attacks. The idea is to subvert typical strategies that attackers use to avoid detection, such as sending email from different IP addresses.

“However they pick their targets, they’re going to have a factory where they’re going to build a campaign that they’re going to direct at those targets. And they will keep iterating on all the pieces of that campaign to see what’s most effective at getting past the defenders and how they best trick the user into clicking something,” said Lefferts. 

“It shows up as an onslaught of email across multiple users within the organization – sometimes just a few, sometimes in the hundreds. What we give defenders is a view of what’s happening. There’s email coming from different IP addresses and different sender domains and it’s got different components in it because they keep running different experiments. We put the whole picture together to show you the flow, how it evolved over time.”
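
One way to picture that correlation: individual messages arrive from different IPs and sender domains, but share traits such as a subject template or an attachment hash that let them be stitched into a single campaign. The grouping key in the sketch below is an invented heuristic, not Office 365 ATP’s actual logic, and the sample data is made up.

    # Illustrative sketch: group phishing emails into campaigns by shared traits
    # even when sender IPs and domains differ. The grouping key is an invented
    # heuristic, not Office 365 ATP's actual logic.
    from collections import defaultdict

    emails = [
        {"ip": "203.0.113.5",  "sender": "billing@pay-ments.example", "subject": "Invoice overdue", "attachment_hash": "a1f3"},
        {"ip": "198.51.100.9", "sender": "accounts@paymnts.example",  "subject": "Invoice overdue", "attachment_hash": "a1f3"},
        {"ip": "192.0.2.44",   "sender": "billing@pay-ments.example", "subject": "Invoice overdue", "attachment_hash": "b7c2"},
    ]

    campaigns = defaultdict(list)
    for mail in emails:
        key = (mail["subject"], mail["attachment_hash"])   # traits that persist across the attacker's experiments
        campaigns[key].append(mail)

    for key, messages in campaigns.items():
        ips = {m["ip"] for m in messages}
        print(f"campaign {key}: {len(messages)} messages from {len(ips)} IPs")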




Key Criteria for Evaluating Security Information and Event Management Solutions (SIEM)

Security Information and Event Management (SIEM) solutions consolidate multiple security data streams under a single roof. Initially, SIEM supported early detection of cyberattacks and data breaches by collecting and correlating security event logs. Over time, it evolved into a sophisticated system capable of ingesting huge volumes of data from disparate sources, analyzing data in real time, and gathering additional context from threat intelligence feeds and new sources of security-related data. Next-generation SIEM solutions deliver tight integrations with other security products, advanced analytics, and semi-autonomous incident response.
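
As a deliberately simplified illustration of that core collect-and-correlate idea, the sketch below raises an alert when several failed logons are followed by a success for the same account within a short window. The event tuples, window, and threshold are generic stand-ins, not any vendor’s rule language.

    # Generic illustration of SIEM-style correlation: N failed logons followed
    # by a success for the same account within a time window raises an alert.
    # Event fields and thresholds are illustrative, not any vendor's schema.
    from collections import defaultdict

    events = [  # (timestamp_seconds, account, outcome)
        (100, "svc-backup", "failure"),
        (130, "svc-backup", "failure"),
        (150, "svc-backup", "failure"),
        (170, "svc-backup", "success"),
        (200, "j.doe", "success"),
    ]

    WINDOW, THRESHOLD = 300, 3
    failures = defaultdict(list)

    for ts, account, outcome in sorted(events):
        if outcome == "failure":
            failures[account].append(ts)
        else:
            recent = [t for t in failures[account] if ts - t <= WINDOW]
            if len(recent) >= THRESHOLD:
                print(f"ALERT: possible brute force on {account} at t={ts}")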

SIEM solutions can be deployed on-premises, in the cloud, or a mix of the two. Deployment models must be weighed with regard to the environments the SIEM solution will protect. With more and more digital infrastructure and services becoming mission critical to every enterprise, SIEMs must handle higher volumes of data. Vendors and customers are increasingly focused on cloud-based solutions, whether SaaS or cloud-hosted models, for their scalability and flexibility.

The latest developments for SIEM solutions include machine learning capabilities for incident detection, advanced analytics features that include user behavior analytics (UBA), and integrations with other security solutions, such as security orchestration automation and response (SOAR) and endpoint detection and response (EDR) systems. Even though additional capabilities within the SIEM environment are a natural progression, customers are finding it even more difficult to deploy, customize, and operate SIEM solutions.

Other improvements include better user experience and lower time-to-value for new deployments. To achieve this, vendors are working on:

  • Streamlining data onboarding
  • Preloading customizable content—use cases, rulesets, and playbooks
  • Standardizing data formats and labels
  • Mapping incident alerts to common frameworks, such as the MITRE ATT&CK framework
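
The last item above can be as simple as tagging each detection with the ATT&CK technique it evidences. The technique IDs in the sketch below are real ATT&CK identifiers, but the detection names and the mapping table itself are invented for illustration.

    # Simple illustration of mapping detections to MITRE ATT&CK techniques.
    # The technique IDs are real ATT&CK identifiers; the detection names and
    # mapping table are invented for illustration.
    ATTACK_MAP = {
        "multiple_failed_logons": ("T1110", "Brute Force"),
        "suspicious_office_macro": ("T1566", "Phishing"),
        "new_admin_logon_from_unusual_ip": ("T1078", "Valid Accounts"),
    }

    def tag_alert(detection: str) -> dict:
        technique_id, technique_name = ATTACK_MAP.get(detection, ("unknown", "unmapped"))
        return {"detection": detection, "attack_id": technique_id, "attack_technique": technique_name}

    print(tag_alert("multiple_failed_logons"))
    # {'detection': 'multiple_failed_logons', 'attack_id': 'T1110', 'attack_technique': 'Brute Force'}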

Vendors and service providers are also expanding their offerings beyond managed SIEM solutions to à la carte services, such as content development services and threat hunting-as-a-service.

There is no one-size-fits-all SIEM solution. Each organization will have to evaluate its own requirements and resource constraints to find the right solution. Organizations will weigh factors such as deployment models or integrations with existing applications and security solutions. However, the main decision factor for most customers will revolve around usability, affordability, and return on investment. Fortunately, a wide range of solutions available in the market can almost guarantee a good fit for every customer.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.


Key Criteria for Evaluating Secure Service Access

Since the inception of large-scale computing, enterprises, organizations, and service providers have protected their digital assets by securing the perimeter of their on-premises data centers. With the advent of cloud computing, the perimeter has dissolved, but—in most cases—the legacy approach to security has not. Many corporations still manage the expanded enterprise and remote workforce as an extension of the old headquarters office/branch model serviced by LANs and WANs.

Bolting new security products onto their aging networks increased costs and complexity exponentially, while at the same time severely limiting their ability to meet regulatory compliance mandates, scale elastically, or secure the threat surface of the new any place/any user/any device perimeter.

The result? Patchwork security ill-suited to the demands of the post-COVID distributed enterprise.

Converging networking and security, secure service access (SSA) represents a significant shift in the way organizations consume network security, enabling them to replace multiple security vendors with a single, integrated platform offering full interoperability and end-to-end redundancy. Encompassing secure access service edge (SASE), zero-trust network access (ZTNA), and extended detection and response (XDR), SSA shifts the focus of security consumption from being either data center or edge-centric to being ubiquitous, with an emphasis on securing services irrespective of user identity or resources accessed.

This GigaOm Key Criteria report outlines critical criteria and evaluation metrics for selecting an SSA solution. The corresponding GigaOm Radar Report provides an overview of notable SSA vendors and their offerings available today. Together, these reports are designed to help educate decision-makers, making them aware of various approaches and vendors that are meeting the challenges of the distributed enterprise in the post-pandemic era.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.


Key Criteria for Evaluating Edge Platforms

Edge platforms leverage distributed infrastructure to deliver content, computing, and security closer to end devices, offloading networks and improving performance. We define edge platforms as the solutions capable of providing end users with millisecond access to processing power, media files, storage, secure connectivity, and related “cloud-like” services.

The key benefit of edge platforms is bringing websites, applications, media, security, and a multitude of virtual infrastructures and services closer to end devices compared to public or private cloud locations.

The need for content proximity started to become more evident in the early 2000s as the web evolved from a read-only service to a read-write experience, and users worldwide began both consuming and creating content. Today, this is even more important, as live and on-demand video streaming at very high resolutions cannot be sustained from a single central location. Content delivery networks (CDNs) helped host these types of media at the edge, and the associated network optimization methods allowed them to provide these new demanding services.

As we moved into the early 2010s, we experienced the rapid cloudification of traditional infrastructure. Roughly speaking, cloud computing takes a server from a user’s office, puts it in a faraway data center, and allows it to be used across the internet. Cloud providers manage the underlying hardware and provide it as a service, allowing users to provision their own virtual infrastructure. There are many operational benefits, but at least one unavoidable downside: the increase in latency. This is especially true in this dawning age of distributed enterprises for which there is not just a single office to optimize. Instead, “the office” is now anywhere and everywhere employees happen to be.

Even so, this centralized, cloud-based compute methodology works very well for most enterprise applications, as long as there is no critical sensitivity to delay. But what about use cases that cannot tolerate latency? Think industrial monitoring and control, real-time machine learning, autonomous vehicles, augmented reality, and gaming. If a cloud data center is a few hundred or even thousands of miles away, the physical limitations of sending an optical or electrical pulse through a cable mean there are no options to lower the latency. The answer to this is leveraging a distributed infrastructure model, which has traditionally been used by content delivery networks.
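
A back-of-the-envelope calculation makes the point. Light in optical fiber travels at roughly 200,000 km per second, about two-thirds of its speed in a vacuum, so distance alone imposes a latency floor that no amount of server or network tuning can remove. The distances below are arbitrary examples.

    # Back-of-the-envelope propagation delay: light in optical fiber covers
    # roughly 200 km per millisecond, so distance alone sets a floor on
    # round-trip latency, independent of server or network speed.
    SPEED_IN_FIBER_KM_PER_MS = 200.0

    def round_trip_ms(distance_km: float) -> float:
        return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

    for distance in (50, 500, 1500, 4000):   # nearby edge PoP vs. distant cloud region
        print(f"{distance:>5} km away: >= {round_trip_ms(distance):.1f} ms round trip")

At 1,500 km the floor is already about 15 ms round trip before any queuing, routing, or processing, which is why latency-sensitive workloads push compute toward the edge.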

As CDNs have brought the internet’s content closer to everyone, CDN providers have positioned themselves in the unique space of owning much of the infrastructure required to bring computing and security closer to users and end devices. With servers close to the topological edge of the network, CDN providers can offer processing power and other “cloud-like” services to end devices with only a few milliseconds latency.

While CDN operators are in the right place at the right time to develop edge platforms, we’ve observed a total of four types of vendors that have been building out relevant—and potentially competing—edge infrastructure. These include traditional CDNs, hyperscale cloud providers, telecommunications companies, and new dedicated edge platform operators, purpose-built for this emerging requirement.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Vendor Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.
