

Chinese hacker group caught bypassing 2FA


Security researchers say they found evidence that a Chinese government-linked hacking group has been bypassing two-factor authentication (2FA) in a recent wave of attacks.

The attacks have been attributed to a group the cyber-security industry tracks as APT20, believed to operate at the behest of the Beijing government, Dutch cyber-security firm Fox-IT said in a report published last week.

The group’s primary targets were government entities and managed service providers (MSPs), active in fields such as aviation, healthcare, finance, insurance, and energy, and even in niches such as gambling and physical locks.

Recent APT20 activity

The Fox-IT report fills a gap in the group’s history. APT20’s hacking activity goes back to 2011, but researchers lost track of the group’s operations in 2016 and 2017, when it changed its mode of operation.

Fox-IT’s report documents what the group has been doing over the past two years and how they’ve been doing it.

According to researchers, the hackers used web servers as the initial point of entry into a target’s systems, with a particular focus on JBoss, an enterprise application platform often found in large corporate and government networks.

APT20 used vulnerabilities to gain access to these servers, install web shells, and then spread laterally through a victim’s internal systems.

Once inside, Fox-IT said, the group dumped passwords and looked for administrator accounts in order to maximize its access. A primary goal was obtaining VPN credentials, which the hackers could use to escalate access to more secure areas of a victim’s infrastructure, or to turn the VPN accounts into more stable backdoors.

Fox-IT said that despite what appears to be prodigious hacking activity over the past two years, “overall the actor has been able to stay under the radar.”

They did so, researchers explain, by using legitimate tools that were already installed on hacked devices, rather than downloading their own custom-built malware, which could have been detected by local security software.

APT20 seen bypassing 2FA

But this wasn’t what stood out most in the attacks the Dutch security firm investigated. Fox-IT analysts said they found evidence the hackers connected to VPN accounts protected by 2FA.

How they did it remains unclear, although the Fox-IT team has a theory. They said APT20 stole an RSA SecurID software token from a hacked system, which the Chinese actor then used on its own computers to generate valid one-time codes and bypass 2FA at will.

Normally, this wouldn’t be possible. To use one of these software tokens, the user would need to connect a physical (hardware) device to their computer. The device and the software token would then generate a valid 2FA code. If the device was missing, the RSA SecurID software would generate an error.

[Screenshot: RSA SecurID passcode error. Image: Fox-IT]

The Fox-IT team explains how the hackers might have gotten around this issue:

The software token is generated for a specific system, but of course this system specific value could easily be retrieved by the actor when having access to the system of the victim.

As it turns out, the actor does not actually need to go through the trouble of obtaining the victim’s system specific value, because this specific value is only checked when importing the SecurID Token Seed, and has no relation to the seed used to generate actual 2-factor tokens. This means the actor can actually simply patch the check which verifies if the imported soft token was generated for this system, and does not need to bother with stealing the system specific value at all.

In short, all the actor has to do to make use of the 2 factor authentication codes is to steal an RSA SecurID Software Token and to patch 1 instruction, which results in the generation of valid tokens.
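
To make the mechanics concrete, here is a minimal Python sketch of the import-time check Fox-IT describes. All names are hypothetical; this illustrates the described logic only, and is not RSA’s actual code, file format, or algorithm.

    # Minimal illustration of the flaw Fox-IT describes. Hypothetical names;
    # not RSA's actual code, file format, or algorithm.
    import hashlib
    import platform

    def local_system_value() -> str:
        """Stand-in for the 'system specific value' a soft token is bound to."""
        return hashlib.sha256(platform.node().encode()).hexdigest()

    def import_soft_token(seed: str, bound_system_value: str) -> str:
        # Per Fox-IT, this binding is checked only at import time and has no
        # relation to the seed that generates the one-time codes, so defeating
        # this single comparison makes a stolen token usable on any machine.
        if bound_system_value != local_system_value():
            raise ValueError("token was generated for a different system")
        return seed  # once imported, the seed yields valid codes as usual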

[Screenshot: RSA SecurID passcode generation. Image: Fox-IT]

Wocao

Fox-IT said it was able to investigate APT20’s attacks because it was called in by one of the hacked companies to help investigate and respond to the intrusions.

More on these attacks can be found in a report named “Operation Wocao.”

The Dutch company said it named the report “Wocao” after the reaction the Chinese hackers had once they were detected and booted out of a victim’s network.

In the screenshot below, APT20 can be seen trying to connect to a (now-removed) web shell the group installed on a victim’s network.

The hackers try running several Windows commands. When the commands fail to execute, the APT20 hackers realize they’ve been detected and thrown out of the network, and they type one last command in frustration: wocao, Chinese slang for “shit” or “damn.”

[Screenshot: the hackers’ final “wocao” command. Image: Fox-IT]





Key Criteria for Evaluating Security Information and Event Management Solutions (SIEM)


Security Information and Event Management (SIEM) solutions consolidate multiple security data streams under a single roof. Initially, SIEM supported early detection of cyberattacks and data breaches by collecting and correlating security event logs. Over time, it evolved into sophisticated systems capable of ingesting huge volumes of data from disparate sources, analyzing data in real time, and gathering additional context from threat intelligence feeds and new sources of security-related data. Next-generation SIEM solutions deliver tight integrations with other security products, advanced analytics, and semi-autonomous incident response.

SIEM solutions can be deployed on-premises, in the cloud, or a mix of the two. Deployment models must be weighed with regard to the environments the SIEM solution will protect. With more and more digital infrastructure and services becoming mission critical to every enterprise, SIEMs must handle higher volumes of data. Vendors and customers are increasingly focused on cloud-based solutions, whether SaaS or cloud-hosted models, for their scalability and flexibility.

The latest developments in SIEM solutions include machine learning capabilities for incident detection, advanced analytics features such as user behavior analytics (UBA), and integrations with other security solutions, such as security orchestration, automation, and response (SOAR) and endpoint detection and response (EDR) systems. While these additional capabilities are a natural progression for the SIEM environment, they are making SIEM solutions even more difficult to deploy, customize, and operate.

Other improvements include better user experience and lower time-to-value for new deployments. To achieve this, vendors are working on:

  • Streamlining data onboarding
  • Preloading customizable content—use cases, rulesets, and playbooks
  • Standardizing data formats and labels
  • Mapping incident alerts to common frameworks, such as the MITRE ATT&CK framework (see the sketch after this list)
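
As a minimal illustration of that last point, an alert record tagged with MITRE ATT&CK identifiers might look something like the Python sketch below; the field names are hypothetical, not any particular SIEM’s schema.

    # Hypothetical alert record tagged with MITRE ATT&CK identifiers;
    # the field names are illustrative, not any specific SIEM's schema.
    alert = {
        "rule": "multiple_failed_logins_then_success",
        "source_ip": "203.0.113.17",   # RFC 5737 documentation address
        "severity": "high",
        "mitre_attack": {
            "tactic": "TA0006",        # Credential Access
            "technique": "T1110",      # Brute Force
        },
    }
    print(f"{alert['rule']} -> ATT&CK {alert['mitre_attack']['technique']}")

Tagging alerts this way lets analysts pivot from a detection to the framework’s documented mitigations and related techniques.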

Vendors and service providers are also expanding their offerings beyond managed SIEM solutions to à la carte services, such as content development services and threat hunting-as-a-service.

There is no one-size-fits-all SIEM solution. Each organization will have to evaluate its own requirements and resource constraints to find the right solution. Organizations will weigh factors such as deployment models or integrations with existing applications and security solutions. However, the main decision factor for most customers will revolve around usability, affordability, and return on investment. Fortunately, the wide range of solutions available in the market all but guarantees a good fit for every customer.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.



Key Criteria for Evaluating Secure Service Access


Since the inception of large-scale computing, enterprises, organizations, and service providers have protected their digital assets by securing the perimeter of their on-premises data centers. With the advent of cloud computing, the perimeter has dissolved, but, in most cases, the legacy approach to security has not. Many corporations still manage the expanded enterprise and remote workforce as an extension of the old headquarters office/branch model serviced by LANs and WANs.

Bolting new security products onto their aging networks increased costs and complexity exponentially, while at the same time severely limiting their ability to meet regulatory compliance mandates, scale elastically, or secure the threat surface of the new any place/any user/any device perimeter.

The result? Patchwork security ill-suited to the demands of the post-COVID distributed enterprise.

Converging networking and security, secure service access (SSA) represents a significant shift in the way organizations consume network security, enabling them to replace multiple security vendors with a single, integrated platform offering full interoperability and end-to-end redundancy. Encompassing secure access service edge (SASE), zero-trust network access (ZTNA), and extended detection and response (XDR), SSA shifts the focus of security consumption from being either data center or edge-centric to being ubiquitous, with an emphasis on securing services irrespective of user identity or resources accessed.

This GigaOm Key Criteria report outlines critical criteria and evaluation metrics for selecting an SSA solution. The corresponding GigaOm Radar Report provides an overview of notable SSA vendors and their offerings available today. Together, these reports are designed to help educate decision-makers, making them aware of various approaches and vendors that are meeting the challenges of the distributed enterprise in the post-pandemic era.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.



Key Criteria for Evaluating Edge Platforms


Edge platforms leverage distributed infrastructure to deliver content, computing, and security closer to end devices, offloading networks and improving performance. We define edge platforms as solutions capable of providing end users with millisecond access to processing power, media files, storage, secure connectivity, and related “cloud-like” services.

The key benefit of edge platforms is bringing websites, applications, media, security, and a multitude of virtual infrastructures and services closer to end devices compared to public or private cloud locations.

The need for content proximity started to become more evident in the early 2000s as the web evolved from a read-only service to a read-write experience, and users worldwide began both consuming and creating content. Today, this is even more important, as live and on-demand video streaming at very high resolutions cannot be sustained from a single central location. Content delivery networks (CDNs) helped host these types of media at the edge, and the associated network optimization methods allowed them to provide these new demanding services.

As we moved into the early 2010s, we experienced the rapid cloudification of traditional infrastructure. Roughly speaking, cloud computing takes a server from a user’s office, puts it in a faraway data center, and allows it to be used across the internet. Cloud providers manage the underlying hardware and provide it as a service, allowing users to provision their own virtual infrastructure. There are many operational benefits, but at least one unavoidable downside: the increase in latency. This is especially true in this dawning age of distributed enterprises for which there is not just a single office to optimize. Instead, “the office” is now anywhere and everywhere employees happen to be.

Even so, this centralized, cloud-based compute methodology works very well for most enterprise applications, as long as there is no critical sensitivity to delay. But what about use cases that cannot tolerate latency? Think industrial monitoring and control, real-time machine learning, autonomous vehicles, augmented reality, and gaming. If a cloud data center is a few hundred or even thousands of miles away, the physical limitations of sending an optical or electrical pulse through a cable mean there are no options to lower the latency. The answer to this is leveraging a distributed infrastructure model, which has traditionally been used by content delivery networks.
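
A rough back-of-envelope calculation in Python shows why distance alone sets a floor on latency, assuming light in optical fiber travels at about two-thirds of its vacuum speed, or roughly 200 km per millisecond.

    # Physics-imposed floor on round-trip time; real networks add routing
    # and queuing delay on top of this minimum.
    FIBER_KM_PER_MS = 200.0  # light in fiber: ~200 km per millisecond

    def min_rtt_ms(distance_km: float) -> float:
        return 2 * distance_km / FIBER_KM_PER_MS  # out and back

    print(min_rtt_ms(2000))  # distant cloud region: ~20 ms before any processing
    print(min_rtt_ms(50))    # nearby edge location: ~0.5 ms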

As CDNs have brought the internet’s content closer to everyone, CDN providers have positioned themselves in the unique space of owning much of the infrastructure required to bring computing and security closer to users and end devices. With servers close to the topological edge of the network, CDN providers can offer processing power and other “cloud-like” services to end devices with only a few milliseconds latency.

While CDN operators are in the right place at the right time to develop edge platforms, we’ve observed four types of vendors building out relevant, and potentially competing, edge infrastructure: traditional CDNs, hyperscale cloud providers, telecommunications companies, and new dedicated edge platform operators purpose-built for this emerging requirement.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Vendor Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.
