
Security

US Treasury sanctions three North Korean hacking groups


The US Department of the Treasury imposed sanctions today on three North Korean state-controlled hacking groups that US authorities say have helped the Pyongyang regime raise funds for its weapons and missile programs.

US officials cited three hacking groups whose names are well known to cyber-security experts: the Lazarus Group, Bluenoroff, and Andariel.

Treasury officials said the three groups operate under the control of, and on orders from, the Reconnaissance General Bureau (RGB), North Korea's primary intelligence bureau.

The three hacking groups used ransomware and attacks on banks, ATM networks, gambling sites, online casinos, and cryptocurrency exchanges to steal funds from legitimate businesses.

The US claims the stolen funds made their way back into the hermit kingdom, where they’ve been used to help the Pyongyang regime continue funding its controversial nuclear missile program.

Through the sanctions signed today by the Treasury’s Office of Foreign Assets Control (OFAC), the US has instructed members of the global banking sector to freeze any financial assets associated with these three groups.

Lazarus Group

Of the three groups named today, the Lazarus Group (also known as Hidden Cobra) is the most prominent. Its name is sometimes used to describe the entire North Korean cyber-espionage apparatus, but it is only one of the groups, albeit without doubt the biggest.

It is the largest because it operates directly under the highest authority of the RGB and has access to the most resources. Treasury officials said the Lazarus Group is subordinate to the 110th Research Center under the 3rd Bureau of the RGB. This bureau, also known as the 3rd Technical Surveillance Bureau, is responsible for overseeing all of North Korea's cyber operations.

The Lazarus Group's most infamous operations were the hack of Sony Pictures Entertainment back in 2014 and the WannaCry ransomware outbreak of May 2017.

However, the group, formed in 2007, has been far more prolific. Treasury officials said it has also targeted government, military, financial, manufacturing, publishing, media, entertainment, and international shipping companies, as well as critical infrastructure, using tactics such as cyber espionage, data theft, monetary heists, and destructive malware operations.

The financial losses caused by this group are unknown, but their extensive operations make them the most dangerous and well-known of the three.

Bluenoroff

But while the Lazarus Group's activities spread far and wide, the second group Treasury officials named appears to have been created specifically to hack banks and financial institutions.

“Bluenoroff was formed by the North Korean government to earn revenue illicitly in response to increased global sanctions,” Treasury officials said.

“Bluenoroff conducts malicious cyber activity in the form of cyber-enabled heists against foreign financial institutions on behalf of the North Korean regime to generate revenue, in part, for its growing nuclear weapons and ballistic missile programs,” they added.

Officials said that since 2014, the group (also known as APT38 or Stardust Chollima) has conducted cyber-heists against banks in Bangladesh, India, Mexico, Pakistan, the Philippines, South Korea, Taiwan, Turkey, Chile, and Vietnam.

Its most high-profile hack remains the 2016 attempt to steal $1 billion from Bangladesh Bank's account at the Federal Reserve Bank of New York. The heist largely failed, netting the hackers only $81 million.

Andariel

The third group named today, Andariel, has been active since 2015. According to Treasury officials, the group often mixes cyber-espionage with cybercrime operations.

They’ve often been seen targeting South Korea’s government and infrastructure “to collect information and to create disorder,” but they’ve also been seen “attempting to steal bank card information by hacking into ATMs to withdraw cash or steal customer information to later sell on the black market.”

Furthermore, Andariel is the North Korean group “responsible for developing and creating unique malware to hack into online poker and gambling sites to steal cash.”

The three groups have stolen hundreds of millions

The Treasury Department cites a report published earlier this year by a United Nations panel of experts, which concluded that North Korean hackers stole around $571 million from at least five cryptocurrency exchanges in Asia between January 2017 and September 2018.

The UN report echoes two other reports published in October 2018, which also blamed North Korean hackers for two cryptocurrency scams and five trading platform hacks.

A FireEye report from October 2018 also blamed North Korean hackers for carrying out bank heists of over $100 million.

Another report, published in January this year, claimed that North Korean hackers infiltrated Chile's national ATM network after tricking an employee into running malicious code during a Skype job interview, showing the resolve Lazarus Group operators typically display when trying to infiltrate organizations in search of funds.

A Kaspersky Lab report from March this year claimed that North Korean hackers have constantly attacked cryptocurrency exchanges over the past two years, seeking new ways to exfiltrate funds, and even developed new custom Mac malware for a single heist.

Sanctions have been a long time coming

Today's Treasury sanctions are just the latest action from the US government on this front. US government officials have recently adopted a naming-and-shaming approach to dealing with Russian, Iranian, and North Korean hackers.

The Department of Homeland Security (DHS) has been publicly exposing North Korean malware for two years now. The agency has been publishing reports detailing North Korean hacking tools on its website, to help companies improve detection capabilities and safeguard critical networks.

In January 2019, the Department of Justice (DOJ), the Federal Bureau of Investigation (FBI), and the US Air Force obtained a court order and successfully took down a malware botnet operated by North Korean hackers.

Just this past weekend, on a North Korean national holiday, US Cyber Command published new North Korean malware samples on Twitter and VirusTotal, exposing new hacking capabilities and ongoing campaigns.

“This is yet another indication of how forward-leaning US government’s position has become in a relatively short period of time on doing attribution of malevolent cyber actors,” Dmitri Alperovitch, CrowdStrike CTO and co-founder, told ZDNet. “A few years ago, this type of action would have been unprecedented. Today it is routine.”




Security

Key Criteria for Evaluating Security Information and Event Management Solutions (SIEM)


Security Information and Event Management (SIEM) solutions consolidate multiple security data streams under a single roof. Initially, SIEM supported early detection of cyberattacks and data breaches by collecting and correlating security event logs. Over time, it evolved into sophisticated systems capable of ingesting huge volumes of data from disparate sources, analyzing data in real time, and gathering additional context from threat intelligence feeds and new sources of security-related data. Next-generation SIEM solutions deliver tight integrations with other security products, advanced analytics, and semi-autonomous incident response.
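The collect-and-correlate idea at the heart of SIEM can be sketched in a few lines. The event records, field names, and the brute-force rule below are hypothetical illustrations, not any vendor's actual schema; a real SIEM ingests events from syslog, agents, or APIs rather than an in-memory list.

```python
from collections import defaultdict

# Hypothetical normalized events; real SIEMs ingest these from many sources.
events = [
    {"ts": 100, "src_ip": "10.0.0.5", "type": "login_failed"},
    {"ts": 101, "src_ip": "10.0.0.5", "type": "login_failed"},
    {"ts": 102, "src_ip": "10.0.0.5", "type": "login_failed"},
    {"ts": 103, "src_ip": "10.0.0.5", "type": "login_success"},
    {"ts": 104, "src_ip": "10.0.0.9", "type": "login_success"},
]

def correlate(events, threshold=3, window=60):
    """Flag IPs whose login succeeds after `threshold` failures within `window` seconds."""
    failures = defaultdict(list)
    alerts = []
    for ev in sorted(events, key=lambda e: e["ts"]):
        ip = ev["src_ip"]
        if ev["type"] == "login_failed":
            failures[ip].append(ev["ts"])
        elif ev["type"] == "login_success":
            recent = [t for t in failures[ip] if ev["ts"] - t <= window]
            if len(recent) >= threshold:
                alerts.append({"src_ip": ip, "rule": "brute-force-then-success"})
    return alerts

print(correlate(events))
```

Production correlation engines evaluate thousands of such rules continuously over streaming data, but the shape of the logic is the same: group related events, apply a condition over a time window, and emit an alert.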

SIEM solutions can be deployed on-premises, in the cloud, or a mix of the two. Deployment models must be weighed with regard to the environments the SIEM solution will protect. With more and more digital infrastructure and services becoming mission critical to every enterprise, SIEMs must handle higher volumes of data. Vendors and customers are increasingly focused on cloud-based solutions, whether SaaS or cloud-hosted models, for their scalability and flexibility.

The latest developments for SIEM solutions include machine learning capabilities for incident detection, advanced analytics features such as user behavior analytics (UBA), and integrations with other security solutions, such as security orchestration, automation, and response (SOAR) and endpoint detection and response (EDR) systems. Even though these additional capabilities are a natural progression, they also make SIEM solutions more difficult to deploy, customize, and operate.

Other improvements include better user experience and lower time-to-value for new deployments. To achieve this, vendors are working on:

  • Streamlining data onboarding
  • Preloading customizable content—use cases, rulesets, and playbooks
  • Standardizing data formats and labels
  • Mapping incident alerts to common frameworks, such as the MITRE ATT&CK framework
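The last item, mapping alerts to a common framework, is often little more than a lookup table maintained alongside the rules. The rule names below are invented for illustration; the technique IDs (T1110, T1486, T1566) are real MITRE ATT&CK identifiers.

```python
# Hypothetical rule-to-technique table; real products ship far larger
# mappings curated against the ATT&CK knowledge base.
RULE_TO_ATTACK = {
    "brute-force-then-success": ("T1110", "Brute Force"),
    "mass-file-encryption":     ("T1486", "Data Encrypted for Impact"),
    "suspicious-attachment":    ("T1566", "Phishing"),
}

def tag_alert(alert):
    """Attach an ATT&CK technique ID and name to an alert, if the rule is mapped."""
    technique = RULE_TO_ATTACK.get(alert["rule"])
    if technique:
        alert["attack_id"], alert["attack_name"] = technique
    return alert

alert = tag_alert({"rule": "mass-file-encryption", "host": "srv-01"})
print(alert["attack_id"])  # → T1486
```

Tagging alerts this way lets analysts pivot from an individual detection to the framework's description of the adversary behavior and related mitigations.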

Vendors and service providers are also expanding their offerings beyond managed SIEM solutions to à la carte services, such as content development services and threat hunting-as-a-service.

There is no one-size-fits-all SIEM solution. Each organization will have to evaluate its own requirements and resource constraints to find the right solution. Organizations will weigh factors such as deployment models or integrations with existing applications and security solutions. However, the main decision factor for most customers will revolve around usability, affordability, and return on investment. Fortunately, a wide range of solutions available in the market can almost guarantee a good fit for every customer.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.


Security

Key Criteria for Evaluating Secure Service Access


Since the inception of large-scale computing, enterprises, organizations, and service providers have protected their digital assets by securing the perimeter of their on-premises data centers. With the advent of cloud computing, the perimeter has dissolved, but, in most cases, the legacy approach to security has not. Many corporations still manage the expanded enterprise and remote workforce as an extension of the old headquarters office/branch model serviced by LANs and WANs.

Bolting new security products onto their aging networks increased costs and complexity exponentially, while at the same time severely limiting their ability to meet regulatory compliance mandates, scale elastically, or secure the threat surface of the new any place/any user/any device perimeter.

The result? Patchwork security ill-suited to the demands of the post-COVID distributed enterprise.

Converging networking and security, secure service access (SSA) represents a significant shift in the way organizations consume network security, enabling them to replace multiple security vendors with a single, integrated platform offering full interoperability and end-to-end redundancy. Encompassing secure access service edge (SASE), zero-trust network access (ZTNA), and extended detection and response (XDR), SSA shifts the focus of security consumption from being either data center or edge-centric to being ubiquitous, with an emphasis on securing services irrespective of user identity or resources accessed.

This GigaOm Key Criteria report outlines critical criteria and evaluation metrics for selecting an SSA solution. The corresponding GigaOm Radar Report provides an overview of notable SSA vendors and their offerings available today. Together, these reports are designed to help educate decision-makers, making them aware of various approaches and vendors that are meeting the challenges of the distributed enterprise in the post-pandemic era.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.


Security

Key Criteria for Evaluating Edge Platforms


Edge platforms leverage distributed infrastructure to deliver content, computing, and security closer to end devices, offloading networks and improving performance. We define edge platforms as the solutions capable of providing end users with millisecond access to processing power, media files, storage, secure connectivity, and related “cloud-like” services.

The key benefit of edge platforms is bringing websites, applications, media, security, and a multitude of virtual infrastructures and services closer to end devices compared to public or private cloud locations.

The need for content proximity started to become more evident in the early 2000s as the web evolved from a read-only service to a read-write experience, and users worldwide began both consuming and creating content. Today, this is even more important, as live and on-demand video streaming at very high resolutions cannot be sustained from a single central location. Content delivery networks (CDNs) helped host these types of media at the edge, and the associated network optimization methods allowed them to provide these new demanding services.

As we moved into the early 2010s, we experienced the rapid cloudification of traditional infrastructure. Roughly speaking, cloud computing takes a server from a user’s office, puts it in a faraway data center, and allows it to be used across the internet. Cloud providers manage the underlying hardware and provide it as a service, allowing users to provision their own virtual infrastructure. There are many operational benefits, but at least one unavoidable downside: the increase in latency. This is especially true in this dawning age of distributed enterprises for which there is not just a single office to optimize. Instead, “the office” is now anywhere and everywhere employees happen to be.

Even so, this centralized, cloud-based compute methodology works very well for most enterprise applications, as long as there is no critical sensitivity to delay. But what about use cases that cannot tolerate latency? Think industrial monitoring and control, real-time machine learning, autonomous vehicles, augmented reality, and gaming. If a cloud data center is a few hundred or even thousands of miles away, the physical limitations of sending an optical or electrical pulse through a cable mean there are no options to lower the latency. The answer to this is leveraging a distributed infrastructure model, which has traditionally been used by content delivery networks.
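That latency floor is easy to estimate from first principles. Assuming light in fiber travels at roughly c divided by the refractive index of glass (about 1.47), a minimal back-of-the-envelope sketch:

```python
# Back-of-the-envelope propagation delay through optical fiber.
# Ignores routing, queuing, and serialization delay, so real-world
# figures are higher; these are physical lower bounds only.
C_KM_PER_MS = 299_792.458 / 1000      # speed of light in vacuum, km per millisecond
FIBER_KM_PER_MS = C_KM_PER_MS / 1.47  # ~204 km per millisecond in glass fiber

def round_trip_ms(distance_km):
    """Minimum round-trip time over fiber for a given one-way distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (50, 500, 3000):
    print(f"{km:>5} km one-way: {round_trip_ms(km):5.1f} ms round trip")
```

At 3,000 km the round trip alone approaches 30 ms, while a point of presence 50 km away keeps it under a millisecond, which is why moving compute physically closer is the only way to serve latency-critical workloads.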

As CDNs have brought the internet’s content closer to everyone, CDN providers have positioned themselves in the unique space of owning much of the infrastructure required to bring computing and security closer to users and end devices. With servers close to the topological edge of the network, CDN providers can offer processing power and other “cloud-like” services to end devices with only a few milliseconds latency.

While CDN operators are in the right place at the right time to develop edge platforms, we’ve observed a total of four types of vendors that have been building out relevant—and potentially competing—edge infrastructure. These include traditional CDNs, hyperscale cloud providers, telecommunications companies, and new dedicated edge platform operators, purpose-built for this emerging requirement.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Vendor Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.

