
North Korea’s APT38 hacking group behind bank heists of over $100 million


According to a new report published today by US cyber-security firm FireEye, there is a clear distinction between North Korea's hacking units: two groups specialize in political cyber-espionage, while a third focuses solely on cyber-heists at banks and financial institutions.

For the past four years, ever since the Sony hack of 2014, when the world realized North Korea was a serious player on the cyber-espionage scene, all three groups have been incessantly covered by news media under the umbrella term of Lazarus Group.

But in its report, FireEye's experts argue that a clear distinction should be made between the three groups, and especially between the ones focused on cyber-espionage (TEMP.Hermit and Lazarus Group) and the one focused on financial crime (APT38).


Image: FireEye

The activities of the first two have been tracked and analyzed for a long time, and have been the subject of dozens of reports from both the private security industry and government agencies, but little is known about the third.

Many of the third group's financially-motivated hacking tools have been included in Lazarus Group reports, where they stuck out like a sore thumb alongside malware designed for cyber-espionage.

But when you isolate all these financially-motivated tools and track down the incidents where they've been spotted, you get a clear picture of a completely separate hacking group that appears to operate on its own, with an agenda distinct from most Lazarus Group operations.

This group, according to FireEye, doesn't operate with the quick smash-and-grab strategy typical of day-to-day cyber-crime groups, but with the patience of a nation-state threat actor that has the time and tools to wait for the perfect moment to pull off an attack.

Image: APT38 modus operandi (FireEye)

FireEye said that when it put all these tools and past incidents together, it traced APT38's first signs of activity back to 2014, around the same time that all the Lazarus Group-associated divisions started operating.

But the company doesn't attribute the group's apparent rise to the Sony hack and the release of "The Interview" movie. According to FireEye's experts, the trigger was UN economic sanctions levied against North Korea after a series of nuclear tests carried out in 2013.

Experts believe, and FireEye isn't the only one, with other sources reporting the same thing, that in the face of dwindling state revenues, North Korea turned to its state military hacking divisions for help in bringing in funds from external sources through unorthodox methods.

These methods relied on hacking banks, financial institutions, and cryptocurrency exchanges. Target geography didn't matter, and no region was safe from APT38 hackers, according to FireEye, which reported smaller hacks all over the world, in countries such as Poland, Malaysia, and Vietnam.

Image: APT38 targeting (FireEye)

FireEye’s “APT38: Un-usual Suspects” report details a timeline of past hacks and important milestones in the group’s evolution.

  • February 2014 – Start of first known operation by APT38
  • December 2015 – Attempted heist at TPBank
  • January 2016 – APT38 is engaged in compromises at multiple international banks concurrently
  • February 2016 – Heist at Bangladesh Bank (intrusion via the SWIFT inter-bank messaging system)
  • October 2016 – Reported beginning of APT38 watering hole attacks on government and media sites
  • March 2017 – SWIFT bans North Korean banks under UN sanctions from accessing its network
  • September 2017 – Several Chinese banks restrict financial activities of North Korean individuals and entities
  • October 2017 – Heist at Far Eastern International Bank in Taiwan (ATM cash-out scheme)
  • January 2018 – Attempted heist at Bancomext in Mexico
  • May 2018 – Heist at Banco de Chile

All in all, FireEye believes APT38 tried to steal over $1.1 billion, but made off with roughly $100 million, based on the company’s conservative estimates.

The security firm says that all the bank cyber-heists, successful or not, revealed a complex modus operandi, one that followed patterns previously seen with nation-state attackers, not with regular cyber-criminals.

The main giveaway is their patience and willingness to wait for months, if not years, to pull off a hack, during which time they carried out extensive reconnaissance and surveillance of the compromised target or created target-specific tools.

“APT38 operators put significant effort into understanding their environments and ensuring successful deployment of tools against targeted systems,” FireEye experts wrote in their report. “The group has demonstrated a desire to maintain access to a victim environment for as long as necessary to understand the network layout, necessary permissions, and system technologies to achieve its goals.”

“APT38 also takes steps to make sure they remain undetected while they are conducting their internal reconnaissance,” they added. “On average, we have observed APT38 remain within a victim network approximately 155 days, with the longest time within a compromised system believed to be 678 days (almost two years).”

Image: APT38 bank heist modus operandi (FireEye)

But the group also stood out because it did what very few other financially-motivated groups did: it destroyed evidence when in danger of getting caught, or after a hack, as a diversionary tactic.

In cases where the group believed they left too much forensic data behind, they didn't bother cleaning the logs of each individual computer, but often deployed ransomware or disk-wiping malware instead.

Some argue that this was done on purpose to put investigators on the wrong trail, a valid argument, especially since it almost worked in some cases.

For example, APT38 deployed the Hermes ransomware on the network of Far Eastern International Bank (FEIB) in Taiwan shortly after they withdrew large sums of money from the bank’s ATMs, in an attempt to divert IT teams to data recovery efforts instead of paying attention to ATM monitoring systems.

APT38 also deployed the KillDisk disk-wiping malware on the network of Bancomext after a failed attempt to steal over $110 million from the bank's accounts, and on the network of Banco de Chile after successfully stealing $10 million from its systems.

Initially, these hacks were reported as IT system failures, but through the collective efforts of experts around the world [1, 2, 3] and thanks to clues in the malware's source code, researchers linked these hacks to North Korea's hacking units.

But while the FireEye report is a first step toward separating North Korea's hacking units from one another, it will be a hard thing to pull off, mainly because all of North Korea's hacking infrastructure appears to heavily overlap, with agents sometimes reusing malware and online infrastructure across all sorts of operations.

This problem was more than evident last month when the US Department of Justice charged a North Korean hacker named Park Jin Hyok with every North Korean hack under the sun, ranging from cyber-espionage operations (Sony Pictures hack, WannaCry, Lockheed Martin hack) to financially-motivated hacks (Bangladesh Bank heist).

But while companies like FireEye continue to pull at the threads of North Korea's hacking operations to shed some light on past attacks, the Pyongyang regime doesn't seem interested in reining in APT38, despite some recent positive developments in diplomatic talks.

“We believe APT38’s operations will continue in the future,” FireEye said. “In particular, the number of SWIFT heists that have been ultimately thwarted in recent years coupled with growing awareness for security around the financial messaging system could drive APT38 to employ new tactics to obtain funds especially if North Korea’s access to currency continues to deteriorate.”



Key Criteria for Evaluating Security Information and Event Management Solutions (SIEM)


Security Information and Event Management (SIEM) solutions consolidate multiple security data streams under a single roof. Initially, SIEM supported early detection of cyberattacks and data breaches by collecting and correlating security event logs. Over time, it evolved into sophisticated systems capable of ingesting huge volumes of data from disparate sources, analyzing data in real time, and gathering additional context from threat intelligence feeds and new sources of security-related data. Next-generation SIEM solutions deliver tight integrations with other security products, advanced analytics, and semi-autonomous incident response.
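To make "collecting and correlating security event logs" concrete, here is a minimal sketch of the kind of correlation rule a SIEM engine evaluates. The event schema, threshold, and time window below are assumptions made for the example, not any vendor's actual format:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative normalized events, as a SIEM might ingest them from
# different log sources; the field names are assumptions, not a standard.
events = [
    {"ts": datetime(2021, 5, 1, 9, 0, 5), "src": "10.0.0.7", "type": "auth_failure"},
    {"ts": datetime(2021, 5, 1, 9, 0, 9), "src": "10.0.0.7", "type": "auth_failure"},
    {"ts": datetime(2021, 5, 1, 9, 0, 12), "src": "10.0.0.7", "type": "auth_failure"},
    {"ts": datetime(2021, 5, 1, 9, 0, 15), "src": "10.0.0.7", "type": "auth_success"},
]

def correlate(events, window=timedelta(minutes=5), threshold=3):
    """Flag a source that logs >= threshold failed logins followed by a
    success within the window -- a classic brute-force correlation rule."""
    failures = defaultdict(list)
    alerts = []
    for ev in sorted(events, key=lambda e: e["ts"]):
        # Keep only failures still inside the sliding window for this source.
        recent = [t for t in failures[ev["src"]] if ev["ts"] - t <= window]
        failures[ev["src"]] = recent
        if ev["type"] == "auth_failure":
            failures[ev["src"]].append(ev["ts"])
        elif ev["type"] == "auth_success" and len(recent) >= threshold:
            alerts.append({"src": ev["src"], "ts": ev["ts"], "failures": len(recent)})
    return alerts

print(correlate(events))  # one alert for 10.0.0.7
```

Real SIEM engines apply thousands of such rules across streaming data, but the core pattern, normalize, window, correlate, alert, is the same.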

SIEM solutions can be deployed on-premises, in the cloud, or a mix of the two. Deployment models must be weighed with regard to the environments the SIEM solution will protect. With more and more digital infrastructure and services becoming mission critical to every enterprise, SIEMs must handle higher volumes of data. Vendors and customers are increasingly focused on cloud-based solutions, whether SaaS or cloud-hosted models, for their scalability and flexibility.

The latest developments for SIEM solutions include machine learning capabilities for incident detection, advanced analytics features that include user behavior analytics (UBA), and integrations with other security solutions, such as security orchestration automation and response (SOAR) and endpoint detection and response (EDR) systems. Even though additional capabilities within the SIEM environment are a natural progression, customers are finding it increasingly difficult to deploy, customize, and operate SIEM solutions.

Other improvements include better user experience and lower time-to-value for new deployments. To achieve this, vendors are working on:

  • Streamlining data onboarding
  • Preloading customizable content—use cases, rulesets, and playbooks
  • Standardizing data formats and labels
  • Mapping incident alerts to common frameworks, such as the MITRE ATT&CK framework (illustrated in the sketch below)
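As a rough illustration of that last item, the sketch below tags normalized alerts with ATT&CK context. The technique IDs are real MITRE ATT&CK identifiers, but the rule names and the mapping table are hypothetical:

```python
# Hypothetical mapping from internal alert rule names to ATT&CK context.
# T1110 (Brute Force), T1059.001 (PowerShell), and T1053.005 (Scheduled
# Task) are real ATT&CK technique IDs; the rule names are illustrative.
ATTACK_MAP = {
    "brute_force_login": {"technique": "T1110", "tactic": "Credential Access"},
    "powershell_download": {"technique": "T1059.001", "tactic": "Execution"},
    "new_scheduled_task": {"technique": "T1053.005", "tactic": "Persistence"},
}

def enrich_alert(alert):
    """Attach ATT&CK technique and tactic to an alert when a mapping exists."""
    mapping = ATTACK_MAP.get(alert.get("rule"))
    return {**alert, "attack": mapping} if mapping else alert

print(enrich_alert({"rule": "brute_force_login", "src": "10.0.0.7"}))
# {'rule': 'brute_force_login', 'src': '10.0.0.7',
#  'attack': {'technique': 'T1110', 'tactic': 'Credential Access'}}
```

Mapping alerts onto a shared framework this way is what lets analysts compare detections across tools and vendors.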

Vendors and service providers are also expanding their offerings beyond managed SIEM solutions to à la carte services, such as content development services and threat hunting-as-a-service.

There is no one-size-fits-all SIEM solution. Each organization will have to evaluate its own requirements and resource constraints to find the right solution. Organizations will weigh factors such as deployment models or integrations with existing applications and security solutions. However, the main decision factor for most customers will revolve around usability, affordability, and return on investment. Fortunately, a wide range of solutions available in the market can almost guarantee a good fit for every customer.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.



Key Criteria for Evaluating Secure Service Access


Since the inception of large-scale computing, enterprises, organizations, and service providers have protected their digital assets by securing the perimeter of their on-premises data centers. With the advent of cloud computing, the perimeter has dissolved, but—in most cases—the legacy approach to security has not. Many corporations still manage the expanded enterprise and remote workforce as an extension of the old headquarters office/branch model serviced by LANs and WANs.

Bolting new security products onto their aging networks increased costs and complexity exponentially, while at the same time severely limiting their ability to meet regulatory compliance mandates, scale elastically, or secure the threat surface of the new any place/any user/any device perimeter.

The result? Patchwork security ill-suited to the demands of the post-COVID distributed enterprise.

Converging networking and security, secure service access (SSA) represents a significant shift in the way organizations consume network security, enabling them to replace multiple security vendors with a single, integrated platform offering full interoperability and end-to-end redundancy. Encompassing secure access service edge (SASE), zero-trust network access (ZTNA), and extended detection and response (XDR), SSA shifts the focus of security consumption from being either data center or edge-centric to being ubiquitous, with an emphasis on securing services irrespective of user identity or resources accessed.

This GigaOm Key Criteria report outlines critical criteria and evaluation metrics for selecting an SSA solution. The corresponding GigaOm Radar Report provides an overview of notable SSA vendors and their offerings available today. Together, these reports are designed to help educate decision-makers, making them aware of various approaches and vendors that are meeting the challenges of the distributed enterprise in the post-pandemic era.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.



Key Criteria for Evaluating Edge Platforms


Edge platforms leverage distributed infrastructure to deliver content, computing, and security closer to end devices, offloading networks and improving performance. We define edge platforms as the solutions capable of providing end users with millisecond access to processing power, media files, storage, secure connectivity, and related “cloud-like” services.

The key benefit of edge platforms is bringing websites, applications, media, security, and a multitude of virtual infrastructures and services closer to end devices compared to public or private cloud locations.

The need for content proximity started to become more evident in the early 2000s as the web evolved from a read-only service to a read-write experience, and users worldwide began both consuming and creating content. Today, this is even more important, as live and on-demand video streaming at very high resolutions cannot be sustained from a single central location. Content delivery networks (CDNs) helped host these types of media at the edge, and the associated network optimization methods allowed them to provide these new demanding services.

As we moved into the early 2010s, we experienced the rapid cloudification of traditional infrastructure. Roughly speaking, cloud computing takes a server from a user’s office, puts it in a faraway data center, and allows it to be used across the internet. Cloud providers manage the underlying hardware and provide it as a service, allowing users to provision their own virtual infrastructure. There are many operational benefits, but at least one unavoidable downside: the increase in latency. This is especially true in this dawning age of distributed enterprises for which there is not just a single office to optimize. Instead, “the office” is now anywhere and everywhere employees happen to be.

Even so, this centralized, cloud-based compute methodology works very well for most enterprise applications, as long as there is no critical sensitivity to delay. But what about use cases that cannot tolerate latency? Think industrial monitoring and control, real-time machine learning, autonomous vehicles, augmented reality, and gaming. If a cloud data center is a few hundred or even thousands of miles away, the physical limitations of sending an optical or electrical pulse through a cable mean there are no options to lower the latency. The answer to this is leveraging a distributed infrastructure model, which has traditionally been used by content delivery networks.
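To put rough numbers on that physical limit: light in fiber travels at about two-thirds the speed of light in vacuum, roughly 200,000 km/s, so distance alone sets a floor under round-trip time. A back-of-the-envelope sketch, with illustrative distances:

```python
# Light in optical fiber propagates at roughly 2/3 of c, i.e. ~200,000 km/s,
# or about 200 km per millisecond. The distances below are illustrative.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def min_round_trip_ms(distance_km):
    """Lower bound on round-trip time from propagation delay alone,
    ignoring routing, queuing, and processing overhead."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

for km in (50, 500, 3000):  # nearby edge PoP vs. regional vs. distant cloud region
    print(f"{km:>5} km away -> at least {min_round_trip_ms(km):.1f} ms RTT")
# 50 km -> 0.5 ms, 500 km -> 5.0 ms, 3000 km -> 30.0 ms
```

A data center 3,000 km away can therefore never answer in under ~30 ms, no matter how fast its servers are, which is exactly the gap edge platforms aim to close.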

As CDNs have brought the internet’s content closer to everyone, CDN providers have positioned themselves in the unique space of owning much of the infrastructure required to bring computing and security closer to users and end devices. With servers close to the topological edge of the network, CDN providers can offer processing power and other “cloud-like” services to end devices with only a few milliseconds latency.

While CDN operators are in the right place at the right time to develop edge platforms, we’ve observed a total of four types of vendors that have been building out relevant—and potentially competing—edge infrastructure. These include traditional CDNs, hyperscale cloud providers, telecommunications companies, and new dedicated edge platform operators, purpose-built for this emerging requirement.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Vendor Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.

