
Security

North Korea’s APT38 hacking group behind bank heists of over $100 million



According to a new report published today by US cyber-security firm FireEye, there is a clear and visible distinction between North Korea’s hacking units, with two groups specialized in political cyber-espionage and a third focused solely on cyber-heists against banks and financial institutions.

For the past four years, ever since the Sony hack of 2014, when the world realized North Korea was a serious player on the cyber-espionage scene, all three groups have been incessantly covered by news media under the umbrella term of Lazarus Group.

But in a report released today, FireEye’s experts argue that a clear distinction should be drawn between the three groups, and especially between the ones focused on cyber-espionage (TEMP.Hermit and Lazarus Group) and the one focused on financial crime (APT38).


Image: FireEye

The activities of the first two have been tracked and analyzed for a long time, and have been the subject of dozens of reports from both the private security industry and government agencies, but little is known about the third.

Many of the third group’s financially-motivated hacking tools have often been included in Lazarus Group reports, where they stuck out like a sore thumb when viewed alongside malware designed for cyber-espionage.

But when you isolate all these financially-motivated tools and track down the incidents where they have been spotted, you get a clear picture of a completely separate hacking group that seems to operate on its own, with an agenda distinct from most Lazarus Group operations.

This group, according to FireEye, doesn’t operate with the quick smash-and-grab strategy typical of day-to-day cyber-crime groups, but with the patience of a nation-state threat actor that has the time and tools to wait for the perfect moment to pull off an attack.

Image: APT38 modus operandi. Source: FireEye

FireEye said that when it put all these tools and past incidents together, it tracked down APT38’s first signs of activity going back to 2014, about the same time that all the Lazarus Group-associated divisions started operating.

But the company doesn’t attribute the group’s apparent rise to the Sony hack or the release of “The Interview” movie. According to FireEye’s experts, the trigger was the UN economic sanctions levied against North Korea after a series of nuclear tests carried out in 2013.

Experts believe that, in the face of dwindling state revenues, North Korea turned to its state military hacking divisions for help in bringing in funds from external sources through unorthodox methods. FireEye isn’t the only one to say so; other sources have reported the same thing.

These methods relied on hacking banks, financial institutions, and cryptocurrency exchanges. Target geography didn’t matter, and no area was safe from APT38 hackers, according to FireEye, which reported smaller hacks all over the world, in countries such as Poland, Malaysia, Vietnam, and others.

Image: APT38 targeting. Source: FireEye

FireEye’s “APT38: Un-usual Suspects” report details a timeline of past hacks and important milestones in the group’s evolution.

  • February 2014 – Start of first known operation by APT38
  • December 2015 – Attempted heist at TPBank
  • January 2016 – APT38 is engaged in compromises at multiple international banks concurrently
  • February 2016 – Heist at Bangladesh Bank (intrusion via SWIFT inter-banking system)
  • October 2016 – Reported beginning of APT38 watering-hole attacks against government and media sites
  • March 2017 – SWIFT bans all North Korean banks under UN sanctions from access
  • September 2017 – Several Chinese banks restrict financial activities of North Korean individuals and entities
  • October 2017 – Heist at Far Eastern International Bank in Taiwan (ATM cash-out scheme)
  • January 2018 – Attempted heist at Bancomext in Mexico
  • May 2018 – Heist at Banco de Chile

All in all, FireEye believes APT38 tried to steal over $1.1 billion, but made off with roughly $100 million, based on the company’s conservative estimates.

The security firm says that all the bank cyber-heists, successful or not, revealed a complex modus operandi, one that followed patterns previously seen with nation-state attackers rather than regular cyber-criminals.

The main giveaway is their patience and willingness to wait for months, if not years, to pull off a hack, during which time they carried out extensive reconnaissance and surveillance of the compromised target or they created target-specific tools.

“APT38 operators put significant effort into understanding their environments and ensuring successful deployment of tools against targeted systems,” FireEye experts wrote in their report. “The group has demonstrated a desire to maintain access to a victim environment for as long as necessary to understand the network layout, necessary permissions, and system technologies to achieve its goals.”

“APT38 also takes steps to make sure they remain undetected while they are conducting their internal reconnaissance,” they added. “On average, we have observed APT38 remain within a victim network approximately 155 days, with the longest time within a compromised system believed to be 678 days (almost two years).”

Image: APT38 bank heist modus operandi. Source: FireEye

But the group also stood out because it did what very few other financially-motivated groups do: it destroyed evidence when in danger of getting caught, or after a hack, as a diversionary tactic.

In cases where the group believed it had left too much forensic data behind, it didn’t bother cleaning the logs on each individual computer, but often deployed ransomware or disk-wiping malware instead.

Some argue that this was done on purpose to put investigators on the wrong trail, which is a valid argument, especially since it almost worked in some cases.

For example, APT38 deployed the Hermes ransomware on the network of Far Eastern International Bank (FEIB) in Taiwan shortly after they withdrew large sums of money from the bank’s ATMs, in an attempt to divert IT teams to data recovery efforts instead of paying attention to ATM monitoring systems.

APT38 also deployed the KillDisk disk-wiping malware on the network of Bancomext after a failed attempt of stealing over $110 million from the bank’s accounts, and also on the network of Banco de Chile after APT38 successfully stole $10 million from its systems.

Initially, these hacks were reported as IT system failures, but through the collective efforts of researchers around the world [1, 2, 3], and thanks to clues in the malware’s source code, the incidents were eventually linked to North Korea’s hacking units.

But while the FireEye report is a first step toward separating North Korea’s hacking units from one another, doing so will be hard, mainly because all of North Korea’s hacking infrastructure appears to overlap heavily, with agents sometimes reusing malware and online infrastructure across all sorts of operations.

This problem was more than evident last month, when the US Department of Justice charged a North Korean hacker named Park Jin Hyok in connection with seemingly every North Korean hack under the sun, from cyber-espionage operations (the Sony Pictures hack, WannaCry, the Lockheed Martin hack) to financially-motivated hacks (the Bangladesh Bank heist).

But while companies like FireEye continue to pull on the string of North Korean hacking efforts in an effort to shed some light on past attacks, the Pyongyang regime doesn’t seem to be interested in reining in APT38, despite some recent positive developments in diplomatic talks.

“We believe APT38’s operations will continue in the future,” FireEye said. “In particular, the number of SWIFT heists that have been ultimately thwarted in recent years coupled with growing awareness for security around the financial messaging system could drive APT38 to employ new tactics to obtain funds especially if North Korea’s access to currency continues to deteriorate.”



Security

Achieve more with GigaOm


As we have grown substantially over the past two years, we are often asked who (even) GigaOm is, what the company does, how it differentiates, and so on. These are fair questions: many people still remember what we can call GigaOm 1.0, that fine media company born of the blogging wave.

We’ve been through the GigaOm 2.0 “boutique analyst firm” phase, before deciding we wanted to achieve more. That decision put us on a journey to where we are today, ten times the size in terms of headcount and still growing, and covering as many technology categories as the biggest analyst firms. 

Fuelling our growth has been a series of interconnected decisions. First, we asked technology decision-makers —CIOs, CTOs, VPs of Engineering and Operations, and so on—what they needed, and what was missing: unanimously, they said they needed strategic technical information based on practical experience, that is, not just theory. Industry analysts, it has been said, can be like music critics who have never played in an orchestra. Sure, there’s a place for that, but it leaves a gap for practitioner-led insights. 

Second, and building on this, we went through a test-and-learn phase to try various report models. Enrico Signoretti, now our VP of Product, spearheaded the creation of the Key Criteria and Radar document pair, based on his experience in evaluating solutions for enterprise clients. As we developed this product set in collaboration with end-user strategists, we doubled down on the Key Criteria report as a how-to guide for writing a Request For Proposals. 

Doing this led to the third strand, expanding this thinking to the enterprise decision-making cycle. Technology decision-makers don’t wake up one morning and say, “I think I need some Object Storage.”

Rather, they will be faced with a challenge, a situation, or some other scenario: perhaps existing storage products are not scaling sufficiently, applications are being rationalized, or a solution has reached end of life. These scenarios dictate a need (see https://gigaom.com/end-user-products/btis/): often, the decision maker will not only need to define a response but will also then have to justify the spending.

This reality dictates the first product in the GigaOm portfolio, the GigaBrief, which is (essentially) a how-to guide for writing a business case. Once the decision maker has confirmed the budget, they can get on with writing an RFP (cf the Key Criteria and Radar), and then consider running a proof of concept (PoC).

We have a how-to guide for these as well, based on our Benchmarks, field tests, and Business Technology Impact (BTI) reports. We know that, alongside thought leadership, decision-makers need hard numbers for costs and benefits, so we double down on these. 

For end-user organizations, our primary audience, we have therefore created a set of tools to make decisions and unblock deployments: our subscribers come to us for clarity and practitioner-led advice, which helps them work both faster and smarter and achieve their goals more effectively. Our research is high-impact by design, which is why we have an expanding set of partner organizations using it to enable their clients. 

Specifically, learning companies such as Pluralsight and A Cloud Guru use GigaOm reports to help subscribers set direction and lock down the solutions they need to deliver. By its nature, our how-to approach to report writing has created a set of strategic training tools, which directly feed more specific technical training.

Meanwhile, channel companies such as Ingram Micro and Transformation Continuum use our research to help their clients lock down the solutions they need, together with a practitioner-led starting point for supporting frameworks, architectures, and structures. And we work together with media partners like The Register and The Channel Company to support their audiences with research and insights. 

Technology vendors, too, benefit from end-user decision makers that are better equipped to make decisions. Rather than generic market making or long-listing potential vendors, our scenario-led materials directly impact buying decisions, taking procurement from a shortlist to a conclusion. Sales teams at systems, service, and software companies tell us how they use our reports when discussing options with prospects, not to evangelize but to explore practicalities and help reach a conclusion.

All these reasons and more enable us to say with confidence how end-user businesses, learning, channel and media companies, and indeed technology vendors are achieving more with GigaOm research. In a complex and constantly evolving landscape, our practitioner- and scenario-led approach brings specificity and clarity, helping organizations reach further, work faster and deliver more. 

Our driving force is the value we bring; at the same time, we maintain a connection with our media heritage, which enables us to scale beyond traditional analyst models. We also continue to learn, reflect, and change — our open and transparent model welcomes feedback from all stakeholders so that we can drive improvements in our products, our approach, and our outreach.

This is to say, if you have any thoughts, questions, raves, or rants, don’t hesitate to get in touch with me directly. My virtual door, and my calendar, are always open. 



Security

Pragmatic view of Zero Trust


Traditionally we have taken the approach that we trust everything in the network, everything in the enterprise, and put our security at the edge of that boundary. Pass all of our checks and you are in the “trusted” group. That worked well when the opposition was not sophisticated, most end user workstations were desktops, the number of remote users was very small, and we had all our servers in a series of data centers that we controlled completely, or in part. We were comfortable with our place in the world, and the things we built. Of course, we were also asked to do more with less and this security posture was simple and less costly than the alternative.

Starting around the time of Stuxnet, this began to change. Security went from a poorly understood, accepted cost discussed in back rooms to a topic raised with interest in board rooms and at shareholder meetings. Overnight, the executive level went from being able to ignore cybersecurity to having to be knowledgeable about the company’s cyber posture. Attacks increased, and the major news organizations started reporting on cyber incidents. Legislation changed to reflect this new world, and more is coming. How do we handle this new world and all of its requirements?

Zero Trust is that change in security. Zero Trust is a fundamental change in cybersecurity strategy. Whereas before we focused on boundary control and built all our security around the idea of inside and outside, now we need to treat every component and every person as a potential Trojan horse. It may look legitimate enough to get through the boundary, but in reality it could be hosting a threat actor waiting to attack. Even better, your applications and infrastructure could be a time bomb waiting to blow, where the code used in those tools is exploited in a “supply chain” attack and, through no fault of the organization, leaves it vulnerable. Zero Trust says: “You are trusted only to take one action, one time, in one place, and the moment that changes you are no longer trusted and must be validated again, regardless of your location, application, userID, etc.” Zero Trust is exactly what it says: “I do not trust anything, so I validate all the things.”

That is a neat theory, but what does it mean in practice? We need to restrict users to the absolute minimum required access: to networks governed by a tight series of ACLs, to applications that can only communicate with the things they must communicate with, and to devices segmented to the point that they think they are alone on private networks, while remaining dynamic enough to have their sphere of trust change as the organization evolves, and still allowing those devices to be managed. The overall goal is to reduce the “blast radius” any compromise would allow in the organization, since a cyber attack is not a question of “if” but “when”.
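To make that concrete, below is a minimal sketch, in Python, of the “validate every action, deny by default” idea. The request fields, policy entries, user names, and network segments are illustrative assumptions for the example, not drawn from any particular product; a real deployment would pull identity, device posture, and context from your identity provider, device management, and network tooling.

    # Minimal sketch of per-action, default-deny access evaluation.
    # All names and policy fields below are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_id: str
        device_compliant: bool      # e.g. patched, encrypted, EDR running
        mfa_verified: bool
        source_segment: str         # network segment the request comes from
        resource: str               # application or service being accessed
        action: str                 # the single action being attempted

    # Explicit allow-list: anything not listed here is denied by default.
    POLICY = {
        ("alice", "payments-api", "read"): {"allowed_segments": {"corp-vlan-10"}},
        ("bob", "hr-portal", "submit-timesheet"): {"allowed_segments": {"corp-vlan-20"}},
    }

    def evaluate(request: AccessRequest) -> bool:
        """Return True only if every check passes; the default is deny."""
        rule = POLICY.get((request.user_id, request.resource, request.action))
        if rule is None:
            return False                     # no explicit grant, no access
        if not (request.device_compliant and request.mfa_verified):
            return False                     # identity alone is not enough
        if request.source_segment not in rule["allowed_segments"]:
            return False                     # right user, wrong place
        return True                          # trusted for this one action, right now

    if __name__ == "__main__":
        req = AccessRequest("alice", True, True, "corp-vlan-10", "payments-api", "read")
        print("allow" if evaluate(req) else "deny")   # allow
        req.source_segment = "guest-wifi"
        print("allow" if evaluate(req) else "deny")   # deny

The point is the shape of the decision: nothing is granted implicitly, and the same checks run every time, for every action, no matter where the request comes from.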

So if my philosophy changes from “I know that and trust it” to “I cannot believe that is what it says it is,” then what can I do? Especially when I consider that I did not get 5x the budget to deal with 5x the complexity. I look to the market. Good news! Every single security vendor is now telling me how they solve Zero Trust with their tool, platform, service, or new shiny thing. So I ask questions. It seems to me they only really solve it according to their marketing. Why? Because Zero Trust is hard. It is very hard. It is complex, and it requires change across the organization: not just tools, but the full trifecta of people, process, and technology, and not restricted to my technology team but spanning the entire organization, not one region but globally. It is a lot.

All is not lost though, because Zero Trust isn’t a fixed outcome, it is a philosophy. It is not a tool, or an audit, or a process. I cannot buy it, nor can I certify it (no matter what people selling things will say). So that shows hope. Additionally, I always remember the truism; “Perfection is the enemy of Progress”, and I realize I can move the needle.

So I take a pragmatic view of security, through the lens of Zero Trust. I don’t aim to do everything all at once. Instead, I look at what I am able to do and where I have existing skills. How is my organization designed? Am I a hub and spoke, with a core organization providing shared services to largely independent business units? Maybe I have a mesh, where the business units were organically integrated and staffed as we went through years of M&A. Maybe we are fully integrated as an organization, with one standard for everything. Maybe it is none of those.

I start by considering my capabilities and mapping my current state. Where is my organization on the NIST security framework model? Where do I think I could get with my current staff? Who do I have in my partner organization that can help me? Once I know where I am I then fork my focus.

One fork is the low-hanging fruit that can be resolved in the short term. Can I add some firewall rules to better restrict VLANs that do not need to communicate? Can I audit user accounts and make sure we are following best practices for organization and permission assignment? Does MFA exist, and can I expand its use, or implement it for some critical systems?
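As an illustration of the account-audit quick win, here is a hedged sketch in Python. It assumes account data can be exported to a CSV with columns named username, last_login, mfa_enabled, groups, and account_type; those column names, the 90-day staleness threshold, and the privileged group names are assumptions for the example and not tied to any particular directory product.

    # Sketch of a user-account audit over a CSV export (column names are assumed).
    import csv
    from datetime import datetime, timedelta

    STALE_AFTER = timedelta(days=90)
    PRIVILEGED_GROUPS = {"Domain Admins", "root", "billing-admins"}  # example names

    def audit(path: str) -> list[str]:
        """Flag stale accounts, missing MFA, and privileged group creep."""
        now = datetime.utcnow()
        findings = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                user = row["username"]
                last_login = datetime.fromisoformat(row["last_login"])
                if now - last_login > STALE_AFTER:
                    findings.append(f"{user}: no login in {STALE_AFTER.days}+ days, consider disabling")
                if row["mfa_enabled"].strip().lower() != "true":
                    findings.append(f"{user}: MFA not enabled")
                groups = set(row["groups"].split(";"))
                if groups & PRIVILEGED_GROUPS and row["account_type"] != "admin":
                    findings.append(f"{user}: standard account holds privileged group membership")
        return findings

    if __name__ == "__main__":
        for finding in audit("accounts_export.csv"):
            print(finding)

Even a report this simple gives the team a concrete, repeatable list to work through while the longer-term plan takes shape.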

My second fork is to develop an ecosystem of talent, organized around a security-focused operating model, otherwise known as my long-term plan. DevOps becomes SecDevOps, where security is integrated and comes first. My partners become more integrated, and I look for, and build relationships with, new partners that fill my gaps. My teams are reorganized to support security by design AND by practice. And I develop a training plan that pairs what we can do today (partner lunch and learns) with long-term strategy (which may mean upskilling my people with certifications).

This is also the phase where we begin a tools rationalization project. Which of my existing tools do not perform as needed in the new Zero Trust world? These will likely need to be replaced in the near term. Which tools work well enough for now but should be replaced when their contracts end? And which tools will we retain?

Finally, where do we see the big, hard rocks being placed in our way? It is a given that our networks will need some redesign, and will need to be designed with automation in mind, because the rules, ACLs, and VLANs will be far more complex than before, and changes will happen at a far faster pace. Automation is the only way this will work. The best part is that modern automation is self-documenting.
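As a small example of what self-documenting automation can look like, here is a sketch in Python that renders deny-by-default ACL entries from a declarative segment policy. The segment names, ports, and rendered rule syntax are illustrative assumptions rather than any vendor’s exact format; the idea is that the policy table itself doubles as the documentation, and the generator keeps the deployed rules in lockstep with it.

    # Sketch: render deny-by-default ACL entries from a declarative policy.
    # Segment names, ports, and the output syntax are illustrative assumptions.
    POLICY = {
        # (source segment, destination segment): allowed TCP ports
        ("web-vlan", "app-vlan"): [8443],
        ("app-vlan", "db-vlan"): [5432],
        ("mgmt-vlan", "app-vlan"): [22],
    }

    def render_acl(policy: dict) -> list[str]:
        """Turn the policy table into ordered permit rules plus a default deny."""
        rules = []
        for (src, dst), ports in sorted(policy.items()):
            for port in ports:
                rules.append(f"permit tcp {src} {dst} eq {port}  ! {src} -> {dst}:{port}")
        rules.append("deny ip any any log  ! default deny; everything else is logged")
        return rules

    if __name__ == "__main__":
        print("\n".join(render_acl(POLICY)))

Changing a flow means changing one line in the policy table and re-running the generator, which is what makes frequent, complex changes manageable.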

The wonderful thing about being pragmatic is we get to make positive change, have a long term goal in mind that we can all align on, focus on what we can change, while developing for the future. All wrapped in a communications layer for executive leadership, and an evolving strategy for the board. Eating the elephant one bite at a time.



Security

Retrospective thoughts on KubeCon Europe 2022


I’m not going to lie. As I sit on a plane flying away from Valencia, I confess to having been taken aback by the scale of KubeCon Europe this year. In my defence, I wasn’t alone: the volume of attendees appeared to take conference organisers and exhibitors by surprise, illustrated by the notable lack of water, (I was told) t-shirts, and (at various points) taxis.

Keynotes were filled to capacity, and there was a genuine buzz from participants, who seemed to fall into two camps: the young and cool, and the more mature and soberly dressed.

My time was largely spent in one-on-one meetings, analyst/press conferences, and walking the stands, so I can’t comment on the engineering sessions. Across the piece, however, there was a genuine sense of Kubernetes now being about the how, rather than the whether. For one reason or another, companies have decided they want to gain the benefits of building and deploying distributed, container-based applications.

Strangely enough, this wasn’t being seen as some magical sword that can slay the dragons of legacy systems and open the way to digital transformation; the kool-aid was as absent as the water. Ultimately, enterprises have accepted that, from an architectural standpoint and for applications in general, the Kubernetes model is as good as any available right now: a non-proprietary, well-supported open standard that they can get behind.

Virtualisation-based options and platform stacks are too heavyweight; serverless architectures are more applicable to specific use cases. So, if you want to build an application and you want it to be future-safe, the Kubernetes target is the one to aim for.

Whether to adopt Kubernetes might be a done deal, but how to adopt certainly is not. The challenge is not with Kubernetes itself, but everything that needs to go around it to make resulting applications enterprise-ready.

For example, applications need to operate in compliance environments; data needs to be managed, protected, and served into an environment that doesn’t care too much about state; integration tools are required for external and legacy systems; development pipelines need to be in place, robust and value-focused; IT operations need a clear view of what’s running and where, as well as a bill of materials and the health of individual clusters; and disaster recovery is a must.
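As a small illustration of the glue work adopters often end up doing themselves, here is a hedged sketch of the “what’s running, and is it healthy” view, using the official Kubernetes Python client. It assumes the kubernetes package is installed and a kubeconfig with read access is available; the choice of deployments and node readiness as the inventory, and the output format, are assumptions for the example rather than any standard report.

    # Sketch of a basic cluster inventory: workloads, images, and node readiness.
    # Assumes the official Python client and a kubeconfig with read access.
    from kubernetes import client, config

    def main() -> None:
        config.load_kube_config()            # or config.load_incluster_config()
        apps = client.AppsV1Api()
        core = client.CoreV1Api()

        print("== Deployments (a rough bill of materials) ==")
        for d in apps.list_deployment_for_all_namespaces().items:
            images = [c.image for c in d.spec.template.spec.containers]
            ready = d.status.ready_replicas or 0
            print(f"{d.metadata.namespace}/{d.metadata.name}: "
                  f"{ready}/{d.spec.replicas} ready, images={images}")

        print("== Node health ==")
        for node in core.list_node().items:
            ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
            print(f"{node.metadata.name}: Ready={ready}")

    if __name__ == "__main__":
        main()

None of this is hard, but it is exactly the kind of undifferentiated work that higher-order tooling should eventually absorb.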

Kubernetes doesn’t do these things itself, opening the door to an ecosystem of solution vendors and (often CNCF-backed) open source projects. I could drill into these areas (service mesh, GitOps, orchestration, observability, and backup), but the broader point is that they are all evolving and coalescing around the need. As they increase in capability, barriers to adoption reduce and the number of potential use cases grows.

All of which puts the industry at an interesting juncture. It’s not that tooling isn’t ready: organisations are already successfully deploying applications based on Kubernetes. In many cases, however, they are doing more work than they need to: developers need insider knowledge of target environments, interfaces need to be integrated rather than consumed through third-party APIs, and higher-order management tooling (such as AIOps) has to be custom-deployed rather than recognising the norms of Kubernetes operations.

Solutions do exist, but they tend to come from relatively new vendors that are feature rather than platform players, meaning that end-user organisations have to choose their partners wisely, then build and maintain development and management platforms themselves rather than using pre-integrated tools from a single vendor.

None of this is a problem per se, but it does create overheads for adopters, even if they gain earlier benefits from adopting the Kubernetes model. The value of first-mover advantage has to be weighed against that of investing time and effort in the current state of tooling: as a travel company once told me, “we want to be the world’s best travel site, not the world’s best platform engineers.”

So, Kubernetes may be inevitable, but equally, it will become simpler, enabling organisations to apply the architecture to an increasingly broad set of scenarios. For organisations yet to make the step towards Kubernetes, now may still be a good time to run a proof of concept, though in some ways that ship has sailed; perhaps focus the PoC on what it means for working practices and structures, rather than on determining whether the concepts work at all.

Meanwhile, and perhaps most importantly, now is a very good moment for organisations to look at which scenarios Kubernetes works best for “out of the box”, working with providers and reviewing architectural patterns to deliver proven results against specific, high-value needs. These are likely to vary by industry and by domain (I could dig into this, but did I mention that I’m sitting on a plane?).

Jon Collins from Kubecon 2022

Kubernetes might be a done deal, but that doesn’t mean it should be adopted wholesale before some of the peripheral detail is ironed out.

