A new variant of MegaCortex ransomware is making its way across Europe and the United States, leaving blackmail demands worth millions in its wake.
Accenture iDefense researchers described campaigns making use of MegaCortex v.2 in a blog post on Monday. According to Leo Fernandes, Senior Manager of the Malware Analysis and Countermeasures (MAC) team, the operators behind the ransomware are focusing on corporate targets — and are in it to hit the criminal jackpot.
During recent, targeted attacks, the operators of the C++ malware have focused on infiltrating servers containing corporate resources in order to encrypt them and any connected network hosts.
Malwarebytes believes that Qbot, Emotet, and Rietspoof Trojans may have a hand in distributing the malware. Other security experts have tracked the ransomware through Rietspoof loaders.
Originally, MegaCortex contained a payload protected by a password only made available during a live infection. The researchers say this feature did make reverse-engineering more difficult, but also made widespread distribution a challenge as operators would need to monitor infection and manually finish up once the damage was done.
Now, in the new version of MegaCortex, the malicious code self-executes and the live password requirement has been quashed; instead, the password is now hard-coded.
There is also a range of other changes, which Accenture says can be considered a trade of “some security for ease of use and automation.” These include a switch from manually executed batch files to the automatic termination of antivirus solutions and other PC processes.
In addition, the main payload, which was previously executed via rundll32.exe, is now decrypted and executed directly from memory.
After infection, the malware scans the compromised system and compares running processes against a ‘kill’ list in order to terminate anti-analysis software. A list of drives is then drawn up and files are encrypted, with the .megacortex extension appended. Volume shadow copies are deleted and the ransom message is dropped in the C: directory.
An RSA public key, hardcoded into the malware, is used to encrypt files.
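On the defensive side, the file extension described above is a simple indicator of compromise to sweep for. The sketch below is illustrative only, not from the report; the function name and any note filename are my own.

```python
from pathlib import Path

# Extension appended to encrypted files by MegaCortex, per the report.
RANSOM_EXT = ".megacortex"

def find_indicators(root: str) -> list[str]:
    """Return paths under `root` whose names carry the MegaCortex extension."""
    return sorted(str(p) for p in Path(root).rglob("*" + RANSOM_EXT))
```

A real response workflow would combine this with process kill-list detection and shadow-copy deletion alerts rather than relying on file extensions alone.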
MegaCortex ransom demands have ranged from two to 600 Bitcoins, or roughly $20,000 to $5.8 million. The ransom note says, in part:
“We are working for profit. The core of this criminal business is to give back your valuable data in the original form (for ransom of course). We don’t do charity!”
“With a hard-coded password and the addition of an anti-analysis component, third parties or affiliated actors could, in theory, distribute the ransomware without the need for an actor-supplied password for the installation,” the researchers say. “Indeed, potentially there could be an increase in the number of MegaCortex incidents if the actors decide to start delivering it through email campaigns or dropped as secondary stage by other malware families.”
Key Criteria for Evaluating Security Information and Event Management Solutions (SIEM)
Security Information and Event Management (SIEM) solutions consolidate multiple security data streams under a single roof. Initially, SIEM supported early detection of cyberattacks and data breaches by collecting and correlating security event logs. Over time, it evolved into sophisticated systems capable of ingesting huge volumes of data from disparate sources, analyzing data in real time, and gathering additional context from threat intelligence feeds and new sources of security-related data. Next-generation SIEM solutions deliver tight integrations with other security products, advanced analytics, and semi-autonomous incident response.
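The collection-and-correlation core of a SIEM can be illustrated with a toy rule. The event schema and field names below are assumptions for the sketch; real SIEMs normalize many log formats into a comparable shape before correlation.

```python
from collections import Counter

# Toy normalized log events: (timestamp, source_ip, event_type).
events = [
    ("2024-05-01T10:00:01", "10.0.0.5", "login_failure"),
    ("2024-05-01T10:00:02", "10.0.0.5", "login_failure"),
    ("2024-05-01T10:00:03", "10.0.0.5", "login_failure"),
    ("2024-05-01T10:00:09", "10.0.0.5", "login_success"),
    ("2024-05-01T10:00:15", "10.0.0.7", "login_failure"),
]

def brute_force_candidates(events, threshold=3):
    """Flag source IPs with >= threshold failed logins plus a later success,
    a classic correlation rule for credential brute-forcing."""
    failures = Counter(ip for _, ip, etype in events if etype == "login_failure")
    successes = {ip for _, ip, etype in events if etype == "login_success"}
    return sorted(ip for ip, n in failures.items()
                  if n >= threshold and ip in successes)

print(brute_force_candidates(events))  # → ['10.0.0.5']
```

Next-generation platforms layer analytics and threat intelligence on top of rules like this one, but the underlying pattern — correlate events across sources, then alert — is the same.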
SIEM solutions can be deployed on-premises, in the cloud, or a mix of the two. Deployment models must be weighed with regard to the environments the SIEM solution will protect. With more and more digital infrastructure and services becoming mission critical to every enterprise, SIEMs must handle higher volumes of data. Vendors and customers are increasingly focused on cloud-based solutions, whether SaaS or cloud-hosted models, for their scalability and flexibility.
The latest developments for SIEM solutions include machine learning capabilities for incident detection, advanced analytics features such as user behavior analytics (UBA), and integrations with other security solutions, such as security orchestration, automation, and response (SOAR) and endpoint detection and response (EDR) systems. Although additional capabilities within the SIEM environment are a natural progression, customers are finding these solutions increasingly difficult to deploy, customize, and operate.
Other improvements include better user experience and lower time-to-value for new deployments. To achieve this, vendors are working on:
- Streamlining data onboarding
- Preloading customizable content—use cases, rulesets, and playbooks
- Standardizing data formats and labels
- Mapping incident alerts to common frameworks, such as the MITRE ATT&CK framework
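The last item on the list, framework mapping, often amounts to a lookup from an internal alert taxonomy to MITRE ATT&CK technique IDs. The alert names below are hypothetical; the technique IDs are real ATT&CK entries.

```python
# Illustrative mapping from internal alert names to MITRE ATT&CK technique IDs.
ALERT_TO_ATTACK = {
    "repeated_login_failures": "T1110",  # Brute Force
    "shadow_copy_deletion":    "T1490",  # Inhibit System Recovery
    "mass_file_encryption":    "T1486",  # Data Encrypted for Impact
}

def tag_alert(alert_name: str) -> str:
    """Return the ATT&CK technique ID for an alert, or 'unmapped'."""
    return ALERT_TO_ATTACK.get(alert_name, "unmapped")
```

Shipping such mappings as preloaded, customizable content is one way vendors shorten time-to-value for new deployments.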
Vendors and service providers are also expanding their offerings beyond managed SIEM solutions to à la carte services, such as content development services and threat hunting-as-a-service.
There is no one-size-fits-all SIEM solution. Each organization will have to evaluate its own requirements and resource constraints to find the right solution. Organizations will weigh factors such as deployment models or integrations with existing applications and security solutions. However, the main decision factor for most customers will revolve around usability, affordability, and return on investment. Fortunately, a wide range of solutions available in the market can almost guarantee a good fit for every customer.
How to Read this Report
Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.
Key Criteria for Evaluating Secure Service Access
Since the inception of large-scale computing, enterprises, organizations, and service providers have protected their digital assets by securing the perimeter of their on-premises data centers. With the advent of cloud computing, the perimeter has dissolved, but—in most cases—the legacy approach to security has not. Many corporations still manage the expanded enterprise and remote workforce as an extension of the old headquarters office/branch model serviced by LANs and WANs.
Bolting new security products onto their aging networks increased costs and complexity exponentially, while at the same time severely limiting their ability to meet regulatory compliance mandates, scale elastically, or secure the threat surface of the new any place/any user/any device perimeter.
The result? Patchwork security ill-suited to the demands of the post-COVID distributed enterprise.
Converging networking and security, secure service access (SSA) represents a significant shift in the way organizations consume network security, enabling them to replace multiple security vendors with a single, integrated platform offering full interoperability and end-to-end redundancy. Encompassing secure access service edge (SASE), zero-trust network access (ZTNA), and extended detection and response (XDR), SSA shifts the focus of security consumption from being either data center or edge-centric to being ubiquitous, with an emphasis on securing services irrespective of user identity or resources accessed.
This GigaOm Key Criteria report outlines critical criteria and evaluation metrics for selecting an SSA solution. The corresponding GigaOm Radar Report provides an overview of notable SSA vendors and their offerings available today. Together, these reports are designed to help educate decision-makers, making them aware of various approaches and vendors that are meeting the challenges of the distributed enterprise in the post-pandemic era.
Key Criteria for Evaluating Edge Platforms
Edge platforms leverage distributed infrastructure to deliver content, computing, and security closer to end devices, offloading networks and improving performance. We define edge platforms as the solutions capable of providing end users with millisecond access to processing power, media files, storage, secure connectivity, and related “cloud-like” services.
The key benefit of edge platforms is bringing websites, applications, media, security, and a multitude of virtual infrastructures and services closer to end devices compared to public or private cloud locations.
The need for content proximity started to become more evident in the early 2000s as the web evolved from a read-only service to a read-write experience, and users worldwide began both consuming and creating content. Today, this is even more important, as live and on-demand video streaming at very high resolutions cannot be sustained from a single central location. Content delivery networks (CDNs) helped host these types of media at the edge, and the associated network optimization methods allowed them to provide these new demanding services.
As we moved into the early 2010s, we experienced the rapid cloudification of traditional infrastructure. Roughly speaking, cloud computing takes a server from a user’s office, puts it in a faraway data center, and allows it to be used across the internet. Cloud providers manage the underlying hardware and provide it as a service, allowing users to provision their own virtual infrastructure. There are many operational benefits, but at least one unavoidable downside: the increase in latency. This is especially true in this dawning age of distributed enterprises for which there is not just a single office to optimize. Instead, “the office” is now anywhere and everywhere employees happen to be.
Even so, this centralized, cloud-based compute methodology works very well for most enterprise applications, as long as there is no critical sensitivity to delay. But what about use cases that cannot tolerate latency? Think industrial monitoring and control, real-time machine learning, autonomous vehicles, augmented reality, and gaming. If a cloud data center is a few hundred or even thousands of miles away, the physical limitations of sending an optical or electrical pulse through a cable mean there are no options to lower the latency. The answer to this is leveraging a distributed infrastructure model, which has traditionally been used by content delivery networks.
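The physical limit mentioned above is easy to quantify. The back-of-the-envelope sketch below ignores routing, queuing, and processing delays, and assumes light in fiber travels at roughly two-thirds of c, about 200 km per millisecond.

```python
# Minimum propagation latency from distance alone, assuming signal speed
# in fiber of ~200,000 km/s (about two-thirds the speed of light).
SPEED_IN_FIBER_KM_PER_MS = 200.0

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time to a data center `distance_km` away."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

print(min_round_trip_ms(1500))  # → 15.0 (ms RTT to a 1,500 km-distant site)
```

No amount of engineering at the remote data center can get under this bound, which is why latency-sensitive workloads push compute toward the edge.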
As CDNs have brought the internet’s content closer to everyone, CDN providers have positioned themselves in the unique space of owning much of the infrastructure required to bring computing and security closer to users and end devices. With servers close to the topological edge of the network, CDN providers can offer processing power and other “cloud-like” services to end devices with only a few milliseconds latency.
While CDN operators are in the right place at the right time to develop edge platforms, we’ve observed four types of vendors building out relevant—and potentially competing—edge infrastructure: traditional CDNs, hyperscale cloud providers, telecommunications companies, and new dedicated edge platform operators purpose-built for this emerging requirement.
How to Read this Report
Vendor Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.