

Cybersecurity: The key lessons of the Triton malware cyberattack you need to learn


The malware has been designed to target industrial systems and critical infrastructure.

The Triton malware attack was far from the first time hackers had targeted the networks of an industrial facility, but it was the first time malware designed to attack safety systems had been seen in the wild.

The malware was designed to manipulate Schneider Electric’s Triconex Safety Instrumented System (SIS) controllers – emergency shutdown systems – and was uncovered on the network at a critical infrastructure operator in the Middle East.

The malware campaign was extremely stealthy and was only uncovered because the attackers made a mistake and triggered the safety system, shutting down the plant. The outcome could’ve been much worse.

“We can speculate that their mission is of some physical consequence. They wanted to either stop production at this facility, stop things from working or potentially cause physical harm,” says Dan Caban, incident response manager at FireEye’s Mandiant.

SEE: A winning strategy for cybersecurity (ZDNet special report) | Download the report as a PDF (TechRepublic) 

Speaking during a session on Triton at the National Cyber Security Centre’s CYBERUK 19 conference, Caban argued that it was fortunate the malware was uncovered, alerting the world to dangerous cyberattacks that can alter or damage physical systems.

“We were very lucky that this accident happened, it opened the door for people to start thinking about this physical consequence which may have cybersecurity origins – that’s how this investigation kicked off and now so much has come to public light,” he says.

Following the initial compromise, the attackers used techniques such as credential harvesting to move across the network and reach the SIS controllers.

However, Triton was only able to reach its goal because of some lax attitudes to security throughout the facility: the safety controllers should have been disconnected from the network but were connected to internet-facing operational systems, allowing attackers to gain access.

Other failures — like a key being left inside a machine — provided attackers with access they should never have gained without physically being inside the facility.

While the malware has the potential to be highly damaging to valves, switches and sensors in an industrial environment, the threat can be countered by implementing some relatively simple cybersecurity techniques that make movement between systems almost impossible.

“Network segregation can help you avoid this happening. You should be separating them logically, but also based on criticality and by following industry best practice and industry standards,” Caban explains. “You should also consider directional gateways so it’s not possible to move certain ways.”
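As a rough sketch of what logical, criticality-based segregation and directional gateways imply in practice, the Python snippet below models network zones with an explicit allow-list of one-way flows. The zone names and rules are hypothetical, not taken from the affected facility.

```python
# Illustrative only: a toy allow-list of one-way flows between network zones,
# standing in for the segmentation and directional gateways described above.
# Zone names and rules are hypothetical.
ALLOWED_FLOWS = {
    ("corporate_it", "dmz"),   # IT can publish to the DMZ
    ("ot_network", "dmz"),     # OT can push historian data outward
    # Note: nothing is allowed to initiate a connection into "safety_sis".
}

def is_flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Permit a flow only if the (source, destination) pair is explicitly listed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

if __name__ == "__main__":
    for src, dst in [("corporate_it", "dmz"),
                     ("dmz", "ot_network"),
                     ("ot_network", "safety_sis")]:
        verdict = "ALLOW" if is_flow_permitted(src, dst) else "DENY"
        print(f"{src} -> {dst}: {verdict}")
```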

Organisations can also take a step towards this by ensuring there’s proper management around cybersecurity and that there’s plenty of information around systems for staff of all levels to understand what’s going on – and what to do if something goes wrong.

“In a cyber context, it’s absolutely essential that you have governance; leadership from the very top level. Without proper governance in your organisation, you’re probably setting up for failure,” says Victor Lough, head of UK business at Schneider Electric.

“For cybersecurity, you must consider the physical safety because you’re considering kinetic systems. And on the flip-side of that, physical safety must always consider cybersecurity, so they’re opposite sides of the same coin – without security we have no safety,” he says.

There was once a time when the security of cyber systems and the security of physical systems might have been able to be considered separately, but not any more: in many cases, they’re now one and the same.

“This is the blending of the cyber and the physical security – the things you can put bollards around. You kind of could have in this case – they left the key in and left it in programme mode,” said Deborah Petterson, deputy director for critical national infrastructure at the UK’s NCSC.

SEE: Industroyer: An in-depth look at the culprit behind Ukraine’s power grid blackout

In this incident, realising that the key had been left in the machine would have gone a long way to preventing hackers from gaining access to conduct malicious activity.

“People knowing where their safety systems are and how they’re connected – it’s really basic,” she said, suggesting that those running these systems should regularly examine how their networks operate and keep logs of updates – especially for dated systems like the ones this industrial facility was running.

“The one in this example was 15 years old – when was the last time you looked at risk management around that? The churn in security people is one to two years with CISOs. When was the last time you dusted off and used this as a point to go and have a look?” Petterson asked.

Triton targeted critical infrastructure in the Middle East, but there are lessons from the incident that can be applied to organisations in every sector, no matter where they are in the world.

“If you take this out of the context of safety systems, you can apply almost all of them to any enterprise system. They’re the same sort of controls we just ask any business to do to make themselves cyber safe,” says Dr Ian Levy, technical director at the NCSC.

The hacking group behind Triton – which has been linked to Russia – remains active, with researchers at FireEye recently disclosing a new campaign targeting a fresh critical infrastructure facility.

However, with the tactics of the group now in the public eye, it’s possible to detect and protect against malicious activity.

“All these backdoors, lateral movement techniques and credential harvesting: they can be detected, it’s possible, we don’t have to give up hope,” said FireEye’s Caban.

“They can be detected in IT, detected between the IT and OT DMZ – those are easy places to start looking.”
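As a minimal illustration of the kind of detection Caban is describing – assuming authentication events can be exported from IT and the IT/OT DMZ as simple (account, source, destination) records – the sketch below flags accounts that fan out to an unusually large number of hosts. The record format and threshold are hypothetical, not a product feature.

```python
from collections import defaultdict

# Illustrative lateral-movement heuristic: flag accounts that authenticate to
# an unusually high number of distinct hosts in one reporting period.
# The event format and threshold are hypothetical.
DISTINCT_HOST_THRESHOLD = 10

def flag_fan_out(auth_events):
    """auth_events: iterable of (account, source_host, destination_host) tuples."""
    destinations = defaultdict(set)
    for account, _source, destination in auth_events:
        destinations[account].add(destination)
    return {acct: hosts for acct, hosts in destinations.items()
            if len(hosts) >= DISTINCT_HOST_THRESHOLD}

if __name__ == "__main__":
    events = [("svc_backup", "eng-ws-01", f"ot-gw-{i:02d}") for i in range(12)]
    events.append(("operator1", "hmi-03", "historian-01"))
    for account, hosts in flag_fan_out(events).items():
        print(f"{account}: authenticated to {len(hosts)} hosts - review for lateral movement")
```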



Defeating Distributed Denial of Service Attacks


It seems like every day the news brings new stories of cyberattacks. Whether ransomware, malware, crippling viruses, or more frequently of late—distributed denial of service (DDoS) attacks. According to Infosec magazine, in the first half of 2020, there was a 151% increase in the number of DDoS attacks compared to the same period the previous year. That same report states experts predict as many as 15.4 million DDoS attacks within the next two years.

These attacks can be difficult to detect until it’s too late, and then they can be challenging to defend against. There are solutions available, but there is no one magic bullet. As Alastair Cooke points out in his recent “GigaOm Radar for DDoS Protection” report, there are different categories of DDoS attacks.

And different types of attacks require different types of defenses. You’ll want to adopt each of these three defense strategies against DDoS attacks to a certain degree, as attackers are never going to limit themselves to a single attack vector:

Network Defense: Attacks targeting the OS and network operate at either Layer 3 or Layer 4 of the OSI stack. These attacks don’t flood the servers with application requests but attempt to exhaust TCP/IP resources on the supporting infrastructure. DDoS protection solutions defending against network attacks identify the attack behavior and absorb it into the platform.
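A minimal sketch of the rate-based idea behind network-layer defense, assuming you already have a per-window feed of connection attempts by source IP; the threshold is a placeholder, not tuning advice.

```python
from collections import Counter

# Illustrative Layer 3/4 heuristic: count connection attempts per source IP in
# a fixed window and flag sources above a threshold for upstream filtering.
ATTEMPTS_PER_WINDOW_LIMIT = 1_000

def flag_flood_sources(source_ips):
    """source_ips: iterable of source IP strings observed in one time window."""
    counts = Counter(source_ips)
    return {ip: n for ip, n in counts.items() if n > ATTEMPTS_PER_WINDOW_LIMIT}

if __name__ == "__main__":
    window = ["203.0.113.7"] * 5_000 + ["198.51.100.2"] * 40
    for ip, n in flag_flood_sources(window).items():
        print(f"{ip}: {n} attempts this window - candidate for absorption or filtering")
```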

Application Defense: Other DDoS attacks target the actual website itself or the web server application by overwhelming the site with random data and wasting resources. DDoS protection against these attacks might handle SSL decryption with hardware-based cryptography and prevent invalid data from reaching web servers.
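To make the application-layer case concrete, here is a hedged sketch of two checks such a product might apply before requests reach the web servers: rejecting obviously malformed paths and rate-limiting individual clients. The path pattern and limits are invented for the example.

```python
import re
import time
from collections import defaultdict

# Illustrative application-layer checks: drop malformed request paths and
# rate-limit each client. The pattern and limits are hypothetical.
VALID_PATH = re.compile(r"^/[A-Za-z0-9/_\-.]{0,200}$")
REQUESTS_PER_MINUTE = 120
_recent_requests = defaultdict(list)

def should_serve(client_ip, path, now=None):
    """Return True if the request looks valid and the client is within its rate."""
    now = time.time() if now is None else now
    if not VALID_PATH.match(path):
        return False  # random/garbage data never reaches the backend
    window = [t for t in _recent_requests[client_ip] if now - t < 60.0]
    window.append(now)
    _recent_requests[client_ip] = window
    return len(window) <= REQUESTS_PER_MINUTE

if __name__ == "__main__":
    print(should_serve("198.51.100.2", "/products/list"))   # True
    print(should_serve("198.51.100.2", "/%00%00\x41\x41"))  # False
```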

Defense by Scale: There have been massive DDoS attacks, and they show no signs of stopping. The key to successfully defending against a DDoS attack is to have a scalable platform capable of deflecting an attack led by a million bots with hundreds of gigabits per second of network throughput.
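The scale involved is easy to underestimate; a quick back-of-the-envelope calculation (with invented per-bot figures) shows how a botnet of that size reaches hundreds of gigabits per second.

```python
# Back-of-the-envelope arithmetic with illustrative figures: a million bots
# each contributing a modest 500 kbps of traffic add up to ~500 Gbps.
bots = 1_000_000
per_bot_kbps = 500
total_gbps = bots * per_bot_kbps / 1_000_000
print(f"{bots:,} bots x {per_bot_kbps} kbps each = {total_gbps:.0f} Gbps aggregate")
```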

Table 1. Impact of Features on Metrics

DDoS attacks are growing more frequent and more powerful and sophisticated. Amazon reports mitigating a massive DDoS attack a couple of years ago in which peak traffic volume reached 2.3 Tbps. Deploying DDoS protection across the spectrum of attack vectors is no longer a “nice to have,” but a necessity.

In his report, Cooke concludes that “Any DDoS protection product is only part of an overall strategy, not a silver bullet for denial-of-service hazards.” Evaluate your organization and your needs, read more about each solution evaluated in the Radar report, and carefully match the right DDoS solutions to best suit your needs.

Learn More About the Reports: GigaOm Key Criteria for DDoS and GigaOm Radar for DDoS

The post Defeating Distributed Denial of Service Attacks appeared first on GigaOm.



Assessing Providers of Low-Power Wide Area Networks




Companies are taking note of how Low-Power Wide Area Network (LPWAN) technology can provide long-distance communications for certain use cases. While its low data rates and high latency won’t support high-intensity video streaming or other bandwidth-hungry applications, it can provide inexpensive, low-power, long-distance communication.

According to Chris Grundemann and Logan Andrew Green’s recent report “GigaOm Radar for LPWAN Technology Providers (Unlicensed Spectrum) v1.0,” this growing communications technology is suitable for use cases with the following characteristics:

  • Requirement for long-distance transmission – 10 km/6 miles or more of wireless connectivity from sensor to gateway
  • Low power consumption, with battery life lasting up to 10 years
  • Terrain and building penetration to circumvent line-of-sight issues
  • Low operational costs (device management or connection subscription cost)
  • Low data transfer rate of roughly 20 kbps (see the quick calculation after this list)
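The data-rate figure is easier to put in context with a quick calculation; the payload size below is a hypothetical sensor reading, not a figure from the report.

```python
# Illustrative airtime arithmetic: at ~20 kbps, a small sensor payload spends
# only tens of milliseconds on air, which is part of what makes multi-year
# battery life plausible. The 50-byte payload is a hypothetical example.
data_rate_bps = 20_000
payload_bytes = 50
airtime_ms = payload_bytes * 8 / data_rate_bps * 1_000
print(f"{payload_bytes}-byte payload at 20 kbps ~ {airtime_ms:.0f} ms on air")
```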

These use cases could include large-scale IoT deployments within heavy industry, manufacturing, government, and retail. The LPWAN technology providers evaluated in this Radar report are currently filling a gap in the IoT market. They are certainly poised to benefit from the anticipated rapid adoption of LPWAN solutions.

Depending on the use case you’re looking to fulfill, you can select from four basic deployment models from these LPWAN providers:

  • Physical Appliance: This option would require a network server on-premises to receive sensor data from gateways.
  • Virtual Appliance: Network servers could also be deployed as virtual appliances, running either on-premises or in the cloud.
  • Network Stack as a Service: With this option, the LPWAN provider fully manages your network stack and provides you with the service. You only need devices and gateways to satisfy your requirements.
  • Network as a Service: This option is provided by mobile network operators, with the provider operating the network stack and gateways. You would only need to connect to the LPWAN provider.

Figure 1. LPWAN Connectivity

The LPWAN providers evaluated in this report are well-positioned from both a business and technical perspective, as they can function as a single point of contact for building IoT solutions. Instead of cobbling together other solutions to satisfy connectivity protocols, these providers can set up your organization with a packaged IoT solution, reducing time to market and virtually eliminating any compatibility issues.

The unlicensed spectrum aspect is also significant. The LPWAN technology providers evaluated in this Radar report use at least one protocol in the unlicensed electromagnetic spectrum bands. There’s no need to buy FCC licenses for specific frequency bands, which also lowers costs.

Learn More: GigaOm Enterprise Radar for LPWAN

The post Assessing Providers of Low-Power Wide Area Networks appeared first on GigaOm.



The Benefits of a Price Benchmark for Data Storage


Why Price Benchmark Data Storage?

Customers, understandably, are highly driven by budget when it comes to data storage solutions. The costs of switching, upkeep, and upgrades are high-risk factors for businesses, so decision makers need to look for longevity in their chosen solution. Many factors influence how data needs to be handled within storage, from frequently accessed data to rarely accessed legacy data.

Storage performance may also be shaped by geographic location – remote workers or global enterprises that need to access and share data instantly – or by the necessity of automation. Each element presents a new price point that both customers and vendors need to consider.

A benchmark gives a comparison of system performance based on a key performance indicator, such as latency, capacity, or throughput. Competitor systems are analyzed in like-for-like situations that optimize the solution, allowing a clear representation of the performance. Price benchmarks for data storage are ideal for marketing, showing customers exactly how much value for money a solution has against competitor vendors.
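As a simple, hedged illustration of how a transactional price benchmark reads, the snippet below computes cost per transaction per second for two hypothetical systems; the prices and throughput figures are invented for the example.

```python
# Illustrative price-performance comparison: dollars per transaction per second
# (lower is better). All figures are invented for the example.
systems = {
    "System A": {"list_price_usd": 250_000, "transactions_per_second": 40_000},
    "System B": {"list_price_usd": 180_000, "transactions_per_second": 25_000},
}

for name, s in systems.items():
    usd_per_tps = s["list_price_usd"] / s["transactions_per_second"]
    print(f"{name}: ${usd_per_tps:,.2f} per transaction/second")
```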

Benchmark tests reinforce marketing collateral and tenders with verifiable evidence of performance capabilities and how the transactional costs relate to them. Customers are more likely to invest in long-term solutions with demonstrable evidence that can be corroborated. Fully disclosed testing environments, processes, and results, give customers the proof they need and help vendors stand out from the crowd.

The Difficulty in Choosing

Storage solutions vary greatly, from cloud options to those that utilize on-premises software. Data warehouses have different focuses which impact the overall performance, and they can vary in their pricing and licensing models. Customers find it difficult to compare vendors when the basic data storage configurations differ and price plans vary. With so many storage structures available, it’s hard to explain to customers how output relates to price, appeal to their budget, and maintain integrity, all at the same time.

Switching storage solutions is also a costly, high-risk decision that requires careful consideration. Vendors need to create compelling and honest arguments that provide reassurance of ROI and high quality performance.

Vendors should begin by pitching their costs at the right level; they need to be profitable but also appealing to the customer. Benchmarking can give an indication of how competitor cost models are calculated, allowing vendors to make judgements on their own price plans to keep ahead of the competition. 

Outshining the Competition

Benchmark testing gives an authentic overview of storage transaction-based price-performance, carrying out the tests in environments that imitate real life. Customers gain a clearer understanding of how the product performs in terms of transactions per second, and how competitors process storage data in comparison.

The industry standard for benchmarking is TPC Benchmark E (TPC-E), a recognized standard for storage vendors. Tests need to be performed in credible environments; by giving full transparency on their construction, vendors and customers can understand how the results are derived. This can also prove systems have been configured to offer the best performance of each platform.

A step-by-step account allows tests to be recreated by external parties given the information provided. This transparency in reporting provides more trustworthy and reliable outcomes that offer a higher level of insight to vendors. Readers can also examine the testing and results themselves, to draw independent conclusions.

Next Steps

Price is the driving factor for business decisions, and the selection of data storage is no different. Businesses often look toward low-cost solutions that offer high capacity, and current trends have pushed customers toward cloud solutions, which are often cheaper and more flexible. The marketplace is crowded with options: new start-ups are continually emerging, and long-serving vendors need to reinvent and upgrade their systems to keep pace.

Vendors need evidence of price-performance, so customers can be reassured that their choice will offer longevity and functionality at an affordable price point. Industry-standard benchmarking identifies how performance is impacted by price and which vendors are best in the market – the confirmation customers need to invest.

 

The post The Benefits of a Price Benchmark for Data Storage appeared first on GigaOm.
