
Cyberwar predictions for 2019: The stakes have been raised



Before the internet era, geopolitical tensions drove traditional espionage, and periodically erupted into warfare. Nowadays, cyberspace not only houses a treasure-trove of commercially and politically sensitive information, but can also provide access to control systems for critical civil and military infrastructure. So it’s no surprise to find nation-state cyber activity high on the agendas of governments.

Notable cyber attacks launched by nation states in recent years include: Stuxnet (allegedly by Israel and the US); DDoS attacks against Estonia, attacks against industrial control systems for power grids in Ukraine, and electoral meddling in the US (allegedly by Russia); and the global WannaCry attack (allegedly by North Korea). China, meanwhile, has been accused of multiple intellectual property theft attacks and, most recently (and controversially), of secreting hardware backdoors into Supermicro servers.


The global cyber-threat landscape

What does the current threat landscape look like, in broad terms? The 2017/18 threat matrix from BRI (Business Risk Intelligence) company Flashpoint provides a useful overview:

[Flashpoint 2017/18 threat matrix. Image: Flashpoint]

Threat actors are ranked on a six-point capability scale and a four-point potential impact scale, with Flashpoint’s cast ranging from Tier 2 capability/Negligible potential impact (Jihadi hackers) to Tier 6/Catastrophic impact (China, Russia and Five Eyes).

It’s probably no surprise to find China heading the 2017/18 ranking of threat actors, in terms of capability, potential impact and number of verticals targeted:

[Chart: 2017/18 ranking of nation-state threat actors by capability, potential impact and verticals targeted]

Colour coding corresponds to Flashpoint’s ‘potential impact’ rating (Black = Catastrophic).


Data: Flashpoint / Chart: ZDNet

In its 2018 mid-year update, Flashpoint highlighted various ‘bellwethers’ that may prompt “major shifts in the cyber threat environment”:

• The tentative rapprochement between the U.S., South Korea, and North Korea fails to result in tangible diplomatic gains to end the North Korean nuclear program.

• Additional states follow the U.S. example and relocate their embassy in Israel to Jerusalem.

• The U.S.’ official withdrawal from the Joint Comprehensive Plan of Action (JCPOA) and the subsequent renewal of economic sanctions prompts an Iranian response.

• The ongoing power struggle between Saudi Arabia and Iran for influence in the Middle East leads to kinetic conflict in the region.

• U.S. and European Union-led economic sanctions in place on Russia are extended or tightened.

• The Trump administration adopts a less-compromising approach toward U.S.-China relations or otherwise enacts policies that threaten Chinese core interests. Alternatively, China adopts an increasingly aggressive policy toward securing its vital core interests, including the South China Sea and the questions of Taiwan’s and Hong Kong’s political sovereignty.

• The situation in Syria further deteriorates into direct armed conflict between major states with differing interests in the region, potentially extending further into neighboring states.

• Other nation-states, such as China, Iran, and North Korea adopt the Russian model of engaging in cyber influence operations via proxies, resulting in the exposure of such a campaign.

Cybersecurity policy in the UK


In the UK, the National Cyber Security Centre (NCSC) — an amalgam of CESG (the information security arm of GCHQ), the Centre for Cyber Assessment, CERT-UK, and the Centre for Protection of National Infrastructure — issues periodic security advisories, among other services. In April, for example, it warned of hostile state actors compromising UK organisations, with a focus on engineering and industrial control companies. Specifically, the threats involved “the harvesting of NTLM credentials via Server Message Block (SMB) using strategic web compromises and spear-phishing”. Other recent NCSC advisories have highlighted Russian state-sponsored cyber actors targeting network infrastructure devices and the activities of APT28 (a.k.a. the cyber espionage group Fancy Bear).

In its 2018 annual review, the NCSC said it had dealt with over a thousand cyber incidents since its inception in 2016. “The majority of these incidents were, we believe, perpetrated from within nation states in some way hostile to the UK. They were undertaken by groups of computer hackers directed, sponsored or tolerated by the governments of those countries,” said Ciaran Martin, CEO at NCSC, in the report. “These groups constitute the most acute and direct cyber threat to our national security. I remain in little doubt we will be tested to the full, as a centre, and as a nation, by a major incident at some point in the years ahead, what we would call a Category 1 attack.”

A Category 1 attack constitutes a ‘national cyber emergency’ and results in “sustained disruption of UK essential services or affects UK national security, leading to severe economic or social consequences or to loss of life.”


Despite the efforts of the NCSC, a recent report by the UK parliament’s Joint Committee on the National Security Strategy noted that “The threat to the UK and its critical national infrastructure [CNI] is both growing and evolving. States such as Russia are branching out from cyber-enabled espionage and theft of intellectual property to preparing for disruptive attacks, such as those which affected Ukraine’s energy grid in 2015 and 2016.”

The government needs to do more to change the culture of CNI operators and their extended supply chains, the report said, adding that: “This is also a lesson for the Government itself: cyber risk must be properly managed at the highest levels.”

Specifically, the Joint Committee report recommended an improvement in political leadership: “There is little evidence to suggest a ‘controlling mind’ at the centre of government, driving change consistently across the many departments and CNI sectors involved. Unless this is addressed, the government’s efforts will likely remain long on aspiration and short on delivery. We therefore urge the government to appoint a single Cabinet Office minister who is charged with delivering improved cyber resilience across the UK’s critical national infrastructure.”

Cybersecurity policy in the US


In the US, the September 2018 National Cyber Strategy (the first in 15 years, according to the White House) adopted an aggressive stance, promising to “deter and if necessary punish those who use cyber tools for malicious purposes.” The Trump administration is in no doubt about who the US is up against in the cyber sphere:

“The Administration recognizes that the United States is engaged in a continuous competition against strategic adversaries, rogue states, and terrorist and criminal networks. Russia, China, Iran, and North Korea all use cyberspace as a means to challenge the United States, its allies, and partners, often with a recklessness they would never consider in other domains. These adversaries use cyber tools to undermine our economy and democracy, steal our intellectual property, and sow discord in our democratic processes. We are vulnerable to peacetime cyber attacks against critical infrastructure, and the risk is growing that these countries will conduct cyber attacks against the United States during a crisis short of war. These adversaries are continually developing new and more effective cyber weapons.”

The US cyber security strategy is built around four tenets: Protect the American People, the Homeland and the American Way of Life; Promote American Prosperity; Preserve Peace through Strength; and Advance American Influence.

As far as preserving ‘peace through strength’ is concerned, the Trump administration states that: “Cyberspace will no longer be treated as a separate category of policy or activity disjointed from other elements of national power. The United States will integrate the employment of cyber options across every element of national power.” The objective is to “Identify, counter, disrupt, degrade, and deter behavior in cyberspace that is destabilizing and contrary to national interests, while preserving United States overmatch in and through cyberspace.”

It would seem that the stakes in the cybersecurity/cyberwar game have just been raised by the world’s most powerful nation.

2019 nation-state / cyberwar predictions

Nation-state activity has been prominent in previous annual roundups of cybersecurity predictions (2018, 2017, 2016), and given the above overview we expect plenty more in 2019. Let’s examine some of the predictions in this area that have been issued so far.

• Increase in crime, espionage and sabotage by rogue nation-states (Nuvias Group): With the ongoing failure of significant national, international or UN-level response and repercussion, nation-state sponsored espionage, cyber-crime and sabotage will continue to expand. Clearly, most organisations are simply not structured to defend against such attacks, which will succeed in penetrating defences. Cybersecurity teams will need to rely on breach detection techniques.

• The United Nations proposes a cyber security treaty (WatchGuard): In 2019, the United Nations will address the issue of state-sponsored cyber attacks by enacting a multinational Cyber Security Treaty… The growing number of civilian victims impacted by these attacks will cause the UN to more aggressively pursue a multinational cyber security treaty that establishes rules of engagement and impactful consequences around nation-state cyber campaigns. They have talked and argued about this topic in the past, but the most recent incidents — as well as new ones sure to surface in 2019 — will finally force the UN to come to some consensus.

• A nation-state launches a ‘fire sale’ attack (WatchGuard): In 2019, a new breed of fileless malware will emerge, with wormlike properties that allow it to self-propagate through vulnerable systems and avoid detection… Last year, a hacker group known as the Shadow Brokers caused significant damage by releasing several zero-day vulnerabilities in Microsoft Windows. It only took a month for attackers to add these vulnerabilities to ransomware, leading to two of the most damaging cyber attacks to date in WannaCry and NotPetya. This isn’t the first time that new zero-day vulnerabilities in Windows fueled the proliferation of a worm, and it won’t be the last. Next year, ‘vaporworms’ will emerge: fileless malware that self-propagates by exploiting vulnerabilities.

• State-sponsored cyber warfare will take center stage (CGS): Traditional cybersecurity tools to protect against state-sponsored cyberattacks are not adequate and are often obsolete as soon as they come to market. It is nearly impossible to keep up with cyberattacks as these threats are automated, continuous and adaptive. In the next year, we will continue to see government entities ramping up efforts to develop state-sponsored cybersecurity protections, policies, procedures and guidance. With individuals, businesses and government departments under attack, there must be a unified approach by the government to create guidance on a more holistic, official, focused effort to thwart state-sponsored attacks.

• A collision course to cyber cold war (Forcepoint): Isolationist trade policies will incentivize nation states and corporate entities to steal trade secrets and use cyber tactics to disrupt government, critical infrastructure, and vital industries.

• The US-China trade war will reawaken economic espionage against Western firms (Forrester): With heightened geopolitical tensions in Europe and Asia and the US and China in a trade war, expect China’s hacking engine, after a brief respite from 2016 to 2018, to turn again to the US and Western countries. The current (13th) five-year plan serves as an early warning system for firms in eight verticals: 1) new energy vehicles; 2) next-generation IT; 3) biotechnology; 4) new materials; 5) aerospace; 6) robotics; 7) power equipment; and 8) agricultural machinery. If you’re in one of these industries, expect a breach attempt very soon.

• Trade wars trigger commercial espionage (CyberArk): Government policies designed to create ‘trade wars’ will trigger a new round of nation-state attacks designed to steal intellectual property and other trade secrets to gain competitive market advantages. Nation-state attackers will combine existing, unsophisticated, yet proven, tactics with new techniques to exfiltrate IP, as opposed to just targeting PII or other sensitive data.

• In 2019 and beyond, we expect to see more nations developing offensive cyber capabilities (Kevin Mandia, FireEye): There are people that claim nations should not do this, but in the halls of most governments around the world, officials are likely thinking their nation needs to consider offensive operations or they will be at a disadvantage.

• We are also seeing deteriorating rules of engagement between state actors in cyber space (Kevin Mandia, FireEye): I have spent decades responding to computer intrusions, and I am now seeing nations changing their behaviors. As an example, we have witnessed threat actors from Russia increase their targeting and launch cyber operations that are more aggressive than in the past. Today, nearly every nation has to wonder: “What are the boundaries of cyber activities? What can we do? What is permissible? What is fair game?” We have a whole global community that is entirely uncertain as to what will happen next, and that is not a comfortable place to be. We must begin sorting that out in the coming years.

• The final priority is diplomacy; cyber security is a global problem, and we are all in this together (Kevin Mandia, FireEye): The fact that a lone attacker sitting in one country can instantaneously conduct an operation that threatens all computers on the internet in other nations is a problem that needs to be addressed by many people working together. We need to have conversations about rules of engagement. We need to discuss how we will enforce these rules of engagement, and how to impose risks on attackers or the nations that condone their actions. We may not be able to reach agreements on cyber espionage behaviors, but we can communicate doctrine to help us avoid the risk of escalating aggression in cyber space. And we can have a global community that agrees to a set of unacceptable actions, and that works together to ensure there exists a deterrent to avoid such actions.

• As we move into 2019, remain skeptical about what you read, especially on the internet (Sandra Joyce, FireEye): Russia has been conducting influence operations for a really long time, and not just in the cyber realm. They’re very skilled. We’re seeing other threat actors learning from Russia’s success in cyber influence. For example, we recently uncovered several Iranian inauthentic accounts being used to propagate a social agenda that was pro-Iranian. We’re going to increasingly see these cyber operations from more nations than just Russia, and now Iran, as nations realize how effective this tactic can be. The upside of social media is that everyone can be part of the conversation, but that can clearly be a downside as well.

• China’s Belt and Road Initiative to drive cyber espionage activity in 2018 and beyond (FireEye Threat Intelligence): The Belt and Road Initiative (BRI) is an ambitious, multiyear project across Asia, Europe, the Middle East, and Africa to develop a land (Silk Road Economic Belt) and maritime (Maritime Silk Road) trade network that will project China’s influence globally. We expect BRI to be a driver of cyber threat activity. Cyber espionage activity related to the initiative will likely include the emergence of new groups and nation-state actors. Given the range of geopolitical interests affected by this endeavor, it may be a catalyst for emerging nation-state cyber actors to use their capabilities. Regional governments along these trade routes will likely be targets of espionage campaigns. Media announcements on BRI progress, newly signed agreements, and related development conferences will likely serve as operational drivers and provide lure material for future intrusions.

• Iranian cyber threat activity against U.S. entities is likely to increase following the U.S. exit from the JCPOA, and may include disruptive or destructive attacks (FireEye Threat Intelligence): Last year, we reported that should the U.S. withdraw from the JCPOA [Joint Comprehensive Plan of Action], we suspect that Iran would retaliate against the U.S. using cyber threat activity. This could potentially take the form of disruptive or destructive attacks on private companies in the U.S. and could be conducted by false front personas controlled by Iranian authorities purporting to be independent hacktivists. While we do not anticipate such attacks in the immediate or near term, we suspect that initially Iranian-nexus actors will resume probing critical infrastructure networks in preparation for potential operations in the future.

• Cyber norms unlikely to constrain nation-state cyber operations in the near future (FireEye Threat Intelligence): Norms of responsible state behavior in cyberspace, though still in their infancy, have the potential to significantly affect the types of future cyber operations conducted by nation-states and their proxies in the long term. Norms can be positive or negative, either specifically condoning or condemning a behavior. The future of cyber norms will be most strongly influenced by political and corporate will to agree, and ultimately by decisions of particular states to accept or disregard those norms in their conduct of cyber operations.

Various countries active in cyber diplomacy, along with a small number of international corporations, are exploring norms to manage their increasingly complex and crowded cyber threat landscape. However, except for an emerging consensus to not conduct cyber-enabled theft of intellectual property with the intent to provide commercial advantage, no norm has yet found significant, explicit agreement among states.

Outlook

It’s clear from the above round-up of predictions that nation states are likely to be more active than ever in cyberspace in 2019. Perhaps we’ll even see the sort of ‘national cyber emergency’ envisaged by the UK’s NCSC, with potential loss of life. That’s the point where cyber attack moves towards cyberwar.

It’s also clear that governments — in the UK and US at least — are increasingly, if belatedly, acknowledging the scale of the problem of hostile nation-state cyber activity. It remains to be seen how effectively they can defend themselves, and even retaliate.




CXO Insight: Cloud Cost Optimization


One of the most common discussions with users these days is about the cost of public cloud and what they can do to reduce their bills. I visited AWS re:Invent last week, and it was no exception. What can enterprises do to solve the cost problem? And what is AWS, the biggest of the cloud providers, doing in this space?

Why does it matter?

Many organizations are interested in FinOps as a new operating model, but in my opinion, FinOps is not always a solution. In fact, most users and vendors do not understand it; they think FinOps is a set of tools that identify underutilized or poorly configured resources so that they can reduce consumption and spend less. Tools can be very effective initially, but without general acceptance of best practices across teams, applications, and business owners, it becomes complicated to scale these solutions to cover all cloud spending, especially in complex multi-cloud and hybrid cloud environments. Another big problem with this approach is the tool itself: it is one more component to trust and manage, and it must support a broad range of technologies, providers, and services over time.

Challenges and Opportunities

Most FinOps tools available today are designed around three fundamental steps: observation and data collection; analysis; and alerting and actions. Many of these tools now use AI/ML techniques to provide the necessary insights to the user. In theory, this process works well, but simpler and highly effective methods exist to achieve similar or better results. That is not to say that FinOps tools are ineffective or can't help optimize the use of cloud resources; the point is that before choosing a tool, it is necessary to implement best practices and understand why resources are incorrectly allocated.
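To make that observe/analyze/alert loop concrete, here is a minimal sketch of the kind of check these tools automate. It is an illustration only, assuming boto3 credentials, an EC2 estate, and an arbitrary 10% average-CPU threshold over 14 days; the threshold and the "action" (a print) are placeholders, not a recommendation.

```python
import boto3
from datetime import datetime, timedelta

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")

CPU_THRESHOLD = 10.0          # percent; assumed cut-off, tune per workload
LOOKBACK = timedelta(days=14)
now = datetime.utcnow()

# Step 1: observation and data collection - list running instances.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for res in reservations:
    for inst in res["Instances"]:
        # Step 2: analysis - average CPU utilization over the lookback window.
        stats = cw.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
            StartTime=now - LOOKBACK,
            EndTime=now,
            Period=86400,               # one datapoint per day
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        if not points:
            continue
        avg = sum(p["Average"] for p in points) / len(points)

        # Step 3: alerting and actions - here we only flag the instance;
        # a real tool would notify an owner or resize/stop it.
        if avg < CPU_THRESHOLD:
            print(f"{inst['InstanceId']}: avg CPU {avg:.1f}% over 14 days")
```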

  1. FinOps as a feature: Many cloud providers implement extended observability and automation features directly in their services. Thanks to these, the user can monitor the real utilization of resources and define policies for automated optimization. Often users don’t even know about the existence of these features.
  2. Chargeback, Showback, and Shameback are good practices: One of the main features of FinOps tools is the ability to show who is doing what. In other words, users can easily see the cost of an application or the resources associated with a single developer or end user. This feature is often available directly from cloud service providers for every service, account, and tenant (see the sketch after this list).
  3. Application optimization also brings cost optimization: It is often easier to think about lift and shift for legacy applications, or to underestimate application optimization as a way to solve performance problems. Additional resource allocation is just easier and less expensive in the short term than doing a thorough analysis and optimizing individual application components.
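As an example of the first two points, provider-native APIs already give a basic showback view with no third-party tool. The sketch below assumes boto3, access to the AWS Cost Explorer API, and a hypothetical "team" cost-allocation tag applied to resources; the tag name and dates are illustrative.

```python
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

# Last month's unblended cost, grouped by the (assumed) "team" cost-allocation tag.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2022-11-01", "End": "2022-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]                       # e.g. "team$payments"
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(amount):.2f}")
```

A report like this, circulated to application and team owners, is often enough to start the chargeback/showback conversation before any dedicated platform is bought.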

Key Actions and Takeaways

As is often the case, common sense brings better results than complicating things with additional tools and layers. In this context, if we look at the three points above, we can easily see how to reduce cloud costs without increasing overall complexity.

Before adopting a FinOps tool, it is fundamental to look at services and products in use. Here are some examples to understand how easy cloud cost management can be:

  1. Data storage is the most important item in cloud spending for the majority of enterprises. S3 Storage Lens is a phenomenal tool to get better visibility into what is happening with your S3 storage. An easy-to-use interface and a lot of metrics give the user insights into how applications use storage and how to remediate potential issues, not only from the cost savings point of view.
  2. KubeCost is now a popular tool in the Kubernetes space. It is simple yet effective and gives full visibility on resource consumption. It can associate a cost to each single resource, show the real cost of every application or team, provide real-time alerts and insights, or produce reports to track costs and show trends over time. 
  3. S3 Intelligent-Tiering is another example of optimization. Instead of manually choosing one of the many storage classes available on AWS S3, the user can select this option and have the system place data on different storage tiers depending on the access pattern of each object. This automates data placement for the best combination of performance and $/GB. Users who have adopted this feature have seen a tremendous drop in storage fees with no or minimal impact on applications (a minimal configuration sketch follows this list).
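Here is a minimal sketch of how an object can be written to, or transitioned into, S3 Intelligent-Tiering with boto3. The bucket name, key, prefix, and zero-day transition rule are assumptions for illustration, not a recommendation for every workload.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-analytics-bucket"   # hypothetical bucket name

# New uploads: write the object directly into Intelligent-Tiering.
s3.put_object(
    Bucket=BUCKET,
    Key="reports/2022-12.parquet",    # hypothetical key
    Body=b"...object payload...",
    StorageClass="INTELLIGENT_TIERING",
)

# Existing objects: a lifecycle rule moves everything under a prefix
# into Intelligent-Tiering shortly after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": "reports/"},
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```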

Where to go from here

This article is not an argument against FinOps; rather, it aims to separate hype from reality. Many users don't need FinOps tools to get their cloud spending under control, especially when the best practices behind FinOps have not been adopted first.

In most cases, common sense will suffice to reduce cloud bills, and the right use of features from Amazon or other public cloud providers is more than enough to cut costs noticeably.

FinOps tools should be considered only when the organization is particularly large and it becomes complicated to track all the moving parts, teams, users, and applications (or when there are political problems for which "FinOps" sounds much cooler than best practices such as chargeback).

If you are interested in learning more about Cloud and FinOps, please check GigaOm’s report library on CloudOps and Cloud infrastructure topics.


Time Is Running Out For The “Journey To The Cloud”


Cloud is all, correct? Just as all roads lead to Rome, so all information technology journeys inevitably result in everything being, in some shape or form, “in the cloud.” So we are informed, at least: this journey started back in the mid 2000s, as application service providers (ASPs) gave way to various as-a-service offerings, and Amazon launched its game-changing Elastic Compute Cloud service, EC2. 

A decade and a half later, we're still on the road – nonetheless, the belief that we're en route to some technologically superior nirvana pervades. Perhaps we will arrive one day at that mythical place where everything just works at ultra scale, and we can all get on with our digitally enabled existences. Perhaps not. We can have that debate, and in parallel, we need to take a cold, hard look at ourselves and our technology strategies.

This aspirational-yet-vague approach to technological transformation is not doing enterprises (large or small) any favors. To put it simply, our dreams are proving expensive. First, let's consider what is writ (in large letters) in front of our eyes.

Cloud costs are out of control

For sure, it is possible to spin up a server with a handful of virtual coppers, but this is part of the problem. “Cloud cost complexity is real,” wrote Paula Rooney for CIO.com earlier this year, in five words summarising the challenges with cloud cost management strategies – that it’s too easy to do more and more with the cloud, creating costs without necessarily realizing the benefits. 

We know from our FinOps research the breadth of cost management tools and services arriving on the scene to deal with this rapidly emerging cloud cost challenge.

(As an aside, we are informed by vendors, analysts, and pundits alike that the size of the cloud market is growing – but given the runaway train that cloud economics has become, perhaps it shouldn’t be. One to ponder.)

The cost models of many cloud computing services (SaaS, PaaS, and IaaS) are still often based around pay-per-use, which isn't necessarily compatible with many organizations' budgeting mechanisms. These models can be attractive for short-term needs but are inevitably more expensive over the longer term. I could caveat this with "unless accompanied by stringent cost control mechanisms," but the evidence of the past 15 years makes this point moot.

One option is to move systems back in-house. As per a discussion I was having with CTO Andi Mann on LinkedIn, this is nothing new; what's weird is that the journey to the cloud is always presented as one-way, with such moves treated as the exception. Which brings me to a second point: we are still wed to the notion that the cloud is a virtual place at which we shall arrive at some point.

Spoiler alert: it isn’t. Instead, technology options will continue to burst forth, new ways of doing things requiring new architectures and approaches. Right now, we’re talking about multi-cloud and hybrid cloud models. But, let’s face it, the world isn’t “moving to multi-cloud” or hybrid cloud: instead, these are consequences of reality. 

“Multi-cloud architecture” does not exist in a coherent form; rather, organizations find themselves having taken up cloud services from multiple providers—Amazon Web Services, Microsoft Azure, Google Cloud Platform, and so on—and are living with the consequences. 

Similarly, what can we say about hybrid cloud? The term has been applied to either cloud services needing to integrate with legacy applications and data stores; or the use of public cloud services together with on-premise, ‘private’ versions of the same. In either case, it’s a fudge and an expensive one at that. 

Why expensive? Because we are, once again, fooling ourselves that the different pieces will “just work” together. At the risk of another spoiler alert, you only have to look at the surge in demand for glue services such as integration platforms as a service (iPaaS). These are not cheap, particularly when used at scale. 

Meanwhile, we are still faced with that age-old folly that whatever we are doing now might in some way replace what has gone before. I have had this conversation so many times over the decades: the plan is to build something new, then migrate and decommission the older systems and applications. I wouldn't want to put a number on it, but my rule of thumb is that this happens less often than it doesn't. The result is more to manage, not less, and more to integrate and interface.

Enterprise reality is a long way from cloud nirvana

The reality is that, despite cloud spend starting to grow beyond traditional IT spend (see the aside above on whether it should), cloud services will live alongside existing IT systems for the foreseeable future, further adding to the hybrid mash.

As I wrote back in 2009, “…choosing cloud services [is] no different from choosing any other kind of service. As a result, you will inevitably continue to have some systems running in-house… the result is inevitably going to be a hybrid architecture, in which new mixes with old, and internal with external.” 

It’s still true, with the additional factor of the law of diminishing returns. The hyperscalers have monetized what they can easily, amounting to billions of dollars in terms of IT real estate. But the rest isn’t going to be so simple. 

As cloud providers look to harvest more internal applications and run them on their own servers, they move from easier wins to more challenging territory. The fact that, as of 2022, AWS has a worldwide director of mainframe sales is a significant indicator of where the buck stops, but mainframes are not going to give up their data and applications that easily.

And why should they if the costs of migration increase beyond the benefits of doing so, particularly if other options exist to innovate? One example is captured by the potentially oxymoronic phrase ‘Mainframe DevOps’. For finance organizations, being able to run a CI/CD pipeline within a VM inside a mainframe opens the door to real-time anti-fraud analytics. That sounds like innovation to me.

Adding to all this is the new wave of “Edge”. Local devices, from mobile phones to video cameras and radiology machines, are increasingly intelligent and able to process data. See above on technology options bursting forth, requiring new architectures: cloud providers and telcos are still tussling with how this will look, even as they watch it happen in front of their eyes. 

Don’t get me wrong, there’s lots to like about the cloud. But it isn’t the ring to rule them all. Cloud is part of the answer, not the whole answer. But seeing cloud – or cloud-plus – as the core is having a skewing effect on the way we think about it.

The fundamentals of hosted service provision

There are three truths in technology – first, it's about the abstraction of physical resources; second, it's about right-sizing the figurative architecture; and third, it's about a dynamic market of provisioning. The rest is supply chain management and outsourcing, plus marketing and sales.

The hyperscalers know this, and have done a great job of convincing everyone that the singular vision of cloud is the only show in town. At one point, they were even saying that it was cheaper: in 2015, AWS CEO Andy Jassy said*: “AWS has such large scale, that we pass on to our customers in the form of lower prices.”

By 2018, AWS was stating, “We never said it was about saving money.” – read into that what you will, but note that many factors are outside the control even of AWS. 

“Lower prices” may be true for small hits of variable spending, but it certainly isn't for major systems or large-scale innovation. Recognizing that pay-per-use couldn't fly for enterprise spending, AWS, GCP, and Azure have introduced (variously named) notions of reserved instances, in which virtual servers can be paid for in advance over a one- or three-year term.

In major part, they're a recognition that corporate accounting models can't cope with cloud financing models; in equally major part, they're a rejection of the elasticity principle upon which cloud was originally sold.
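To illustrate the trade-off, here is a small back-of-the-envelope sketch comparing on-demand and one-year reserved pricing. The hourly rates and the 730-hour month are hypothetical round numbers chosen for illustration, not any provider's actual prices.

```python
# Hypothetical prices for illustration only; real rates vary widely
# by provider, region, instance type, and payment option.
ON_DEMAND_PER_HOUR = 0.10    # assumed pay-per-use rate, $/hour
RESERVED_PER_HOUR = 0.062    # assumed effective 1-year reserved rate, $/hour
HOURS_PER_MONTH = 730

def monthly_costs(utilisation):
    """Cost of one instance-month at a given utilisation (0.0 to 1.0)."""
    on_demand = ON_DEMAND_PER_HOUR * HOURS_PER_MONTH * utilisation
    reserved = RESERVED_PER_HOUR * HOURS_PER_MONTH  # committed spend, used or not
    return on_demand, reserved

for util in (0.25, 0.62, 1.0):
    od, ri = monthly_costs(util)
    winner = "reserved" if ri < od else "on-demand"
    print(f"utilisation {util:.0%}: on-demand ${od:.0f}, reserved ${ri:.0f} -> {winner}")
```

At these assumed rates the break-even sits at roughly 62% utilisation: bursty, short-lived workloads favour pay-per-use, while anything running steadily for the term favours the committed model, which is exactly the budgeting distinction corporate accounting struggles with.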

My point is not to rub any provider’s nose in its historical marketing but to return to my opener – that we’re still buying into the notional vision, even as it continues to fragment, and by doing so, the prevarication is costing end-user enterprises money. Certain aspects, painted as different or cheaper, are nothing of the sort – they’re just managed by someone else, and the costs are dictated by what organizations do with what is provided, not its list price. 

Shifting the focus from cloud-centricity

So, what to do? We need a view that reflects current reality, not historical rhetoric or a nirvanic future. The present and forward vision of massively distributed, highly abstracted and multi-sourced infrastructure is not what vendor marketing says it is. If you want proof, show me a single picture from a hyperscaler that shows the provider living within some multi-cloud ecosystem. 

So, it’s up to us to define it for them. If enterprises can’t do this, they will constantly be pulled off track by those whose answers suit their own goals. 

So, what does it look like? In the major part, we already have the answer – a multi-hosted, highly fragmented architecture is, and will remain, the norm, even for firms that major on a single cloud provider. But there isn't currently an easy way to describe it.

I hate to say it, but we’re going to need a new term. I know, I know, industry analysts and their terms, eh? But when Gandalf the Grey became Gandalf the White, it meant something. Labels matter. The current terminology is wrong and driving this skewing effect. 

Having played with various ideas, I’m currently majoring in multi-platform architecture – it’s not perfect, I’m happy to change it, but it makes the point. 

A journey towards a more optimized, orchestrated multi-platform architecture is a thousand times more achievable and valuable than some figurative journey to the cloud. It embraces and encompasses migration and modernization, core and edge, hybrid and multi-hosting, orchestration and management, security and governance, cost control, and innovation. 

But it does so by seeing the architecture holistically, rather than (say) seeing cloud security as somehow separate from non-cloud security, or cloud cost management as any different from outsourcing cost optimization.

Of course, we may build things in a cloud-native manner (with containers, Kubernetes and the like), but we can do so without seeing the resulting applications as (say, again) needing to run on a hyperscaler rather than a mainframe. In the multi-platform architecture, all elements are first-class citizens, even if some are older than others.

That embraces the breadth of the problem space and isn’t skewed towards an “everything will ultimately be cloud,” nor a “cloud is good, the rest is bad,” nor a “cloud is the norm, edge is the exception” line. It also puts paid to any idea of the distorted size of the cloud market. Cloud economics should not exist as a philosophy, or at the very least, it should be one element of FinOps. 

There’s still a huge place for the hyperscalers, whose businesses run on three axes – functionality, engineering, and the aforementioned cost. AWS has always sought to out-function the competition, famous for the number of announcements it would make at re:Invent each year (and this year’s data-driven announcements are no exception). Engineering is another definitive metric of strength for a cloud provider, wrapping scalability, performance and robustness into the thought of: is it built right? 

And finally, we have the aforementioned cost. There’s also a place for spending on cloud providers, but cost management should be part of the Enterprise IT strategy, not locking the stable door after the rather expensive and hungry stallion has bolted. 

Putting multi-platform IT strategy into the driving seat

Which brings me to the conclusion – such a strategy should be built on the notion of a multi-platform architecture, not a figurative cloud. With the former, technology becomes a means to an end, with the business in control. With the latter, organizations are essentially handing the keys to their digital kingdoms to a third party (and help yourself to the contents of the fridge while you are there).

If “every company is a software company,” they need to recognize that software decisions can only be made with a firm grip on infrastructure. This boils down to the most fundamental rule of business – which is to add value to stakeholders. Entire volumes have been written about how leaders need to decide where this value is coming from and dispense with the rest (cf Nike and manufacturing vs branding, and so on and so on). 

But this model only works if “the rest” can be delivered cost-effectively. Enterprises do not have a tight grip on their infrastructure providers, a fact that hyperscalers are content to leverage and will continue to do so as long as end-user businesses let them.

Ultimately, I don't care what term is adopted. But we need to be able to draw a coherent picture that is centred on enterprise needs, not cloud provider capabilities, and it'll really help everybody if we all agree on what it's called. To stick with current philosophies helps one set of organizations alone, however many times they reel out Blockbuster or Kodak as worst-case examples (see also: we're all still reading books).

Perhaps, we are in the middle of a revolution in service provision. But don’t believe for a minute that providers only offering one part of the answer have either the will or ability to see beyond their own solutions or profit margins. That’s the nature of competition, which is fine. But it means that enterprises need to be more savvy about the models they’re moving towards, as cloud providers aren’t going to do it for them. 

To finish on one other analyst trick, yes, we need a paradigm shift. But one which maps onto how things are and will be, with end-user organizations in the driving seat. Otherwise, their destinies will be dictated by others, even as they pick up the check.  

*The full quote, from Jassy’s 2015 keynote, is: “There’s 6 reasons that we usually tell people, that we hear most frequently. The first is, if you can turn capital expense to a variable expense, it’s usually very attractive to companies. And then, that variable expense is less than what companies pay on their own – AWS has such large scale, that we pass on to our customers in the form of lower prices.”


How APM, Observability and AIOps drive Operational Awareness


Ron Williams explains all to Jon Collins

Jon Collins: Hi Ron, thanks for joining me! I have two questions, if I may. One’s the general question of observability versus what’s been called application performance monitoring, APM – there’s been some debate about this in the industry, I know. Also, how do they both fit in with operational awareness, which I know is a hot topic for you.

Ron Williams: I’ll wax lyrical, and we can see where this goes – I’ll want to bring in AIOps as well, as another buzzword. Basically, we all started out with monitoring, which is, you know: Is it on? Is it off? Just monitoring performance, that’s the basis of APM. 

Observability came about when we tried to say, well, this one’s performing this way, that one’s performing that way, is there a relationship? So, it is trying to take the monitoring that you have and say, how are these things connected? Observability tools are looking at the data that you have, and trying to make sure that things are working to some degree.

But that still doesn't tell you whether or not the company is okay, which is where operational awareness comes in. Awareness is like, hey, are all the things necessary to run the company included? And are they running okay? That's what I call full operational awareness. This requires information that is not in IT to be combined with information that IT operations obviously has, and AIOps tends to be the tool that can do that.

So, observability solutions serve an important function; they allow you to see the technical connections between objects and services, and why and how they may work. Awareness includes that and adds functional analysis, prediction, and prevention. But I'm not just talking about operational awareness as a technical thing, but in terms of the business. Let's look at HR – this has an IT component, but nobody looks at that as a separate thing. If HR's IT isn't working, and if I'm the CEO, as far as I am concerned, HR is not working, and so the company is not working, even if other parts still function.

So, how do I gain awareness of all the pieces being brought together? AIOps is a solution that can do that, because it is an intelligent piece that pulls data in from everywhere, whereas observability takes the monitoring data that you have and understands how those data relate to each other. APM gives information and insights, observability helps solve technical problems, and AIOps tools help solve business problems.

AIOps platforms are one tool that can combine both data sources: real-time IT operational awareness and business operations awareness. Together, these constitute organizational awareness, that is, awareness across the company as a whole.

Jon: For my take on the benefits of observability platforms, bear with me as I haven’t actually used these tools! I came out of the ITIL, ITSM world of the 1990s, which (to me) was about providing measures of success. Back in the day, you got a dashboard saying things aren’t performing – that gave us performance management, anomaly detection, IT service management and so on. Then it went into business service management, dashboards to say, yeah, your current accounts aren’t working as they should. But it was always about presentation of information to give you a feel of success, and kick off a diagnostic process. 

Whereas observability… I remember I was at a CloudBees user event, and someone said this, so I'm going to borrow from them: essentially, that working out where things are going wrong has become a kind of whodunnit. Observability, to me, is one of those words that describes itself. It's not a solution; it's actually an anti-word: it describes the problem in a way that makes it sound like a solution offering actionable insights. It's the lack of ability to know where the problems are happening in distributed architectures. That's what is causing so much difficulty.

Ron: That’s a valid statement. Operational awareness comes from situational awareness, which was originally from the military. It’s a great term, because it says you’re sitting in the middle of the field of battle. Where’s the danger? You’re doing this, your head’s on a swivel, and you don’t know where anything is. 

So operational awareness is a big deal, and it feeds the operation of not just IT, but the whole company. You can have IT operating at a hundred percent, but the company can be not making a dime, because something IT is not responsible directly for, but supports, is not working correctly.

Jon: I spoke to the mayor of the city of Chicago about situational awareness, specifically about snow ploughs: when there’s snow, you want to turn into a street and know the cars are out of the way, because once a snowplough is in a street, it can’t get out. I guess, from the point of view that you’re looking at here, operational awareness is not the awareness that IT operations requires. It’s awareness of business operations and being able to run the business better based on information about IT systems. Is that fair?

Ron: Yes. Monitoring asks: are my systems OK, and is the company? Observability asks: how are the systems and the company behaving, why are they behaving that way, and what's their relationship? Can I fix things before anything happens and causes incidents? Awareness is a whole-company thing – are all parts performing the way they should? Will something break, and if so, when? And can I prevent it from breaking?

That's why operational awareness is more than situational awareness, which we can see as helping individuals – it's aimed at the whole company, working with business awareness to drive organizational awareness. I'm not trying to invent concepts, but I am trying to be frank about what's needed and how the different groups of tools apply. Operational awareness includes observability, monitoring, reporting and prediction, which is where AIOps comes in. You get all the pieces that we all know about, but when you put them together you get awareness of the operation of the company, not just IT. Observability and monitoring don't include anything about business operations.

Monitoring, Observability and AIOps

Jon: Is there another element? For the record, I hate maturity models because they never happen. But this is a kind of developmental model, isn't it? From monitoring, to observability, and from there to the awareness you want to reach. What you can also do is think upwards, from basic systems management, to IT service management, to business service management.

Business service management was great, because it said (for example) people can’t access the current accounts. That’s really important, but what it wasn’t telling you was whether or not that’s doing you any damage as a company, so you can work across monitoring, through observability to operational awareness.

Another question, then, where can you get this operational awareness thing? I don’t suppose you can go down to Woolworths, pick up some operational awareness, stick it on a pallet, and wheel it home, so what do you do? 

Ron: For a start, you must have all the pieces – if you don’t have monitoring, observability and all that you can’t get there, right? But then, one of the biggest pieces that’s missing is business awareness. The business, generally speaking, doesn’t communicate its operational state. This makes it hard – if my database isn’t running, what’s the impact of that? What does it mean to be fully aware? We can see this as a Venn diagram – if I draw another circle, it’s the whole circle, it’s the company.

Operational Awareness

Jon: Hang on, this is super important. If we go back to the origins of DevOps (we can argue whether or not it’s been successful since two thousand and seven, but bear with me on this), the origins of it were things like, “Black Friday’s coming up. How can we have the systems in place that we need to deliver on that?” It was very much from left to right – we need to deploy new features, so that we can maximize benefits, we need to set priorities and so on. 

But the way that you said it, the business is not closing the loop. It's up to the business to say, "I'm not able to perform. I'm not able to sell as much as I should be at the moment. Let's look into why that is, and let's feed that back to IT, so that I can be doing that better." You've got the marketing department, the sales department, upper management, all the different parts of the organization. They all need to take responsibility for their part in telling everyone else how well they are doing.

Ron: Absolutely. I almost put a fourth circle on my Venn diagram, which was the business side. But I decided to leave this, as it was about awareness as an intersection. It’s odd to me that many companies are not aware of all the things that are necessary to make them function as a company. They know that IT is a big deal, but they don’t know why or how or what IT’s impact is.

Jon: Yes, so bringing in elements of employee experience and customer experience, and all those sorts of things, which then feed the value stream management and strategic portfolio management aspects – knowing where to make a difference, shifting the needle according to the stakeholders that we have.

Ron: Yes, and all of that’s in awareness, you know!

Jon: That’s a great point to leave this, that the business needs to recognize it has a role in this. It can’t be a passive consumer of IT. The business needs to be a supplier of information. I know we’ve said similar things before, but the context is different – cloud-native and so on, so it’s about aligning business information with a different architecture and set of variables. Thank you so much, Ron. It’s been great speaking to you.

Ron: Thank you for letting me share!
