
Forgot password? Five reasons why you need a password manager


For years, I’ve been reading predictions about new technologies that will render passwords obsolete. Then I click through and inspect the details and I wind up shaking my head. There are plenty of clever identity technologies working their way into the mainstream, but passwords will remain a necessary evil for many years to come.

And unless you want to be a sitting duck on the Internet, you need a strategy for managing those passwords. Large organizations can create sensible password policies and use single-sign-on software, but small businesses and individuals are on their own.

Also: The Best Password Managers of 2019 CNET 

As best practices go, the rules for creating passwords are simple: Use a random combination of numbers, symbols, and mixed-case letters; never reuse passwords; turn on two-factor authentication if it’s available.

There’s some disagreement on whether you should change passwords regularly. I think there’s a strong case to be made for changing passwords every year or so, if only to avoid being innocently caught up in a database breach.

And, as far as I am concerned, the most important rule of all is this: use a password manager.

I have used several software-based password managers over the years and can’t imagine trying to get through the day without one.

I know people who keep password lists in an encrypted file of some sort. At its most basic level, that’s exactly what a software-based password manager does. But that’s where the resemblance stops.

In this article, I explain why I consider a password manager essential, with links to five programs that I recommend. I also tackle some of the arguments I routinely hear from skeptics.

The case for password managers

The five programs that I have examined for this article are all similar in their core features. On a Windows PC or a Mac, you install a program that does the work of saving sets of credentials in a database whose contents are protected with AES-256 encryption. To unlock the password database, you enter a decryption key (your master password) that only you know.

Password managers that sync your password database to the cloud use end-to-end encryption. The data is encrypted before it leaves your device, and it stays encrypted as it’s transferred to the remote server. When you sign in to the app on your local device, the program sends a one-way hash of the password that identifies you but can’t be used to unlock the file itself.

Also: Why nearly 50% of organizations are failing at password security TechRepublic 

The companies that manage and sync those saved files don’t have access to the decryption keys. In fact, your master password isn’t stored anywhere, and if you forget it, you’re out of luck. There’s no known way to crack an AES-256 encrypted file that’s protected with a strong personal key.
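
To make that architecture concrete, here is a minimal sketch of the general idea (not any vendor’s actual implementation): a 256-bit key is derived from the master password with PBKDF2, and the vault is encrypted locally with AES-256-GCM using Python’s cryptography package. The salt size, iteration count, and field names are illustrative assumptions.

```python
import json
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(master_password: str, salt: bytes) -> bytes:
    # Stretch the master password into a 256-bit key. Real products use
    # similar KDFs (PBKDF2, Argon2) with their own parameters.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return kdf.derive(master_password.encode())


def encrypt_vault(entries: dict, master_password: str) -> dict:
    salt, nonce = os.urandom(16), os.urandom(12)
    key = derive_key(master_password, salt)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(entries).encode(), None)
    # Only the salt, nonce, and ciphertext ever need to leave the device;
    # without the master password the blob is useless gibberish.
    return {"salt": salt.hex(), "nonce": nonce.hex(), "vault": ciphertext.hex()}


def decrypt_vault(blob: dict, master_password: str) -> dict:
    key = derive_key(master_password, bytes.fromhex(blob["salt"]))
    plaintext = AESGCM(key).decrypt(bytes.fromhex(blob["nonce"]),
                                    bytes.fromhex(blob["vault"]), None)
    return json.loads(plaintext)
```

The point of the sketch is simply that the key exists only transiently, on your device, when you supply the master password; the blob that gets synced reveals nothing on its own.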

That architecture offers five distinct advantages over a DIY solution.

One: Browser Integration

Most password managers include browser extensions that automatically save credentials when you create a new account or sign in using those credentials for the first time. That browser integration also allows you to automatically enter credentials when you visit a matching website.

Contrast that approach with the inevitable friction of a manual list: with a password manager, you don’t need to find a file and add a password to it to save a new or changed set of credentials, and you don’t need to find and open that same file to copy and paste your password.

Two: Password Generation

Every password manager worth its salted hash includes a password generator capable of instantly producing a truly random, never-before-used-by-you password. If you don’t like that password, you can click to generate another. You can then use that random password when creating a new account or changing credentials for an existing one.

Most password managers also allow you to customize the length and complexity of a generated password so you can deal with sites that have peculiar password rules.

With the possible exceptions of John Forbes Nash, Jr., and Raymond Babbitt, mere mortals are not capable of such feats of randomization.
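
For a sense of what such a generator does under the hood, here is a minimal sketch built on Python’s secrets module; the default length and symbol set are assumptions you would tune to each site’s rules.

```python
import secrets
import string


def generate_password(length: int = 20, symbols: str = "!@#$%^&*-_") -> str:
    # Draw every character from a cryptographically secure random source,
    # which is what separates a real generator from "clever" human patterns.
    alphabet = string.ascii_letters + string.digits + symbols
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Re-roll until every character class appears, to satisfy sites
        # with composition rules.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in symbols for c in candidate)):
            return candidate


print(generate_password())                         # different on every run
print(generate_password(length=12, symbols="-_"))  # for fussier sites
```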

Three: Phishing Protection

Integrating a password manager with a browser is superb protection against phishing sites. If you visit a site that has managed to perfectly duplicate your bank’s login page and even mess with the URL display to make it look legit, you might be fooled. Your password manager, on the other hand, won’t enter your saved credentials, because the URL of the fake site doesn’t match the legitimate domain associated with them.

Also: Google releases Chrome extension to check for leaked usernames and passwords 

That phishing protection is probably the most underrated feature of all. If you manage passwords manually, by copying and pasting from an encrypted personal file, you will paste your username and password into the respective fields on that well-designed fake page, because you don’t realize it’s fake.
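
The underlying check is deliberately simple: credentials are offered only when the current page’s host matches the host stored alongside them. A minimal sketch of that idea (the saved entry is hypothetical; real extensions are more careful about subdomains and the public-suffix list):

```python
from urllib.parse import urlsplit

# Hypothetical saved entry: host -> (username, password)
saved_credentials = {
    "bank.example.com": ("alice", "correct-horse-battery-staple"),
}


def credentials_for(url: str):
    host = (urlsplit(url).hostname or "").lower()
    # Exact host match only: a look-alike domain gets nothing autofilled.
    return saved_credentials.get(host)


print(credentials_for("https://bank.example.com/login"))               # matches
print(credentials_for("https://bank-example.login-secure.com/login"))  # None
```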

Four: Cross Platform Access

Password managers work across devices, including PCs, Macs, and mobile devices, with the option to sync your encrypted password database to the cloud. Access to that file and its contents can be secured with biometric authentication and 2FA.

By contrast, if you manage passwords in an encrypted file that’s saved locally, you have to manually copy that file to other devices (or keep it in the cloud in a location under your personal control), and then make sure the contents of each copy stay in sync. More friction.

Five: Surveillance Safeguard

Password managers generally offer good protection against “shoulder surfing.” An attacker who’s able to watch you type, either live or with the help of a surveillance camera, can steal your login credentials with ease. Because a password manager fills in credentials instead of requiring you to type them, those details are never exposed.

Even armed with those arguments, when I make that recommendation to other people, I typically hear the same excuses.

“I already have a perfectly good system for managing passwords.”

Usually, this system involves reusing an easy-to-remember base password of some sort and tacking on a special suffix or prefix on a per-site basis. The trouble with that scheme is that those passwords aren’t random, and if someone figures out your pattern, they pretty much have a skeleton key to unlock everything. And a 2013 research paper from computer scientists at the University of Illinois, Princeton, and Indiana University, The Tangled Web of Password Reuse, demonstrated that attackers can figure out those patterns very, very quickly.

More importantly, this sort of scheme doesn’t scale. Eventually it collides with the password rules at a site that, say, doesn’t allow special characters or restricts password length. (I know, that’s nuts, but those sites exist.) Or a service forces you to change your password and won’t accept the new one because it’s too close to the previous one, and now you have another exception to your system to keep track of.

Also: How to manage your passwords effectively with KeePass TechRepublic 

And so you wind up keeping an encrypted list of passwords that are not exactly unique and not exactly random, and not at all secure. Why not just use software built for this purpose?

“If someone steals my password file, they have all my passwords.”

No, they don’t. They have an encrypted file that is, for all intents and purposes, useless gibberish. The only way to extract its secrets is with the decryption key, which you and you alone know.

Of course, this assumes you’ve followed some reasonable precautions with that decryption key. Specifically, that you’ve made it long enough, that it can’t be guessed even by someone who knows you well, and that you’ve never used it for anything else.

If you need a strong and unique password, you can generate one at correcthorsebatterystaple.net, which uses the surprisingly secure methodology from this classic XKCD cartoon.
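
If you would rather generate such a passphrase yourself, the XKCD approach amounts to picking several words uniformly at random from a large wordlist. A minimal sketch; the wordlist filename is a placeholder for whatever list (a diceware list, for example) you supply:

```python
import secrets


def passphrase(wordlist_path: str, words: int = 4, sep: str = " ") -> str:
    # One entry per line is assumed; the last whitespace-separated token is
    # used so numbered diceware lists also work. Four words from a ~7,776-word
    # list gives roughly 51 bits of entropy.
    with open(wordlist_path) as f:
        wordlist = [line.split()[-1] for line in f if line.strip()]
    return sep.join(secrets.choice(wordlist) for _ in range(words))


# print(passphrase("wordlist.txt"))   # e.g. "staple rodeo chasm uplift"
```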

You definitely shouldn’t write that key down on a sticky note or a piece of paper in your desk drawer, either. But you might want to write down that password and store it in a very safe place or with a very trusted person, along with instructions for how to use it to unlock your password file in the event something happens to you.

“I don’t trust someone else to store my passwords on their server.”

I understand the instinctive reaction that allowing a cloud service to keep your full database of passwords must be a horrifying security risk. Like anything cloud-related, there’s a trade-off between convenience and security, but that risk is relatively low if the service follows best practices for encryption and you’ve set a strong master password.

But if you just don’t trust the cloud, you have alternatives.

Also: 57% of IT workers who get phished don’t change their password behaviors TechRepublic 

Several of the password managers I’ve looked at offer the option to store a local-only copy of your AES-256 encrypted file, with no sync features whatsoever. If you choose that option, you’ll have to either forgo the ability to use your password manager on multiple devices or devise a way to manually sync those files between them.

As a middle ground, you can use a personal cloud service to sync your password files. 1Password, for example, supports automatic syncing through Dropbox and iCloud; because the synced file is itself encrypted, a breach of either service doesn’t expose your passwords.

“I’m not a target.”

Yes, you are.

If you’re a journalist working on security issues, or an activist in a country whose leaders don’t approve of activism, or a staffer on a high-profile political campaign, or a contractor that communicates with people in sensitive industries, you’re a high-value target. Anyone who fits in one of those categories should take opsec seriously, and a password manager is an essential part of a well-layered security program.

But even if you’re not an obvious candidate for targeted attacks, you can be swept up in a website breach. That’s why Have I Been Pwned? exists. It’s easy enough for a compromised website to force you to reset your password, minimizing the risk of that breach, but if you’ve used that same combination of credentials elsewhere, you’re at serious risk.

Five password managers worth considering

I have personally used all the programs in this list. For each one, I’ve included pricing details as well as a link to security information. Every paid program offers a free trial; I recommend taking advantage of those trials to see if a program is right for you.

1Password

Although this product earned its reputation on Apple devices, it has embraced Windows, Android, and Chrome OS as well. Personal subscriptions are $3 per month; a family option is $5 a month (both prices require annual billing). Password files can be stored locally, synced from 1Password’s servers, or connected to a Dropbox or iCloud account. Team, Business, and Enterprise accounts add 2-factor authentication and start at $4 per user per month. Security details here.

Dashlane

The youngest member of the group has been around for more than six years and has earned a reputation for ease of use. Apps are available for Windows PCs, Macs, Android, and iOS. If your password database includes fewer than 50 entries, you can get by with the free version. The $5-per-month Premium version includes a VPN option, and the $10-a-month bundle adds credit monitoring and identity theft features. Business plans include the same features as Premium, at $4 per user per month. Security details here.

KeePass

If you’re cloud-phobic or if you insist on open source software, this is your option. KeePass runs on every desktop and mobile platform, including most Linux distros, and it’s free (as in beer). Files are stored locally, and you’ll want to master its arcane keyboard shortcuts to fill in passwords automatically. Browser integration is available via third-party plugins; for multi-device use, the program’s built-in sync engine automatically updates the password database in whatever cloud-based storage location you specify. Security details here.

LastPass

Arguably the best known of the bunch, LastPass is free and works on all major desktop and mobile platforms. The service is cloud-based only, with files stored on the company’s servers and synced to local devices. A Premium version ($3 a month) supports advanced 2-factor authentication options; $4 a month covers a family of up to five. Business plans start at $4 per user per month. LastPass suffered an embarrassing data breach in 2015, shortly before the company was acquired by LogMeIn. Security details here.

RoboForm

Launched in 2000, RoboForm is by far the most senior member of the category. The free version supports unlimited logins and stores its database file locally. RoboForm Everywhere is a $24-a-year subscription service that adds cloud backup, sync, and 2-factor authentication features. The Family option ($48 a year) covers up to five users, and business plans cost $35 per user. Discounts are available for multi-year purchases. Security details here.


Affiliate disclosure: ZDNet earns commissions from the products and services featured on this page.



CXO Insight: Cloud Cost Optimization


One of the most common discussions with users these days is about the cost of public cloud and what they can do to reduce their bills. I visited AWS re:Invent last week, and this was no exception. What can enterprises do to solve the cost problem? And what is AWS, the biggest of the cloud providers, doing in this space?

Why does it matter?

Many organizations are interested in FinOps as a new operating model, but in my opinion, FinOps is not always a solution. In fact, most users and vendors do not understand it; they think FinOps is a set of tools to help identify underutilized or poorly configured resources so they can reduce consumption and spend less. Tools can be very effective initially, but without general acceptance of best practices across teams, applications, and business owners, it becomes complicated to scale these solutions to cover all cloud spending, especially in complex multi- and hybrid-cloud environments. Another big problem with this approach comes from the tool itself: it is yet another component to trust and manage, and it must support a broad range of technologies, providers, and services over time.

Challenges and Opportunities

Most FinOps tools available today are designed around three fundamental steps: observation and data collection; analysis and alerting; and action. Many of these tools now use AI/ML techniques to provide the necessary insights to the user. In theory, this process works well, but simpler and highly effective methods exist to achieve similar or better results. I’m not saying that FinOps tools are ineffective or can’t help optimize the use of cloud resources; my point is that before choosing a tool, it is necessary to implement best practices and understand why resources are incorrectly allocated.

  1. FinOps as a feature: Many cloud providers implement extended observability and automation features directly in their services. Thanks to these, the user can monitor the real utilization of resources and define policies for automated optimization. Often, users don’t even know these features exist.
  2. Chargeback, Showback, and Shameback are good practices: One of the main features of FinOps tools is the ability to show who is doing what. In other words, users can easily see the cost of an application or of resources associated with a single developer or end user. This capability is often available directly from cloud service providers for every service, account, and tenant.
  3. Application optimization also brings cost optimization: It is often easier to think about lift and shift for legacy applications, or to underestimate application optimization as a way to solve performance problems. Allocating additional resources is just easier and less expensive in the short term than doing a thorough analysis and optimizing individual application components.

Key Actions and Takeaways

As is often the case, common sense brings better results than complicating things with additional tools and layers. In this context, if we look at the three points above, we can easily see how to reduce cloud costs without increasing overall complexity.

Before adopting a FinOps tool, it is fundamental to look at the services and products already in use. Here are some examples that show how easy cloud cost management can be:

  1. Data storage is the most important item in cloud spending for the majority of enterprises. S3 Storage Lens is a phenomenal tool for getting better visibility into what is happening with your S3 storage. An easy-to-use interface and a wealth of metrics give the user insights into how applications use storage and how to remediate potential issues, and not only from the cost-savings point of view.
  2. KubeCost is now a popular tool in the Kubernetes space. It is simple yet effective and gives full visibility into resource consumption. It can associate a cost with each single resource, show the real cost of every application or team, provide real-time alerts and insights, and produce reports to track costs and show trends over time.
  3. S3 Intelligent-Tiering is another example of optimization. Instead of manually choosing one of the many storage classes available on AWS S3, the user can select this option and have the system place data on different storage tiers depending on the access patterns of each object. This automates data placement for the best combination of performance and $/GB, and users who have adopted it have seen a tremendous drop in storage fees with no or minimal impact on applications (a brief example follows this list).
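
As a concrete illustration of that last point, opting data into Intelligent-Tiering is either a storage-class choice at upload time or a lifecycle rule for objects already in the bucket. A minimal boto3 sketch; the bucket, key, and file names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# New uploads: pick the Intelligent-Tiering storage class directly.
with open("q1.parquet", "rb") as data:                 # placeholder local file
    s3.put_object(
        Bucket="my-analytics-bucket",                  # placeholder bucket
        Key="reports/2023/q1.parquet",                 # placeholder key
        Body=data,
        StorageClass="INTELLIGENT_TIERING",
    )

# Existing objects: a lifecycle rule transitions the whole bucket.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-analytics-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "move-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},                  # apply to every object
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
        }]
    },
)
```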

Where to go from here

This article is not aimed against FinOps; rather, it aims to separate hype from reality. Many users don’t need FinOps tools to get their cloud spending under control, especially when the best practices behind FinOps haven’t been adopted in the first place.

In most cases, common sense will suffice to reduce cloud bills, and the right use of features from Amazon or other public cloud providers is more than enough to cut costs noticeably.

FinOps tools should be considered only when the organization is particularly large and it becomes complicated to track all the moving parts, teams, users, and applications (or when there are political problems for which FinOps sounds much cooler than best practices such as chargeback).

If you are interested in learning more about Cloud and FinOps, please check GigaOm’s report library on CloudOps and Cloud infrastructure topics.

The post CXO Insight: Cloud Cost Optimization appeared first on GigaOm.



Time Is Running Out For The “Journey To The Cloud”


Cloud is all, correct? Just as all roads lead to Rome, so all information technology journeys inevitably result in everything being, in some shape or form, “in the cloud.” So we are informed, at least: this journey started back in the mid 2000s, as application service providers (ASPs) gave way to various as-a-service offerings, and Amazon launched its game-changing Elastic Compute Cloud service, EC2. 

A decade and a half later, we’re still on the road – nonetheless, the belief that we’re en route to some technologically superior nirvana pervades. Perhaps we will arrive one day at that mythical place where everything just works at ultra scale, and we can all get on with our digitally enabled existences. Perhaps not. We can have that debate, and in parallel, we need to take a cold, hard look at ourselves and our technology strategies.

This aspirational-yet-vague approach to technological transformation is not doing enterprises (large or small) any favors. To put it simply, our dreams are proving expensive. First, let’s consider what is writ (in large letters) in front of our eyes.

Cloud costs are out of control

For sure, it is possible to spin up a server with a handful of virtual coppers, but this is part of the problem. “Cloud cost complexity is real,” wrote Paula Rooney for CIO.com earlier this year, in five words summarising the challenges with cloud cost management strategies – that it’s too easy to do more and more with the cloud, creating costs without necessarily realizing the benefits. 

We know from our FinOps research the breadth of cost management tools and services arriving on the scene to deal with this rapidly emerging cloud cost management challenge.

(As an aside, we are informed by vendors, analysts, and pundits alike that the size of the cloud market is growing – but given the runaway train that cloud economics has become, perhaps it shouldn’t be. One to ponder.)

The cost models of many cloud computing services (SaaS, PaaS, and IaaS) are still often based around pay-per-use, which isn’t necessarily compatible with many organizations’ budgeting mechanisms. These models can be attractive for short-term needs but are inevitably more expensive over the longer term. I could caveat this with “unless accompanied by stringent cost control mechanisms,” but evidence across the past 15 years makes this point moot.

One option is to move systems back in-house. As per a discussion I was having with CTO Andi Mann on LinkedIn, this is nothing new; what’s weird is that the journey to the cloud is always presented as one-way, with such events as the exception. Which brings me to a second point: we are still wed to the notion that the cloud is a virtual place at which we shall arrive at some point.

Spoiler alert: it isn’t. Instead, technology options will continue to burst forth, new ways of doing things requiring new architectures and approaches. Right now, we’re talking about multi-cloud and hybrid cloud models. But, let’s face it, the world isn’t “moving to multi-cloud” or hybrid cloud: instead, these are consequences of reality. 

“Multi-cloud architecture” does not exist in a coherent form; rather, organizations find themselves having taken up cloud services from multiple providers—Amazon Web Services, Microsoft Azure, Google Cloud Platform, and so on—and are living with the consequences. 

Similarly, what can we say about hybrid cloud? The term has been applied to either cloud services needing to integrate with legacy applications and data stores; or the use of public cloud services together with on-premise, ‘private’ versions of the same. In either case, it’s a fudge and an expensive one at that. 

Why expensive? Because we are, once again, fooling ourselves that the different pieces will “just work” together. At the risk of another spoiler alert, you only have to look at the surge in demand for glue services such as integration platforms as a service (iPaaS). These are not cheap, particularly when used at scale. 

Meanwhile, we are still faced with that age-old folly that whatever we are doing now might in some way replace what has gone before. I have had this conversation so many times over the decades: the task is to build something new, then migrate and decommission older systems and applications. I wouldn’t want to put a number on it, but my rule of thumb is that it happens less often than it doesn’t. The result is more to manage, not less, and more to integrate and interface.

Enterprise reality is a long way from cloud nirvana

The reality is that, even as cloud spend starts to grow beyond traditional IT spend (see above on whether it should), cloud services will live alongside existing IT systems for the foreseeable future, further adding to the hybrid mash.

As I wrote back in 2009, “…choosing cloud services [is] no different from choosing any other kind of service. As a result, you will inevitably continue to have some systems running in-house… the result is inevitably going to be a hybrid architecture, in which new mixes with old, and internal with external.” 

It’s still true, with the additional factor of the law of diminishing returns. The hyperscalers have monetized what they can easily, amounting to billions of dollars in terms of IT real estate. But the rest isn’t going to be so simple. 

As cloud providers look to harvest more internal applications and run them on their own servers, they move from easier wins to more challenging territory. The fact that, as of 2022, AWS has a worldwide director of mainframe sales is a significant indicator of where the buck stops, but mainframes are not going to give up their data and applications that easily.

And why should they if the costs of migration increase beyond the benefits of doing so, particularly if other options exist to innovate? One example is captured by the potentially oxymoronic phrase ‘Mainframe DevOps’. For finance organizations, being able to run a CI/CD pipeline within a VM inside a mainframe opens the door to real-time anti-fraud analytics. That sounds like innovation to me.

Adding to all this is the new wave of “Edge”. Local devices, from mobile phones to video cameras and radiology machines, are increasingly intelligent and able to process data. See above on technology options bursting forth, requiring new architectures: cloud providers and telcos are still tussling with how this will look, even as they watch it happen in front of their eyes. 

Don’t get me wrong, there’s lots to like about the cloud. But it isn’t the ring to rule them all. Cloud is part of the answer, not the whole answer. But seeing cloud – or cloud-plus – as the core is having a skewing effect on the way we think about it.

The fundamentals of hosted service provision

There are three truths in technology: first, it’s about the abstraction of physical resources; second, it’s about right-sizing the figurative architecture; and third, it’s about a dynamic market of provisioning. The rest is supply chain management and outsourcing, plus marketing and sales.

The hyperscalers know this, and have done a great job of convincing everyone that the singular vision of cloud is the only show in town. At one point, they were even saying that it was cheaper: in 2015, AWS’s Andy Jassy said*: “AWS has such large scale, that we pass on to our customers in the form of lower prices.”

By 2018, AWS was stating, “We never said it was about saving money.” – read into that what you will, but note that many factors are outside the control even of AWS. 

“Lower prices” may be true for small hits of variable spending, but it certainly isn’t for major systems or large-scale innovation. Recognizing that pay-per-use  couldn’t fly for enterprise spending, AWS, GCP, and Azure have introduced (varyingly named) notions of reserved instances—in which virtual servers can be paid for in advance over a one- or three-year term. 

In major part, they’re a recognition that corporate accounting models can’t cope with cloud financing models; also in major part, they’re a rejection of the elasticity principle upon which the cloud was originally sold.
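
A back-of-the-envelope comparison shows the trade-off those commitment models encode; the hourly rates below are purely illustrative, not any provider’s actual prices:

```python
# Hypothetical rates for one mid-sized virtual machine.
on_demand_per_hour = 0.20    # pay-per-use list price (illustrative)
reserved_per_hour = 0.13     # one-year commitment, ~35% discount (illustrative)
hours_per_year = 24 * 365

print(f"On-demand, always on: ${on_demand_per_hour * hours_per_year:,.0f}/year")  # $1,752
print(f"Reserved,  always on: ${reserved_per_hour * hours_per_year:,.0f}/year")   # $1,139

# The catch: reserved capacity is paid for whether it is used or not.
# Run the same workload only eight hours a day and pay-per-use wins,
# which is the elasticity principle the commitment models trade away.
print(f"On-demand, 8h/day:    ${on_demand_per_hour * 8 * 365:,.0f}/year")          # $584
```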

My point is not to rub any provider’s nose in its historical marketing but to return to my opener – that we’re still buying into the notional vision, even as it continues to fragment, and by doing so, the prevarication is costing end-user enterprises money. Certain aspects, painted as different or cheaper, are nothing of the sort – they’re just managed by someone else, and the costs are dictated by what organizations do with what is provided, not its list price. 

Shifting the focus from cloud-centricity

So, what to do? We need a view that reflects current reality, not historical rhetoric or a nirvanic future. The present and forward vision of massively distributed, highly abstracted and multi-sourced infrastructure is not what vendor marketing says it is. If you want proof, show me a single picture from a hyperscaler that shows the provider living within some multi-cloud ecosystem. 

So, it’s up to us to define it for them. If enterprises can’t do this, they will constantly be pulled off track by those whose answers suit their own goals. 

So, what does it look like? In the major part, we already have the answer: a multi-hosted, highly fragmented architecture is, and will remain, the norm, even for firms that major on a single cloud provider. But there isn’t currently an easy way to describe it.

I hate to say it, but we’re going to need a new term. I know, I know, industry analysts and their terms, eh? But when Gandalf the Grey became Gandalf the White, it meant something. Labels matter. The current terminology is wrong and driving this skewing effect. 

Having played with various ideas, I’m currently majoring in multi-platform architecture – it’s not perfect, I’m happy to change it, but it makes the point. 

A journey towards a more optimized, orchestrated multi-platform architecture is a thousand times more achievable and valuable than some figurative journey to the cloud. It embraces and encompasses migration and modernization, core and edge, hybrid and multi-hosting, orchestration and management, security and governance, cost control, and innovation. 

But it does so by seeing the architecture holistically, rather than (say) seeing cloud security as somehow separate from non-cloud security, or cloud cost management as any different from outsourcing cost optimization.

Of course, we may build things in a cloud-native manner (with containers, Kubernetes and the like), but we can do so without seeing the resulting applications as (say, again) needing to run on a hyperscaler rather than a mainframe. In the multi-platform architecture, all elements are first-class citizens, even if some are older than others.

That embraces the breadth of the problem space and isn’t skewed towards an “everything will ultimately be cloud,” nor a “cloud is good, the rest is bad,” nor a “cloud is the norm, edge is the exception” line. It also puts paid to any distorted idea of the size of the cloud market. Cloud economics should not exist as a philosophy, or at the very least, it should be one element of FinOps.

There’s still a huge place for the hyperscalers, whose businesses run on three axes – functionality, engineering, and the aforementioned cost. AWS has always sought to out-function the competition, famous for the number of announcements it would make at re:Invent each year (and this year’s data-driven announcements are no exception). Engineering is another definitive metric of strength for a cloud provider, wrapping scalability, performance and robustness into the thought of: is it built right? 

And finally, we have the aforementioned cost. There’s also a place for spending on cloud providers, but cost management should be part of the Enterprise IT strategy, not locking the stable door after the rather expensive and hungry stallion has bolted. 

Putting multi-platform IT strategy into the driving seat

Which brings me to the conclusion: such a strategy should be built on the notion of a multi-platform architecture, not a figurative cloud. With the former, technology becomes a means to an end, with the business in control. With the latter, organizations are essentially handing the keys to their digital kingdoms to a third party (and help yourself to the contents of the fridge while you are there).

If “every company is a software company,” they need to recognize that software decisions can only be made with a firm grip on infrastructure. This boils down to the most fundamental rule of business – which is to add value to stakeholders. Entire volumes have been written about how leaders need to decide where this value is coming from and dispense with the rest (cf Nike and manufacturing vs branding, and so on and so on). 

But this model only works if “the rest” can be delivered cost-effectively. Enterprises do not have a tight grip on their infrastructure providers, a fact that hyperscalers are content to leverage and will continue to do so as long as end-user businesses let them.

Ultimately, I don’t care what term is adopted. But we need to be able to draw a coherent picture that is centred on enterprise needs, not cloud provider capabilities, and it’ll really help everybody if we all agree on what it’s called. To stick with current philosophies helps one set of organizations alone, however many times they reel out Blockbuster or Kodak as worst-case examples (see also: we’re all still reading books).

Perhaps, we are in the middle of a revolution in service provision. But don’t believe for a minute that providers only offering one part of the answer have either the will or ability to see beyond their own solutions or profit margins. That’s the nature of competition, which is fine. But it means that enterprises need to be more savvy about the models they’re moving towards, as cloud providers aren’t going to do it for them. 

To finish on one other analyst trick, yes, we need a paradigm shift. But one which maps onto how things are and will be, with end-user organizations in the driving seat. Otherwise, their destinies will be dictated by others, even as they pick up the check.  

*The full quote, from Jassy’s 2015 keynote, is: “There’s 6 reasons that we usually tell people, that we hear most frequently. The first is, if you can turn capital expense to a variable expense, it’s usually very attractive to companies. And then, that variable expense is less than what companies pay on their own – AWS has such large scale, that we pass on to our customers in the form of lower prices.”

The post Time Is Running Out For The “Journey To The Cloud” appeared first on GigaOm.



How APM, Observability and AIOps drive Operational Awareness


Ron Williams explains all to Jon Collins

Jon Collins: Hi Ron, thanks for joining me! I have two questions, if I may. One’s the general question of observability versus what’s been called application performance monitoring, APM – there’s been some debate about this in the industry, I know. Also, how do they both fit in with operational awareness, which I know is a hot topic for you.

Ron Williams: I’ll wax lyrical, and we can see where this goes – I’ll want to bring in AIOps as well, as another buzzword. Basically, we all started out with monitoring, which is, you know: Is it on? Is it off? Just monitoring performance, that’s the basis of APM. 

Observability came about when we tried to say, well, this one’s performing this way, that one’s performing that way, is there a relationship? So, it is trying to take the monitoring that you have and say, how are these things connected? Observability tools are looking at the data that you have, and trying to make sure that things are working to some degree.

But that still doesn’t tell you whether or not the company is okay, which is where operational awareness comes in. Awareness is like, hey, are all the things necessary to run the company included? And are they running okay? That’s what I call full operational awareness. This requires information that is not in IT to be combined with information that IT operations obviously has, and AIOps tends to be the tool that can do that.

So, observability solutions serve an important function; they allow you to see the technical connections between objects and services, and why and how they may work. Awareness includes that and adds functional analysis, prediction, and prevention. But I’m not just talking about operational awareness as a technical thing, but in terms of the business. Let’s look at HR – this has an IT component, but nobody looks at that as a separate thing. If HR’s IT isn’t working, and if I’m the CEO, as far as I am concerned, HR is not working, and so the company is not working, even if other parts still function.

So, how do I gain awareness of all the pieces being brought together? AIOps is a solution that can do that, because it is an intelligent piece that pulls data in from everywhere, whereas observability is taking the monitoring data that you have and understanding how those data relate to each other. APM gives information and insights, observability helps solve technical problems, whereas AIOps tools help solve business problems.

AIOps platforms are one tool that can combine both data sources: real-time IT operational awareness and business operations awareness. Together, these constitute organizational awareness, that is, awareness across the company as a whole.

Jon: For my take on the benefits of observability platforms, bear with me as I haven’t actually used these tools! I came out of the ITIL, ITSM world of the 1990s, which (to me) was about providing measures of success. Back in the day, you got a dashboard saying things aren’t performing – that gave us performance management, anomaly detection, IT service management and so on. Then it went into business service management, dashboards to say, yeah, your current accounts aren’t working as they should. But it was always about presentation of information to give you a feel of success, and kick off a diagnostic process. 

Whereas observability… I remember I was at a CloudBees user event, and someone said this, so I’m going to borrow from them: essentially, solving where things are going wrong has become a kind of whodunnit. Observability, to me, is one of those words that describes itself. It’s not a solution; it’s actually an anti-word: it describes the problem in a way that makes it sound like a solution, like “actionable insights.” It’s the lack of ability to know where problems are happening in distributed architectures. That’s what is causing so much difficulty.

Ron: That’s a valid statement. Operational awareness comes from situational awareness, which was originally from the military. It’s a great term, because it says you’re sitting in the middle of the field of battle. Where’s the danger? You’re doing this, your head’s on a swivel, and you don’t know where anything is. 

So operational awareness is a big deal, and it feeds the operation of not just IT, but the whole company. You can have IT operating at a hundred percent, but the company can be not making a dime, because something IT is not responsible directly for, but supports, is not working correctly.

Jon: I spoke to the mayor of the city of Chicago about situational awareness, specifically about snow ploughs: when there’s snow, you want to turn into a street and know the cars are out of the way, because once a snowplough is in a street, it can’t get out. I guess, from the point of view that you’re looking at here, operational awareness is not the awareness that IT operations requires. It’s awareness of business operations and being able to run the business better based on information about IT systems. Is that fair?

Ron: Yes. Monitoring asks: are my systems OK, and is the company? Observability asks: how are the systems and the company behaving, why are they behaving that way, and what’s their relationship? Can I fix things before anything breaks and causes incidents? Awareness is a whole-company thing – are all parts performing the way they should? Will something break, and if so, when? And can I prevent that from breaking?

That’s why operational awareness is more than situational awareness, which we can see as helping individuals – it’s aimed at the whole company, working with business awareness to drive organizational awareness. I’m not trying to invent concepts, but I am trying to be frank about what’s needed and how the different groups of tools apply. Operational awareness includes observability, monitoring, reporting and prediction, which is where AIOps comes in. You get all the pieces that we all know about, but when you put them together you get awareness of the operation of the company, not just IT. Observability and monitoring don’t include anything about business operations.

Monitoring, Observability and AIOps

Jon: Is there another element? For the record, I hate maturity models because they never happen. But this is a kind of developmental model, isn’t it? From monitoring, to observability, and from there you want to improve to awareness. What you can also do is think upwards, from basic systems management, to IT service management, to business service management.

Business service management was great, because it said (for example) people can’t access the current accounts. That’s really important, but what it wasn’t telling you was whether or not that’s doing you any damage as a company, so you can work across monitoring, through observability to operational awareness.

Another question, then, where can you get this operational awareness thing? I don’t suppose you can go down to Woolworths, pick up some operational awareness, stick it on a pallet, and wheel it home, so what do you do? 

Ron: For a start, you must have all the pieces – if you don’t have monitoring, observability and all that you can’t get there, right? But then, one of the biggest pieces that’s missing is business awareness. The business, generally speaking, doesn’t communicate its operational state. This makes it hard – if my database isn’t running, what’s the impact of that? What does it mean to be fully aware? We can see this as a Venn diagram – if I draw another circle, it’s the whole circle, it’s the company.

Operational Awareness

Jon: Hang on, this is super important. If we go back to the origins of DevOps (we can argue whether or not it’s been successful since two thousand and seven, but bear with me on this), the origins of it were things like, “Black Friday’s coming up. How can we have the systems in place that we need to deliver on that?” It was very much from left to right – we need to deploy new features, so that we can maximize benefits, we need to set priorities and so on. 

But the way that you said it was the business is not closing the loop. It’s up to the business to say, “I’m not able to perform. I’m not able to sell as much as I should be at the moment. Let’s look into why that is, and let’s feed that back to IT, so that I can be doing that better.” You’ve got the marketing department, the sales department, upper management, all the different parts of the organization. They all need to take responsibility for their part in telling everyone else how well they are doing.

Ron: Absolutely. I almost put a fourth circle on my Venn diagram, which was the business side. But I decided to leave this, as it was about awareness as an intersection. It’s odd to me that many companies are not aware of all the things that are necessary to make them function as a company. They know that IT is a big deal, but they don’t know why or how or what IT’s impact is.

Jon: Yes, so bringing in elements of employee experience and customer experience, and all those sorts of things, which then feed the value stream management and strategic portfolio management aspects: knowing where to make a difference, shifting our needle according to the stakeholders that we have.

Ron: Yes, and all of that’s in awareness, you know!

Jon: That’s a great point to leave this, that the business needs to recognize it has a role in this. It can’t be a passive consumer of IT. The business needs to be a supplier of information. I know we’ve said similar things before, but the context is different – cloud-native and so on, so it’s about aligning business information with a different architecture and set of variables. Thank you so much, Ron. It’s been great speaking to you.

Ron: Thank you for letting me share!

The post How APM, Observability and AIOps drive Operational Awareness appeared first on GigaOm.
