
Pragmatic view of Zero Trust


Traditionally, we trusted everything in the network, everything in the enterprise, and put our security at the edge of that boundary. Pass all of our checks and you were in the “trusted” group. That worked well when the opposition was unsophisticated, most end-user workstations were desktops, remote users were few, and all our servers sat in a series of data centers that we controlled completely, or in part. We were comfortable with our place in the world and the things we built. Of course, we were also asked to do more with less, and this security posture was simpler and less costly than the alternative.

Starting around the time of Stuxnet, this began to change. Security went from a poorly understood, grudgingly accepted cost discussed in back rooms to a topic of genuine interest in boardrooms and at shareholder meetings. Overnight, the executive level went from being able to ignore cybersecurity to having to be knowledgeable about the company’s cyber posture. Attacks increased, and the major news organizations started reporting on cyber incidents. Legislation changed to reflect this new world, and more is coming. How do we handle this new world and all of its requirements?

Zero Trust is that change: a fundamental shift in cybersecurity strategy. Whereas before we focused on boundary control and built all our security around the idea of inside and outside, now we need to treat every component and every person as a potential Trojan horse. It may look legitimate enough to get through the boundary, but in reality it could be hosting a threat actor waiting to attack. Even better, your applications and infrastructure could be a time bomb waiting to blow, where the code used in those tools is exploited in a “supply chain” attack, leaving the organization vulnerable through no fault of its own. Zero Trust says: “You are trusted only to take one action, one time, in one place, and the moment that changes you are no longer trusted and must be validated again, regardless of your location, application, user ID, etc.” Zero Trust is exactly what it says: “I do not trust anything, so I validate all the things.”
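
To make “validate all the things” concrete, here is a minimal sketch of what per-request validation could look like. It is purely illustrative: the attribute names and in-memory policy table are invented for this example and do not reflect any particular product, and a real policy engine would also weigh device posture, session age, and behavioral signals.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """One action, one time, one place -- evaluated on every request."""
    user_id: str
    device_id: str
    location: str
    resource: str
    action: str

def evaluate(request: AccessRequest, policy: dict) -> bool:
    """Default deny: with no explicit rule, or any mismatched attribute,
    the request is refused and the caller must re-validate."""
    rule = policy.get((request.resource, request.action))
    if rule is None:
        return False
    return (request.user_id in rule["users"]
            and request.device_id in rule["devices"]
            and request.location in rule["locations"])

# Illustrative policy: alice may read the payroll database, from one
# managed device, on one network segment -- and nothing else.
policy = {("payroll-db", "read"): {"users": {"alice"},
                                   "devices": {"laptop-042"},
                                   "locations": {"hq-vlan-12"}}}

ok = AccessRequest("alice", "laptop-042", "hq-vlan-12", "payroll-db", "read")
moved = AccessRequest("alice", "laptop-042", "coffee-shop", "payroll-db", "read")
print(evaluate(ok, policy))     # True -- every attribute matches
print(evaluate(moved, policy))  # False -- location changed, trust is revoked
```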

That is a neat theory, but what does it mean in practice? We need to restrict users to the absolute minimum required access: networks with tight sets of ACLs, applications that can communicate only with the things they must communicate with, and devices segmented to the point that they think they are alone on private networks. All of this must remain dynamic enough for a device’s sphere of trust to change as the organization evolves, while still enabling management of those devices. The overall goal is to reduce the “blast radius” any compromise would allow in the organization, since a cyber attack is a question not of “if” but “when.”

So if my philosophy changes from “I know that and trust it” to “I cannot believe that is what it says it is,” what can I do? Especially when I consider that I did not get five times the budget to deal with five times the complexity. I look to the market. Good news! Every single security vendor is now telling me how they solve Zero Trust with their tool, platform, service, or new shiny thing. So I ask questions. It seems they only really solve it according to their marketing. Why? Because Zero Trust is hard. It is very hard. It is complex, and it requires change across the organization: not just tools, but the full trifecta of people, process, and technology; not just my technology team, but the entire organization; not one region, but the whole globe. It is a lot.

All is not lost, though, because Zero Trust is not a fixed outcome; it is a philosophy. It is not a tool, an audit, or a process. I cannot buy it, nor can I certify it (no matter what people selling things will say). That gives me hope. I also remember the truism that “perfection is the enemy of progress,” and I realize I can move the needle.

So I take a pragmatic view of security, through the lens of Zero Trust. I don’t aim to do everything all at once. Instead, I look at what I am able to do and where I have existing skills. How is my organization designed? Am I a hub and spoke, with a core organization providing shared services to largely independent business units? Do I have a mesh, where the BUs are distributed and we integrated and staffed organically through years of M&A? Are we fully integrated as an organization, with one standard for everything? Maybe it is none of those.

I start by considering my capabilities and mapping my current state. Where is my organization against the NIST Cybersecurity Framework? Where do I think I could get with my current staff? Who in my partner organizations can help me? Once I know where I am, I fork my focus.

One fork is the low-hanging fruit that can be resolved in the short term. Can I add some firewall rules to better restrict VLANs that do not need to communicate? Can I audit user accounts and make sure we are following best practices for organization and permission assignment? Does MFA exist, and can I expand its use, or implement it for some critical systems?
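
As a sketch of what that account audit could look like, here is a minimal example. The field names and thresholds are invented for illustration; a real audit would pull these records from your directory service rather than a hard-coded list.

```python
from datetime import datetime, timedelta

# Hypothetical account export -- field names are invented for this sketch.
accounts = [
    {"name": "alice",      "mfa_enabled": True,  "is_admin": False,
     "last_login": datetime(2024, 1, 10)},
    {"name": "svc-backup", "mfa_enabled": False, "is_admin": True,
     "last_login": datetime(2023, 3, 2)},
]

STALE_AFTER = timedelta(days=90)  # illustrative threshold
now = datetime(2024, 2, 1)

for acct in accounts:
    findings = []
    if not acct["mfa_enabled"]:
        findings.append("no MFA")
    if acct["is_admin"] and not acct["mfa_enabled"]:
        findings.append("privileged account without MFA")
    if now - acct["last_login"] > STALE_AFTER:
        findings.append("stale: no login in 90+ days")
    if findings:
        print(f"{acct['name']}: {'; '.join(findings)}")
```

Even a throwaway script like this turns “audit user accounts” from a vague intention into a repeatable quick win.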

My second fork is developing an ecosystem of talent, organized around a security-focused operating model: otherwise known as my long-term plan. DevOps becomes SecDevOps, where security is integrated and comes first. My partners become more integrated, and I look for, and build relationships with, new partners that fill my gaps. My teams are reorganized to support security by design AND by practice. And I develop a training plan that pairs what we can do today (partner lunch-and-learns) with long-term strategy (which may mean upskilling my people with certifications).

This is also the phase where we begin a tools rationalization project. Which of my existing tools do not perform as needed in the new Zero Trust world? These will likely need to be replaced in the near term. Which tools work well enough but will need to be replaced when their contracts end? And which tools will we retain?

Finally, where do we see the big, hard rocks being placed in our way? It is a given that our networks will need some redesign, and that they must be designed with automation in mind, because the rules, ACLs, and VLANs will be far more complex than before, and changes will happen at a far faster pace. Automation is the only way this will work. The best part is that modern automation is self-documenting.
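
As one hedged illustration of what “self-documenting” can mean here: the intended state lives in a declarative structure that both generates the rules and records why each one exists. The policy format and rule syntax below are invented for the example; in practice this would be your infrastructure-as-code tooling of choice.

```python
# Declarative intent: which segments may talk, on which ports, and why.
# The policy data itself documents the network; the rules are generated.
POLICY = [
    {"src": "vlan-app",  "dst": "vlan-db",  "ports": [5432],
     "reason": "app tier to database"},
    {"src": "vlan-mgmt", "dst": "vlan-app", "ports": [22],
     "reason": "admin access to app tier"},
]

def render_rules(policy: list) -> list:
    """Expand declared intent into deny-by-default ACL lines
    (pseudo-syntax, for illustration only)."""
    lines = []
    for rule in policy:
        for port in rule["ports"]:
            lines.append(f"permit tcp {rule['src']} {rule['dst']} "
                         f"eq {port}  ! {rule['reason']}")
    lines.append("deny ip any any  ! default deny")
    return lines

print("\n".join(render_rules(POLICY)))
```

Change the policy data and regenerate; the documentation can never drift from the deployed rules, which is exactly what a fast-changing Zero Trust network needs.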

The wonderful thing about being pragmatic is that we get to make positive change, hold a long-term goal we can all align on, and focus on what we can change now while building for the future. All wrapped in a communications layer for executive leadership and an evolving strategy for the board. Eating the elephant one bite at a time.



How APM, Observability and AIOps drive Operational Awareness


Ron Williams explains all to Jon Collins

Jon Collins: Hi Ron, thanks for joining me! I have two questions, if I may. One’s the general question of observability versus what’s been called application performance monitoring, APM – there’s been some debate about this in the industry, I know. Also, how do they both fit in with operational awareness, which I know is a hot topic for you.

Ron Williams: I’ll wax lyrical, and we can see where this goes – I’ll want to bring in AIOps as well, as another buzzword. Basically, we all started out with monitoring, which is, you know: Is it on? Is it off? Just monitoring performance, that’s the basis of APM. 

Observability came about when we tried to say, well, this one’s performing this way, that one’s performing that way, is there a relationship? So, it is trying to take the monitoring that you have and say, how are these things connected? Observability tools are looking at the data that you have, and trying to make sure that things are working to some degree.

But that still doesn’t tell you whether or not the company is okay, which is where operational awareness comes in. Awareness is like, hey, are all the things necessary to run the company included? And are they running okay? That’s what I call full operational awareness. This requires information that is not in IT to be combined with information that IT operations obviously has, and AIOps tends to be the tool that can do that.

So, observability solutions serve an important function; they allow you to see the technical connections between objects and services, and why and how they may work. Awareness includes that and adds functional analysis, prediction, and prevention. But I’m not just talking about operational awareness as a technical thing, but in terms of the business. Let’s look at HR – this has an IT component, but nobody looks at that as a separate thing. If HR’s IT isn’t working, and if I’m the CEO, as far as I am concerned, HR is not working, and so the company is not working, even if other parts still function.

So, how do I gain awareness of all the pieces being brought together? AIOps is a solution that can do that, because it is an intelligent piece that pulls data in from everywhere, whereas observability is taking the monitoring data that you have and understanding how those data relate to each other. APM gives information and insights, observability helps solve technical problems, and AIOps tools help solve business problems.

AIOps platforms are one tool that can combine both data sources: real-time IT operational awareness and business operations awareness. Together, these constitute Organizational Awareness, that is, awareness across the company as a whole.

Jon: For my take on the benefits of observability platforms, bear with me as I haven’t actually used these tools! I came out of the ITIL, ITSM world of the 1990s, which (to me) was about providing measures of success. Back in the day, you got a dashboard saying things aren’t performing – that gave us performance management, anomaly detection, IT service management and so on. Then it went into business service management, dashboards to say, yeah, your current accounts aren’t working as they should. But it was always about presentation of information to give you a feel of success, and kick off a diagnostic process. 

Whereas observability… I remember I was at a CloudBees user event, and someone said this, so I’m going to borrow from them: essentially, solving where things are going wrong has become a kind of whodunnit. Observability, to me, is one of those words that describes itself. It’s not a solution; it’s actually an anti-word, describing the problem in a way that makes it sound like a solution offering actionable insights. It’s the lack of ability to know where problems are happening in distributed architectures that is causing so much difficulty.

Ron: That’s a valid statement. Operational awareness comes from situational awareness, which was originally from the military. It’s a great term, because it says you’re sitting in the middle of the field of battle. Where’s the danger? You’re doing this, your head’s on a swivel, and you don’t know where anything is. 

So operational awareness is a big deal, and it feeds the operation of not just IT, but the whole company. You can have IT operating at a hundred percent, but the company can be not making a dime, because something IT is not responsible directly for, but supports, is not working correctly.

Jon: I spoke to the mayor of the city of Chicago about situational awareness, specifically about snow ploughs: when there’s snow, you want to turn into a street and know the cars are out of the way, because once a snowplough is in a street, it can’t get out. I guess, from the point of view that you’re looking at here, operational awareness is not the awareness that IT operations requires. It’s awareness of business operations and being able to run the business better based on information about IT systems. Is that fair?

Ron: Yes. Monitoring asks: are my systems OK, and is the company? Observability asks: how are the systems and the company behaving, why are they behaving that way, and what’s their relationship? Can I fix things before anything breaks and causes incidents? Awareness is a whole-company thing – are all parts performing the way they should? Will something break, and if so, when? And can I prevent that from breaking?

That’s why operational awareness is more than situational awareness, which we can see as helping individuals – it’s aimed at the whole company, working with business awareness to drive organizational awareness. I’m not trying to invent concepts, but I am trying to be frank about what’s needed and how the different groups of tools apply. Operational awareness includes observability, monitoring, reporting, and prediction, which is where AIOps comes in. You get all the pieces that we all know about, but when you put them together you get awareness of the operation of the company, not just IT. Observability and monitoring alone don’t include anything about business operations.

Monitoring, Observability and AIOps

Jon: Is there another element? For the record, I hate maturity models, because they never happen. But this is a kind of developmental model, isn’t it? From monitoring to observability, and from there to the awareness you want. What you can also do is think upwards, from basic systems management, to IT service management, to business service management.

Business service management was great, because it said (for example) people can’t access the current accounts. That’s really important, but what it wasn’t telling you was whether or not that’s doing you any damage as a company, so you can work across monitoring, through observability to operational awareness.

Another question, then, where can you get this operational awareness thing? I don’t suppose you can go down to Woolworths, pick up some operational awareness, stick it on a pallet, and wheel it home, so what do you do? 

Ron: For a start, you must have all the pieces – if you don’t have monitoring, observability, and all that, you can’t get there, right? But then, one of the biggest pieces that’s missing is business awareness. The business, generally speaking, doesn’t communicate its operational state. This makes it hard – if my database isn’t running, what’s the impact of that? What does it mean to be fully aware? We can see this as a Venn diagram – if I draw another circle, it’s the whole circle, it’s the company.

Operational Awareness

Jon: Hang on, this is super important. If we go back to the origins of DevOps (we can argue whether or not it’s been successful since two thousand and seven, but bear with me on this), the origins of it were things like, “Black Friday’s coming up. How can we have the systems in place that we need to deliver on that?” It was very much from left to right – we need to deploy new features, so that we can maximize benefits, we need to set priorities and so on. 

But the way that you said it was that the business is not closing the loop. It’s up to the business to say, “I’m not able to perform. I’m not able to sell as much as I should be at the moment. Let’s look into why that is, and let’s feed that back to IT, so that I can be doing that better.” You’ve got the marketing department, the sales department, upper management, all the different parts of the organization. They all need to take responsibility for their part in telling everyone else how well they are doing.

Ron: Absolutely. I almost put a fourth circle on my Venn diagram, which was the business side. But I decided to leave this, as it was about awareness as an intersection. It’s odd to me that many companies are not aware of all the things that are necessary to make them function as a company. They know that IT is a big deal, but they don’t know why or how or what IT’s impact is.

Jon: Yes, so bringing in elements of employee experience and customer experience, and all those sorts of things, which then feed the value stream management and strategic portfolio management aspects – knowing where to make a difference, shifting our needle according to the stakeholders that we have.

Ron: Yes, and all of that’s in awareness, you know!

Jon: That’s a great point to leave this, that the business needs to recognize it has a role in this. It can’t be a passive consumer of IT. The business needs to be a supplier of information. I know we’ve said similar things before, but the context is different – cloud-native and so on, so it’s about aligning business information with a different architecture and set of variables. Thank you so much, Ron. It’s been great speaking to you.

Ron: Thank you for letting me share!



Can low code process automation platforms fix healthcare?


I was lucky enough to sit down with Appian’s healthcare industry lead, Fritz Haimberger. Fritz is someone who practices what he preaches — outside of his day job, he still volunteers in his spare time as a medic and a firefighter in his hometown of Franklin, Tennessee (just outside Nashville). I’ve been lucky enough to work with various healthcare clients over the years, from hospitals to pharmaceutical firms and equipment manufacturers; I’ve also been involved in GigaOm’s low code tools and automation platforms reports. So, I was interested in getting his take on how this space has evolved since I last had my sleeves rolled up back in 2016.

While we talked about a wide range of areas, what really caught my attention was the recognized, still huge, challenge being faced by healthcare organizations across the globe.  “If you look at healthcare over 15 years, starting with electronic medical record systems — for so long, we’ve had a continued expectation that those implementations might cost 500 million dollars and might be implemented in 14-16 months. Reality has never been like that: repeatedly, it’s been three years down the road, a billion dollars plus in expense, sometimes with no end in sight,” said Fritz. “The notion of implementation time to value was blown away, and organizations resigned themselves to think that it’s just not possible to deliver in a timely manner.”

In part, this comes from legacy tech, but equally, it is down to underestimating the scale of the challenge. When I was working on clinical pathways for Deep Vein Thrombosis (DVT), what started as a simple series of steps inevitably grew in complexity — what if the patient was already being treated with other drugs? What if blood tests returned conflicting information? So many of these questions rely on information stored in the heads of clinicians, doctors, nurses, pharmacists, and so on.
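
As a toy illustration of how a “simple series of steps” picks up branches, consider the shape (not the substance) of such a pathway in code; the conditions and thresholds below are invented for the example and are emphatically not clinical guidance.

```python
# Toy sketch of pathway branching -- conditions and thresholds are
# invented for illustration and are not clinical guidance.
def next_step(patient: dict) -> str:
    if patient.get("on_anticoagulants"):
        return "review existing medication before prescribing"
    if patient.get("d_dimer") is None:
        return "order D-dimer blood test"
    if patient["d_dimer"] > 0.5:  # placeholder threshold
        return "order ultrasound to confirm suspected DVT"
    return "DVT unlikely; consider alternative diagnoses"

print(next_step({"on_anticoagulants": True}))
print(next_step({"d_dimer": 0.7}))
```

Every branch added is another conversation with a clinician, which is exactly where the complexity — and the cost — creeps in.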

The resulting impact is not only on data models and systems functionality but also on the way information needs to be gathered. Keeping in mind that healthcare scenarios must, by their nature, be risk averse, it’s not possible to build a prototype via “fail fast” or “test and learn” — real patient lives may be involved. So, how can healthcare organizations square the circle of unachievable expectations when the “do it quick and cheap” option is off the table?

Enter low code app development, integration, process, and other forms of automation development platforms. Let’s work back from the clichéd twist in the tale and agree that it won’t be a magic digital transformation bullet. You only have to look at a technical architecture map of the NHS to realize that you’d need an entire squadron of magic rockets to even dent the surface. But several elements of the low code process automation platform approach (okay, a bit of a mouthful, so I’ll stick with automation platforms from here) map onto the challenges faced by healthcare organizations in a way that might actually make a difference.

First off, low code development platforms are not looking to directly replace existing systems, or just integrate between them. Rather, and given their heritage, they are aimed at accessing existing data to respond to new or changing needs. There’s an industry term – “land and expand” – which is largely about marketing but also helps from a technical perspective: unlike historical enterprise applications, which required organizations to adopt and adapt their processes (at vast cost), the automation platform approach is more about solving specific challenges first, then broadening use — without imposing external constraints on related development processes.

Second, the nature of software development with automation platforms plays specifically to the healthcare context. Whilst the environment is absolutely safety critical, it’s also very complex, with a lot of knowledge in the heads of healthcare professionals. This plays to a collaborative approach, one way or another — clinicians need to be consulted at the beginning of a project, but also along the way as clinical needs emerge. “The tribal knowledge breakdown is huge,” said Fritz. “With platforms such as Appian, professional developers, clinicians, and business owners can better collaborate on custom applications, so it’s bespoke to what they’re trying to achieve, in a quick iterative process.” Not only does this cut initial time to value considerably – Fritz suggested 12-14 weeks – but it is also along the way that complexity emerges, and hence can be addressed.

Automation platforms align with the way it is possible to do things, but at the same time, they are, inherently, platforms. This brings us to a third pillar: they can bake in the capabilities healthcare organizations need without those having to be bespoke — security hardening, mobile deployment, healthcare compliance, API-based integration, and so on. From experience, I know how complex these elements can be if you either rely on other parts of the healthcare architecture or have to build bespoke or buy separately — the goal is to reduce the complexity of custom apps and dependencies rather than creating them.

Perhaps automation platforms can, at the very least, unlock and unblock opportunities to make technology work for key healthcare stakeholders, from upper management to nursing staff and everyone in between. Of course, they can’t work miracles; you will also need to keep on top of your application governance — generally speaking, automation platforms aren’t always the best at version control, configuration and test management, and other ancillary activities.

Above all, when the platforms do what they do best, they are solving problems for people by creating new interfaces onto existing data and delivering new processes. “Honestly— if I’m looking at the individual, whether it’s a patient in clinical treatment, a life sciences trial participant or an insured member – if we’re improving their health outcomes, and easing the unnecessary burden on clinicians, scientists and others, that’s what makes it worthwhile putting two feet on the floor in the morning and coming to work for Appian!” 

Yes, platforms can help, but most of all, this is about recognizing that solving for business users is the answer: with this mindset, perhaps healthcare organizations really can start moving towards dealing with their legacy technical challenges to the benefit of all. 



Now’s the Moment to be Thinking About Sovereign Cloud


Sovereign Cloud was one of VMware’s big announcements at its annual VMware Explore Europe conference this year. Not that the company was announcing the still-evolving notion of sovereignty, but what it calls “sovereign-ready solutions.” Data sovereignty is the need to ensure data is managed according to local and national laws. This has always been important, so why has it become a thing now if it wasn’t three years ago?

Perhaps some of the impetus comes from General Data Protection Regulation (GDPR) compliance, or at least the limitations revealed since its arrival in 2016. It isn’t possible to define a single set of laws or regulations around data that can apply globally. Different countries have different takes on what matters, move at different rates, and face different challenges. “Sovereignty was not specific to EMEA region but driven by it,” said Joe Baguley, EMEA CTO for VMware.

GDPR requirements are a subset of data privacy and protection requirements, but increasingly, governments are defining their own or sticking with what they already have. Some countries favor more stringent rules (Germany and Ghana spring to mind), and technology platforms need to be able to work with a multitude of policies rather than enforcing just one. 

Enter the sovereign cloud, which is accelerating as a need even as it emerges as something concrete that organizations can use. In terms of the accelerating need, enterprises we speak to are talking of the increasing challenges faced when operating across national borders — as nations mature digitally, it’s no longer an option to ignore local data requirements. 

At the same time, pressure is increasing. “Most organizations have a feeling of a burning platform,” remarked Laurent Allard, head of Sovereign Cloud EMEA for VMware. As well as regulation, the threat of ransomware is highly prevalent, driving a need for organizations to respond. Less urgent but no less important is the continuing focus on digital transformation — if ransomware is the stick, transformation offers the carrot of opportunity.

Beyond these technical drivers is the very real challenge of rapidly shifting geopolitics. The conflict in Ukraine has caused irrecoverable damage to the idea that we might all get along, sharing data and offering services internationally without risk of change. Citizens and customers—that’s us—need a cast iron guarantee that the confidentiality, integrity, and availability of their data will be protected even as the world changes. And it’s not just about people—industrial and operational data subjects also need to be considered. 

It is worth considering the primary scenarios to which data protection laws and sovereignty need to apply. The public sector and regulated industries have broader constraints on data privacy, so organizations in these areas may well see the need to have “a sovereign cloud” within which they operate. Other organizations may have certain data classes that need special treatment and see sovereign cloud architecture as a destination for these. And meanwhile, multinational companies may operate in countries that impose specific restrictions on small yet important subsets of data. 

Despite the rapidly emerging need, the tech industry is not yet geared up to respond — not efficiently, anyway. I still speak to some US vendors who scratch their heads when the topic arises (though the European Data Act and other regulatory moves may drive more interest). Hyperscalers, in particular, are tussling with how to approach the challenge, given that US law already imposes requirements on data wherever it may be in the world. 

These are early days when it comes to solutions. As Rajeev Bhardwaj, GM of VMware’s Cloud Provider Solutions division, says, “There is no standard for sovereign clouds.” Developing such a thing will not be straightforward, as (given the range of scenarios) solutions cannot be one-size-fits-all. Organizations must define infrastructure and data management capabilities that fit their own needs, considering how they move data and in which jurisdictions they operate.

VMware has made some headway in this, defining a sovereign cloud stack with multiple controls, e.g., on data residency — it’s this which serves as a basis for its sovereign-ready solutions. “There’s work to be done. We’re not done yet,” says Sumit Dhawan, President of VMware. This work cannot exist in isolation, as the whole point of the sovereign cloud is that it needs to work across what is today a highly complex and distributed IT environment, whatever the organization’s size. 

Sure, it’s a work in progress, but at the same time, enterprises can think about the scenarios that matter to them, as well as the aforementioned carrot and stick. While the future may be uncertain, we can all be sure that we’ll need to understand our data assets and classify them, set policies according to our needs and the places where we operate, and develop our infrastructures to be more flexible and policy-driven.
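
As a minimal sketch of what “classify our data assets and set policies per jurisdiction” might look like once written down: the data classes, regions, and rules here are placeholders invented for illustration, not legal guidance.

```python
# Placeholder classification-to-residency map -- the classes, regions,
# and rules are invented for illustration, not legal guidance.
RESIDENCY_POLICY = {
    "public":            {"allowed_regions": {"any"}},
    "customer-pii":      {"allowed_regions": {"eu-west", "eu-central"}},
    "regulated-records": {"allowed_regions": {"de-sovereign"}},
}

def placement_allowed(data_class: str, region: str) -> bool:
    """Default deny: unknown classes and unlisted regions are refused."""
    policy = RESIDENCY_POLICY.get(data_class)
    if policy is None:
        return False
    regions = policy["allowed_regions"]
    return "any" in regions or region in regions

print(placement_allowed("customer-pii", "eu-west"))  # True
print(placement_allowed("customer-pii", "us-east"))  # False
```

The point is less the code than the discipline: once classification and placement rules are explicit, they can be enforced and audited by policy-driven infrastructure rather than by goodwill.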

I wouldn’t go as far as saying that enterprises need a chief sovereignty officer, but they should indeed be embedding the notion of data sovereignty into their strategic initiatives, both vertically (as a singular goal) and horizontally (as a thread running through all aspects of business and IT). “What about data sovereignty?” should be a bullet point on the agenda of all digital transformation activity — sure, it is not a simple question to answer, but it is all the more important because of this.

