Firefox 65 released with AV1 and WebP support

Mozilla today released Firefox 65 for Windows, Mac, and Linux. The new release contains two major new features, as well as improvements to features added in previous releases.

The two new features are support for the WebP image format and the AV1 video codec, high-quality image and video compression technologies, respectively, that are expected to reduce bandwidth usage and improve page loading times.

Work on supporting these two standards started years ago, but Mozilla only recently shipped AV1 and WebP support, after the two technologies gained traction with other browser vendors, websites, and software makers.

In addition to these two new features, Mozilla also improved two others that it added to Firefox in v63 and v64.

Enhanced Tracking Protection, also known as Content Blocking, which shipped with Firefox 63, has received a redesigned settings panel that does a better job of explaining to users what each of its three options blocks.


[Image: Mozilla]

Second, the new Task Manager-like about:performance page now shows the amount of RAM each Firefox page and extension is using, a feature it lacked when Mozilla launched the page last month with Firefox 64.

[Image: Firefox 65 performance view. Image: ZDNet]

On the security front, Mozilla has also added protection against “stack smashing,” a common attack technique in which malicious actors deliberately trigger stack buffer overflows in order to take control of a vulnerable part of the browser’s code.

According to Mozilla, stack smashing protection is currently being rolled out to macOS, Linux, and Android users only.

Also on the security front, Firefox 65’s popup blocker received an update that better prevents websites from opening multiple pop-up windows at the same time.

On the enterprise front, and answering customer requests, Mozilla has for the very first time made 32-bit and 64-bit MSI installers available for Firefox on Windows. Enterprise system administrators should now be able to trigger updates and new installations across large numbers of machines using automated tools.
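
For illustration, a minimal deployment sketch in Python, assuming a standard Windows Installer package and the generic msiexec options; the file path and name here are hypothetical, not Mozilla's official ones:

```python
import subprocess

# Hypothetical path to a downloaded Firefox MSI package.
MSI_PATH = r"C:\deploy\FirefoxSetup65.0_x64.msi"

def install_silently(msi_path: str) -> None:
    """Silent, no-reboot install via the standard Windows Installer.

    /i  = install the given package
    /qn = quiet, no UI
    /norestart = suppress any reboot
    These are generic msiexec options, not Firefox-specific flags.
    """
    result = subprocess.run(["msiexec", "/i", msi_path, "/qn", "/norestart"])
    if result.returncode != 0:
        raise RuntimeError(f"msiexec exited with code {result.returncode}")

if __name__ == "__main__":
    install_silently(MSI_PATH)
```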

And last, but not least, starting with v65, Firefox will now warn users when they close a window, regardless of whether automatic session restore is enabled in the Firefox settings.

For more details on today’s changes, readers can consult the Firefox 65 changelog, the list of security fixes, and the list of Web API and developer-related changes.


Achieve more with GigaOm

As we have grown substantially over the past two years, we are often asked who (even) GigaOm is, what the company does, how it differentiates, and so on. These are fair questions: many people still remember what we can call GigaOm 1.0, that fine media company born of the blogging wave.

We’ve been through the GigaOm 2.0 “boutique analyst firm” phase, before deciding we wanted to achieve more. That decision put us on a journey to where we are today, ten times the size in terms of headcount and still growing, and covering as many technology categories as the biggest analyst firms. 

Fuelling our growth has been a series of interconnected decisions. First, we asked technology decision-makers (CIOs, CTOs, VPs of Engineering and Operations, and so on) what they needed, and what was missing. Unanimously, they said they needed strategic technical information based on practical experience, that is, not just theory. Industry analysts, it has been said, can be like music critics who have never played in an orchestra. Sure, there’s a place for that, but it leaves a gap for practitioner-led insights.

Second, and building on this, we went through a test-and-learn phase to try various report models. Enrico Signoretti, now our VP of Product, spearheaded the creation of the Key Criteria and Radar document pair, based on his experience evaluating solutions for enterprise clients. As we developed this product set in collaboration with end-user strategists, we doubled down on the Key Criteria report as a how-to guide for writing a Request for Proposals (RFP).

Doing this led to the third strand: expanding this thinking to the enterprise decision-making cycle. Technology decision-makers don’t wake up one morning and say, “I think I need some Object Storage.”

Rather, they will be faced with a challenge, a situation, or some other scenario: perhaps existing storage products are not scaling sufficiently, applications are being rationalized, or a solution has reached end of life. These scenarios dictate a need: often, the decision-maker will not only need to define a response but will also have to justify the spending.

This reality dictates the first product in the GigaOm portfolio, the GigaBrief, which is (essentially) a how-to guide for writing a business case. Once the decision-maker has confirmed the budget, they can get on with writing an RFP (cf. the Key Criteria and Radar), and then consider running a proof of concept (PoC).

We have how-to guides for these as well, based on our Benchmarks, field tests, and Business Technology Impact (BTI) reports. We know that, alongside thought leadership, decision-makers need hard numbers for costs and benefits, so we double down on these.

For end-user organizations, our primary audience, we have therefore created a set of tools to make decisions and unblock deployments: our subscribers come to us for clarity and practitioner-led advice, which helps them work both faster and smarter and achieve their goals more effectively. Our research is high-impact by design, which is why we have an expanding set of partner organizations using it to enable their clients. 

Specifically, learning companies such as Pluralsight and A Cloud Guru use GigaOm reports to help subscribers set direction and lock down the solutions they need to deliver. By its nature, our how-to approach to report writing has created a set of strategic training tools, which directly feed more specific technical training.

Meanwhile, channel companies such as Ingram Micro and Transformation Continuum use our research to help their clients lock down the solutions they need, together with a practitioner-led starting point for supporting frameworks, architectures, and structures. And we work together with media partners like The Register and The Channel Company to support their audiences with research and insights. 

Technology vendors, too, benefit from end-user decision-makers who are better equipped to make decisions. Rather than generic market-making or long-listing potential vendors, our scenario-led materials directly impact buying decisions, taking procurement from a shortlist to a conclusion. Sales teams at systems, service, and software companies tell us how they use our reports when discussing options with prospects, not to evangelize but to explore practicalities and help reach a conclusion.

All these reasons and more enable us to say with confidence how end-user businesses, learning, channel and media companies, and indeed technology vendors are achieving more with GigaOm research. In a complex and constantly evolving landscape, our practitioner- and scenario-led approach brings specificity and clarity, helping organizations reach further, work faster and deliver more. 

Our driving force is the value we bring; at the same time, we maintain a connection with our media heritage, which enables us to scale beyond traditional analyst models. We also continue to learn, reflect, and change — our open and transparent model welcomes feedback from all stakeholders so that we can drive improvements in our products, our approach, and our outreach.

This is to say, if you have any thoughts, questions, raves, or rants, don’t hesitate to get in touch with me directly. My virtual door, and my calendar, are always open. 

Pragmatic view of Zero Trust

Traditionally, we have taken the approach that we trust everything in the network, everything in the enterprise, and put our security at the edge of that boundary: pass all of our checks and you are in the “trusted” group. That worked well when the opposition was not sophisticated, most end-user workstations were desktops, the number of remote users was very small, and we had all our servers in a series of data centers that we controlled completely or in part. We were comfortable with our place in the world and the things we built. Of course, we were also asked to do more with less, and this security posture was simple and less costly than the alternative.

Starting around the time of Stuxnet, this began to change. Security went from a poorly understood, accepted cost discussed in back rooms to a topic raised with interest in board rooms and at shareholder meetings. Overnight, executives went from being able to remain ignorant of cybersecurity to having to be knowledgeable about the company’s disposition on cyber. Attacks increased, and the major news organizations started reporting on cyber incidents. Legislation changed to reflect this new world, and more is coming. How do we handle this new world and all of its requirements?

Zero Trust is that change in security: a fundamental shift in cybersecurity strategy. Whereas before we focused on boundary control and built all our security around the idea of inside and outside, now we need to treat every component and every person as a potential Trojan horse. Each may look legitimate enough to get through the boundary, but in reality could be hosting a threat actor waiting to attack. Even better, your applications and infrastructure could be a time bomb waiting to blow, where the code used in those tools is exploited in a “supply chain” attack and, through no fault of its own, the organization is vulnerable. Zero Trust says: “You are trusted only to take one action, one time, in one place, and the moment that changes, you are no longer trusted and must be validated again, regardless of your location, application, user ID, and so on.” Zero Trust is exactly what it says: “I do not trust anything, so I validate all the things.”

That is a neat theory, but what does it mean in practice? We need to restrict users to the absolute minimum required access: networks governed by a tight series of ACLs, applications that can communicate only with the things they must, and devices segmented to the point that they think they are alone on private networks, all while remaining dynamic enough for their sphere of trust to change as the organization evolves, and while still enabling management of those devices. The overall goal is to reduce the “blast radius” any compromise would allow in the organization, since a cyber attack is a question of “when”, not “if”.
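
As a rough sketch of that “validate everything, every time” idea (the names and the toy allow-list below are invented for illustration, not any particular product’s API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    user_id: str
    device_id: str
    network_zone: str
    action: str
    resource: str

# Toy allow-list: each entry grants exactly one action, for one user,
# on one device, in one zone, against one resource.
POLICY = {
    ("alice", "laptop-042", "vlan-10", "read", "payroll-db"),
}

def is_authorized(req: Request, recently_validated: bool) -> bool:
    """Nothing is implicitly trusted: every request is checked on its
    own, and passing once grants nothing for the next request."""
    if not recently_validated:
        return False  # stale or missing validation means re-authenticate
    return (req.user_id, req.device_id, req.network_zone,
            req.action, req.resource) in POLICY
```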

So if my philosophy changes from “I know that and trust it” to “I cannot believe that is what it says it is,” what can I do? Especially when I consider that I did not get five times the budget to deal with five times the complexity. I look to the market. Good news! Every single security vendor is now telling me how they solve Zero Trust with their tool, platform, service, or new shiny thing. So I ask questions, and it seems they only really solve it according to the marketing. Why? Because Zero Trust is hard. It is very hard. It is complex, and it requires change across the organization: not just tools, but the full trifecta of people, process, and technology, and not restricted to my technology team but spanning the entire organization, not one region but globally. It is a lot.

All is not lost, though, because Zero Trust is not a fixed outcome; it is a philosophy. It is not a tool, or an audit, or a process. I cannot buy it, nor can I certify it (no matter what people selling things will say). So there is hope. Additionally, I always remember the truism “perfection is the enemy of progress,” and I realize I can move the needle.

So I take a pragmatic view of security, through the lens of Zero Trust. I don’t aim to do everything all at once. Instead, I look at what I am able to do and where I have existing skills. How is my organization designed? Am I a hub and spoke, with a core organization providing shared services to largely independent business units? Do I have a mesh, where the business units are distributed and were organically integrated and staffed through years of M&A? Are we fully integrated as an organization, with one standard for everything? Maybe it is none of those.

I start by considering my capabilities and mapping my current state. Where is my organization on the NIST Cybersecurity Framework model? Where do I think I could get with my current staff? Who in my partner organizations can help me? Once I know where I am, I fork my focus.
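
A trivial sketch of what that mapping might look like (the five core functions are NIST CSF’s own; the maturity scores below are made up for illustration):

```python
# Current-state maturity per NIST CSF core function, scored 0-4.
# The function names are real; the scores are hypothetical.
current_state = {
    "Identify": 2,
    "Protect": 2,
    "Detect": 1,
    "Respond": 1,
    "Recover": 0,
}

# Tackle the weakest functions first: biggest gap, biggest risk.
for function, score in sorted(current_state.items(), key=lambda kv: kv[1]):
    print(f"{function:<10} maturity {score}/4")
```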

One fork is the low-hanging fruit that can be resolved in the short term. Can I add some firewall rules to better restrict VLANs that do not need to communicate? Can I audit user accounts and make sure we are following best practices for organization and permission assignment? Does MFA exist, and can I expand its use, or implement it for some critical systems?
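
A minimal sketch of that kind of account audit, assuming a CSV export from a directory service (the column names and thresholds here are hypothetical; adapt them to whatever your export actually produces):

```python
import csv

def audit_accounts(path: str) -> list[tuple[str, str]]:
    """Flag accounts that miss two easy wins: MFA enabled, no staleness."""
    findings = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("mfa_enabled", "").lower() != "true":
                findings.append((row["username"], "MFA not enabled"))
            if int(row.get("days_since_last_login") or 0) > 90:
                findings.append((row["username"], "stale: no login in 90+ days"))
    return findings

if __name__ == "__main__":
    for user, issue in audit_accounts("directory_export.csv"):
        print(f"{user}: {issue}")
```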

My second fork is to develop an ecosystem of talent organized around a security-focused operating model, otherwise known as my long-term plan. DevOps becomes SecDevOps, where security is integrated and comes first. My partners become more integrated, and I look for, and build relationships with, new partners that fill my gaps. My teams are reorganized to support security by design AND by practice. And I develop a training plan that pairs what we can do today (partner lunch-and-learns) with long-term strategy (which may mean upskilling my people with certifications).

This is the phase where we begin a tools rationalization project. Which of my existing tools do not perform as needed in the new Zero Trust world? These will likely need to be replaced in the near term. Which tools work well enough, but will need to be replaced when their contracts end? And which tools will we retain?

Finally, where do we see the big, hard rocks being placed in our way? It is a given that our networks will need some redesign, and that they will need to be designed with automation in mind, because the rules, ACLs, and VLANs will be far more complex than before, and changes will happen at a far faster pace. Automation is the only way this will work. The best part is that modern automation is self-documenting, as the sketch below suggests.
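
As a toy illustration of why declarative automation is self-documenting, rendered as vendor-neutral pseudo-ACL output (the segment names, ports, and reasons are invented):

```python
# Declarative intent: which segments may talk, on what port, and why.
# Because the policy is data, it doubles as its own documentation.
POLICY = [
    {"src": "vlan-10-users", "dst": "vlan-20-apps", "port": 443,
     "reason": "users reach internal web apps over TLS"},
    {"src": "vlan-20-apps", "dst": "vlan-30-db", "port": 5432,
     "reason": "apps reach PostgreSQL"},
]

def render_acls(policy: list[dict]) -> list[str]:
    """Render vendor-neutral pseudo-ACL lines from the declared intent."""
    lines = [
        f"permit tcp {r['src']} -> {r['dst']} port {r['port']}  ! {r['reason']}"
        for r in policy
    ]
    lines.append("deny ip any -> any  ! default deny: Zero Trust")
    return lines

print("\n".join(render_acls(POLICY)))
```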

The wonderful thing about being pragmatic is that we get to make positive change, keep a long-term goal in mind that we can all align on, and focus on what we can change while developing for the future. All wrapped in a communications layer for executive leadership and an evolving strategy for the board. Eating the elephant one bite at a time.

Retrospective thoughts on KubeCon Europe 2022

I’m not going to lie: as I sit on a plane flying away from Valencia, I confess to having been taken aback by the scale of KubeCon Europe this year. In my defence, I wasn’t alone; the volume of attendees appeared to take conference organisers and exhibitors by surprise, illustrated by the notable lack of water, (I was told) t-shirts, and (at various points) taxis.

Keynotes were filled to capacity, and there was a genuine buzz from participants, who seemed to fall into two camps: the young and cool, and the more mature and soberly dressed.

My time was largely spent in one-on-one meetings, analyst/press conferences, and walking the stands, so I can’t comment on the engineering sessions. Across the piece, however, there was a genuine sense of Kubernetes now being about the how, rather than the whether. For one reason or another, companies have decided they want to gain the benefits of building and deploying distributed, container-based applications.

Strangely enough, this wasn’t being seen as some magical sword that can slay the dragons of legacy systems and open the way to digital transformation; the kool-aid was as absent as the water. Ultimately, enterprises have accepted that, from an architectural standpoint and for applications in general, the Kubernetes model is as good as any available right now: a non-proprietary, well-supported open standard that they can get behind.

Virtualisation-based options and platform stacks are too heavyweight; serverless architectures are more applicable to specific use cases. So, if you want to build an application and you want it to be future-safe, the Kubernetes target is the one to aim for.

Whether to adopt Kubernetes might be a done deal, but how to adopt certainly is not. The challenge is not with Kubernetes itself, but everything that needs to go around it to make resulting applications enterprise-ready.

For example, applications need to operate in compliance environments; data needs to be managed, protected, and served into an environment that doesn’t care too much about state; integration tools are required for external and legacy systems; development pipelines need to be in place, robust, and value-focused; IT operations need a clear view of what’s running, where, as well as a bill of materials and the health of individual clusters; and disaster recovery is a must.
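
To make just one of those concrete, the “clear view of what’s running” piece can start as small as this sketch, using the official Kubernetes Python client (assuming `pip install kubernetes` and a working local kubeconfig):

```python
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig for cluster access
v1 = client.CoreV1Api()

# One line per pod: namespace, name, and current phase.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```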

Kubernetes doesn’t do these things itself, opening the door to an ecosystem of solution vendors and (often CNCF-backed) open source projects. I could drill into these areas (service mesh, GitOps, orchestration, observability, and backup), but the broader point is that they are all evolving and coalescing around the need. As they increase in capability, barriers to adoption fall and the number of potential use cases grows.

All of which puts the industry at an interesting juncture. It’s not that tooling isn’t ready; organizations are already successfully deploying applications based on Kubernetes. In many cases, however, they are doing more work than they need to: developers need insider knowledge of target environments, interfaces need to be integrated rather than consumed through third-party APIs, and higher-order management tooling (such as AIOps) has to be custom-deployed rather than recognising the norms of Kubernetes operations.

Solutions do exist, but they tend to come from relatively new vendors that are feature players rather than platform players, meaning that end-user organisations have to choose their partners wisely, then build and maintain development and management platforms themselves, rather than using pre-integrated tools from a single vendor.

None of this is a problem per se, but it does create overheads for adopters, even if they gain earlier benefits from adopting the Kubernetes model. The value of first-mover advantage has to be weighed against that of investing time and effort in the current state of tooling: as a travel company once told me, “we want to be the world’s best travel site, not the world’s best platform engineers.”

So, Kubernetes may be inevitable, but equally, it will become simpler, enabling organisations to apply the architecture to an increasingly broad set of scenarios. For organisations yet to make the step towards Kubernetes, now may still be a good time to run a proof of concept, though in some ways that ship has sailed; perhaps focus the PoC on what it means for working practices and structures, rather than on determining whether the concepts work at all.

Meanwhile, and perhaps most importantly, now is a very good moment for organisations to look at the scenarios where Kubernetes works best “out of the box”, working with providers and reviewing architectural patterns to deliver proven results against specific, high-value needs. These are likely to vary by industry and by domain (I could dig into this, but did I mention that I’m sitting on a plane?).

[Image: Jon Collins at KubeCon Europe 2022]

Kubernetes might be a done deal, but that doesn’t mean it should be adopted wholesale before some of the peripheral detail is ironed out.
