Google is running an auto-update-to-HTTPS experiment in Chrome

The Google Chrome team will be running an experiment this week in an attempt to find solutions to an HTTPS problem that Mozilla also attempted to solve last year.

The problem that Google is trying to solve is called “mixed content,” which Google describes as follows:

Mixed content occurs when initial HTML [a web page] is loaded over a secure HTTPS connection, but other resources (such as images, videos, stylesheets, scripts) are loaded over an insecure HTTP connection. This is called mixed content because both HTTP and HTTPS content are being loaded to display the same page, and the initial request was secure over HTTPS. Modern browsers display warnings about this type of content to indicate to the user that this page contains insecure resources.

For the past few years, mixed content has been a big problem for browser makers and other organizations that have been pushing HTTPS adoption.

Mixed content browser errors, which can sometimes block users from accessing a website altogether, have scared many site operators away from migrating to HTTPS, fearing they would lose traffic and revenue with no tangible benefit from supporting HTTPS.

Addressing mixed content errors that appear in web browsers is probably the last major hurdle in convincing site operators to move to HTTPS.

This week, Google engineers rolled out an experiment in Chrome where they configured the browser to automatically upgrade any mixed content to full HTTPS.

Chrome does this by silently rewriting the URLs of resources (such as images, videos, stylesheets, and scripts) from their HTTP version to an HTTPS alternative.

If the same resource exists at an HTTPS URL, everything loads as normal. If the resource doesn’t exist at an alternative HTTPS URL, Chrome logs the error and executes one of the several fallback scenarios configured for this experiment (detailed in this document).
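
The exact upgrade and fallback behavior is defined by the experiment itself; as a rough, hypothetical sketch of the idea (the function names, policies, and structure below are invented for illustration and are not Chrome’s actual code), the logic looks roughly like this:

// Hypothetical sketch of mixed-content auto-upgrade logic; not Chrome's actual implementation.

type FallbackPolicy = "block" | "allow-http" | "show-placeholder";

// Rewrite an http:// subresource URL to its https:// equivalent.
function upgradeToHttps(url: string): string {
  const u = new URL(url);
  if (u.protocol === "http:") {
    u.protocol = "https:";
  }
  return u.toString();
}

// Probe whether the resource is actually reachable over HTTPS.
async function httpsAlternativeExists(httpsUrl: string): Promise<boolean> {
  try {
    const res = await fetch(httpsUrl, { method: "HEAD" });
    return res.ok;
  } catch {
    return false; // network error, TLS failure, missing resource, etc.
  }
}

// Decide which URL (if any) to load for a subresource on an HTTPS page.
async function resolveSubresource(
  originalUrl: string,
  policy: FallbackPolicy
): Promise<string | null> {
  const httpsUrl = upgradeToHttps(originalUrl);
  if (await httpsAlternativeExists(httpsUrl)) {
    return httpsUrl; // silent upgrade succeeded; the page loads as normal
  }
  console.warn(`Mixed content upgrade failed for ${originalUrl}`);
  switch (policy) {
    case "allow-http":
      return originalUrl; // legacy behavior: load the resource insecurely
    case "show-placeholder":
      return null; // caller substitutes placeholder content
    case "block":
    default:
      return null; // the resource is simply not loaded
  }
}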

The general idea is that when website owners upgraded their sites to HTTPS, they might have forgotten to update their sites’ source code, leaving some content to load via HTTP even though it could have loaded via HTTPS just fine.

The purpose of this experiment is to give Google engineers insight into how many websites would break if Chrome auto-upgraded all mixed content to HTTPS by default, and into the best fallback strategy for mixed content HTTP URLs that break.

If the percentage of broken links and sites is small, Google engineers will most likely consider shipping this auto-update-to-HTTPS feature in the main Chrome browser and take yet another step towards a more secure web.

For now, Google intends to roll out the experiment to roughly one percent of its Chrome Canary userbase (who’ve enabled the chrome://flags/#enable-origin-trials flag).

Google’s experiment will not be the first of its kind. Mozilla tested a similar mixed content auto-update in Firefox last year.

“They found a lot of breakage, but we’re hoping things have improved since their experiment,” said Emily Stark, a Google security engineer.

Other experiments for dealing with mixed content are also scheduled.

Phish Fight: Securing Enterprise Communications

Yes, much of the world may have moved on from email to social media and culturally dubious TikTok dances, yet traditional electronic mail remains a foundation of business communication. And sadly, it remains a prime vector for malware, data leakage, and phishing attacks that can undermine enterprise protections. It doesn’t have to be that way.

In a just released report titled “GigaOm Radar for Phishing Prevention and Detection,” GigaOm Analyst Simon Gibson surveyed more than a dozen enterprise-focused email security solutions. He found a range of approaches to securing communications that often can be fitted together to provide critical, defense-in-depth protection against even determined attackers.

Figure 1. GigaOm Radar for Email Phishing Prevention and Detection

“When evaluating these vendors and their solutions, it is important to consider your own business and workflow,” Gibson writes in the report, stressing the need to deploy solutions that best address your organization’s business workflow and email traffic. “For some it may be preferable to settle on one comprehensive solution, while for others building a best-of-breed architecture from multiple vendors may be preferable.”

In a field of competent solutions, Gibson found that Forcepoint, purchased recently by Raytheon, stood apart thanks to the layered protections provided by its Advanced Classification Engine. Area 1 and Zimperium, meanwhile, are both leaders that exhibit significant momentum, with Area 1 boosted by its recent solution partnership with Virtru, and Zimperium excelling in its deep commitment to mobile message security.

A mobile focus is timely, Gibson says in a video interview for GigaOm. He says companies are “tuning the spigot on” and enabling unprecedented access and reliance on mobile devices, which is creating an urgent need to get ahead of threats.

Gibson’s conclusion in the report? He singles out three things: Defense in depth, awareness of existing patterns and infrastructure, and a healthy respect for the “human factor” that can make security so hard to lock down.

When Is a DevSecOps Vendor Not a DevSecOps Vendor?

DevOps’ general aim is to enable a more efficient process for producing software and technology solutions and bringing stakeholders together to speed up delivery. But we know from experience that this inherently creative, outcome-driven approach often forgets about one thing until too late in the process—security. Too often, security is brought into the timeline just before deployment, risking last minute headaches and major delays. The security team is pushed into being the Greek chorus of the process, “ruining everyone’s fun” by demanding changes and slowing things down.

But as we know, in the complex, multi-cloud and containerized environment we find ourselves in, security is becoming more important and challenging than ever. And the costs of security failure are not only measured in slower deployment, but in compliance breaches and reputational damage.

The term “DevSecOps” has been coined to characterize how security needs to be at the heart of the DevOps process. This is part principle and part tooling. As a principle, DevSecOps fits with the concept of “shifting left,” that is, ensuring that security is treated as early as possible in the development process. So far, so simple.

From a tooling perspective, however, things get more complicated, not least because the market has seen a number of platforms marketing themselves as DevSecOps. As we have been writing our Key Criteria report on the subject, we have learned that not all DevSecOps vendors are necessarily DevSecOps vendors. Specifically, we have learned to distinguish capabilities that directly enable the goals of DevSecOps from a process perspective, from those designed to support DevSecOps practices. We could define them as: “Those that do, and those that help.”

This is how to tell the two types of vendor apart and how to use them.

Vendors Enabling DevSecOps: “Tools That Do”

A number of tools work to facilitate the DevSecOps process, so let’s bite the bullet and call them DevSecOps tools. They help teams set out each stage of software development, bringing siloed teams together behind a unified vision that allows fast, high-quality development with security considerations at its core. DevSecOps tools work across the development process, for example:

  • Create: Help to set and implement policy
  • Develop: Apply guidance to the process and aid its implementation
  • Test: Facilitate and guide security testing procedures
  • Deploy: Provide reports to assure confidence to deploy the application

The key element that sets these toolsets apart is the ability to automate and reduce friction within the development process. They will prompt action, stop a team from moving from one stage to another if the process has not adequately addressed security concerns, and guide the development roadmap from start to finish.
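
As a purely illustrative sketch of what such gating might look like (the stage names and checks below are hypothetical and not taken from any particular product), a “tool that does” could refuse to advance a pipeline stage until its security checks pass:

// Illustrative sketch of a DevSecOps stage gate; stage names and checks are hypothetical.

type Stage = "create" | "develop" | "test" | "deploy";

interface SecurityCheck {
  name: string;   // e.g. "security policy defined", "SAST scan passed"
  stage: Stage;   // stage at which the check must hold
  passed: boolean;
}

// A team may only advance past a stage once every security check
// registered for that stage has passed.
function canAdvance(currentStage: Stage, checks: SecurityCheck[]): boolean {
  const blocking = checks.filter(c => c.stage === currentStage && !c.passed);
  for (const c of blocking) {
    console.warn(`Blocked at "${currentStage}": ${c.name} has not passed`);
  }
  return blocking.length === 0;
}

// Example: deployment stays gated until security testing succeeds.
const checks: SecurityCheck[] = [
  { name: "security policy defined", stage: "create", passed: true },
  { name: "secure coding guidance applied", stage: "develop", passed: true },
  { name: "security test suite passed", stage: "test", passed: false },
];

console.log(canAdvance("test", checks)); // false: the pipeline does not move on to deploy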

Supporting DevSecOps: “Tools That Help”

In this category we place those tools which aid the execution and monitoring of good DevSecOps principles. Security scanning and application/infrastructure hardening tools are a key element of these processes: software composition analysis (SCA) forms a part of the develop stage, static/dynamic application security testing (SAST/DAST) is integral to the test stage, and runtime application self-protection (RASP) is key to the deploy stage.

Tools like this are a vital layer of security tooling, especially just before deployment, and they often come with APIs so they can be plugged into the CI/CD process. However, while these capabilities are very important to DevSecOps, they play more of a supporting role, rather than being DevSecOps tools per se.
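
As a hypothetical illustration only (the endpoint, fields, and thresholds below are invented, not any vendor’s real API), a CI step might call a scanner’s API and block the build on serious findings:

// Hypothetical CI step calling a scanning tool's REST API; the endpoint and
// response fields are invented for illustration, not any vendor's real API.

interface ScanResult {
  critical: number;
  high: number;
}

async function runSecurityScan(apiBase: string, commit: string): Promise<ScanResult> {
  const res = await fetch(`${apiBase}/scans`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ commit }),
  });
  if (!res.ok) {
    throw new Error(`Scan request failed: ${res.status}`);
  }
  return (await res.json()) as ScanResult;
}

// Fail the pipeline (non-zero exit) when the scan reports serious findings.
async function ciGate(): Promise<void> {
  const result = await runSecurityScan(
    "https://scanner.example.com/api",
    process.env.CI_COMMIT_SHA ?? "HEAD"
  );
  if (result.critical > 0 || result.high > 0) {
    console.error(`Blocking deploy: ${result.critical} critical, ${result.high} high findings`);
    process.exit(1);
  }
  console.log("Scan clean: proceeding to deploy");
}

ciGate().catch(err => {
  console.error(err);
  process.exit(1);
});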

DevSecOps-Washing Is Not a Good Idea for the Enterprise

While one might argue that security should never have been shifted right, DevSecOps exists to ensure that security best practices take place across the development lifecycle. A corollary exists to the idea of “tools that help,” namely that organizations implementing these tools are not “doing DevSecOps,” any more than vendors providing these tools are DevSecOps vendors.

The only way to “do” DevSecOps is to fully embrace security at a process management and governance level: This means assessing risk, defining policy, setting review gates, and disallowing progress for insecure deliverables. Organizations that embrace DevSecOps can get help from what we are calling DevSecOps tools, as well as from scanning and hardening tools that help support its goals.

At the end of the day, all security and governance boils down to risk: If you buy a scanning tool so you can check a box that says “DevSecOps,” you are potentially adding to your risk posture, rather than mitigating it. So, get your DevSecOps strategy fixed first, then consider how you can add automation, visibility, and control using “tools that do,” as well as benefit from “tools that help.”

High Performance Application Security Testing

This free one-hour webinar from GigaOm Research is hosted by Jake Dolezal, a GigaOm analyst and expert in application and API testing. His presentation focuses on the results of high-performance testing we completed against two security mechanisms: ModSecurity on NGINX and NGINX App Protect. Additionally, we tested the AWS Web Application Firewall (WAF) as a fully managed security offering.

While performance is important, it is only one criterion for selecting a Web Application Firewall. The report’s results are revealing about these platforms, and the methodology is presented with clarity and transparency so you can replicate the tests to mimic your own workloads and requirements.

Register now to join GigaOm and webinar sponsor NGINX for this free expert webinar.
