
Around 62% of all Internet sites will run an unsupported PHP version in 10 weeks


According to statistics from W3Techs, roughly 78.9 percent of all Internet sites today run on PHP.

But on December 31, 2018, security support for PHP 5.6.x will officially cease, marking the end of all support for any version of the ancient PHP 5.x branch.

This means that starting next year, around 62 percent of all Internet sites still running a PHP 5.x version will stop receiving security updates for their server and website’s underlying technology, exposing hundreds of millions of websites, if not more, to serious security risks.

If a hacker finds a vulnerability in PHP after the New Year, lots of sites and users will be at risk.
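
To see where a given server stands, a quick check against the published support calendar is enough. The sketch below is a minimal, hypothetical example (not from the article): it asks the local `php` binary for its version and compares that branch against the security-support end dates published on php.net, assuming the PHP CLI is on the PATH.

```python
# Minimal sketch: compare the local PHP binary's branch against the
# security-support end dates published on php.net (as listed at the time).
import subprocess
from datetime import date

# Security-support end dates per branch (source: php.net/supported-versions)
EOL_DATES = {
    "5.6": date(2018, 12, 31),
    "7.0": date(2018, 12, 3),
    "7.1": date(2019, 12, 1),
    "7.2": date(2020, 11, 30),
}

def php_branch() -> str:
    """Return the MAJOR.MINOR branch of the PHP CLI on this machine."""
    out = subprocess.run(
        ["php", "-r", "echo PHP_VERSION;"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()                  # e.g. "5.6.38"
    major, minor, *_ = out.split(".")
    return f"{major}.{minor}"

branch = php_branch()
eol = EOL_DATES.get(branch)
if eol is None:
    print(f"PHP {branch}: not in the local EOL table, check php.net")
elif date.today() > eol:
    print(f"PHP {branch} stopped receiving security fixes on {eol}")
else:
    print(f"PHP {branch} is supported until {eol}")
```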

“This is a huge problem for the PHP ecosystem,” Scott Arciszewski, Chief Development Officer at Paragon Initiative Enterprises, told ZDNet in an interview. “While many feel that they can ‘get away with’ running PHP 5 in 2019, the simplest way to describe this choice is: Negligent.”

“To be totally fair: It’s likely that any major, mass-exploitable flaw in PHP 5.6 would also affect the newer versions of PHP,” Arciszewski added.

“PHP 7.2 will get a patch from the PHP team, for free, in a timely manner; PHP 5.6 will only get one if you’re paying for ongoing support from your OS vendor.

“If anyone finds themselves running PHP 5 after the end of the year, ask yourself: Do you feel lucky? Because I sure wouldn’t.”

[Image: php-eols.png]

The PHP community has known of this deadline for quite a while. After PHP 5.6 became the most widely used PHP version back in the spring of 2017, PHP maintainers realized it would be a disaster to cut off security updates just as PHP 5.6 hit peak popularity, so they extended the EOL date to the end of 2018.

Since then, several developers and security researchers have warned about the “ticking PHP time bomb,” although not as many as the infosec community would have wished.

There has been no concerted effort to move users to the newer PHP 7.x branch, but some website content management system (CMS) projects have, one by one, started raising their minimum requirements and warning users to adopt more modern hosting environments.

Of the big three (WordPress, Joomla, and Drupal), only Drupal has taken the official step of raising its minimum requirement to PHP 7, but that move will only come in March 2019. Ironically, the chosen 7.0.x branch itself reaches EOL on December 3, 2018, which doesn’t actually solve anything, but it’s still a step forward.

Joomla’s minimum requirement remains PHP 5.3, while WordPress’ minimum requirement remains PHP 5.2.

“The biggest source of inertia in the PHP ecosystem regarding versions is undoubtedly WordPress, which still refuses to drop support for PHP 5.2 because there are more than zero systems in the universe that still run WordPress on an ancient, unsupported version of PHP,” Arciszewski said, describing the WordPress team’s infamous stubbornness in keeping its minimum requirement at a PHP version that went EOL in 2011.

WordPress, which is used for more than a quarter of all sites on the Internet, would without a doubt shift a lot of people’s views on the necessity of using modern PHP versions if the project moved its minimum PHP requirement to the newer PHP 7.x branch.

“What PHP versions should be supported [by WordPress], however, has been a major debate for some time,” said Sean Murphy, Director of Threat Intelligence at Defiant, the company behind the Wordfence security plugin for WordPress, in an email exchange with ZDNet.

“There is an ongoing initiative by the WordPress team to notify users when they are using a legacy version of PHP and give them the information and tools they need to request a newer version from their hosting provider,” he added. “Here are notes from this team’s recent meeting.”

Murphy believes that one of the biggest challenges of rolling out PHP version upgrades across a large number of sites is the flood of support requests that follows, one reason many CMS projects and web hosting providers are reluctant to do so.

But Murphy also points out that “good hosting providers” will always deploy new users on new versions of PHP by default, instead of letting customers choose, and will update existing clients to new versions of PHP only when requested.

But unless customers are aware that their version of PHP has reached end-of-life, very few will ask to be moved to a newer version.

Here’s where WordPress’ notifications for users running sites on outdated PHP versions will help, prompting people either to update their servers or to ask their hosting provider for a more modern hosting environment.

While some WordPress security experts are alarmed about the impending EOL of the PHP 5.6 branch, and with it the entire PHP 5.x line, Murphy is not one of them.

“A PHP vulnerability […] would indeed be very bad, but there hasn’t been any that I know of in recent history,” he said.

“Based on past PHP vulnerabilities, the threat is mostly with PHP applications,” Murphy added, suggesting that attackers would likely continue to focus on PHP libraries and CMS systems.

But not all share Murphy’s opinion. Arciszewski, for example, believes that PHP 5.6 and the older branches will be probed for new vulnerabilities more than usual. These branches are now EOL, insanely popular, and unsupported: the perfect combination of plentiful, poorly secured targets that draws in attackers.

“Yes, that is absolutely a risk factor,” Arciszewski said. “We saw something similar happen after Windows XP support was dropped, and I suspect we’ll see the same happen to the PHP 5 branch.

“Maybe that will be the necessary catalyst for companies to take PHP 7 adoption seriously? I can only hope.”

And if server administrators and website owners need more convincing, we’ll end this article with the same ending that Martin Wheatley used for his “ticking PHP time bomb” piece from over the summer.

Yes it does cost time and money, but what’s worse, a small monthly support fee, or a headline “Site hacked, thousands of user details stolen” followed by a fine for up to 20 million euros or 4% of your turnover under GDPR… I know what I’d rather pay.


Retrospective thoughts on KubeCon Europe 2022


I’m not going to lie. As I sit on a plane flying away from Valencia, I confess to having been taken aback by the scale of KubeCon Europe this year. In my defence, I wasn’t alone: the volume of attendees appeared to take conference organisers and exhibitors by surprise, illustrated by the notable lack of water, (I was told) t-shirts, and (at various points) taxis.

Keynotes were filled to capacity, and there was a genuine buzz from participants which seemed to fall into two camps: the young and cool, and the more mature and soberly dressed.

My time was largely spent in one-on-one meetings, analyst/press conferences and walking the stands, so I can’t comment on the engineering sessions. Across the piece, however, there was a genuine sense of Kubernetes now being about the how, rather than the whether. For one reason or another, companies have decided they want to gain the benefits of building and deploying distributed, container-based applications.

Strangely enough, this wasn’t being seen as some magical sword that can slay the dragons of legacy systems and open the way to digital transformation; the kool-aid was as absent as the water. Ultimately, enterprises have accepted that, from an architectural standpoint and for applications in general, the Kubernetes model is as good as any available right now, as a non-proprietary, well-supported open standard that they can get behind.

Virtualisation-based options and platform stacks are too heavyweight; serverless architectures are more applicable to specific use cases. So, if you want to build an application and you want it to be future-safe, the Kubernetes target is the one to aim for.

Whether to adopt Kubernetes might be a done deal, but how to adopt certainly is not. The challenge is not with Kubernetes itself, but everything that needs to go around it to make resulting applications enterprise-ready.

For example, they need to operate in compliance environments; data needs to be managed, protected, and served into an environment that doesn’t care too much about state; integration tools are required for external and legacy systems; development pipelines need to be in place, robust and value-focused; IT operations need a clear view of what’s running and where, as a bill of materials, plus the health of individual clusters; and disaster recovery is a must.

Kubernetes doesn’t do these things itself, opening the door to an ecosystem of solution vendors and (often CNCF-backed) open source projects. I could drill into these areas (service mesh, GitOps, orchestration, observability, and backup), but the broader point is that they are all evolving and coalescing around the need. As they increase in capability, barriers to adoption fall and the number of potential use cases grows.
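
As a concrete (and entirely illustrative) example of the “clear view of what’s running” gap, the sketch below uses the official Kubernetes Python client to print a crude bill of materials for a cluster. It is a minimal sketch under stated assumptions (`pip install kubernetes` and a working kubeconfig), not production tooling; the ecosystem products discussed here exist precisely because teams need far more than this.

```python
# Minimal sketch: a crude "what's running, and where" view of a cluster.
# Assumes `pip install kubernetes` and a valid kubeconfig (e.g. ~/.kube/config).
from kubernetes import client, config

config.load_kube_config()   # read the local kubeconfig
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    # Collect the container images each pod runs, as a tiny bill of materials
    images = sorted({c.image for c in pod.spec.containers})
    print(f"{pod.metadata.namespace}/{pod.metadata.name} "
          f"[{pod.status.phase}] -> {', '.join(images)}")
```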

All of which puts the industry at an interesting juncture. It’s not that tooling isn’t ready: organisations are already successfully deploying applications based on Kubernetes. In many cases, however, they are doing more work than they need to: developers need insider knowledge of target environments, interfaces need to be integrated rather than consumed through third-party APIs, and higher-order management tooling (such as AIOps) has to be custom-deployed rather than recognising the norms of Kubernetes operations.

Solutions do exist, but they tend to come from relatively new vendors that are feature players rather than platform players, meaning that end-user organisations have to choose their partners wisely, then build and maintain development and management platforms themselves rather than using pre-integrated tools from a single vendor.

None of this is a problem per se, but it does create overheads for adopters, even if they gain earlier benefits from adopting the Kubernetes model. The value of first-mover advantage has to be weighed against that of investing time and effort in the current state of tooling: as a travel company once told me, “we want to be the world’s best travel site, not the world’s best platform engineers.”

So, Kubernetes may be inevitable, but equally, it will become simpler, enabling organisations to apply the architecture to an increasingly broad set of scenarios. For organisations yet to make the step towards Kubernetes, now may still be a good time to run a proof of concept, though in some ways that ship has sailed; perhaps focus the PoC on what Kubernetes means for working practices and structures, rather than on determining whether the concepts work at all.

Meanwhile, and perhaps most importantly, now is a very good moment for organisations to look at the scenarios where Kubernetes works best “out of the box”, working with providers and reviewing architectural patterns to deliver proven results against specific, high-value needs; these are likely to vary by industry and domain (I could dig into this, but did I mention that I’m sitting on a plane?).

Jon Collins, from KubeCon 2022

Kubernetes might be a done deal, but that doesn’t mean it should be adopted wholesale before some of the peripheral detail is ironed out.

The post Retrospective thoughts on KubeCon Europe 2022 appeared first on GigaOm.


Defeating Distributed Denial of Service Attacks


It seems like every day the news brings new stories of cyberattacks, whether ransomware, malware, crippling viruses, or, more frequently of late, distributed denial of service (DDoS) attacks. According to Infosec magazine, the first half of 2020 saw a 151% increase in the number of DDoS attacks compared to the same period the previous year. The same report states that experts predict as many as 15.4 million DDoS attacks within the next two years.

These attacks can be difficult to detect until it’s too late, and then they can be challenging to defend against. There are solutions available, but there is no one magic bullet. As Alastair Cooke points out in his recent “GigaOm Radar for DDoS Protection” report, there are different categories of DDoS attacks.

And different types of attacks require different types of defenses. You’ll want to adopt each of these three defense strategies against DDoS attacks to a certain degree, as attackers are never going to limit themselves to a single attack vector:

Network Defense: Attacks targeting the OS and network operate at either Layer 3 or Layer 4 of the OSI stack. These attacks don’t flood the servers with application requests but attempt to exhaust TCP/IP resources on the supporting infrastructure. DDoS protection solutions defending against network attacks identify the attack behavior and absorb it into the platform.

Application Defense: Other DDoS attacks target the actual website itself or the web server application by overwhelming the site with random data and wasting resources. DDoS protection against these attacks might handle SSL decryption with hardware-based cryptography and prevent invalid data from reaching web servers (a toy rate-limiting sketch follows this list).

Defense by Scale: There have been massive DDoS attacks, and they show no signs of stopping. The key to successfully defending against a DDoS attack is to have a scalable platform capable of deflecting an attack led by a million bots with hundreds of gigabits per second of network throughput.
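
To ground the application-defense idea, here is a toy, hypothetical sketch of one narrow control: per-client token-bucket rate limiting. It is a minimal illustration only; real DDoS mitigation happens at platform scale, as the “Defense by Scale” point above makes clear, and the rate and burst values below are assumptions, not recommendations.

```python
# Toy sketch only: a per-client token-bucket limiter, one small ingredient
# of application-layer defense. Real deployments push this into dedicated
# scrubbing platforms/CDNs; the thresholds here are illustrative assumptions.
import time

RATE = 10.0    # tokens refilled per second, per client (assumed)
BURST = 20.0   # bucket capacity, i.e. max burst size (assumed)

_buckets = {}  # client_ip -> (tokens_remaining, last_timestamp)

def allow(client_ip: str) -> bool:
    """Return True if this request fits within the client's budget."""
    now = time.monotonic()
    tokens, last = _buckets.get(client_ip, (BURST, now))
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last seen
    if tokens >= 1.0:
        _buckets[client_ip] = (tokens - 1.0, now)      # spend one token
        return True
    _buckets[client_ip] = (tokens, now)                # over budget: reject
    return False
```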


DDoS attacks are growing more frequent, more powerful, and more sophisticated. Amazon reports having mitigated a massive DDoS attack a couple of years ago in which peak traffic volume reached 2.3 Tbps. Deploying DDoS protection across the spectrum of attack vectors is no longer a “nice to have,” but a necessity.

In his report, Cooke concludes that “Any DDoS protection product is only part of an overall strategy, not a silver bullet for denial-of-service hazards.” Evaluate your organization and your needs, read more about each solution evaluated in the Radar report, and carefully match the right DDoS solutions to best suit your needs.

Learn more about the reports: GigaOm Key Criteria for DDoS and GigaOm Radar for DDoS.

The post Defeating Distributed Denial of Service Attacks appeared first on GigaOm.
