Hackers are exploiting a critical flaw affecting >350,000 WordPress sites

Hackers are actively exploiting a vulnerability that allows them to execute commands and malicious scripts on websites running File Manager, a WordPress plugin with more than 700,000 active installations, researchers said on Tuesday. Word of the attacks came a few hours after the security flaw was patched.

Attackers are using the exploit to upload files containing webshells hidden inside an image. From there, they have a convenient interface that allows them to run commands in plugins/wp-file-manager/lib/files/, the directory where the File Manager plugin stores files. While that restriction prevents hackers from executing commands on files outside the directory, they may be able to do more damage by uploading scripts that carry out actions on other parts of a vulnerable site.
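
For admins triaging a site, one quick first check is whether any PHP files are present in that upload directory at all, since under normal operation it should contain no executable code. Below is a minimal, read-only indicator sketch in Python; the docroot path is an assumption to adjust for your layout, and a hit is a reason to investigate, not proof of compromise.

```python
#!/usr/bin/env python3
"""Indicator sketch: flag PHP files in File Manager's upload directory.

Per the report above, webshells land in wp-file-manager/lib/files/,
which should not normally contain executable PHP. Read-only check;
it does not clean or quarantine anything.
"""
from pathlib import Path

WP_ROOT = Path("/var/www/html")  # assumption: default WordPress docroot
UPLOAD_DIR = WP_ROOT / "wp-content/plugins/wp-file-manager/lib/files"

if UPLOAD_DIR.is_dir():
    suspects = sorted(UPLOAD_DIR.rglob("*.php"))
    for f in suspects:
        print(f"SUSPICIOUS: {f}")
    if not suspects:
        print("No PHP files found in the upload directory.")
else:
    print("File Manager upload directory not present on this install.")
```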

NinTechNet, a website security firm in Bangkok, Thailand, was among the first to report the in-the-wild attacks. The post said that a hacker was exploiting the vulnerability to upload a script titled hardfork.php and then using it to inject code into the WordPress scripts /wp-admin/admin-ajax.php and /wp-includes/user.php.
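
Because hardfork.php reportedly appends code to those two core files, a rough follow-up check is to scan them for the obfuscation primitives such injections commonly lean on. The sketch below is a heuristic, not a signature for this campaign; the marker list and docroot are assumptions, and any match should be verified by diffing against a clean copy of the same WordPress release.

```python
#!/usr/bin/env python3
"""Heuristic sketch: look for injected code in the two core files
named above. Matches are leads to verify, not verdicts."""
import re
from pathlib import Path

WP_ROOT = Path("/var/www/html")  # assumption: default WordPress docroot
TARGETS = ["wp-admin/admin-ajax.php", "wp-includes/user.php"]
# Common obfuscation markers; assumed, not specific to this campaign.
MARKERS = re.compile(r"base64_decode|eval\s*\(|gzinflate|str_rot13")

for rel in TARGETS:
    path = WP_ROOT / rel
    if not path.is_file():
        print(f"missing: {rel}")
        continue
    for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
        if MARKERS.search(line):
            print(f"{rel}:{lineno}: {line.strip()[:100]}")
```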

Backdooring vulnerable sites at scale

In email, NinTechNet CEO Jerome Bruandet wrote:

It’s a bit too early to know the impact because when we caught the attack, hackers were just trying to backdoor websites. However, one interesting thing we noticed is that attackers were injecting some code to password-protect the access to the vulnerable file (connector.minimal.php) so that other groups of hackers could not exploit the vulnerability on the sites that were already infected.

All commands can be run in the /lib/files folder (create folders, delete files etc), but the most important issue is that they can upload PHP scripts into that folder too, and then run them and do whatever they want to the blog.

So far, they are uploading “FilesMan”, another file manager often used by hackers. This one is heavily obfuscated. In the next few hours and days we’ll see exactly what they will do, because if they password-protected the vulnerable file to prevent other hackers to exploit the vulnerability it is likely they are expecting to come back to visit the infected sites.

Fellow website security firm Wordfence, meanwhile, said in its own post that it had blocked more than 450,000 exploit attempts in the past few days. The post said that the attackers are trying to inject various files. In some cases, those files were empty, most likely in an attempt to probe for vulnerable sites and, if successful, inject a malicious file later. Files being uploaded had names including hardfork.php, hardfind.php, and x.php.

“A file manager plugin like this would make it possible for an attacker to manipulate or upload any files of their choosing directly from the WordPress dashboard, potentially allowing them to escalate privileges once in the site’s admin area,” Chloe Chamberland, a researcher with security firm Wordfence, wrote in Tuesday’s post. “For example, an attacker could gain access to the admin area of the site using a compromised password, then access this plugin and upload a webshell to do further enumeration of the server and potentially escalate their attack using another exploit.”

52% of 700,000 = potential for damage

The File Manager plugin helps administrators manage files on sites running the WordPress content management system. The plugin contains an additional file manager known as elFinder, an open source library that provides the core functionality in the plugin, along with a user interface for using it. The vulnerability arises from the way the plugin implemented elFinder.

“The core of the issue began with the File Manager plugin renaming the extension on the elFinder library’s connector.minimal.php.dist file to .php so it could be executed directly, even though the connector file was not used by the File Manager itself,” Chamberland explained. “Such libraries often include example files that are not intended to be used ‘as is’ without adding access controls, and this file had no direct access restrictions, meaning the file could be accessed by anyone. This file could be used to initiate an elFinder command and was hooked to the elFinderConnector.class.php file.”
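
In practice, that meant the connector answered requests from anyone who knew its URL. Site owners can check their exposure from the outside with a simple HTTP probe; the sketch below assumes the commonly reported location of the renamed connector inside the plugin and treats anything other than a 404 as a reason to investigate further.

```python
#!/usr/bin/env python3
"""Sketch: check whether a site serves the renamed connector file.

Run against a site you own or are authorized to test, e.g.:
    python3 check_connector.py https://example.com
"""
import sys

import requests  # third-party: pip install requests

site = sys.argv[1].rstrip("/")
# Assumed path, as commonly reported for File Manager <= 6.8.
url = f"{site}/wp-content/plugins/wp-file-manager/lib/php/connector.minimal.php"

resp = requests.get(url, timeout=10, allow_redirects=False)
print(f"{url} -> HTTP {resp.status_code}")
if resp.status_code != 404:
    print("Connector appears reachable; check the plugin version and update.")
```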

Sal Aguilar, a contractor who sets up and secures WordPress sites, took to Twitter to warn of attacks he’s seeing.

“Oh crap!!!” he wrote. “The WP File Manager vulnerability is SERIOUS. Its spreading fast and I’m seeing hundreds of sites getting infected. Malware is being uploaded to /wp-content/plugins/wp-file-manager/lib/files.”

The security flaw is present in File Manager versions 6.0 through 6.8. Statistics from WordPress show that about 52 percent of installations currently run a vulnerable version. With more than half of File Manager's installed base of 700,000 sites exposed, the potential for damage is high. Sites running any of these versions should be updated to version 6.9 as soon as possible.
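
Server-side, the installed version can be confirmed by reading the plugin's standard WordPress header, as in the sketch below. The main-file name file_folder_manager.php is an assumption to verify against your install, and the docroot is a placeholder.

```python
#!/usr/bin/env python3
"""Sketch: read File Manager's version from its plugin header and
flag the vulnerable 6.0-6.8 range (6.9 is the fixed release)."""
import re
from pathlib import Path

WP_ROOT = Path("/var/www/html")  # assumption: default WordPress docroot
# Assumed main plugin file name; verify in wp-content/plugins/wp-file-manager/.
main_file = WP_ROOT / "wp-content/plugins/wp-file-manager/file_folder_manager.php"

text = main_file.read_text(errors="replace")
match = re.search(r"^\s*\*?\s*Version:\s*([\d.]+)", text, re.MULTILINE)
if match:
    version = tuple(int(x) for x in match.group(1).split(".")[:2])
    if (6, 0) <= version <= (6, 8):
        print(f"File Manager {match.group(1)}: VULNERABLE - update to 6.9")
    else:
        print(f"File Manager {match.group(1)}: outside the affected range")
else:
    print("No Version header found; inspect the plugin manually.")
```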

Fearing “loss of control,” AI critics call for 6-month pause in AI development

[Image: An AI-generated image of a globe that has stopped spinning. Credit: Stable Diffusion]

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, GPT-4 and Bing Chat's advances in capability over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.

Along these lines, the Future of Life Institute argues that recent advancements in AI have led to an “out-of-control race” to develop and deploy AI models that are difficult to predict or control. They believe that the lack of planning and management of these AI systems is concerning and that powerful AI systems should only be developed once their effects are well-understood and manageable. As they write in the letter:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

In particular, the letter poses four loaded questions, some of which presume hypothetical scenarios that are highly controversial in some quarters of the AI community, including the loss of “all the jobs” to AI and “loss of control” of civilization:

  • “Should we let machines flood our information channels with propaganda and untruth?”
  • “Should we automate away all the jobs, including the fulfilling ones?”
  • “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?”
  • “Should we risk loss of control of our civilization?”

To address these potential threats, the letter calls on AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” During the pause, the authors propose that AI labs and independent experts collaborate to establish shared safety protocols for AI design and development. These protocols would be overseen by independent outside experts and should ensure that AI systems are “safe beyond a reasonable doubt.”

However, it’s unclear what “more powerful than GPT-4” actually means in a practical or regulatory sense. The letter does not specify a way to ensure compliance by measuring the relative power of a multimodal or large language model. In addition, OpenAI has specifically avoided publishing technical details about how GPT-4 works.

The Future of Life Institute is a nonprofit founded in 2014 by a group of scientists concerned about existential risks facing humanity, including biotechnology, nuclear weapons, and climate change. In addition, the hypothetical existential risk from AI has been a key focus for the group. According to Reuters, the organization is primarily funded by the Musk Foundation, London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

Notable signatories to the letter confirmed by a Reuters reporter include the aforementioned Tesla CEO Elon Musk, AI pioneers Yoshua Bengio and Stuart Russell, Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, and author Yuval Noah Harari. The open letter is available for anyone on the Internet to sign without verification, which initially led to the inclusion of some falsely added names, such as former Microsoft CEO Bill Gates, OpenAI CEO Sam Altman, and fictional character John Wick. Those names were later removed.

Ransomware crooks are exploiting IBM file exchange bug with a 9.8 severity

Threat actors are exploiting a critical vulnerability in an IBM file-exchange application in hacks that install ransomware on servers, security researchers have warned.

IBM Aspera Faspex is a centralized file-exchange application that large organizations use to transfer large files or large volumes of files at very high speeds. Rather than relying on TCP-based technologies such as FTP to move files, Aspera uses IBM’s proprietary FASP—short for Fast, Adaptive, and Secure Protocol—to better utilize available network bandwidth. The product also provides fine-grained management that makes it easy for users to send files to recipients in distribution lists, shared inboxes, or workgroups, giving transfers a workflow similar to email.

In late January, IBM warned of a critical vulnerability in Aspera versions 4.4.2 Patch Level 1 and earlier and urged users to install an update to patch the flaw. Tracked as CVE-2022-47986, the vulnerability makes it possible for unauthenticated threat actors to remotely execute malicious code by sending specially crafted calls to an outdated programming interface. The ease of exploiting the vulnerability and the damage that could result earned CVE-2022-47986 a severity rating of 9.8 out of a possible 10.
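
For readers tracking the flaw programmatically, the advisory record, including that 9.8 base score, can be pulled from NIST's public National Vulnerability Database. A small sketch using the documented NVD v2.0 JSON API (no API key is required for light use):

```python
#!/usr/bin/env python3
"""Sketch: fetch the NVD record for CVE-2022-47986 and print its
description and CVSS v3.1 base score."""
import requests  # third-party: pip install requests

URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
resp = requests.get(URL, params={"cveId": "CVE-2022-47986"}, timeout=15)
resp.raise_for_status()

cve = resp.json()["vulnerabilities"][0]["cve"]
print(cve["descriptions"][0]["value"])
for metric in cve.get("metrics", {}).get("cvssMetricV31", []):
    data = metric["cvssData"]
    print(f"CVSS {data['version']}: {data['baseScore']} ({data['baseSeverity']})")
```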

On Tuesday, researchers from security firm Rapid7 said they recently responded to an incident in which a customer was breached using the vulnerability.

“Rapid7 is aware of at least one recent incident where a customer was compromised via CVE-2022-47986,” company researchers wrote. “In light of active exploitation and the fact that Aspera Faspex is typically installed on the network perimeter, we strongly recommend patching on an emergency basis, without waiting for a typical patch cycle to occur.”

According to other researchers, the vulnerability is being exploited to install ransomware. SentinelOne researchers, for instance, said recently that a ransomware group known as IceFire was exploiting CVE-2022-47986 to install a newly minted Linux version of its file-encrypting malware. Previously, the group pushed only a Windows version, which got installed using phishing emails. Because phishing attacks are harder to pull off on Linux servers, IceFire pivoted to the IBM vulnerability to spread its Linux version. Researchers have also reported that the vulnerability is being exploited to install ransomware known as Buhti.

As noted earlier, IBM patched the vulnerability in January. IBM republished its advisory earlier this month to ensure no one missed it. People who want to better understand the vulnerability and how to mitigate potential attacks against Aspera Faspex servers should check posts here and here from security firms Assetnote and Rapid7.

Generative AI set to affect 300 million jobs across major economies

The latest breakthroughs in artificial intelligence could lead to the automation of a quarter of the work done in the US and eurozone, according to research by Goldman Sachs.

The investment bank said on Monday that “generative” AI systems such as ChatGPT, which can create content that is indistinguishable from human output, could spark a productivity boom that would eventually raise annual global gross domestic product by 7 percent over a 10-year period.

But if the technology lived up to its promise, it would also bring “significant disruption” to the labor market, exposing the equivalent of 300 million full-time workers across big economies to automation, according to Joseph Briggs and Devesh Kodnani, the paper’s authors. Lawyers and administrative staff would be among those at greatest risk of becoming redundant.

They calculate that roughly two-thirds of jobs in the US and Europe are exposed to some degree of AI automation, based on data on the tasks typically performed in thousands of occupations.

Most people would see less than half of their workload automated and would probably continue in their jobs, with some of their time freed up for more productive activities.

In the US, this should apply to 63 percent of the workforce, they calculated. A further 30 percent working in physical or outdoor jobs would be unaffected, although their work might be susceptible to other forms of automation.

But about 7 percent of US workers are in jobs where at least half of their tasks could be done by generative AI and are vulnerable to replacement.

Goldman said its research pointed to a similar impact in Europe. At a global level, since manual jobs are a bigger share of employment in the developing world, it estimates about a fifth of work could be done by AI—or about 300 million full-time jobs across big economies.

The report will stoke debate over the potential of AI technologies both to revive the rich world’s flagging productivity growth and to create a new class of dispossessed white-collar workers, who risk suffering a similar fate to that of manufacturing workers in the 1980s.

Goldman’s estimates of the impact are more conservative than those of some academic studies, which included the effects of a wider range of related technologies.

A paper published last week by OpenAI, the creator of GPT-4, found that 80 percent of the US workforce could see at least 10 percent of their tasks performed by generative AI, based on analysis by human researchers and the company’s own large language model (LLM).

Europol, the law enforcement agency, also warned this week that rapid advances in generative AI could aid online fraudsters and cyber criminals, so that “dark LLMs… may become a key criminal business model of the future.”

Goldman said that if corporate investment in AI continued to grow at a similar pace to software investment in the 1990s, US investment alone could approach 1 percent of US GDP by 2030.

The Goldman estimates are based on an analysis of US and European data on the tasks typically performed in thousands of different occupations. The researchers assumed that AI would be capable of tasks such as completing tax returns for a small business; evaluating a complex insurance claim; or documenting the results of a crime scene investigation.

They did not envisage AI being adopted for more sensitive tasks such as making a court ruling, checking the status of a patient in critical care, or studying international tax laws.

© 2023 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
