
Biz & IT

New “red team as a service” platform aims to automate hacking tests for company networks



Randori’s Attack platform aims to automate the “red team” adversarial security role so that more companies can afford to constantly check their security.

CSA Images via Getty Images

Attack simulation and “red teaming as a service” have become a hot area of development over the past few years as companies continue to seek ways to better train their network defenders and find problems before attackers do. Randori, a company pulling together red-teaming skills and security software experience, today is launching a new platform that attempts to capture the expertise of a high-budget security testing team as a cloud-based service—giving chief information security officers a way to continuously take the pulse of their companies’ defenses.

Red teaming, the practice of actively researching and exploiting vulnerabilities in systems to help find and fix gaps in their security, has long been the realm of high-paid security consulting firms with hands-on-keyboard (and occasionally, with full penetration testing, hands-on-lockpick) engagements, and not something most companies can afford to do regularly. Large organizations and software firms with a business imperative to keep their systems secure have typically maintained internal red teams, but smaller organizations that need red teams for things like getting credit card compliance certification or checking the security of other financial systems often rely on hit-and-run engagements with outside specialists.

There have been other efforts to streamline and automate components of red teaming to make it a more regular part of companies’ security programs. For example, Scythe, a firm that spun out of the security research company Grimm, has focused on providing attack simulation as a service—allowing a company to test the mettle of its “blue team” defenders and users by running modular “attacks” that mimic the techniques of known threat groups, while creating a marketplace for security testing modules. And other companies, such as Pwnie Express, have used passive and “offensive” security tools to scan and audit networks for potential attack vectors.

Randori takes the red-teaming mission several steps further. Instead of running simulations of attacks based on known threats, Randori Attack runs real, novel attacks based on emerging vulnerabilities—much like a human red team would. Founded by CEO Brian Hazzard (formerly of Carbon Black) and CTO David “Moose” Wolpoff (a reverse-engineering and red-teaming veteran of the specialist security firm Kyrus Tech), Randori’s “flagship” service is the Attack Platform—a cloud-based system that, when combined with Randori’s Internet-based reconnaissance system, will constantly discover and attempt to exploit a customer company’s system, playing the role of what Hazzard describes as “trusted adversary.”

"Runbooks" are automated packages containing tested attacks against specific vulnerabilities. They can go as far as required to demonstrate a vulnerability in systems, based on the scope set by the customer.

The inspiration for Randori began while Hazzard was vice president of product management at Bit9, the company that would acquire the original Carbon Black in 2013 and later take its name. Bit9 was hit by a nation-state backed cyberattack in 2012, in which the attacker leveraged the company’s software reputation service and certificates to distribute malware to targeted customers. “After we got hacked, we made a huge investment in cybersecurity,” Hazzard told Ars, “but that clearly wasn’t enough.”

Hazzard’s team brought in Wolpoff’s company to “come at us at a nation-state level” to help harden its defenses. “Moose came after us hard, and two things started happening—we got a much better handle on what our attack surface was, and we got a way better understanding and more effective at protecting our crown jewels—that was important to the business.”

In 2018, Hazzard left Carbon Black, which was acquired by VMware (a deal that closed in October 2019). “I knew I was going to start another company and knew [the red-teaming business] needed to be modernized,” he said. Hazzard reached out again to Wolpoff with the idea of bringing software-as-a-service scalability to the security-testing world. “We’re trying to get the red-team experience in the hands of every CISO,” he said. “How do you build defenses if you don’t know how the attacker is going to come after you? The whole objective of Randori Attack is that it’s a SaaS platform that mirrors the adversary and how they would come after you.”

Wolpoff explained that the SaaS model allowed for a greater level of investment in research and the development of attacks than the traditional economics of the red-teaming business—”the same level of investment as a state actor.” Instead of building custom tools for each engagement, Randori’s researchers and developers can build a “run book” for each new type of vulnerability that emerges and then convert it into an automated set of software that can be deployed via Kubernetes instances or other cloud-based computing resources to mimic how a real attack would look to their customers.

Randori’s reconnaissance system and the Attack platform work together to continuously scan for, discover, and exploit weaknesses in customers’ networks from the outside, allowing CISOs to control the scope of tests dynamically as new vulnerabilities are discovered. All of the service is manageable through a Web console, with a dashboard that alerts security teams to the latest findings made by Attack.

Greenhill & Co., a New York-based independent investment bank, is one of Randori’s early customers, and it’s an example of the kind of company Randori is targeting for its product—a company with about 500 employees in an industry that has the need for strong security, but without the resources for an internal red team. “Red team engagements are the gold standard in security testing, but they are too expensive to do frequently,” said John Shaffer, Greenhill’s CIO, in a statement provided by Randori. “Randori’s automated methodology bridges the gap, giving me the ability to continuously test my tools, people and processes against real-world scenarios. Over the past year, Randori has greatly enhanced my visibility into our security stack and been an agent to change our internal culture of security.”


Three iOS 0-days revealed by researcher frustrated with Apple’s bug bounty



Pseudonymous researcher illusionofchaos joins a growing legion of security researchers frustrated with Apple’s slow response and inconsistent policy adherence when it comes to security flaws.

Aurich Lawson | Getty Images

Yesterday, a security researcher who goes by illusionofchaos dropped public notice of three zero-day vulnerabilities in Apple’s iOS mobile operating system. The vulnerability disclosures are mixed in with the researcher’s frustration with Apple’s Security Bounty program, which illusionofchaos says chose to cover up an earlier-reported bug without giving them credit.

This researcher is by no means the first to publicly express their frustration with Apple over its security bounty program.

Nice bug—now shhh

illusionofchaos says that they’ve reported four iOS security vulnerabilities this year—the three zero-days they publicly disclosed yesterday plus an earlier bug that they say Apple fixed in iOS 14.7. It appears that their frustration largely comes from how Apple handled that first, now-fixed bug in analyticsd.

This now-fixed vulnerability allowed arbitrary user-installed apps to access iOS’s analytics data—the stuff that can be found in Settings --> Privacy --> Analytics & Improvements --> Analytics Data—without any permissions granted by the user. illusionofchaos found this particularly disturbing, because this data includes medical data harvested by Apple Watch, such as heart rate, irregular heart rhythm, atrial fibrillation detection, and so forth.

Analytics data was available to any application, even if the user disabled the iOS Share Analytics setting.

According to illusionofchaos, they sent Apple the first detailed report of this bug on April 29. Although Apple responded the next day, it did not respond to illusionofchaos again until June 3, when it said it planned to address the issue in iOS 14.7. On July 19, Apple did indeed fix the bug with iOS 14.7, but the security content list for iOS 14.7 acknowledged neither the researcher nor the vulnerability.

Apple told illusionofchaos that its failure to disclose the vulnerability and credit them was just a “processing issue” and that proper notice would be given in “an upcoming update.” The vulnerability and its resolution still were not acknowledged as of iOS 14.8 on September 13 or iOS 15.0 on September 20.

Frustration with this failure of Apple to live up to its own promises led illusionofchaos to first threaten, then publicly drop this week’s three zero-days. In illusionofchaos’ own words: “Ten days ago I asked for an explanation and warned then that I would make my research public if I don’t receive an explanation. My request was ignored so I’m doing what I said I would.”

We do not have concrete timelines for illusionofchaos’ disclosure of the three zero-days, or of Apple’s response to them—but illusionofchaos says the new disclosures still adhere to responsible guidelines: “Google Project Zero discloses vulnerabilities in 90 days after reporting them to vendor, ZDI – in 120. I have waited much longer, up to half a year in one case.”

New vulnerabilities: Gamed, nehelper enumerate, nehelper Wi-Fi

The zero-days illusionofchaos dropped yesterday can be used by user-installed apps to access data that those apps should not have or have not been granted access to. We’ve listed them below—along with links to illusionofchaos’ GitHub repos with proof-of-concept code—in order of (our opinion of) their severity:

  • Gamed zero-day exposes Apple ID email and full name, exploitable Apple ID authentication tokens, and read access to Core Duet and Speed Dial databases
  • Nehelper Wi-Fi zero-day exposes Wi-Fi information to apps that have not been granted that access
  • Nehelper Enumerate zero-day exposes information about what apps are installed on the iOS device

The Gamed zero-day is obviously the most severe, since it both exposes personally identifiable information (PII) and may in some cases be used to perform actions that would normally need to be instigated either by the iOS operating system itself or by direct user interaction.

The Gamed zero-day’s read access to Core Duet and Speed Dial databases is also particularly troubling, since that access can be used to gain a pretty complete picture of the user’s entire set of interactions with others on the iOS device—who is in their contact list, who they’ve contacted (using both Apple and third-party applications) and when, and in some cases even file attachments to individual messages.

The Wi-Fi zero-day is next on the list, since unauthorized access to the iOS device’s Wi-Fi info might be used to track the user—or, possibly, learn the credentials necessary to access the user’s Wi-Fi network. The tracking is typically a more serious concern, since physical proximity is generally required to make Wi-Fi credentials themselves useful.

One interesting thing about the Wi-Fi zero-day is the simplicity of both the flaw and the method by which it can be exploited: “XPC endpoint accepts user-supplied parameter sdk-version, and if its value is less than or equal to 524288, entitlement check is skipped.” In other words, all you need to do is claim to be using an older software development kit—and if so, your app gets to ignore the check that should disclose whether the user consented to access.
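The quoted check can be sketched in a few lines. Everything below—the function name, the entitlement flag—is our illustration, not Apple’s actual XPC code; only the 524288 cutoff comes from the disclosure:

```python
# Hypothetical sketch of the flawed gate illusionofchaos describes.
# Only the 524288 cutoff is taken from the report; all names are ours.
LEGACY_SDK_CUTOFF = 524288

def wifi_info_allowed(sdk_version: int, has_entitlement: bool) -> bool:
    """Decide whether a request for Wi-Fi info would be served."""
    if sdk_version <= LEGACY_SDK_CUTOFF:
        # The flaw: callers claiming an old SDK skip the entitlement check.
        return True
    return has_entitlement

# An unentitled app needs only to claim an old SDK version:
assert wifi_info_allowed(sdk_version=524288, has_entitlement=False)
# A caller reporting a current SDK without the entitlement is refused:
assert not wifi_info_allowed(sdk_version=524289, has_entitlement=False)
```

The bypass requires no privilege at all—the caller supplies the sdk-version parameter itself, so the "legacy" path is available to any app that asks for it.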

The Nehelper Enumerate zero-day appears to be the least damaging of the three. It simply allows an app to check whether another app is installed on the device by querying for the other app’s bundleID. We haven’t come up with a particularly scary use of this bug on its own, but a hypothetical malware app might leverage such a bug to determine whether a security or antivirus app is installed and then use that information to dynamically adapt its own behavior to better avoid detection.


Assuming illusionofchaos’ description of their disclosure timeline is correct—that they’ve waited for longer than 30 days, and in one case 180 days, to publicly disclose these vulnerabilities—it’s hard to fault them for the drop. We do wish they had included full timelines for their interaction with Apple on all four vulnerabilities, rather than only the already-fixed one.

We can confirm that this frustration of researchers with Apple’s security bounty policies is by no means limited to this one pseudonymous researcher. Since Ars published a piece earlier this month about Apple’s slow and inconsistent response to security bounties, several researchers have contacted us privately to express their own frustration. In some cases, researchers included video clips demonstrating exploits of still-unfixed bugs.

We have reached out to Apple for comment, but we have yet to receive any response as of press time. We will update this story with any response from Apple as it arrives.


Exchange/Outlook autodiscover bug exposed 100,000+ email passwords



If you own the right domain, you can intercept hundreds of thousands of innocent third parties’ email credentials, just by operating a standard webserver.

Security researcher Amit Serper of Guardicore discovered a severe flaw in Microsoft’s Autodiscover—the protocol that allows automagical configuration of an email account with only the address and password required. The flaw allows attackers who purchase top-level “autodiscover” domains—autodiscover.com.br or autodiscover.fr, for example—to intercept the clear-text account credentials of users who are having network difficulty (or whose admins incorrectly configured DNS).

Guardicore purchased several such domains and operated them as proof-of-concept credential traps from April 16 to August 25 of this year.


A web server connected to these domains received hundreds of thousands of email credentials—many of which also double as Windows Active Directory domain credentials—in clear text. The credentials are sent from clients which request the URL /Autodiscover/autodiscover.xml, with an HTTP Basic authentication header which already includes the hapless user’s Base64-encoded credentials.
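Base64 is an encoding, not encryption, so a trap server recovers the credentials with a one-line decode. A minimal sketch of what the server-side parsing looks like (the header value here is a made-up example, not captured data):

```python
import base64

# What a credential trap sees: the HTTP Basic authentication header on a
# request for /Autodiscover/autodiscover.xml already carries the user's
# credentials, merely Base64-encoded.
header = "Basic dXNlckBjb250b3NvLmNvbTpIdW50ZXIy"  # hypothetical example

scheme, _, payload = header.partition(" ")
username, _, password = base64.b64decode(payload).decode("utf-8").partition(":")

print(username, password)  # the "protection" amounts to nothing
```

No cryptanalysis is involved; any webserver that receives such a request can read the credentials directly.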

Three major flaws contribute to the overall vulnerability: the Autodiscover protocol’s “backoff and escalate” behavior when authentication fails, its failure to validate Autodiscover servers prior to giving up user credentials, and its willingness to use insecure mechanisms such as HTTP Basic in the first place.

Failing upward with autodiscover

The Autodiscover protocol’s real job is the simplification of account configuration—you can perhaps rely on a normal user to remember their email address and password, but decades of computing have taught us that asking them to remember and properly enter details like POP3 or IMAP4, TLS or SSL, TCP 465 or TCP 587, and the addresses of actual mail servers is several bridges too far.

The Autodiscover protocol allows normal users to configure their own email accounts without help, by storing all of the nonprivate portions of account configuration on publicly accessible servers. When you set up an Exchange account in Outlook, you feed it an email address and a password: for example, user@mail.contoso.com with password Hunter2.

Armed with the user’s email address, Autodiscover sets about finding configuration information in a published XML document. It will try both HTTP and HTTPS connections to the following URLs. (Note: contoso is a Microsoftism, representing an example domain name rather than any specific domain.)

  • http(s)://autodiscover.mail.contoso.com/Autodiscover/Autodiscover.xml
  • http(s)://mail.contoso.com/Autodiscover/Autodiscover.xml

So far, so good—we can reasonably assume that anyone allowed to place resources in either mail.contoso.com or its Autodiscover subdomain has been granted explicit trust by the owner of contoso.com itself. Unfortunately, if these initial connection attempts fail, Autodiscover will back off and try to find resources at a higher-level domain.

In this case, Autodiscover’s next step would be to look for /Autodiscover/Autodiscover.xml on contoso.com itself, as well as autodiscover.contoso.com. If this fails, Autodiscover fails upward yet again—this time sending email and password information to autodiscover.com itself.
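The "fail upward" behavior can be sketched as a candidate-URL generator. This is our simplification of the client logic Guardicore describes—real clients also try plain HTTP, redirect responses, and DNS SRV lookups—not Microsoft’s implementation:

```python
# Sketch of the back-off described above: each retry strips the leftmost
# DNS label until the client is asking a domain its organization does not
# control at all. A simplification, not Microsoft's actual client code.
def autodiscover_candidates(email: str) -> list[str]:
    domain = email.split("@", 1)[1]
    urls = []
    labels = domain.split(".")
    while labels:
        host = ".".join(labels)
        urls.append(f"https://autodiscover.{host}/Autodiscover/Autodiscover.xml")
        if len(labels) >= 2:  # don't query the bare TLD itself
            urls.append(f"https://{host}/Autodiscover/Autodiscover.xml")
        labels = labels[1:]  # back off one label and try again
    return urls

candidates = autodiscover_candidates("user@mail.contoso.com")
for url in candidates:
    print(url)
```

The last candidate printed targets autodiscover.com—a domain the user’s organization does not own, which is the entire vulnerability.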

This would be bad enough if Microsoft owned autodiscover.com—but the reality is considerably murkier. That domain was originally registered in 2002 and is currently owned by an unknown individual or organization using GoDaddy’s WHOIS privacy shield.

Guardicore’s results

In the approximately four months Guardicore ran its test credential trap, it collected 96,671 unique email username and password pairs in clear text. These credentials came from a wide array of organizations—publicly traded companies, manufacturers, banks, power companies, and more.

Affected users don’t see HTTPS/TLS errors in Outlook—when the Autodiscover protocol fails up from, say, contoso.com.br to autodiscover.com.br, the protection afforded by contoso’s ownership of its own SSL cert vanishes. Whoever purchased autodiscover.com.br—in this case, Guardicore—simply provides their own certificate, which satisfies TLS warnings despite not belonging to contoso at all.

In many cases, the Outlook or similar client will offer its user’s credentials initially in a more secure format, such as NTLM. Unfortunately, a simple HTTP 401 from the web server requesting HTTP Basic auth in its place is all that’s necessary—upon which the client using Autodiscover will comply (typically without error or warning to the user) and send the credentials in Base64 encoded plain text, completely readable by the web server answering the Autodiscover request.


The truly bad news here is that, from the general public’s perspective, there is no mitigation strategy for this Autodiscover bug. If your organization’s Autodiscover infrastructure is having a bad day, your client will “fail upward” as described, potentially exposing your credentials. This flaw has not yet been patched—according to Microsoft Senior Director Jeff Jones, Guardicore disclosed the flaw publicly prior to reporting it to Microsoft.

If you’re a network administrator, you can mitigate the issue by refusing DNS requests for Autodiscover domains—if every request to resolve a domain beginning in “Autodiscover” is blocked, the Autodiscover protocol won’t be able to leak credentials. Even then, you must be careful: you might be tempted to “block” such requests by returning 127.0.0.1, but this might allow a clever user to discover someone else’s email and/or Active Directory credentials, if they can trick the target into logging into the user’s PC.
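A resolver filter applying that rule might look like the sketch below. The allowlist of the organization’s own zones is our assumption—added so that a company’s legitimate autodiscover.contoso.com record keeps resolving—and the zone name is a placeholder:

```python
# Sketch of the DNS-level mitigation: refuse to resolve any hostname whose
# leftmost label is "autodiscover" unless it sits inside a zone we control.
# OUR_ZONES is a hypothetical allowlist; substitute your own domains.
OUR_ZONES = {"contoso.com"}

def should_block(hostname: str) -> bool:
    labels = hostname.lower().rstrip(".").split(".")
    if labels[0] != "autodiscover":
        return False  # not an Autodiscover lookup at all
    parent = ".".join(labels[1:])
    return parent not in OUR_ZONES  # block autodiscover.* outside our zones

# The upstream trap domain is refused; our own infrastructure still works:
assert should_block("autodiscover.com")
assert not should_block("autodiscover.contoso.com")
```

Refusing the query outright (NXDOMAIN) avoids the loopback-redirect pitfall described above, since the client never gets an address to send credentials to.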

If you’re an application developer, the fix is simpler: don’t implement the flawed part of the Autodiscover spec in the first place. If your application never attempts to authenticate against an “upstream” domain, it won’t leak your users’ credentials via Autodiscover.

For more technical detail, we highly recommend Guardicore’s blog post as well as Microsoft’s Autodiscover documentation.

Listing image by Just_Super via Getty Images


Semiconductor firms can’t find enough workers, worsening chip shortage



Don’t expect cheaper chips anytime soon.

The semiconductor chip shortage that has so vexed the auto industry looks set to continue for quite some time, according to a new industry survey. More than half of the companies that were surveyed by IPC said they expected the shortage to last until at least the second half of 2022. And right now, the chip shortage is being exacerbated by rising costs and a shortage of workers.

According to the survey, 80 percent of chip makers say that it’s become hard to find workers, who must be specially trained to handle the highly toxic compounds used in semiconductor manufacturing. The problem is worse in North America and Asia, where more companies are reporting rising labor costs than in Europe.

But only a third of Asian chip makers say they are finding it harder to find qualified workers, compared to 67 percent of North American companies and 63 percent of European companies. That may well explain why fewer Asian semiconductor companies (42 percent) are reporting increasing order backlogs, compared to 65 percent of North American and 60 percent of European companies.

Just under half (46 percent) said they were retraining their current workers to fill the gaps, and nearly as many (44 percent) said they were increasing wages to make the jobs more attractive. Other popular measures include more flexible hours and more training opportunities for workers.

Even more of the companies surveyed said that rising material costs were a problem, too—90 percent globally, with nearly as many suggesting that trend will continue for another six months at least. IPC says that chip makers’ profit margins are shrinking as a result.

That’s probably already being felt by some of their customers. According to a report by AlixPartners, the auto industry will lose out on $210 billion in revenue in 2021, forecasting a shortfall in production of 7.7 million vehicles worldwide. That’s got the US government’s attention, too. On Thursday, Commerce Secretary Gina Raimondo is meeting automakers and tech firms, as well as semiconductor companies, to see if the federal government can help.
