Biz & IT

This early GDPR adtech strike puts the spotlight on consent


What does consent as a valid legal basis for processing personal data look like under Europe’s updated privacy rules? It may sound like an abstract concern, but for online services that rely on processing user data to monetize free-to-access content it is a key question now that the region’s General Data Protection Regulation is firmly fixed in place.

The GDPR is actually clear about consent. But if you haven’t bothered to read the text of the regulation, and instead just go and look at some of the self-styled consent management platforms (CMPs) floating around the web since May 25, you’d probably have trouble guessing it.

Confusing and/or incomplete consent flows aren’t yet extinct, sadly. But it’s fair to say those that don’t offer full opt-in choice are on borrowed time.

Because if your service or app relies on obtaining consent to process EU users’ personal data — as many free at the point-of-use, ad-supported apps do — then the GDPR states consent must be freely given, specific, informed and unambiguous.

That means you can’t bundle multiple uses for personal data under a single opt-in.

Nor can you obfuscate consent behind opaque wording that doesn’t actually specify the thing you’re going to do with the data.

You also have to offer users the choice not to consent. So you cannot pre-tick all the consent boxes that you really wish your users would freely choose — because you have to actually let them do that.
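Taken together, those requirements imply a consent record that is per-purpose and defaults to opted out. As a minimal illustrative sketch (the purpose names and structure here are hypothetical, not taken from any real CMP):

```python
from dataclasses import dataclass, field

# Hypothetical processing purposes; the GDPR requires each one to be
# consented to separately, not bundled under a single opt-in.
PURPOSES = ("personalized_ads", "location_tracking", "analytics")

@dataclass
class ConsentRecord:
    # Every purpose starts as False: no pre-ticked boxes.
    choices: dict = field(default_factory=lambda: {p: False for p in PURPOSES})

    def grant(self, purpose: str) -> None:
        # Consent must be specific: an unknown/vague purpose is an error,
        # not a silent catch-all.
        if purpose not in self.choices:
            raise ValueError(f"unknown purpose: {purpose}")
        self.choices[purpose] = True

    def allows(self, purpose: str) -> bool:
        return self.choices.get(purpose, False)

record = ConsentRecord()
record.grant("analytics")
assert record.allows("analytics")
assert not record.allows("personalized_ads")  # never bundled in
```

The key design point is simply that each purpose is an independent boolean that starts off, so a "yes" to one thing can never be stretched to cover another.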

It’s not rocket science but the pushback from certain quarters of the adtech industry has been as awfully predictable as it’s horribly frustrating.

This has not gone unnoticed by consumers either. Europe’s Internet users have been filing consent-based complaints thick and fast this year. And a lot of what is being claimed as ‘GDPR compliant’ right now likely is not.

So, some six months in, we’re essentially in a holding pattern waiting for the regulatory hammers to come down.

But if you look closely there are some early enforcement actions that show some of the consent fog is starting to lift.

Yes, we’re still waiting on the outcomes of major consent-related complaints against tech giants. (And stockpile popcorn to watch that space for sure.)

But late last month French data protection watchdog, the CNIL, announced the closure of a formal warning it issued this summer against drive-to-store adtech firm, Fidzup — saying it was satisfied it was now GDPR compliant.

Such a regulatory stamp of approval is obviously rare this early in the new legal regime.

So while Fidzup is no adtech giant its experience still makes an interesting case study — showing how the consent line was being crossed; how, working with CNIL, it was able to fix that; and what being on the right side of the law means for a (relatively) small-scale adtech business that relies on consent to enable a location-based mobile marketing business.

From zero to GDPR hero?

Fidzup’s service works like this: It installs kit inside (or on) partner retailers’ physical stores to detect the presence of users’ smartphones. At the same time it provides an SDK to mobile developers to track app users’ locations, collecting and sharing the advertising ID and wi-fi ID of users’ smartphones (which, along with location, are judged personal data under the GDPR).

Those two elements — detectors in physical stores; and a personal data-gathering SDK in mobile apps — come together to power Fidzup’s retail-focused, location-based ad service which pushes ads to mobile users when they’re near a partner store. The system also enables it to track ad-to-store conversions for its retail partners.

The problem Fidzup had, back in July, was that after an audit of its business the CNIL deemed it did not have proper consent to process users’ geolocation data to target them with ads.

Fidzup says it had thought its business was GDPR compliant because it took the view that app publishers were the data processors gathering consent on its behalf; the CNIL warning was a wake-up call that this interpretation was incorrect, and that Fidzup itself was responsible for the data processing, and so also for collecting consents.

The regulator found that when a smartphone user installed an app containing Fidzup’s SDK they were not informed that their location and mobile device ID data would be used for ad targeting, nor the partners Fidzup was sharing their data with.

CNIL also said users should have been clearly informed before any data was collected, so they could choose whether to consent, instead of the information being given after the fact via general app conditions (or in-store posters), as was the case.

It also found users had no choice to download the apps without also getting Fidzup’s SDK, with use of such an app automatically resulting in data transmission to partners.

Fidzup had also only been asking users to consent to the processing of their geolocation data for the specific app they had downloaded, not for the targeted advertising with retail partners that is the substance of the firm’s business.

So there was a string of issues. And when Fidzup was hit with the warning the stakes were high, even with no monetary penalty attached. Because unless it could fix the core consent problem, the 2014-founded startup might have faced going out of business. Or having to change its line of business entirely.

Instead it decided to try and fix the consent problem by building a GDPR-compliant CMP — spending around five months liaising with the regulator, and finally getting a green light late last month.

A core piece of the challenge, as co-founder and CEO Olivier Magnan-Saurin tells it, was how to handle multiple partners in this CMP because its business entails passing data along the chain of partners — each new use and partner requiring opt-in consent.

“The first challenge was to design a window and a banner for multiple data buyers,” he tells TechCrunch. “So that’s what we did. The challenge was to have something okay for the CNIL and GDPR in terms of wording, UX etc. And, at the same time, some things that the publisher will allow to and will accept to implement in his source code to display to his users because he doesn’t want to scare them or to lose too much.

“Because they get money from the data that we buy from them. So they wanted to get the maximum money that they can, because it’s very difficult for them to live without the data revenue. So the challenge was to reconcile the need from the CNIL and the GDPR and from the publishers to get something acceptable for everyone.”

As a quick related aside, it’s worth noting that Fidzup does not work with the thousands of partners that an ad exchange or demand-side platform most likely would.

Magnan-Saurin tells us its CMP lists 460 partners. So while that’s still a lengthy list to have to put in front of consumers — it’s not, for example, the 32,000 partners of another French adtech firm, Vectaury, which has also recently been on the receiving end of an invalid consent ruling from the CNIL.

In turn, that suggests the ‘Fidzup fix’, if we can call it that, only scales so far; adtech firms that are routinely passing millions of people’s data around thousands of partners look to have much more existential problems under GDPR — as we’ve reported previously re: the Vectaury decision.

No consent without choice

Returning to Fidzup, its fix essentially boils down to actually offering people a choice over each and every data processing purpose, unless it’s strictly necessary for delivering the core app service the consumer was intending to use.

Which also means giving app users the ability to opt out of ads entirely, without being penalized by losing access to the app’s features.

In short, you can’t bundle consent. So Fidzup’s CMP unbundles all the data purposes and partners to offer users the option to consent or not.
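That unbundling can be pictured as a matrix of independent (purpose, partner) choices, each defaulting to off. A hypothetical sketch, with invented purpose and partner names rather than Fidzup’s real list:

```python
# Hypothetical model of an unbundled CMP: the user can toggle each
# (purpose, partner) pair independently, and everything starts opted out.
class ConsentMatrix:
    def __init__(self, purposes, partners):
        self.allowed = {(pu, pa): False for pu in purposes for pa in partners}

    def set_choice(self, purpose, partner, opted_in):
        self.allowed[(purpose, partner)] = bool(opted_in)

    def recipients(self, purpose):
        # Only partners the user opted in to, for this specific purpose.
        return [pa for (pu, pa), ok in self.allowed.items()
                if pu == purpose and ok]

choices = ConsentMatrix(
    purposes=["personalized_ads", "store_visit_analysis"],
    partners=["buyer_a", "buyer_b"],
)
choices.set_choice("personalized_ads", "buyer_a", True)
print(choices.recipients("personalized_ads"))     # only buyer_a
print(choices.recipients("store_visit_analysis"))  # empty: nothing shared
```

With 460 partners the real UI challenge Magnan-Saurin describes is presenting this matrix readably, but the underlying data model stays this simple.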

“You can unselect or select each purpose,” says Magnan-Saurin of the now compliant CMP. “And if you want only to send data for, I don’t know, personalized ads but you don’t want to send the data to analyze if you go to a store or not, you can. You can unselect or select each consent. You can also see all the buyers who buy the data. So you can say okay I’m okay to send the data to every buyer but I can also select only a few or none of them.”

“What the CNIL ask is very complicated to read, I think, for the final user,” he continues. “Yes it’s very precise and you can choose everything etc. But it’s very complete and you have to spend some time to read everything. So we were [hoping] for something much shorter… but now okay we have something between the initial asking for the CNIL — which was like a big book — and our consent collection before the warning which was too short with not the right information. But still it’s quite long to read.”

Fidzup’s CNIL-approved, GDPR-compliant consent management platform

“Of course, as a user, I can refuse everything. Say no, I don’t want my data to be collected, I don’t want to send my data. And I have to be able, as a user, to use the app in the same way as if I accept or refuse the data collection,” he adds.

He says the CNIL was very clear on the latter point, telling the company it could not make collection of geolocation data for ad targeting a condition of using the app.

“You have to provide the same service to the user if he accepts or not to share his data,” he emphasizes. “So now the app and the geolocation features [of the app] works also if you refuse to send the data to advertisers.”
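In code terms, the CNIL’s position means consent gates the data transmission, never the feature itself. A purely hypothetical sketch (all function and variable names invented for illustration):

```python
def find_nearby_stores(location):
    # Stand-in for the app's real geolocation feature.
    return [f"store near {location}"]

SENT_TO_PARTNERS = []

def send_to_ad_partners(location):
    # Stand-in for transmitting data to advertising partners.
    SENT_TO_PARTNERS.append(location)

def handle_location_update(location, consent_allows_ad_sharing):
    # The feature always works, whether or not the user consented.
    results = find_nearby_stores(location)
    # Only the sharing with advertisers is gated by consent.
    if consent_allows_ad_sharing:
        send_to_ad_partners(location)
    return results

print(handle_location_update("Paris", consent_allows_ad_sharing=False))
print(SENT_TO_PARTNERS)  # [] -- nothing shared without consent
```

The refusal path and the consent path return exactly the same feature output; only the side effect differs.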

This is especially interesting in light of the ‘forced consent’ complaints filed against tech giants Facebook and Google earlier this year.

These complaints argue the companies should (but currently do not) offer an opt-out of targeted advertising, because behavioural ads are not strictly necessary for their core services (i.e. social networking, messaging, a smartphone platform etc).

Indeed, data gathering for such non-core service purposes should require an affirmative opt-in under GDPR. (An additional GDPR complaint against Android has also since attacked how consent is gathered, arguing it’s manipulative and deceptive.)

Asked whether, based on his experience working with the CNIL to achieve GDPR compliance, it seems fair that a small adtech firm like Fidzup has had to offer an opt-out when a tech giant like Facebook seemingly doesn’t, Magnan-Saurin tells TechCrunch: “I’m not a lawyer but based on what the CNIL asked us to be in compliance with the GDPR law I’m not sure that what I see on Facebook as a user is 100% GDPR compliant.”

“It’s better than one year ago but [I’m still not sure],” he adds. “Again it’s only my feeling as a user, based on the experience I have with the French CNIL and the GDPR law.”

Facebook of course maintains its approach is 100% GDPR compliant.

Even as data privacy experts aren’t so sure.

One thing is clear: If the tech giant was forced to offer an opt out for data processing for ads it would clearly take a big chunk out of its business — as a sub-set of users would undoubtedly say no to Zuckerberg’s “ads”. (And if European Facebook users got an ads opt out you can bet Americans would very soon and very loudly demand the same, so…)

Bridging the privacy gap

In Fidzup’s case, complying with GDPR has had a major impact on its business because offering a genuine choice means it’s not always able to obtain consent. Magnan-Saurin says there is essentially now a limit on the number of device users advertisers can reach because not everyone opts in for ads.

Although, since it’s been using the new CMP, he says a majority are still opting in (or, at least, this is the case so far) — showing one consent chart report with a ~70:30 opt-in rate, for example.

He expresses the change like this: “No one in the world can say okay I have 100% of the smartphones in my data base because the consent collection is more complete. No one in the world, even Facebook or Google, could say okay, 100% of the smartphones are okay to collect from them geolocation data. That’s a huge change.”

“Before that there was a race to the higher reach. The biggest number of smartphones in your database,” he continues. “Today that’s not the point.”

Now he says the point for adtech businesses with EU users is figuring out how to extrapolate from the percentage of user data they can (legally) collect to the 100% they can’t.

And that’s what Fidzup has been working on this year, developing machine learning algorithms to try to bridge the data gap so it can still offer its retail partners accurate predictions for tracking ad to store conversions.
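At its simplest, that extrapolation is a scaling by the opt-in rate. Fidzup’s actual models are machine-learning based, so the function below is only an illustrative baseline, under the (strong) assumption that consenting users behave like everyone else:

```python
def estimate_total_visits(measured_visits: int, opt_in_rate: float) -> float:
    """Naively scale visits measured among consenting users up to the
    full audience, assuming opted-in users are representative."""
    if not 0 < opt_in_rate <= 1:
        raise ValueError("opt-in rate must be in (0, 1]")
    return measured_visits / opt_in_rate

# With the ~70% opt-in rate mentioned above, 350 measured store visits
# would imply roughly 500 visits across the whole audience.
print(round(estimate_total_visits(350, 0.7)))  # 500
```

The representativeness assumption is exactly where a naive baseline breaks down (opt-in users may differ systematically from refusers), which is presumably why Fidzup layers learned models on top rather than applying a flat multiplier.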

“We have algorithms based on the few thousand stores that we equip, based on the few hundred mobile advertising campaigns that we have run, and we can understand for a store in London in… sports, fashion, for example, how many visits we can expect from the campaign based on what we can measure with the right consent,” he says. “That’s the first and main change in our market; the quantity of data that we can get in our database.”

“Now the challenge is to be as accurate as we can be without having 100% of real data — with the consent, and the real picture,” he adds. “The accuracy is less… but not that much. We have a very, very high standard of quality on that… So now we can assure the retailers that with our machine learning system they have nearly the same quality as they had before.

“Of course it’s not exactly the same… but it’s very close.”

Having a CMP that’s had regulatory ‘sign-off’, as it were, is something Fidzup is also now hoping to turn into a new bit of additional business.

“The second change is more like an opportunity,” he suggests. “All the work that we have done with CNIL and our publishers we have transferred it to a new product, a CMP, and we offer today to all the publishers who ask to use our consent management platform. So for us it’s a new product — we didn’t have it before. And today we are the only — to my knowledge — the only company and the only CMP validated by the CNIL and GDPR compliant so that’s useful for all the publishers in the world.”

It’s not currently charging publishers to use the CMP but will be seeing whether it can turn it into a paid product early next year.

How then, after months of compliance work, does Fidzup feel about GDPR? Does it believe the regulation is making life harder for startups vs tech giants, as certain lobby groups have claimed in arguing the law risks entrenching the dominance of better-resourced tech giants? Or does Magnan-Saurin see any opportunities?

In Magnan-Saurin’s view, six months into GDPR, European startups are at an R&D disadvantage vs tech giants because U.S. companies like Facebook and Google are not (yet) subject to a similarly comprehensive privacy regulation at home, so it’s easier for them to bag up user data for whatever purpose they like.

Though it’s also true that U.S. lawmakers are now paying earnest attention to the privacy policy area at a federal level. (And Google’s CEO faced a number of tough questions from Congress on that front just this week.)

“The fact is Facebook-Google they own like 90% of the revenue in mobile advertising in the world. And they are American. So basically they can do all their research and development on, for example, American users without any GDPR regulation,” he says. “And then apply a pattern of GDPR compliance and apply the new product, the new algorithm, everywhere in the world.

“As a European startup I can’t do that. Because I’m a European. So once I begin the research and development I have to be GDPR compliant so it’s going to be longer for Fidzup to develop the same thing as an American… But now we can see that GDPR might be beginning a ‘world thing’ — and maybe Facebook and Google will apply the GDPR compliance everywhere in the world. Could be. But it’s their own choice. Which means, for the example of the R&D, they could do their own research without applying the law because for now U.S. doesn’t care about the GDPR law, so you’re not outlawed if you do R&D without applying GDPR in the U.S. That’s the main difference.”

He suggests some European startups might relocate R&D efforts outside the region to try to work around the legal complexity of the privacy rules.

“If the law is meant to bring the big players to better compliance with privacy I think — yes, maybe it goes in this way. But the first to suffer is the European companies, and it becomes an asset for the U.S. and maybe the Chinese… companies because they can be quicker in their innovation cycles,” he suggests. “That’s a fact. So what could happen is maybe investors will not invest that much money in Europe than in U.S. or in China on the marketing, advertising data subject topics. Maybe even the French companies will put all the R&D in the U.S. and destroy some jobs in Europe because it’s too complicated to do research on that topics. Could be impacts. We don’t know yet.”

And the fact that GDPR enforcement has, perhaps inevitably, started small, with a handful of warnings against relative data minnows rather than swift action against the industry-dominating adtech giants, is being felt as yet another inequality at the startup coalface.

“What’s sure is that the CNIL started to send warnings not to Google or Facebook but to startups. That’s what I can see,” he says. “Because maybe it’s easier to see I’m working on GDPR and everything but the fact is the law is not as complicated for Facebook and Google as it is for the small and European companies.”




Ransomware crooks post cops’ psych evaluations after talks with DC police stall


A ransomware gang that hacked the District of Columbia’s Metropolitan Police Department in April posted personnel records on Tuesday that revealed highly sensitive details for almost two dozen officers, including the results of psychological assessments and polygraph tests, driver license images, fingerprints, social security numbers, dates of birth, and residential, financial and marriage histories.

The data, included in a 161GB download from a website on the dark web, was made available after negotiations broke down between members of the Babuk ransomware group and MPD officials, according to screenshots purporting to be chat transcripts between the two organizations. After earlier threatening to leak the names of confidential informants to crime gangs, the operators agreed to remove the data while they carried out the now-aborted negotiations, the transcripts showed.

This is unacceptable

The operators demanded $4 million in exchange for a promise not to publish any more information and provide a decryption key that would restore the data.

“You are a state institution, treat your data with respect and think about their price,” the operators said, according to the transcript. “They cost even more than 4,000,000, do you understand that?”

“Our final proposal is to offer to pay $100,000 to prevent the release of the stolen data,” the MPD negotiator eventually replied. “If this offer is not acceptable, then it seems our conversation is complete. I think we understand the consequences of not reaching an agreement. We are OK with that outcome.”

“This is unacceptable from our side,” the ransomware representative replied. “Follow our website at midnight.”

A post on the group’s website said: “The negotiations reached a dead end, the amount we were offered does not suit us, we are posting 20 more personal files on officers.” The 161MB file was password protected. The operators later published the passphrase after MPD officials refused to raise the price the department was willing to pay.

Three of the names listed in the personnel files matched the names of officers who work for the MPD, web searches showed. The files were based on background investigations of job applicants under consideration to be hired by the department.

MPD representatives didn’t respond to questions about the authenticity of the transcripts or the current status of negotiations.

Like virtually all ransomware operators these days, those with Babuk employ a double extortion model, which charges not only for the decryption key to unlock the stolen data but also in exchange for the promise not to make any of the data available publicly. The operators typically leak small amounts of data in hopes of motivating the victims to pay the fee. If victims refuse, future releases include ever more private and sensitive information.

The ransomware attack on the MPD has no known connection to the one that has hit Colonial Pipeline.


Amazon “seized and destroyed” 2 million counterfeit products in 2020


Amazon trailers backed into bays at a distribution center in Miami, Florida, in August 2019.

Amazon “seized and destroyed” over 2 million counterfeit products that sellers sent to Amazon warehouses in 2020 and “blocked more than 10 billion suspected bad listings before they were published in our store,” the company said in its first “Brand Protection Report.”

In 2020, “we seized and destroyed more than 2 million products sent to our fulfillment centers and that we detected as counterfeit before being sent to a customer,” Amazon’s report said. “In cases where counterfeit products are in our fulfillment centers, we separate the inventory and destroy those products so they are not resold elsewhere in the supply chain,” the report also said.

Third-party sellers can also ship products directly to consumers instead of using Amazon’s shipping system. The 2 million fakes found in Amazon fulfillment centers would only account for counterfeit products from sellers using the “Fulfilled by Amazon” service.

The counterfeit problem got worse over the past year. “Throughout the pandemic, we’ve seen increased attempts by bad actors to commit fraud and offer counterfeit products,” Amazon VP Dharmesh Mehta wrote in a blog post yesterday.

Counterfeiting is a longstanding problem on Amazon. Other problems on Amazon that harm consumers include the sale of dangerous products, fake reviews, defective third-party goods, and the passing of bribes from unscrupulous sellers to unscrupulous Amazon employees and contractors. One US appeals court ruled in 2019 that Amazon can be held responsible for defective third-party goods, but Amazon has won other similar cases. Amazon is again arguing that it should not be held liable for a defective third-party product in a case before the Texas Supreme Court that involves a severely injured toddler.

Amazon tries to reassure legit sellers

Amazon’s new report was meant to reassure legitimate sellers that their products won’t be counterfeited. While counterfeits remain a problem for unsuspecting Amazon customers, the e-commerce giant said that “fewer than 0.01 percent of all products sold on Amazon received a counterfeit complaint from customers” in 2020. Of course, people may buy and use counterfeit products without ever realizing they are fake or without reporting it to Amazon, so that percentage may not capture the extent of the problem.

Amazon’s report on counterfeits describes extensive systems and processes to determine which sellers can do business on Amazon. While Amazon has argued in court that it is not liable for what third parties sell on its platform, the company is monitoring sellers in an effort to maintain credibility with buyers and legitimate sellers.

Amazon said it “invested over $700 million and employed more than 10,000 people to protect our store from fraud and abuse” in 2020, adding:

We leverage a combination of advanced machine learning capabilities and expert human investigators to protect our store proactively from bad actors and bad products. We are constantly innovating to stay ahead of bad actors and their attempts to circumvent our controls. In 2020, we prevented over 6 million attempts to create new selling accounts, stopping bad actors before they published a single product for sale, and blocked more than 10 billion suspected bad listings before they were published in our store.

“This is an escalating battle with criminals that attempt to sell counterfeits, and the only way to permanently stop counterfeiters is to hold them accountable through litigation in the court system and through criminal prosecution,” Amazon also said. “In 2020, we established a new Counterfeit Crimes Unit to build and refer cases to law enforcement, undertake independent investigations or joint investigations with brands, and pursue civil litigation against counterfeiters.”

Amazon said it now “report[s] all confirmed counterfeiters to law enforcement agencies in Canada, China, the European Union, UK, and US.” Amazon also urged governments to “increase prosecution of counterfeiters, increase resources for law enforcement fighting counterfeiters, and incarcerate these criminals globally.”

Stricter seller-verification system

Amazon said it had a “new live video and physical address verification” system in place in 2020 in which “Amazon connects one-on-one with prospective sellers through a video chat or in person at an Amazon office to verify sellers’ identities and government-issued documentation.” Amazon said it also “verifies new and existing sellers’ addresses by sending information including a unique code to the seller’s address.”

Most new attempts to register as a seller were apparently fraudulent, as Amazon said that “only 6 percent of attempted new seller account registrations passed our robust verification processes and listed products.” Overall, Amazon “stopped over 6 million attempts to create a selling account before they were able to publish a single listing for sale” in 2020, more than double “the 2.5 million attempts we stopped in 2019,” Amazon said.

The verification process isn’t enough on its own to stop all new fraudulent sellers, so Amazon said it performs “continuous monitoring” of sellers to identify new risks. “If we identify a bad actor, we immediately close their account, withhold funds disbursement, and determine if this new information brings other related accounts into suspicion. We also determine if the case warrants civil or criminal prosecution and report the bad actor to law enforcement,” Amazon said.

Amazon monitors product detail changes for fraud

One problem we wrote about a few months ago involves “bait-and-switch reviews” in which sellers trick Amazon into displaying reviews for unrelated products to get to the top of Amazon’s search results. In one case, a $23 drone with 6,400 reviews achieved a five-star average rating only because it had thousands of reviews for honey. At some point, the product listing had changed from a food item to a tech product, but the reviews for the food product remained. After a purging of the old reviews, that same product page now lists just 348 ratings at a 3.6-star average.

Amazon is trying to prevent recurrences of this problem, saying in its new report that it scans “more than 5 billion attempted changes to product detail pages daily for signs of potential abuse.”
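Amazon hasn’t published how those scans work, but the bait-and-switch pattern described above suggests one obvious heuristic: flag edits that move a listing to an unrelated category while its accumulated reviews survive. A purely hypothetical sketch, with invented field names and an arbitrary threshold:

```python
# Hypothetical heuristic (not Amazon's actual system) for the
# bait-and-switch review pattern: a listing that jumps category while
# keeping a large review history is suspicious.
def is_suspicious_edit(old_listing: dict, new_listing: dict) -> bool:
    category_changed = old_listing["category"] != new_listing["category"]
    keeps_reviews = new_listing["review_count"] >= old_listing["review_count"]
    # Arbitrary threshold: small listings changing category are common
    # and benign; thousands of carried-over reviews are not.
    has_big_history = old_listing["review_count"] > 100
    return category_changed and keeps_reviews and has_big_history

honey = {"category": "grocery", "review_count": 6400}
drone = {"category": "electronics", "review_count": 6400}
print(is_suspicious_edit(honey, drone))  # True
```

A real system scanning five billion daily changes would need far richer signals (text similarity, image changes, seller history), but the honey-to-drone case in the paragraph above would trip even this crude rule.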

Amazon also provides self-service tools to companies to help them block counterfeits of their products. Amazon’s report said that 18,000 brands have enrolled in “Project Zero,” which “provides brands with unprecedented power by giving them the ability to directly remove listings from our store.” The program also has an optional product serialization feature that lets sellers put unique codes on their products or packaging.

The self-service tool only accounts for a tiny percentage of blocked listings. “For every 1 listing removed by a brand through our self-service counterfeit removal tool, our automated protections removed more than 600 listings through scaled technology and machine learning that proactively addresses potential counterfeits and stops those listings from appearing in our store,” Amazon said.


Hackers who shut down pipeline: We don’t want to cause “problems for society”


Problems with Colonial Pipeline’s distribution system tend to lead to gasoline runs and price increases across the US Southeast and Eastern seaboard. In this September 2016 photo, a man prepared to refuel his vehicle after a Colonial leak in Alabama.

On Friday, Colonial Pipeline took many of its systems offline in the wake of a ransomware attack. With systems offline to contain the threat, the company’s pipeline system is inoperative. The system delivers approximately 45 percent of the East Coast’s petroleum products, including gasoline, diesel fuel, and jet fuel.

Colonial Pipeline issued a statement Sunday saying that the US Department of Energy is leading the US federal government response to the attack. “[L]eading, third-party cybersecurity experts” engaged by Colonial Pipeline itself are also on the case. The company’s four main pipelines are still down, but it has begun restoring service to smaller lateral lines between terminals and delivery points as it determines how to safely restart its systems and restore full functionality.

Colonial Pipeline has not publicly said what was demanded of it or how the demand was made. Meanwhile, the hackers have issued a statement saying that they’re just in it for the money.

Regional emergency declaration

In response to the attacks on Colonial Pipeline, the Biden administration issued a Regional Emergency Declaration 2021-002 this Sunday. The declaration provides a temporary exemption to Parts 390 through 399 of the Federal Motor Carrier Safety Regulations, allowing alternate transportation of petroleum products via tanker truck to relieve shortages related to the attack.

The emergency declaration became effective immediately upon issuance Sunday and remains in effect until June 8 or until the emergency ends, whichever is sooner. Although the move will ease shortages somewhat, oil market analyst Gaurav Sharma told the BBC the exemption wouldn’t be anywhere near enough to replace the pipeline’s missing capacity. “Unless they sort it out by Tuesday, they’re in big trouble,” said Sharma, adding that “the first areas to hit would be Atlanta and Tennessee, then the domino effect goes up to New York.”

Russian gang DarkSide believed responsible for attack

Unnamed US government and private security sources engaged by Colonial have told CNN, The Washington Post, and Bloomberg that the Russian criminal gang DarkSide is likely responsible for the attack. DarkSide typically chooses targets in non-Russian-speaking countries but describes itself as “apolitical” on its dark web site.

Infosec analyst Dmitry Smilyanets tweeted a screenshot of a statement the group made this morning, apparently concerning the Colonial Pipeline attack:

NBC News reports that Russian cybercriminals frequently freelance for the Kremlin—but indications point to a cash grab made by the criminals themselves this time rather than a state-sponsored attack.

Dmitri Alperovitch, a co-founder of infosec company CrowdStrike, claims that direct Russian state involvement hardly matters at this point. “Whether they work for the state or not is increasingly irrelevant, given Russia’s obvious policy of harboring and tolerating cybercrime,” he said.

DarkSide “operates like a business”

This sample threat was posted to DarkSide’s dark web site in 2020, detailing attacks made on a threat management company.

London-based security firm Digital Shadows said in September that DarkSide operates like a business and described its business model as “RaaC”—meaning Ransomware-as-a-Corporation.

In terms of its actual attack methods, DarkSide doesn’t appear to be very different from smaller criminal operators. According to Digital Shadows, the group stands out due to its careful selection of targets, preparation of custom ransomware executables for each target, and quasi-corporate communication throughout the attacks.

DarkSide claims to avoid targets in medical, education, nonprofit, or governmental sectors—and claims that it only attacks “companies that can pay the requested amount” after “carefully analyz[ing] accountancy” and determining a ransom amount based on a company’s net income. Digital Shadows believes these claims largely translate to “we looked you up on ZoomInfo first.”

It seems quite possible that the group didn’t realize how much heat it would bring onto itself with the Colonial Pipeline attack. Although not a government entity itself, Colonial’s operations are crucial enough to national security to have brought down immediate Department of Energy response—which the group certainly noticed and appears to have responded to via this morning’s statement that it would “check each company that our partners want to encrypt” to avoid “social consequences” in the future.
