Is Europe closing in on an antitrust fix for surveillance technologists?

The German Federal Cartel Office’s decision to order Facebook to change how it processes users’ personal data this week is a sign the antitrust tide could at last be turning against platform power.

One European Commission source we spoke to, who was commenting in a personal capacity, described it as “clearly pioneering” and “a big deal”, even without Facebook being fined a dime.

The FCO’s decision instead bans the social network from linking user data across different platforms it owns, unless it gains people’s consent (nor can it make use of its services contingent on such consent). Facebook is also prohibited from gathering and linking data on users from third party websites, such as via its tracking pixels and social plugins.

The order is not yet in force and Facebook is appealing, but if it stands the social network faces being de facto shrunk, with its platforms siloed at the data level.

To comply with the order Facebook would have to ask users to freely consent to being data-mined — which the company does not do at present.

Yes, Facebook could still try to manipulate users into giving the answer it wants, but doing so would open it to further challenge under EU data protection law, where its current approach to consent is already being challenged.

The EU’s updated privacy framework, GDPR, requires consent to be specific, informed and freely given. That standard supports challenges to Facebook’s (still fixed) entry ‘price’ to its social services: to play you still have to agree to hand over your personal data so it can sell your attention to advertisers. Legal experts contend that is neither privacy by design nor by default.

The only ‘alternative’ Facebook offers is to tell users they can delete their account. Not that doing so would stop the company from tracking you around the rest of the mainstream web anyway. Facebook’s tracking infrastructure is also embedded across the wider Internet so it profiles non-users too.

EU data protection regulators are still investigating a very large number of consent-related GDPR complaints.

But the German FCO, which said it liaised with privacy authorities during its investigation of Facebook’s data-gathering, has dubbed this type of behavior “exploitative abuse”, having also deemed the social service to hold a monopoly position in the German market.

So there are now two lines of legal attack — antitrust and privacy law — threatening Facebook’s (and indeed other adtech companies’) surveillance-based business model across Europe.

A year ago the German antitrust authority also announced a probe of the online advertising sector, responding to concerns about a lack of transparency in the market. Its work here is by no means done.

Data limits

The lack of a big flashy fine attached to the German FCO’s order against Facebook makes this week’s story less of a major headline than recent European Commission antitrust fines handed to Google — such as the record-breaking $5BN penalty issued last summer for anticompetitive behaviour linked to the Android mobile platform.

But the decision is arguably just as, if not more, significant, because of the structural remedies being imposed on Facebook. These remedies have been likened to an internal break-up of the company — with enforced internal separation of its multiple platform products at the data level.

This of course runs counter to (ad) platform giants’ preferred trajectory, which has long been to tear modesty walls down; pool user data from multiple internal (and indeed external) sources, in defiance of the notion of informed consent; and mine all that personal (and sensitive) stuff to build identity-linked profiles to train algorithms that predict (and, some contend, manipulate) individual behavior.

Because if you can predict what a person is going to do you can choose which advert to serve to increase the chance they’ll click. (Or as Mark Zuckerberg puts it: ‘Senator, we run ads.’)

This means that a regulatory intervention that interferes with an ad tech giant’s ability to pool and process personal data starts to look really interesting. Because a Facebook that can’t join data dots across its sprawling social empire — or indeed across the mainstream web — wouldn’t be such a massive giant in terms of data insights. Nor, therefore, in terms of surveillance power over users.

Each of its platforms would be forced to be a more discrete (and, well, discreet) kind of business.

Competing against data-siloed platforms with a common owner — instead of a single interlinked mega-surveillance-network — also starts to sound almost possible. It suggests a playing field that’s reset, if not entirely levelled.

(Whereas, in the case of Android, the European Commission did not order any specific remedies — allowing Google to come up with ‘fixes’ itself; and so to shape the most self-serving ‘fix’ it can think of.)

Meanwhile, just look at where Facebook is now aiming to get to: A technical unification of the backend of its different social products.

Such a merger would collapse even more walls and fully enmesh platforms that started life as entirely separate products before they were folded into Facebook’s empire (also, let’s not forget, via surveillance-informed acquisitions).

Facebook’s plan to unify its products on a single backend platform looks very much like an attempt to throw up technical barriers to antitrust hammers. It’s at least harder to imagine breaking up a company if its multiple, separate products are merged onto one unified backend which functions to cross and combine data streams.

Set against Facebook’s sudden desire to technically unify its full-flush of dominant social networks (Facebook Messenger; Instagram; WhatsApp) is a rising drum-beat of calls for competition-based scrutiny of tech giants.

This has been building for years, as the market power — and even democracy-denting potential — of surveillance capitalism’s data giants has telescoped into view.

Calls to break up tech giants no longer sound like a fringe suggestion. Regulators are routinely asked whether it’s time, as the European Commission’s competition chief, Margrethe Vestager, was when she handed down Google’s latest massive antitrust fine last summer.

Her response then was that she wasn’t sure breaking Google up is the right answer — preferring to try remedies that might allow competitors to have a go, while also emphasizing the importance of legislating to ensure “transparency and fairness in the business to platform relationship”.

But it’s interesting that the idea of breaking up tech giants now plays so well as political theatre, suggesting that wildly successful consumer technology companies — which have long dined out on shiny convenience-based marketing claims, made ever so saccharine sweet via the lure of ‘free’ services — have lost a big chunk of their populist pull, dogged as they have been by so many scandals.

From terrorist content and hate speech to election interference, child exploitation, bullying and abuse. There’s also the matter of how they arrange their tax affairs.

The public perception of tech giants has matured as the ‘costs’ of their ‘free’ services have scaled into view. The upstarts have also become the establishment. People see not a new generation of ‘cuddly capitalists’ but another bunch of multinationals; highly polished but remote money-making machines that take rather more than they give back to the societies they feed off.

Google’s trick of naming each Android iteration after a different sweet treat makes for an interesting parallel to the (also now shifting) public perceptions around sugar, following closer attention to health concerns. What does its sickly sweetness mask? And after the sugar tax, we now have politicians calling for a social media levy.

Just this week the deputy leader of the main opposition party in the UK called for setting up a standalone Internet regulator with the power to break up tech monopolies.

Talking about breaking up well-oiled wealth-concentration machines is being seen as a populist vote winner. And companies that political leaders used to flatter and seek out for PR opportunities find themselves treated as political punchbags, called to attend awkward grillings by hard-grafting committees or verbally taken to task at the highest-profile public podia. (Though some non-democratic heads of state are still keen to press tech giant flesh.)

In Europe, Facebook’s repeat snubs of the UK parliament’s requests last year for Zuckerberg to face policymakers’ questions certainly did not go unnoticed.

Zuckerberg’s empty chair at the DCMS committee has become both a symbol of the company’s failure to accept wider societal responsibility for its products, and an indication of market failure: a CEO so powerful he doesn’t feel answerable to anyone, neither his most vulnerable users nor their elected representatives. Hence UK politicians on both sides of the aisle making political capital by talking about cutting tech giants down to size.

The political fallout from the Cambridge Analytica scandal looks far from done.

Quite how a UK regulator could successfully swing a regulatory hammer to break up a global Internet giant such as Facebook, which is headquartered in the U.S., is another matter. But policymakers have already crossed the Rubicon of public opinion and relish talking up having a go.

That represents a sea-change versus the neoliberal consensus that allowed competition regulators to sit on their hands for more than a decade as technology upstarts quietly hoovered up people’s data and bagged rivals, and basically went about transforming themselves from highly scalable startups into market-distorting giants with Internet-scale data-nets to snag users and buy or block competing ideas.

The political spirit looks willing to go there, and now the mechanism for breaking platforms’ distorting hold on markets may also be shaping up.

The traditional antitrust remedy of breaking a company along its business lines still looks unwieldy when faced with the blistering pace of digital technology. The problem is delivering such a fix fast enough that the business hasn’t already reconfigured to route around the reset. 

Commission antitrust decisions on the tech beat have stepped up impressively in pace on Vestager’s watch. Yet it still feels like watching paper pushers wading through treacle to try and catch a sprinter. (And Europe hasn’t gone so far as trying to impose a platform break up.) 

But the German FCO decision against Facebook hints at an alternative way forward for regulating the dominance of digital monopolies: structural remedies that focus on controlling access to data, which can be relatively swiftly configured and applied.

Vestager, whose term as EC competition chief may be coming to its end this year (even if other Commission roles remain in potential and tantalizing contention), has championed this idea herself.

In an interview on BBC Radio 4’s Today program in December she poured cold water on the stock question about breaking tech giants up — saying instead the Commission could look at how larger firms got access to data and resources as a means of limiting their power. Which is exactly what the German FCO has done in its order to Facebook. 

At the same time, Europe’s updated data protection framework has gained the most attention for the size of the financial penalties that can be issued for major compliance breaches. But the regulation also gives data watchdogs the power to limit or ban processing. And that power could similarly be used to reshape a rights-eroding business model or snuff out such business entirely.

The merging of privacy and antitrust concerns is really just a reflection of the complexity of the challenge regulators now face trying to rein in digital monopolies. But they’re tooling up to meet that challenge.

Speaking in an interview with TechCrunch last fall, Europe’s data protection supervisor, Giovanni Buttarelli, told us the bloc’s privacy regulators are moving towards more joint working with antitrust agencies to respond to platform power. “Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” he said. “But first joint enforcement and better co-operation is key.”

The German FCO’s decision represents tangible evidence of the kind of regulatory co-operation that could — finally — crack down on tech giants.

Blogging in support of the decision this week, Buttarelli asserted: “It is not necessary for competition authorities to enforce other areas of law; rather they need simply to identify where the most powerful undertakings are setting a bad example and damaging the interests of consumers. Data protection authorities are able to assist in this assessment.”

He also had a prediction of his own for surveillance technologists, warning: “This case is the tip of the iceberg — all companies in the digital information ecosystem that rely on tracking, profiling and targeting should be on notice.”

So perhaps, at long last, the regulators have figured out how to move fast and break things.

Twitter’s latest robo-nag will flag “harmful” language before you post

Image caption: Before you tweet, you might be asked if you meant to be so rude.

Want to know exactly what Twitter’s fleet of text-combing, dictionary-parsing bots defines as “mean”? Starting any day now, you’ll have instant access to that data—at least, whenever a stern auto-moderator says you’re not tweeting politely.

On Wednesday, members of Twitter’s product-design team confirmed that a new automatic prompt will begin rolling out for all Twitter users, regardless of platform and device, that activates when a post’s language crosses Twitter’s threshold of “potentially harmful or offensive language.” This follows a number of limited-user tests of the notices beginning in May of last year. Soon, any robo-moderated tweets will be interrupted with a notice asking, “Want to review this before tweeting?”

Earlier tests of this feature, unsurprisingly, had their share of issues. “The algorithms powering the [warning] prompts struggled to capture the nuance in many conversations and often didn’t differentiate between potentially offensive language, sarcasm, and friendly banter,” Twitter’s announcement states. The news post clarifies that Twitter’s systems now account for, among other things, how often two accounts interact with each other—meaning, I’ll likely get a flag for sending curse words and insults to a celebrity I never talk to on Twitter, but I would likely be in the clear sending those same sentences via Twitter to friends or Ars colleagues.
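
To make that interaction-frequency idea concrete, here is a minimal, purely illustrative sketch. Twitter has not published its model, so the flagged-term list, scoring weights and threshold below are assumptions rather than the real system; the only thing the sketch demonstrates is how a prompt could be dampened for accounts that already talk to each other regularly.

# Hypothetical sketch only: the lexicon, weights and threshold are invented.
FLAGGED_TERMS = {"idiot", "trash", "loser"}

def should_prompt(text: str, prior_interactions: int,
                  dampening_per_interaction: float = 0.1,
                  threshold: float = 0.5) -> bool:
    """Return True if a draft tweet should trigger a 'review before tweeting?' prompt."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    hits = len(words & FLAGGED_TERMS)
    if hits == 0:
        return False
    # Base score rises with the number of flagged terms...
    score = min(1.0, 0.4 * hits)
    # ...but is reduced when the two accounts interact often,
    # approximating "friendly banter" between mutuals.
    score -= dampening_per_interaction * min(prior_interactions, 5)
    return score >= threshold

if __name__ == "__main__":
    draft = "you absolute loser, that take is trash"
    print(should_prompt(draft, prior_interactions=0))   # stranger: prompt fires
    print(should_prompt(draft, prior_interactions=20))  # frequent contact: no prompt

The same words score differently depending on the relationship between sender and recipient, which is the behavior Twitter says its updated systems now account for.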

Additionally, Twitter admits that its systems previously needed updates to “account for situations in which language may be reclaimed by underrepresented communities and used in non-harmful ways.” We hope the data points used to make those determinations don’t go so far as to check a Twitter account’s profile photo, especially since troll accounts typically use fake or stolen images. (Twitter has yet to clarify how it makes determinations for these aforementioned “situations.”)

As of press time, Twitter isn’t providing a handy dictionary for users to peruse—or cleverly misspell their favorite insults and curses in order to mask them from Twitter’s auto-moderation tools.

So, two-thirds kept it real, then?

To sell this nag-notice news to users, Twitter pats itself on the back in the form of data, but it’s not entirely convincing.

During the kindness-notice testing phase, Twitter says one-third of users elected to either rephrase their flagged posts or delete them, while anyone who was flagged went on to post, on average, 11 percent fewer “offensive” posts and replies. (Meaning, some users may have become kinder, while others could have become more resolute in their weaponized speech.) That all sounds like a massive majority of users remaining steadfast in their personal quest to tell it like it is.

Twitter’s weirdest data point is that anyone who received a flag was “less likely to receive offensive and harmful replies back.” It’s unclear what point Twitter is trying to make with that data: why should any onus of politeness land on those who receive nasty tweets?

This follows another nag-notice initiative by Twitter, launched in late 2020, to encourage users to “read” an article linked by another Twitter user before “re-tweeting” it. In other words: if you see a juicy headline and slap the RT button, you could unwittingly share something you may not agree with. Yet this change seems like an undersized bandage on a bigger Twitter problem: how the service incentivizes rampant, timely use of the service in a search for likes and interactions, honesty and civility be damned.

And no nag notice will likely fix Twitter’s struggles with how inauthentic actors and trolls continue to game the system and poison the site’s discourse. The biggest example remains an issue found when clicking through to heavily “liked” and replied posts, usually from high-profile or “verified” accounts. Twitter commonly bumps drive-by posts to the top of these threads’ replies, often from accounts with suspicious activity and lack of organic interactions.

Perhaps Twitter could take the lessons from this nag-notice roll-out to heart, particularly about weighting interactions based on a confirmed back-and-forth relationship between accounts. Or the company could get rid of all algorithm-driven weighting of posts, especially the weighting that drives non-followed content into a user’s feed, and go back to the better days of purely chronological content—so that we can more easily shrug our shoulders at the BS.

Data leak makes Peloton’s Horrible, No-Good, Really Bad Day even worse

Peloton is having a rough day. First, the company recalled two treadmill models following the death of a 6-year-old child who was pulled under one of the devices. Now comes word Peloton exposed sensitive user data, even after the company knew about the leak. No wonder the company’s stock price closed down 15 percent on Wednesday.

Peloton provides a line of network-connected stationary bikes and treadmills. The company also offers an online service that allows users to join classes, work with trainers, or do workouts with other users. In October, Peloton told investors it had a community of 3 million members. Members can set accounts to be public so friends can view details such as classes attended and workout stats, or users can choose for profiles to be private.

I know where you worked out last summer

Researchers at security consultancy Pen Test Partners on Wednesday reported that a flaw in Peloton’s online service was making data for all of its users available to anyone anywhere in the world, even when a profile was set to private. All that was required was a little knowledge of the faulty programming interfaces that Peloton uses to transmit data between devices and the company’s servers.

Data exposed included:

  • User IDs
  • Instructor IDs
  • Group memberships
  • Workout stats
  • Gender and age
  • Weight
  • Whether they are in the studio or not

Ars agreed to withhold another piece of personal data exposed because Peloton is still working to secure it.

A blog post Pen Test Partners published on Wednesday said that the APIs required no authentication before providing the information. Company researchers said that they reported the exposure to Peloton in January and promptly received an acknowledgement. Then, Wednesday’s post said, Peloton went silent.
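
To make “no authentication required” concrete, here is a minimal sketch of the kind of probe a researcher might run: request a user-data endpoint without any credentials and check whether personal fields come back. The URL, path and field names are hypothetical placeholders, not Peloton’s actual API.

# Illustrative only: the endpoint and field names are invented stand-ins.
import requests

def is_exposed_without_auth(url: str) -> bool:
    """Return True if the endpoint hands user data to an unauthenticated client."""
    resp = requests.get(url, timeout=10)  # deliberately no auth header or session cookie
    if resp.status_code != 200:
        return False
    try:
        body = resp.json()
    except ValueError:
        return False
    if not isinstance(body, dict):
        return False
    # Personally identifiable fields in an anonymous response indicate a leak.
    sensitive_fields = {"user_id", "gender", "age", "weight", "city"}
    return bool(sensitive_fields & body.keys())

if __name__ == "__main__":
    # Hypothetical endpoint shape, for illustration only.
    print(is_exposed_without_auth("https://api.example.com/users/12345/profile"))

As described below, Peloton’s later, partial fix moved the bar only slightly: the same sort of request made from any logged-in account still returned other members’ private details.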

Slow response, botched fix

Two weeks later, the researchers said, the company silently provided a partial fix. Rather than providing the user data with no authentication required at all, the APIs made the data available only to those who had an account. The change was better than nothing, but it still let anyone who subscribed to the online service obtain private details of any other subscriber.

When Pen Test Partners informed Peloton of the inadequate fix, they say they got no response. Pen Test Partners researcher Ken Munro said he went as far as looking up company executives on LinkedIn. The researchers said the fix came only after TechCrunch reporter Zack Whittaker, who first reported the leak, inquired about it.

“I was pretty pissed by this point, but figured it was worth one last shot before dropping an 0-day on Peloton users,” Munro told me. “I asked Zack W to hit up their press office. That had a miraculous effect – within hours I had an email from their new CISO, who was new in post and had investigated, found their rather weak response and had a plan to fix the bugs.”

A Peloton representative declined to discuss the timeline on the record but did provide the following canned response:

It’s a priority for Peloton to keep our platform secure and we’re always looking to improve our approach and process for working with the external security community. Through our Coordinated Vulnerability Disclosure program, a security researcher informed us that he was able to access our API and see information that’s available on a Peloton profile. We took action and addressed the issues based on his initial submissions, but we were slow to update the researcher about our remediation efforts. Going forward, we will do better to work collaboratively with the security research community and respond more promptly when vulnerabilities are reported. We want to thank Ken Munro for submitting his reports through our CVD program and for being open to working with us to resolve these issues.

The incident is the latest reminder that data stored online is often free for the taking, even when companies say it isn’t. This puts people in a bind. On the one hand, sharing weight, workout stats, and other data can often help users get the most out of training sessions or group workouts. On the other… well, you know.

I generally try to falsify much of the data I provide. Most of the services I use that require a credit card will approve purchases just fine even when I supply a false name, address, and phone number. Not having those details attached to user names or other data can often minimize the sting of a data leak like this one.

Starlink can serve 500,000 users easily, several million “more of a challenge”

Image caption: Screenshot from the Starlink order page, with the street address blotted out.

SpaceX has received more than 500,000 orders for Starlink broadband service, the company said yesterday.

“‘To date, over half a million people have placed an order or put down a deposit for Starlink,’ SpaceX operations engineer Siva Bharadvaj said during the launch webcast of its 26th Starlink mission,” CNBC reported.

SpaceX opened preorders for Starlink satellite service in February and is serving at least 10,000 users in its beta in the US and overseas combined. The preorders required a $99 deposit for service that would be available in the second half of this year. The 500,000 total orders presumably include both US residents and people in other countries; we asked SpaceX for more details and will update this article if we get a response.

A preorder doesn’t guarantee that you’ll get service, and slots are limited in each geographic region because of capacity limits. Still, SpaceX CEO Elon Musk said he expects all of the preorder customers to get service—but said that SpaceX will face a challenge if it gets millions of orders.

“Only limitation is high density of users in urban areas,” Musk tweeted yesterday. “Most likely, all of the initial 500k will receive service. More of a challenge when we get into the several million user range.”

The total cost for each Starlink user is $499 for hardware, $50 for shipping and handling, and $99 for monthly service, plus tax. Preorders are still open on the Starlink website.
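
(Before tax, that pricing works out to $499 + $50 + 12 × $99, or $1,737, for the hardware plus a first year of service.)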

SpaceX prepares for up to 5 million users in US

Despite Musk’s comment, SpaceX has been laying the groundwork to potentially serve up to 5 million subscribers in the US. SpaceX initially obtained a Federal Communications Commission license to deploy up to 1 million user terminals (i.e. satellite dishes) in the US and later asked the FCC to increase the authorized amount to 5 million terminals. The application is still pending.

“SpaceX Services requests this increase in authorized units due to the extraordinary demand for access to the Starlink non-geostationary orbit satellite system,” the company told the FCC in its license-change request on July 31, 2020. At that time, nearly 700,000 people in the US had registered interest on Starlink’s website, but that action didn’t require putting down any money. The 500,000 orders and deposits that Starlink has received even without saying exactly when the service will exit beta is a stronger indication of people’s interest in the satellite broadband system, though this number likely includes non-US residents.

Musk has said that Starlink will be available to “most of Earth” by the end of 2021 and the whole planet by next year. SpaceX is also planning a new version of the “Dishy McFlatface” satellite dish for large vehicles, aircraft, and ships. Musk has said that the original version of the dish “should be fully mobile later this year, so you can move it anywhere or use it on an RV or truck in motion.”
