Privacy not a blocker for “meaningful” research access to platform data, says report – TechCrunch


European lawmakers are eyeing binding transparency requirements for Internet platforms in a Digital Services Act (DSA) due to be drafted by the end of the year. But the question of how to create governance structures that provide regulators and researchers with meaningful access to data so platforms can be held accountable for the content they’re amplifying is a complex one.

Platforms’ own efforts to open up their data troves to outside eyes have been chequered to say the least. Back in 2018, Facebook announced the Social Science One initiative, saying it would provide a select group of academics with access to about a petabyte’s worth of sharing data and metadata. But it took almost two years before researchers got access to any data.

“This was the most frustrating thing I’ve been involved in, in my life,” one of the involved researchers told Protocol earlier this year, after spending some 20 months negotiating with Facebook over exactly what it would release.

Facebook’s political Ad Archive API has similarly frustrated researchers. “Facebook makes it impossible to get a complete picture of all of the ads running on their platform (which is exactly the opposite of what they claim to be doing),” said Mozilla last year, accusing the tech giant of transparency-washing.

Facebook, meanwhile, points to European data protection regulations, and to privacy requirements attached to its business following interventions by the U.S. FTC, to justify its painstakingly slow progress on data access. But critics argue this is just a cynical shield against transparency and accountability. None of these regulations, after all, stopped Facebook from grabbing people’s data in the first place.

In January, Europe’s lead data protection regulator penned a preliminary opinion on data protection and research which warned against such shielding.

“Data protection obligations should not be misappropriated as a means for powerful players to escape transparency and accountability,” wrote EDPS Wojciech Wiewiorówski. “Researchers operating within ethical governance frameworks should therefore be able to access necessary API and other data, with a valid legal basis and subject to the principle of proportionality and appropriate safeguards.”

Nor is Facebook the sole offender here, of course. Google brands itself a ‘privacy champion’ on account of how tight a grip it keeps on access to user data, heavily mediating the data it releases in areas where it claims ‘transparency’. Twitter, meanwhile, spent years disparaging third-party studies that sought to understand how content flows across its platform, arguing that its API didn’t provide full access to all platform data and metadata, so the research couldn’t show the full picture. Another convenient shield against accountability.

More recently, the company has made some encouraging noises to researchers, updating its developer policy to clarify the rules and offering up a COVID-related dataset — though the included tweets remain self-selected. So Twitter’s mediating hand remains on the research tiller.

A new report by AlgorithmWatch seeks to grapple with the knotty problem of platforms evading accountability by mediating data access — suggesting some concrete steps to deliver transparency and bolster research, including by taking inspiration from how access to medical data is mediated, among other discussed governance structures.

The goal: “Meaningful” research access to platform data. (Or, as the report title puts it: Operationalizing Research Access in Platform Governance: What to Learn from Other Industries?)

“We have strict transparency rules to enable accountability and the public good in so many other sectors (food, transportation, consumer goods, finance, etc). We definitely need it for online platforms — especially in COVID-19 times, where we’re even more dependent on them for work, education, social interaction, news and media consumption,” co-author Jef Ausloos tells TechCrunch.

The report, which the authors are aiming at European Commission lawmakers as they ponder how to shape an effective platform governance framework, proposes mandatory data-sharing frameworks with an independent EU institution acting as an intermediary between disclosing corporations and data recipients.

It’s not the first time an online regulator has been mooted, of course — but the entity being suggested here is more tightly configured in terms of purpose than some of the other Internet overseers being proposed in Europe.

“Such an institution would maintain relevant access infrastructures including virtual secure operating environments, public databases, websites and forums. It would also play an important role in verifying and pre-processing corporate data in order to ensure it is suitable for disclosure,” they write in a report summary.

Discussing the approach further, Ausloos argues it’s important to move away from “binary thinking” to break the current ‘data access’ trust deadlock. “Rather than this binary thinking of disclosure vs opaqueness/obfuscation, we need a more nuanced and layered approach with varying degrees of data access/transparency,” he says. “Such a layered approach can hinge on types of actors requesting data, and their purposes.”

A market research purpose might only get access to very high-level data, he suggests, whereas medical research by academic institutions could be given more granular access — subject, of course, to strict requirements (such as a research plan, ethics board approval and so on).

“An independent institution intermediating might be vital in order to facilitate this and generate the necessary trust. We think it is vital that that regulator’s mandate is detached from specific policy agendas,” says Ausloos. “It should be focused on being a transparency/disclosure facilitator — creating the necessary technical and legal environment for data exchange. This can then be used by media/competition/data protection/etc authorities for their potential enforcement actions.”

Ausloos says many discussions on setting up an independent regulator for online platforms have proposed too many mandates or competencies — making it impossible to achieve political consensus. A leaner entity with a narrow transparency/disclosure remit, the theory goes, should be able to cut through noisy objections.

The infamous example of Cambridge Analytica certainly looms large over the ‘data for research’ space. The disgraced data company paid a Cambridge University academic to use an app to harvest and process Facebook user data for political ad targeting. And Facebook has thought nothing of turning this massive platform data misuse scandal into a stick to beat back regulatory proposals aiming to crack open its data troves.

But Cambridge Analytica was a direct consequence of a lack of transparency, accountability and platform oversight. It was also, of course, a massive ethical failure — given that consent for political targeting was not sought from people whose data was acquired. So it doesn’t seem a good argument against regulating access to platform data. On the contrary.

With such ‘blunt instrument’ tech talking points being lobbed into the governance debate by self-interested platform giants, the AlgorithmWatch report brings both welcome nuance and solid suggestions on how to create effective governance structures for modern data giants.

On the layered access point, the report suggests the most granular access to platform data would be the most highly controlled, along the lines of a medical data model. “Granular access can also only be enabled within a closed virtual environment, controlled by an independent body — as is currently done by Findata [Finland’s medical data institution],” notes Ausloos.

Another governance structure discussed in the report — as a case study from which to draw learnings on how to incentivize transparency and thereby enable accountability — is the European Pollutant Release and Transfer Register (E-PRTR). This regulates pollutant emissions reporting across the EU, and results in emissions data being freely available to the public via a dedicated web-platform and as a standalone dataset.

“Credibility is achieved by assuring that the reported data is authentic, transparent and reliable and comparable, because of consistent reporting. Operators are advised to use the best available reporting techniques to achieve these standards of completeness, consistency and credibility,” the report says on the E-PRTR.

“Through this form of transparency, the E-PRTR aims to impose accountability on operators of industrial facilities in Europe towards to the public, NGOs, scientists, politicians, governments and supervisory authorities.”

While EU lawmakers have signalled an intent to place legally binding transparency requirements on platforms — at least in some less contentious areas, such as illegal hate speech, as a means of obtaining accountability on some specific content problems — they have simultaneously set out a sweeping plan to fire up Europe’s digital economy by boosting the reuse of (non-personal) data.

Leveraging industrial data to support R&D and innovation is a key plank of the Commission’s tech-fuelled policy priorities for the next five+ years, as part of an ambitious digital transformation agenda.

This suggests that any regional move to open up platform data is likely to go beyond accountability — given EU lawmakers are pushing for the broader goal of creating a foundational digital support structure to enable research through data reuse. So if privacy-respecting data sharing frameworks can be baked in, a platform governance structure that’s designed to enable regulated data exchange almost by default starts to look very possible within the European context.

“Enabling accountability is important, which we tackle in the pollution case study; but enabling research is at least as important,” argues Ausloos, who does postdoc research at the University of Amsterdam’s Institute for Information Law. “Especially considering these platforms constitute the infrastructure of modern society, we need data disclosure to understand society.”

“When we think about what transparency measures should look like for the DSA we don’t need to reinvent the wheel,” adds Mackenzie Nelson, project lead for AlgorithmWatch’s Governing Platforms Project, in a statement. “The report provides concrete recommendations for how the Commission can design frameworks that safeguard user privacy while still enabling critical research access to dominant platforms’ data.”

You can read the full report here.


US government says North Korean hackers are targeting American healthcare organizations with ransomware – TechCrunch


The FBI, CISA, and the U.S. Treasury Department are warning that North Korean state-sponsored hackers are using ransomware to target healthcare and public health sector organizations across the United States.

In a joint advisory published Wednesday, the U.S. government agencies said they had observed North Korean-backed hackers deploying Maui ransomware since at least May 2021 to encrypt servers responsible for healthcare services, including electronic health records, medical imaging, and entire intranets.

“The FBI assesses North Korean state-sponsored cyber actors have deployed Maui ransomware against Healthcare and Public Health Sector organizations,” the advisory reads. “The North Korean state-sponsored cyber actors likely assume healthcare organizations are willing to pay ransoms because these organizations provide services that are critical to human life and health. Because of this assumption, the FBI, CISA, and Treasury assess North Korean state-sponsored actors are likely to continue targeting [healthcare] organizations.”

The advisory notes that in many of the incidents observed and responded to by the FBI, the Maui ransomware caused disruption to healthcare services “for prolonged periods.”

Maui was first identified in early April 2022 by Stairwell, a threat-hunting startup that aims to help organizations determine if they have been compromised. In an analysis of the ransomware, Stairwell principal reverse engineer Silas Cutler notes that Maui lacks many of the features commonly seen in tooling from ransomware-as-a-service (RaaS) providers, such as an embedded ransom note or an automated means of transmitting encryption keys to attackers. Instead, Stairwell concludes that Maui is likely deployed manually across victims’ networks, with remote operators targeting specific files they want to encrypt.

North Korea has long used cryptocurrency-stealing operations to fund its nuclear weapons program. In an email, John Hultquist, vice president of Mandiant Intelligence, said that as a result “ransomware is a no-brainer” for the North Korean regime.

“Ransomware attacks against healthcare are an interesting development, in light of the focus these actors have made on this sector since the emergence of COVID-19. It is not unusual for an actor to monetize access which may have been initially garnered as part of a cyber espionage campaign,” said Hultquist. “We have noted recently that North Korean actors have shifted focus away from healthcare targets to other traditional diplomatic and military organizations. Unfortunately, healthcare organizations are also extraordinarily vulnerable to extortion of this type because of the serious consequences of a disruption,” he added.

The advisory, which also includes indicators of compromise (IOCs) and information on the tactics, techniques and procedures (TTPs) employed in these attacks to help network defenders, urges organizations in the healthcare sector to strengthen their defenses by limiting access to data, turning off network device management interfaces, and using monitoring tools to detect whether Internet of Things devices have been compromised.

“The FBI, along with our federal partners, remains vigilant in the fight against North Korea’s malicious cyber threats to our healthcare sector,” said FBI Cyber Division assistant director Bryan Vorndran. “We are committed to sharing information and mitigation tactics with our private sector partners to assist them in shoring up their defenses and protecting their systems.”

The U.S. government’s latest warning follows a spate of high-profile cyberattacks targeting healthcare organizations: University Medical Center Southern Nevada was hit by a ransomware attack in August 2021 that compromised files containing protected health information and personally identifiable information, and Eskenazi Health said in October that cybercriminals had access to its network for almost three months. Last month, Kaiser Permanente confirmed that a breach of an employee’s email account led to the theft of 70,000 patient records.


Hotel giant Marriott confirms yet another data breach – TechCrunch


Hotel group Marriott International has confirmed another data breach, with hackers claiming to have stolen 20 gigabytes of sensitive data including guests’ credit card information.

The incident, first reported by Databreaches.net on Tuesday, is said to have happened in June, when an unnamed hacking group claims it used social engineering to trick an employee at a Marriott hotel in Maryland into giving it access to the employee’s computer.

“Marriott International is aware of a threat actor who used social engineering to trick one associate at a single Marriott hotel into providing access to the associate’s computer,” Marriott spokesperson Melissa Froehlich Flood told TechCrunch in a statement. “The threat actor did not gain access to Marriott’s core network.”

Marriott said it had identified, and was investigating, the incident before the threat actor contacted the company in an extortion attempt; the company said it did not pay.

The group claiming responsibility for the attack says the stolen data includes guests’ credit card information and confidential information about both guests and employees. Samples of the data provided to Databreaches.net purport to show reservation logs for airline crew members from January 2022, names and other details of guests, and credit card information used to make bookings.

However, Marriott told TechCrunch that its investigation determined that the data accessed “primarily contained non-sensitive internal business files regarding the operation of the property.”

The company said that it is preparing to notify 300-400 individuals regarding the incident, and has already notified relevant law enforcement agencies.

This isn’t the first time Marriott has suffered a significant data breach. Hackers breached the hotel chain in 2014, accessing almost 340 million guest records worldwide – an incident that went undetected until September 2018 and led to an £18.4 million ($24M) fine from the U.K.’s Information Commissioner’s Office. In January 2020, Marriott was hacked again in a separate incident that affected around 5.2 million guests.

TechCrunch asked Marriott what cybersecurity protections it has in place to prevent such incidents from happening, but the company declined to answer.


Rivian says it’s on track to deliver 25,000 vehicles this year – TechCrunch


Rivian said Wednesday the company produced 4,401 vehicles at its manufacturing facility in Normal, Illinois, and delivered 4,467 vehicles for the quarter ended June 30.

“These figures remain in line with the company’s expectations, and it believes it is on track to deliver on the 25,000 annual production guidance previously provided,” Rivian said in a statement.

In the first quarter of 2022, Rivian produced 2,553 vehicles and delivered 1,227 vehicles.

The production figures include a mix of the Rivian R1T pickup truck, R1S SUV and the commercial vans it is making for Amazon.
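For a sense of scale, here is a quick back-of-the-envelope calculation based only on the figures above (a reader-side illustration, not analysis from Rivian or part of its statement):

```python
# Back-of-the-envelope check against Rivian's 25,000-vehicle guidance,
# using only the production figures reported above.
q1_produced = 2_553       # vehicles produced in Q1 2022
q2_produced = 4_401       # vehicles produced in the quarter ended June 30
annual_guidance = 25_000  # Rivian's 2022 production guidance

first_half_total = q1_produced + q2_produced             # 6,954 vehicles
second_half_needed = annual_guidance - first_half_total  # 18,046 vehicles

print(f"H1 2022 production: {first_half_total:,}")
print(f"Still needed in H2 2022 to meet guidance: {second_half_needed:,}")
```

In other words, hitting the guidance implies roughly tripling first-half output in the second half of the year.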

Developing...
