Twitter’s manipulated media policy will remove harmful tweets & voter suppression, label others

Twitter today is announcing the official version of its “deepfake” and manipulated media policy, which largely involves labeling tweets and warning users of manipulated, deceptively altered or fabricated media — not, in most cases, removing them. Tweets containing manipulated or synthetic media will only be removed if they’re likely to cause harm, the company says.

However, Twitter’s definition of “harm” goes beyond physical harm, like threats to a person’s or group’s physical safety or the risk of mass violence or civil unrest. It also covers threats to a person’s or group’s privacy, and to their ability to freely express themselves or participate in civic events.

That means the policy covers things like stalking, unwanted or obsessive attention and targeted content containing tropes, epithets or material intended to silence someone. And notably, given the impending U.S. presidential election, it also includes voter suppression or intimidation.

An initial draft of Twitter’s policy was first announced in November. At the time, Twitter said it would place a notice next to tweets sharing synthetic and manipulated media, warn users before they shared those tweets and include informational links explaining why the media was believed to be manipulated. This, essentially, is now confirmed as the official policy but is spelled out in more detail.

Twitter says it collected user feedback ahead of crafting the new policy using the hashtag #TwitterPolicyFeedback and gathered more than 6,500 responses as a result. The company prides itself on engaging its community when making policy decisions, but given Twitter’s slow-to-flat user growth over the years, it may want to try consulting with people who have so far refused to join Twitter. This would give Twitter a better understanding of why so many have opted out and how that intersects with its policy decisions.

The company also says it consulted with a global group of civil society and academic experts, such as Witness, the U.K.-based Reuters Institute and researchers at New York University.

Based on feedback, Twitter found that a majority of users (70%) wanted Twitter to take action on misleading and altered media, but only 55% wanted all media of this sort removed. Dissenters, as expected, cited concerns over free expression. Most users (90%) only wanted manipulated media considered harmful to be removed. A majority (75+%) also wanted Twitter to take further action on the accounts sharing this sort of media.

Unlike Facebook’s deepfake policy, which ignores disingenuous doctoring like cuts and splices to videos and out-of-context clips, Twitter’s policy isn’t limited to a specific technology, such as AI-enabled deepfakes. It’s much broader.

“Things like selected editing or cropping or slowing down or overdubbing, or manipulation of subtitles would all be forms of manipulated media that we would consider under this policy,” confirmed Yoel Roth, head of site integrity at Twitter.

“Our goal in making these assessments is to understand whether someone on Twitter who’s just scrolling through their timeline has enough information to understand whether the media being shared in a tweet is or isn’t what it claims to be,” he explained.

The policy uses three tests to decide how Twitter will take action on manipulated media. It first confirms the media itself is synthetic or manipulated. It then assesses whether the media is being shared in a deceptive manner. And finally, it evaluates the potential for harm.

Media is considered deceptive if it could confuse others or lead to misunderstandings, or if it tries to deceive people about its origin, like media that claims to depict reality but does not.

This is where the policy gets a little messy, as Twitter will have to examine the broader context of the media, including not only the tweet’s text, but also the media’s metadata, the Twitter user’s profile information (including any websites linked in the profile) and websites linked in the tweet itself. This sort of analysis can take time and isn’t easily automated.

If the media is also determined likely to cause serious harm, as described above, it will be removed.

Twitter, though, has left itself a lot of wiggle room in crafting the policy, using words like “may” and “likely” to indicate its course of action in each scenario.

For example, manipulated media “may be” labeled, and manipulated and deceptive content is “likely to be” labeled. Manipulated, deceptive and harmful content is “very likely” to be removed. This sort of wording gives Twitter leeway to make policy exceptions, without actually breaking policy as it would if it used stronger language like “will be removed” or “will be labeled.”
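That hedged wording maps onto a simple decision table. Here is a minimal Python sketch, with hypothetical names throughout (the assess function and its outcome strings are illustrative, not Twitter’s actual implementation):

def assess(is_manipulated: bool, is_deceptive: bool, is_harmful: bool) -> str:
    # Hypothetical sketch of the decision table; labels mirror Twitter's hedged wording.
    if not is_manipulated:
        return "no action"  # the policy only applies to synthetic/manipulated media
    if is_deceptive and is_harmful:
        return "very likely to be removed"
    if is_deceptive:
        return "likely to be labeled"
    return "may be labeled"

The sketch leaves out the harmful-but-not-deceptive case, which Twitter’s examples don’t spell out; under the harm criteria above, such media would presumably still be removed.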

That said, Twitter’s manipulated media policy doesn’t exist in a vacuum. Some of the worst types of manipulated media, like non-consensual nudity, were already banned by the Twitter Rules. The new policy, then, isn’t the only thing that will be considered when Twitter makes a decision.

Today, Twitter is also detailing how manipulated media will be labeled. Where the media isn’t removed because it doesn’t “cause harm,” Twitter will add a warning label to the tweet, along with a link to a landing page that offers additional explanation and context.

A fact-checking component will also be part of this system, led by Twitter’s curation team. For misleading tweets, Twitter aims to present facts from news organizations, experts and others directly alongside the misleading content.

Twitter will also show a warning label to people before they retweet or like the tweet, may reduce the visibility of a tweet and may prevent it from being recommended.

One drawback to Twitter’s publish-in-public platform is that tweets can go viral and spread very quickly, while Twitter’s ability to enforce its policy can lag behind. Twitter isn’t proactively scouring its network for misinformation in most cases — it’s relying on its users reporting tweets for review.

And that can take time. Twitter has been criticized over the years for its failures to respond to harassment and abuse, despite policies to the contrary, and its struggle to remove bad actors. In other words, Twitter’s intentions with regard to manipulated media may be spelled out in this new policy, but Twitter’s real-world actions may still be found lacking. Time will tell.

“Twitter’s mission is to serve the public conversation. As part of that, we want to encourage healthy participation in that conversation. Things that distort or distract from what’s happening threaten the integrity of information on Twitter,” said Twitter VP of Trust & Safety, Del Harvey. “Our goal is really to provide people with more context around certain types of media they come across on Twitter and to ensure they’re able to make informed decisions around what they’re seeing,” she added.

Facebook will pay $650 million to settle class action suit centered on Illinois privacy law

Facebook was ordered to pay $650 million Friday for running afoul of an Illinois law designed to protect the state’s residents from invasive privacy practices.

That law, the Biometric Information Privacy Act (BIPA), is a powerful state measure that’s tripped up tech companies in recent years. The suit against Facebook was first filed in 2015, alleging that Facebook’s practice of tagging people in photos using facial recognition without their consent violated state law.

Indeed, 1.6 million Illinois residents will receive at least $345 under the final settlement ruling in California federal court. The final number is $100 million higher than the $550 million Facebook proposed in 2020, which a judge deemed inadequate. Facebook disabled the automatic facial recognition tagging features in 2019, making it opt-in instead and addressing some of the privacy criticisms echoed by the Illinois class action suit.
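For scale, 1.6 million claims at $345 apiece works out to roughly $552 million; presumably the balance of the $650 million fund covers attorneys’ fees and administration costs.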

A cluster of lawsuits accused Microsoft, Google and Amazon of breaking the same law last year after Illinois residents’ faces were used to train their facial recognition systems without explicit consent.

The Illinois privacy law has tangled up some of tech’s giants, but BIPA has even more potential to impact smaller companies with questionable privacy practices. The controversial facial recognition software company Clearview AI now faces its own BIPA-based class action lawsuit in the state after the company failed to dodge the suit by pushing it out of state courts.

A $650 million settlement would be enough to crush any normal company, though Facebook can brush it off much like it did with the FTC’s record-setting $5 billion penalty in 2019. But the Illinois law isn’t without teeth. For Clearview, it was enough to make the company pull out of business in the state altogether.

The law can’t punish a behemoth like Facebook in the same way, but it is one piece in a regulatory puzzle that poses an increasing threat to the way tech’s data brokers have done business for years. With regulators and lawmakers at the federal and state levels proposing aggressive measures to rein in tech, the landmark Illinois law provides a compelling framework that other states could copy and paste. And if big tech thinks navigating federal oversight will be a nightmare, a patchwork of aggressive state laws governing how tech companies do business on a state-by-state basis is an alternate regulatory future that could prove even less palatable.

Twitter rolls out vaccine misinformation warning labels and a strike-based system for violations

Twitter announced Monday that it would begin injecting new labels into users’ timelines to push back against misinformation that could disrupt the rollout of COVID-19 vaccines. The labels, which will also appear as pop-up messages in the retweet window, are the company’s latest product experiment designed to shape behavior on the platform for the better.

The company will attach notices to tweeted misinformation warning users that the content “may be misleading” and linking out to vetted public health information. These initial vaccine misinformation sweeps, which begin today, will be conducted by human moderators at Twitter and not automated moderation systems.

Twitter says the goal is to use these initial determinations to train its AI systems so that, down the road, a blend of human and automated efforts will scan the site for vaccine misinformation. The latest misinformation measure will target English-language tweets before expanding to other languages.

Twitter also introduced a new strike system for violations of its pandemic-related rules. The new system is modeled after a set of consequences it implemented for voter suppression and voting-related misinformation. Within that framework, a user with two or three “strikes” faces a 12-hour account lockout. With four violations, they lose account access for one week, with permanent suspension looming after five strikes.
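The escalation ladder is easy to state as a rule. Here’s a minimal Python sketch (the helper name and the first-strike behavior are assumptions, since the announcement only details strikes two through five):

def strike_consequence(strikes: int) -> str:
    # Hypothetical sketch of the strike ladder; labels are illustrative.
    if strikes <= 1:
        return "warning label only"  # assumed: no account-level action yet
    if strikes <= 3:
        return "12-hour account lockout"  # two or three strikes
    if strikes == 4:
        return "7-day account lockout"  # four strikes
    return "permanent suspension"  # five or more strikes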

Twitter introduced its first pandemic-specific policies a year ago, banning tweets promoting false treatment or prevention claims along with any content that could put people at higher risk of spreading COVID-19. In December, Twitter added new rules focused on popular vaccine conspiracy theories and announced that warning labels were on the way.

Facebook launches BARS, a TikTok-like app for creating and sharing raps

Facebook’s internal R&D group, NPE Team, is today launching its next experimental app, called BARS. The app makes it possible for rappers to create and share their raps using professionally created beats, and is the NPE Team’s second launch in the music space following its recent public debut of music video app Collab.

While Collab focuses on making music with others online, BARS is instead aimed at would-be rappers looking to create and share their own videos. In the app, users will select from any of the hundreds of professionally created beats, then write their own lyrics and record a video. BARS can also automatically suggest rhymes as you’re writing out lyrics, and offers different audio and visual filters to accompany videos as well as an autotune feature.
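Facebook hasn’t said how the rhyme suggestions work. A naive approach would rank dictionary words by how much of their spelled ending they share with the word being typed; here’s a toy Python sketch under that assumption (the function and wordlist are illustrative, not BARS’ actual method):

def suggest_rhymes(word, vocabulary, min_suffix=3):
    # Toy rhyme suggester: rank words by length of shared spelled ending.
    # A real system would match on pronunciation, not spelling.
    word = word.lower()
    scored = []
    for candidate in vocabulary:
        cand = candidate.lower()
        if cand == word:
            continue
        overlap = 0
        for a, b in zip(reversed(word), reversed(cand)):
            if a != b:
                break
            overlap += 1
        if overlap >= min_suffix:
            scored.append((overlap, candidate))
    return [w for _, w in sorted(scored, reverse=True)]

Calling suggest_rhymes("flow", ["glow", "slow", "grow"]) returns ['slow', 'glow']; “grow” is missed because spelling-based matching is a crude stand-in for phonetics.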

There’s also a “Challenge mode” available, where you can freestyle with auto-suggested word cues, which has more of a game-like element to it. The experience is designed to be accommodating to people who just want to have fun with rap, similar to something like Smule’s AutoRap, perhaps, which also offers beats for users’ own recordings.

The videos themselves can be up to 60 seconds in length and can then be saved to your Camera Roll or shared out on other social media platforms.

As with NPE’s Collab, the pandemic played a role in BARS’ creation. The pandemic shut down access to live music and places where rappers could experiment, explains NPE Team member DJ Iyler, who also ghostwrites hip-hop songs under the alias “D-Lucks.”

“I know access to high-priced recording studios and production equipment can be limited for aspiring rappers. On top of that, the global pandemic shut down live performances where we often create and share our work,” he says.

BARS was built with a team of aspiring rappers, and is launching today as a closed beta.

Despite its focus on music, and rap in particular, the new app can in a way be seen as yet another attempt by Facebook to develop a TikTok competitor, at least in this content category.

TikTok has already become a launchpad for up-and-coming musicians, including rappers; it has helped rappers test their verses, is favored by many beatmakers and is even influencing what sort of music is being made. Diss tracks have also become a hugely popular format on TikTok, mainly as a way for influencers to stir up drama and chase views. In other words, there’s already a large social community around rap on TikTok, and Facebook wants to shift some of that attention back its way.

The app also resembles TikTok in terms of its user interface. It’s a two-tabbed vertical video interface, with “Featured” and “New” feeds in place of TikTok’s “Following” and “For You.” And BARS places the engagement buttons on the lower-right corner of the screen with the creator name on the lower-left, just like TikTok.

However, in place of hearts for favoriting videos, your taps on a video give it “Fire,” tracked by a fire emoji. You can tap “Fire” as many times as you want, too. But because there’s (annoyingly) no tap-to-pause feature, you may accidentally “fire” a video when you’re looking for a way to stop its playback. To advance in BARS, you swipe vertically, but the interface lacks an obvious “Follow” button to track your favorite creators; it’s hidden under the top-right three-dot menu.

The app is seeded with content from NPE Team members, including other aspiring rappers, former music producers and publishers.

Currently, the BARS beta is live on the iOS App Store in the U.S., and is opening its waitlist. Facebook says it will open access to BARS invites in batches, starting in the U.S. Updates and news about invites, meanwhile, will be announced on Instagram.

Facebook’s recent launches from its experimental apps division include Collab and collage maker E.gg, among others. Not all apps stick around. If they fail to gain traction, Facebook shuts them down — as it did last year with the Pinterest-like video app Hobbi.
