Threat of inauguration violence casts a long shadow over social media – TechCrunch

As the U.S. heads into one of the most perilous phases of American democracy since the Civil War, social media companies are scrambling to shore up their patchwork defenses for a moment they appear to have believed would never come.

Most major platforms pulled the emergency brake last week, deplatforming the president of the United States and enforcing suddenly robust rules against conspiracies, violent threats and undercurrents of armed insurrection, all of which had proliferated on those services for years. Within a week’s time, Amazon, Facebook, Twitter, Apple and Google had all made historic decisions in the name of national stability — and appearances. Snapchat, TikTok, Reddit and even Pinterest took their own actions to prevent a terror plot from being hatched on their platforms.

Now we’re in the waiting phase. More than a week after a deadly pro-Trump riot overran the iconic seat of the U.S. legislature, the internet still feels like it’s holding its breath, with a now heavily fortified inauguration ceremony looming ahead.

What’s still out there

On the largest social network of all, images hyping follow-up events were still circulating as of midweek. One digital Facebook flyer promoted an “armed march on Capitol Hill and all state Capitols,” pushing the dangerous and false conspiracy theory that the 2020 presidential election was stolen.

Facebook says that it’s working to identify flyers calling for “Stop the Steal”-adjacent events using digital fingerprinting, the same process it uses to remove terrorist content from ISIS and Al Qaeda. The company noted that it has seen flyers calling for events on January 17 across the country, January 18 in Virginia and inauguration day in D.C.
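
Digital fingerprinting of this kind generally works by hashing known bad images and checking new uploads against the stored hashes. What follows is a minimal sketch of the general technique using perceptual hashing; it assumes the open-source Pillow and imagehash Python libraries, the file names are hypothetical, and Facebook’s internal pipeline is not public.

```python
# Minimal sketch of image "digital fingerprinting" via perceptual hashing.
# Assumes the open-source Pillow and imagehash libraries; the file names
# are hypothetical, and Facebook's internal pipeline is not public.
from PIL import Image
import imagehash

# Hash a known flyer once up front; only the compact hash needs to be stored.
known_flyer_hash = imagehash.phash(Image.open("known_flyer.png"))

def matches_known_flyer(path: str, max_distance: int = 5) -> bool:
    """Return True if an uploaded image is visually close to the known flyer.

    Subtracting two ImageHash values gives their Hamming distance. Small
    distances survive re-compression, resizing and minor edits that would
    defeat an exact (cryptographic) hash.
    """
    return (imagehash.phash(Image.open(path)) - known_flyer_hash) <= max_distance

print(matches_known_flyer("new_upload.jpg"))
```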

At least some of Facebook’s new efforts are working: one popular flyer TechCrunch observed on the platform was removed from some users’ feeds this week. A number of “Stop the Steal” groups we’d observed over the last month also unceremoniously blinked offline early this week following more forceful action from the company. Still, given the writing on the wall, many groups had plenty of time to tweak their names by a few words or point followers elsewhere to organize.

With only days until the presidential transition, acronym-heavy screeds promoting QAnon, an increasingly mainstream collection of outrageous pro-Trump government conspiracy theories, also remain easy to find. On one page with 2,500 followers, a QAnon believer pushed the debunked claim that anti-fascists executed the attack on the Capitol, claiming “January 6 was a trap.”

(Photo: a QAnon sign. Win McNamee/Getty Images)

In a different QAnon group, an ominous post from an admin issued Congress a warning: “We have found a way to end this travesty! YOUR DAYS ARE NUMBERED!” The elaborate conspiracy’s followers were well represented at the deadly riot at the Capitol, as the many giant “Q” signs and esoteric t-shirt slogans made clear.

In a statement to TechCrunch about the state of extremism on the platform, Facebook says it is coordinating with terrorism experts as well as law enforcement “to prevent direct threats to public safety.” The company also noted that it works with partners to stay aware of violent content taking root on other platforms.

Facebook’s efforts are late and uneven, but they also go further than anything the company has done before. Measures from the big social networks, coupled with the absence of far-right social networks like Parler and Gab, have left Trump’s most ardent supporters once again swearing off Silicon Valley and fanning out in search of an alternative.

Social media migration

Private messaging apps Telegram and Signal are both seeing an influx of users this week, but they offer something quite different from a Facebook or Twitter-like experience. Some expert social network observers see the recent migration as seasonal rather than permanent.

“The spike in usage of messaging platforms like Telegram and Signal will be temporary,” Yonder CEO Jonathon Morgan told TechCrunch. “Most users will either settle on platforms with a social experience, like Gab, MeWe, or Parler, if it returns, or will migrate back to Twitter and Facebook.”

Yonder uses AI to track how social groups connect online and what they talk about — violent conspiracies included. Morgan believes that propaganda-spreading “performative internet warriors” make a lot of noise online, but a performance doesn’t work without an audience. Others may quietly pose a more serious threat.

“The different types of engagement we saw during the assault on the Capitol mirror how these groups have fragmented online,” Morgan said. “We saw a large mob who was there to cheer on the extremists but didn’t enter the Capitol, performative internet warriors taking selfies, and paramilitaries carrying flex cuffs (mislabeled as ‘zip ties’ in a lot of social conversation), presumably ready to take hostages.

“Most users (the mob) will be back on Parler if it returns, and in the meantime, they are moving to other apps that mimic the social experience of Twitter and Facebook, like MeWe.”

Still, Morgan says that research shows “deplatforming” extremists and conspiracy-spreaders is an effective strategy and efforts by “tech companies from Airbnb to AWS” will reduce the chances of violence in the coming days.

Cleaning up platforms can help turn the masses away from dangerous views, he explained, but the same efforts might further galvanize people with an existing intense commitment to those beliefs. With the winds shifting, already heterogeneous groups will scatter as well, making their efforts more desperate and less predictable.

Deplatforming works, with risks

Jonathan Greenblatt, CEO of the Anti-Defamation League, told TechCrunch that social media companies still need to do much more to prepare for inauguration week. “We saw platforms fall short in their response to the Capitol insurrection,” Greenblatt said.

He cautioned that while many changes are necessary, we should be ready for online extremism to evolve into a more fractured ecosystem. Echo chambers may become smaller and louder, even as the threat of “large scale” coordinated action diminishes.

“The fracturing has also likely pushed people to start communicating with each other via encrypted apps and other private means, strengthening the connections between those in the chat and providing a space where people feel safe openly expressing violent thoughts, organizing future events, and potentially plotting future violence,” Greenblatt said.

By their own standards, social media companies have taken extraordinary measures in the U.S. in the last two weeks. But social networks have a long history of facilitating violence abroad, even as attention turns to political violence in America.

Greenblatt repeated calls for companies to hire more human moderators, a suggestion often made by experts focused on extremism. He believes social media could still take other precautions for inauguration week, like introducing a delay into livestreams or disabling them altogether, bolstering rapid response teams and suspending more accounts temporarily rather than focusing on content takedowns and handing out “strikes.”

“Platforms have provided little-to-nothing in the way of transparency about learnings from last week’s violent attack in the Capitol,” Greenblatt said.

“We know the bare minimum of what they ought to be doing and what they are capable of doing. If these platforms actually provided transparency and insights, we could offer additional—and potentially significantly stronger—suggestions.”

Facebook launches a series of tests to inform future changes to its News Feed algorithms – TechCrunch

Facebook may be reconfiguring its News Feed algorithms. After being grilled by lawmakers about the role that Facebook played in the attack on the U.S. Capitol, the company announced this morning it will be rolling out a series of News Feed ranking tests that will ask users to provide feedback about the posts they’re seeing, which will later be incorporated into Facebook’s News Feed ranking process. Specifically, Facebook will be looking to learn which content people find inspirational, what content they want to see less of (like politics), and what other topics they’re generally interested in, among other things.

This will be done through a series of global tests, one of which will involve a survey directly beneath the post itself that asks, “How much were you inspired by this post?,” with the goal of showing people more posts of an inspirational nature closer to the top of the News Feed.

Another test will work to tailor the Facebook News Feed experience to reflect what people want to see. Today, Facebook prioritizes showing you content from friends, Groups and Pages you’ve chosen to follow, but it algorithmically decides whose posts to show you and when, based on a variety of signals. These include both implicit and explicit signals — like how much you engage with that person’s content (or Page or Group) on a regular basis, as well as whether you’ve added them as a “Close Friend” or “Favorite,” indicating you want to see more of their content than others’, for example.
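
To make the distinction between implicit and explicit signals concrete, here is a purely illustrative toy scorer; Facebook’s real ranking model is proprietary and far more complex, and every signal name, weight and number below is invented.

```python
# Purely illustrative: Facebook's real ranking model is proprietary. This toy
# scorer only shows how an implicit signal (observed engagement) and an
# explicit one (a "Close Friend"/"Favorite" flag) might combine into a rank.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement_rate: float  # implicit: how often you interact with this author
    is_favorite: bool       # explicit: you marked the author a "Favorite"
    age_hours: float

def score(post: Post) -> float:
    s = post.engagement_rate           # start from the implicit signal
    if post.is_favorite:
        s *= 2.0                       # an explicit choice gets a strong boost
    return s / (1.0 + post.age_hours)  # decay: fresher posts rank higher

feed = [Post("alice", 0.8, False, 2.0), Post("bob", 0.3, True, 1.0)]
feed.sort(key=score, reverse=True)  # bob ranks first: 0.30 beats alice's 0.27
```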

However, just because you’re close to someone in real life, that doesn’t mean you like what they post to Facebook. This has driven families and friends apart in recent years, as people discovered by way of social media how people they thought they knew really viewed the world. It’s been a painful reckoning for some. Facebook hasn’t managed to fix the problem, either. Today, users still scroll News Feeds that reinforce their views, no matter how problematic. And with the growing tide of misinformation, the News Feed has gone from merely placing users in a filter bubble to presenting a full alternate reality for some, often populated by conspiracy theories.

Facebook’s third test doesn’t necessarily tackle this problem head-on, but instead looks to gain feedback about what users want to see as a whole. Facebook says that it will begin asking people whether they want to see more or fewer posts on certain topics, like cooking, sports or politics. Based on users’ collective feedback, Facebook will adjust its algorithms to show more content people say they’re interested in and fewer posts about topics they don’t want to see.
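
As a hypothetical illustration of that last step, aggregated survey answers could be converted into per-topic multipliers that demote or boost posts at ranking time; the topics, vote counts and formula below are invented, not Facebook’s.

```python
# Hypothetical sketch: turning aggregated "more"/"fewer" survey answers into
# per-topic multipliers applied at ranking time. All numbers are invented.
survey = {
    "cooking":  {"more": 620, "fewer": 180},
    "politics": {"more": 210, "fewer": 890},
}

def topic_multiplier(topic: str) -> float:
    votes = survey.get(topic)
    if not votes:
        return 1.0  # no feedback for this topic: leave its ranking unchanged
    share_more = votes["more"] / (votes["more"] + votes["fewer"])
    return 0.5 + share_more  # maps into a 0.5x (demote) .. 1.5x (boost) range

print(topic_multiplier("politics"))  # ~0.69: political posts get demoted
print(topic_multiplier("cooking"))   # ~1.28: cooking posts get boosted
```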

The area of politics, specifically, has been an issue for Facebook. The social network has for years been charged with helping to fan the flames of political discourse, polarizing and radicalizing users through its algorithms, distributing misinformation at scale, and encouraging an ecosystem of divisive clickbait as publishers sought engagement instead of fairness and balance when reporting the news. In fact, there are now entirely biased and subjective outlets posing as news sources that benefit from algorithms like Facebook’s.

Shortly after the Capitol attack, Facebook announced it would try clamping down on political content in the News Feed for a small percentage of people in the U.S., Canada, Brazil and Indonesia, for a period of time during tests.

Now, the company says it will work to better understand what content is linked to negative News Feed experiences, including political content. In this case, Facebook may ask users, on posts with a lot of negative reactions, what sort of content they want to see less of.

It will also more prominently feature the option to hide posts you find “irrelevant, problematic or irritating.” Although this feature existed before, users in the test group will be able to tap an X in the upper-right corner of a post to hide it from the News Feed and see fewer posts like it in the future, for a more personalized experience.

It’s not clear that allowing users to pick and choose their topics is the best way to solve the larger problems with negative posts, divisive content or misinformation, though this test is less about the latter and more about making the News Feed “feel” more positive.

As the data is collected from the tests, Facebook will incorporate the learnings into its News Feed ranking algorithms. But it’s not clear to what extent it will be adjusting the algorithm on a global basis versus simply customizing the experience for end users on a more individual basis over time.

The company says the tests will run over the next few months.

Instagram launches tools to filter out abusive DMs based on keywords and emojis, and to block people, even on new accounts – TechCrunch

Facebook and its family of apps have long grappled with the issue of how to better manage — and eradicate — bullying and other harassment on its platforms, turning to both algorithms and humans in its efforts to tackle the problem. In the latest development, today, Instagram is announcing some new tools of its own.

First, it’s introducing a new way for people to further shield themselves from harassment in their direct messages, specifically in message requests, by filtering against a new set of words, phrases and emojis that might signal abusive content, a set that will also include common misspellings of those key terms, which are sometimes used to try to evade the filters. Second, it’s giving users the ability to proactively block people even if they try to contact the user in question from a new account.

The blocking account feature is going live globally in the next few weeks, Instagram said, and it confirmed to me that the feature to filter out abusive DMs will start rolling out in the UK, France, Germany, Ireland, Canada, Australia and New Zealand in a few weeks’ time before becoming available in more countries over the next few months.

Notably, these features are only being rolled out on Instagram — not Messenger, and not WhatsApp, Facebook’s other two hugely popular apps that enable direct messaging. The spokesperson confirmed that Facebook hopes to bring it to other apps in the stable later this year. (Instagram and others have regularly issued updates on single apps before considering how to roll them out more widely.)

Instagram said that the feature to scan DMs for abusive content — which will be based on a list of words and emojis that Facebook compiles with the help of anti-discrimination and anti-bullying organizations (it did not specify which), along with terms and emojis that you might add in yourself — has to be turned on proactively, rather than being enabled by default.

Why? More user license, it seems, and to keep conversations private if users want them to be. “We want to respect peoples’ privacy and give people control over their experiences in a way that works best for them,” a spokesperson said, pointing out that this is similar to how its comment filters also work. The control will live in Settings > Privacy > Hidden Words for those who want to turn it on.
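
Instagram hasn’t published how the matching works. The sketch below shows one plausible shape for a keyword-and-misspelling filter of this general sort, using a tiny invented blocklist, Unicode normalization and a one-edit tolerance; all of those details are assumptions for illustration, not the actual implementation.

```python
# Rough sketch of a Hidden Words-style DM filter: an opt-in blocklist of
# words/emojis plus a one-edit tolerance to catch common misspellings.
# The blocklist, normalization and threshold are invented for illustration.
import unicodedata

BLOCKLIST = {"abusiveword", "🤬"}

def _edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def should_hide(message: str) -> bool:
    # Normalize so look-alike characters don't slip past exact matching.
    text = unicodedata.normalize("NFKC", message).casefold()
    for token in text.split():
        for banned in BLOCKLIST:
            # Exact hit, or a one-edit misspelling of a longer banned term
            # (short terms are matched exactly to avoid false positives).
            if token == banned or (len(banned) > 3
                                   and _edit_distance(token, banned) <= 1):
                return True
    return False

print(should_hide("you are an abus1veword"))  # True: one-character evasion
```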

There are a number of third-party services out there in the wild now building content moderation tools that sniff out harassment and hate speech — they include the likes of Sentropy and Hive — but what has been interesting is that the larger technology companies up to now have opted to build these tools themselves. That is also the case here, the company confirmed.

The system is completely automated, although Facebook noted that it reviews any content that gets reported. While it doesn’t keep data from those interactions, it confirmed that it will use reported words to continue building its bigger database of terms that trigger content getting blocked, subsequently deleting, blocking and reporting the people who send it.

On the subject of those people: it’s been a long time coming, but Facebook is finally getting smarter about the fact that people with truly ill intent waste no time building multiple accounts to pick up the slack when their primary profiles get blocked. People have been aggravated by this loophole for as long as DMs have been around, even though Facebook’s harassment policies already prohibited people from repeatedly contacting someone who doesn’t want to hear from them, and the company also already prohibited recidivism, which, as Facebook describes it, means “if someone’s account is disabled for breaking our rules, we would remove any new accounts they create whenever we become aware of it.”

The company’s approach to Direct Messages has been something of a template for how other social media companies have built these out.

In essence, they are open-ended by default, with one inbox reserved for actual contacts and a second for anyone at all to contact you. While some people simply ignore that second box altogether, Instagram by its nature is built for more, not less, contact with others, and that means people will check those second inboxes for DMs more often than they might, for example, delve into their spam folders in email.

The bigger issue continues to be a game of whack-a-mole, however, and not just users are asking for more help in solving it. As Facebook continues to find itself under the scrutinizing eye of regulators, harassment — and better management of it — has emerged as a key area that the company will be required to solve before others do the solving for it.

Facebook is expanding Spotify partnership with new ‘Boombox’ project – TechCrunch

Facebook is deepening its relationship with music company Spotify and will allow users to listen to music hosted on Spotify while browsing through its apps as part of a new initiative called “Project Boombox,” Facebook CEO Mark Zuckerberg said Monday.

Facebook is building an in-line audio player that will allow users to listen to songs or playlists being shared on the platforms without being externally linked to Spotify’s app or website. Zuckerberg highlighted the feature as another product designed to improve the experience of creators on its platforms, specifically the ability of musicians to share their work, “basically making audio a first-class type of media,” he said.

We understand from sources familiar with the Spotify integration that this player will support both music and podcasts. It has already been tested in non-U.S. markets, including Mexico and Thailand, and it’s expected to arrive in about a week.

The news was revealed in a wide-ranging interview with reporter Casey Newton on the company’s future pursuits in the audio world as Facebook aims to keep pace with upstart efforts like Clubhouse and increased activity in the podcasting world. 

“We think that audio is going to be a first-class medium and that there are all these different products to be built across this whole spectrum,” said Zuckerberg. “Of course, it includes some areas that, that have been, you know, popular recently like podcasting and kind of live audio rooms like this, but I also think that there’s some interesting things that are under-explored in the area overall.”

Spotify already has a fairly productive relationship with the Facebook and Instagram platforms. In recent years, the music and podcast platform has been integrated more deeply into Instagram Stories, where users can share content from the service, a feature that’s also been available in Facebook Stories.
