Social

Facebook’s self-styled ‘oversight’ board selects first cases, most dealing with hate speech – TechCrunch

A Facebook-funded body that the tech giant set up to distance itself from tricky and potentially reputation-damaging content moderation decisions has announced the first bundle of cases it will consider.

In a press release on its website, the Facebook Oversight Board (FOB) says it sifted through more than 20,000 submissions before settling on six cases — one of which was referred to it directly by Facebook.

The six cases it’s chosen to start with are:

Facebook submission: 2020-006-FB-FBR

A case from France in which a user posted a video and accompanying text to a COVID-19 Facebook group — relating to claims that the French agency that regulates health products was “purportedly refusing authorisation for use of hydroxychloroquine and azithromycin against COVID-19, but authorising promotional mail for remdesivir” — with the user criticizing the lack of a health strategy in France and stating that “[Didier] Raoult’s cure” is being used elsewhere to save lives. Facebook says it removed the content for violating its policy on violence and incitement. The video in question garnered at least 50,000 views and 1,000 shares.

The FOB says Facebook indicated in its referral that this case “presents an example of the challenges faced when addressing the risk of offline harm that can be caused by misinformation about the COVID-19 pandemic”.

User submissions:

Out of the five user submissions that the FOB selected, the majority (three cases) are related to hate speech takedowns.

One case apiece is related to Facebook’s nudity and adult content policy; and to its policy around dangerous individuals and organizations.

See below for the Board’s descriptions of the five user submitted cases:

  • 2020-001-FB-UA: A user posted a screenshot of two tweets by former Malaysian Prime Minister, Dr Mahathir Mohamad, in which the former Prime Minister stated that “Muslims have a right to be angry and kill millions of French people for the massacres of the past” and “[b]ut by and large the Muslims have not applied the ‘eye for an eye’ law. Muslims don’t. The French shouldn’t. Instead the French should teach their people to respect other people’s feelings.” The user did not add a caption alongside the screenshots. Facebook removed the post for violating its policy on hate speech. The user indicated in their appeal to the Oversight Board that they wanted to raise awareness of the former Prime Minister’s “horrible words”.
  • 2020-002-FB-UA: A user posted two well-known photos of a deceased child lying fully clothed on a beach at the water’s edge. The accompanying text (in Burmese) asks why there is no retaliation against China for its treatment of Uyghur Muslims, in contrast to the recent killings in France relating to cartoons. The post also refers to the Syrian refugee crisis. Facebook removed the content for violating its hate speech policy. The user indicated in their appeal to the Oversight Board that the post was meant to disagree with people who think that the killer is right and to emphasise that human lives matter more than religious ideologies.

  • 2020-003-FB-UA: A user posted alleged historical photos showing churches in Baku, Azerbaijan, with accompanying text stating that Baku was built by Armenians and asking where the churches have gone. The user stated that Armenians are restoring mosques on their land because it is part of their history. The user said that the “т.а.з.и.к.и” are destroying churches and have no history. The user stated that they are against “Azerbaijani aggression” and “vandalism”. The content was removed for violating Facebook’s hate speech policy. The user indicated in their appeal to the Oversight Board that their intention was to demonstrate the destruction of cultural and religious monuments.

  • 2020-004-IG-UA: A user in Brazil posted a picture on Instagram with a title in Portuguese indicating that it was to raise awareness of signs of breast cancer. Eight photographs within the picture showed breast cancer symptoms with corresponding explanations of the symptoms underneath. Five of the photographs included visible and uncovered female nipples. The remaining three photographs included female breasts, with the nipples either out of shot or covered by a hand. Facebook removed the post for violating its policy on adult nudity and sexual activity. The post has a pink background, and the user indicated in a statement to the Oversight Board that it was shared as part of the national “Pink October” campaign for the prevention of breast cancer.

  • 2020-005-FB-UA: A user in the US was prompted by Facebook’s “On This Day” function to reshare a “memory” in the form of a post that the user made two years ago. The user reshared the content. The post (in English) is an alleged quote from Joseph Goebbels, the Reich Minister of Propaganda in Nazi Germany, on the need to appeal to emotions and instincts, instead of intellect, and on the unimportance of truth. Facebook removed the content for violating its policy on dangerous individuals and organisations. The user indicated in their appeal to the Oversight Board that the quote is important as the user considers the current US presidency to be following a fascist model.

Public comments on the cases can be submitted via the FOB’s website — but only for seven days (closing at 8:00 Eastern Standard Time on Tuesday, December 8, 2020).

The FOB says it “expects” to decide on each case — and “for Facebook to have acted on this decision” — within 90 days. So the first ‘results’ from the FOB, which only began reviewing cases in October, are almost certainly not going to land before 2021.

Panels comprised of five FOB members — including at least one from the region “implicated in the content” — will be responsible for deciding whether the specific pieces of content in question should stay down or be put back up.

Facebook’s outsourcing of a fantastically tiny subset of content moderation considerations to a subset of its so-called ‘Oversight Board’ has attracted plenty of criticism (including inspiring a mirrored unofficial entity that dubs itself the Real Oversight Board) — and no little cynicism.

Not least because it’s entirely funded by Facebook; structured as Facebook intended it to be structured; and with members chosen via a system devised by Facebook.

If it’s radical change you’re looking for, the FOB is not it.

Nor does the entity have any power to change Facebook policy — it can only issue recommendations (which Facebook can choose to entirely ignore).

Its remit does not extend to being able to investigate how Facebook’s attention-seeking business model influences the types of content being amplified or depressed by its algorithms, either.

And the narrow focus on content takedowns — rather than content that’s already allowed on the social network — skews its purview, as we’ve pointed out before.

So you won’t find the board asking tough questions about why hate groups continue to flourish and recruit on Facebook, for example, or robustly interrogating how much succour its algorithmic amplification has gifted to the anti-vaxx movement. By design, the FOB is focused on symptoms, not the nation-sized platform ill of Facebook itself. Outsourcing a fantastically tiny subset of content moderation decisions can’t signify anything else.

With this Facebook-commissioned pantomime of accountability the tech giant will be hoping to generate a helpful pipeline of distracting publicity — focused around specific and ‘nuanced’ content decisions — deflecting plainer but harder-hitting questions about the exploitative and abusive nature of Facebook’s business itself, and the lawfulness of its mass surveillance of Internet users, as lawmakers around the world grapple with how to rein in tech giants.  

The company wants the FOB to reframe discussion about the culture wars (and worse) that Facebook’s business model fuels as a societal problem — pushing a self-serving ‘fix’ for algorithmically fuelled societal division in the form of a few hand-picked professionals opining on individual pieces of content, leaving it free to continue defining the shape of the attention economy on a global scale. 


TikTok’s new Q&A feature lets creators respond to fan questions using text or video – TechCrunch

TikTok is testing a new video Q&A feature that allows creators to more directly respond to their audience’s questions with either text or video answers, the company confirmed to TechCrunch. The feature works across both video and livestreams (TikTok LIVE), but is currently only available to select creators who have opted into the test, we understand.

Q&As have become a top way creators engage fans on social media, and have proven particularly popular in places like Instagram Stories and in other social apps like the Snapchat-integrated YOLO, or even in smaller startups.

On TikTok, however, Q&As are now a big part of the commenting experience, as many creators respond to individual comments by publishing a new video that explains their answer in more detail than a short text comment could. Sometimes these answers are meant to clarify or add context, while other times creators will take on their bullies and trolls with their video responses. As a result, the TikTok comment section has grown to play a larger role in shaping TikTok trends and culture.

Q&As are also a key means for creators to engage with fans when live streaming. But it can be difficult for creators to keep up with a flood of questions and comments through the current live chat interface.

The idea for the new feature came about from seeing how creators were already using Q&As with their fans. Much like the existing “reply to comments with video” feature, the Q&A option lets creators directly respond to their audience’s questions. Where available, users will be able to designate their comments as questions by tapping the Q&A button in a video’s comment field, or they can submit questions directly through the Q&A link on the creator’s profile page.

For creators, the feature simplifies the process of responding to questions, as it lets them view all their fans’ questions in one place.

There’s no limit to the number of questions that a creator can receive, though they don’t have to reply to each one.

The feature was first spotted by social media consultant Matt Navarra, who posted screenshots of what the feature looks like in action, including how it appears on users’ profiles.

During the test, the new Q&A feature is only being made available to creators with public Creator Accounts that have over 10,000 followers and who have opted into the feature within their Settings, TikTok confirmed to TechCrunch. Participants in the test today include some safelisted creators from TikTok’s Creative Learning Fund program, announced last year, among others.

TikTok says the Q&A feature is currently in testing globally, and it aims to roll it out to more users with Creator Accounts in the weeks ahead.


Facebook and Instagram’s AI-generated image captions now offer far more details – TechCrunch

Every picture posted to Facebook and Instagram gets a caption generated by an image analysis AI, and that AI just got a lot smarter. The improved system should be a treat for visually impaired users, and may help you find your photos faster in the future.

Alt text is a field in an image’s metadata that describes its contents: “A person standing in a field with a horse,” or “a dog on a boat.” This lets the image be understood by people who can’t see it.

These descriptions are often added manually by a photographer or publication, but people uploading photos to social media generally don’t bother, if they even have the option. So the relatively recent ability to automatically generate one — the technology has only just gotten good enough in the last couple years — has been extremely helpful in making social media more accessible in general.

Facebook created its Automatic Alt Text system in 2016, which is eons ago in the field of machine learning. The team has since cooked up many improvements to it, making it faster and more detailed, and the latest update adds an option to generate a more detailed description on demand.

The improved system recognizes 10 times more items and concepts than it did at the start, now around 1,200. And the descriptions include more detail. What was once “Two people by a building” may now be “A selfie of two people by the Eiffel Tower.” (The actual descriptions hedge with “may be…” and will avoid including wild guesses.)
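That hedging behavior — prefixing captions with “may be” and dropping low-confidence guesses — can be illustrated with a small sketch. To be clear, this is a hypothetical illustration only: the function name, concept list, and confidence threshold are all assumptions, not details of Facebook’s actual system.

```python
# Hypothetical sketch of confidence-gated, hedged caption generation.
# The threshold value and detection format are illustrative assumptions,
# not taken from Facebook's published Automatic Alt Text system.

def hedged_caption(detections, include_threshold=0.8):
    """Build an alt-text-style caption from (concept, confidence) pairs,
    keeping only high-confidence concepts and prefixing 'May be'."""
    kept = [concept for concept, conf in detections if conf >= include_threshold]
    if not kept:
        return "No description available"
    return "May be " + ", ".join(kept)

# Low-confidence detections (the "wild guesses") are simply omitted.
print(hedged_caption([("a selfie of two people", 0.95),
                      ("the Eiffel Tower", 0.91),
                      ("a wedding", 0.40)]))
# → May be a selfie of two people, the Eiffel Tower
```

The design choice matters for accessibility: a visually impaired user cannot easily double-check the image, so an uncertain caption phrased as fact is worse than a hedged or omitted one.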

But there’s more detail than that, even if it’s not always relevant. For instance, in this image the AI notes the relative positions of the people and objects:

Image Credits: Facebook

Obviously the people are above the drums, and the hats are above the people, none of which really needs to be said for someone to get the gist. But consider an image described as “A house and some trees and a mountain.” Is the house on the mountain or in front of it? Are the trees in front of or behind the house, or maybe on the mountain in the distance?

In order to adequately describe the image, these details should be filled in, even if the general idea can be gotten across with fewer words. If a sighted person wants more detail they can look closer or click the image for a bigger version — someone who can’t do that now has a similar option with this “generate detailed image description” command. (Activate it with a long press in the Android app or a custom action in iOS.)

Perhaps the new description would be something like “A house and some trees in front of a mountain with snow on it.” That paints a better picture, right? (To be clear, these examples are made up, but it’s the sort of improvement that’s expected.)

The new detailed description feature will come to Facebook first for testing, though the improved vocabulary will appear on Instagram soon. The descriptions are also kept simple so they can be easily translated to other languages already supported by the apps, though the feature may not roll out in other countries simultaneously.


India asks WhatsApp to withdraw new privacy policy, expresses ‘grave concerns’ – TechCrunch

India has asked WhatsApp to withdraw the planned change to its privacy policy, posing a new headache for the Facebook-owned service, which identifies the South Asian nation as its biggest market by users.

In an email to WhatsApp head Will Cathcart, the nation’s IT ministry said WhatsApp’s planned update to its data-sharing policy raised “grave concerns regarding the implications for the choice and autonomy of Indian citizens… Therefore, you are called upon to withdraw the proposed changes.”

The ministry also sought clarification from WhatsApp on its data-sharing agreement with Facebook and other commercial firms, and asked why users in the EU are exempt from the new privacy policy while their counterparts in India have no choice but to comply.

“Such a differential treatment is prejudicial to the interests of Indian users and is viewed with serious concern by the government,” the ministry wrote in the email, a copy of which was obtained by TechCrunch. “The government of India owes a sovereign responsibility to its citizens to ensure that their interests are not compromised and therefore it calls upon WhatsApp to respond to concerns raised in this letter.”

Through an in-app alert earlier this month, WhatsApp had asked users to agree to new terms and conditions that granted the app consent to share some personal data about them with Facebook, such as their phone number and location. Users were initially given until February 8 to accept the new policy if they wished to continue using the service.

“This ‘all-or-nothing’ approach takes away any meaningful choice from Indian users. This approach leverages the social significance of WhatsApp to force users into a bargain, which may infringe on their interests in relation to informational privacy and information security,” the ministry said in the email.

The notification from WhatsApp prompted a lot of confusion — and in some cases, anger and frustration — among its users, many of whom have explored alternative messaging apps such as Telegram and Signal in recent weeks. WhatsApp, which Facebook bought for $19 billion in 2014, has been sharing some limited information about its users with the social giant since 2016 — and for a period allowed users to opt out of this. Last week the Facebook-owned app, which serves more than 2 billion users worldwide, said it was deferring the enforcement of the planned policy to May 15.

An advertisement from WhatsApp is seen in a newspaper at a stall in New Delhi on January 13, 2021. (Photo by Sajjad Hussain/AFP via Getty Images)

WhatsApp also ran front-page ads on several newspapers in India, where it has amassed over 450 million users, last week to explain the changes and debunk some rumors.

New Delhi also said that it was reviewing the Personal Data Protection Bill, a monumental privacy bill meant to oversee how users’ data is shared with the world. “Since the Parliament is seized of the issue, making such a momentous change for Indian users at this time puts the cart before the horse. Since the Personal Data Protection Bill strongly follows the principle of ‘purpose limitation,’ these changes may lead to significant implementational challenges for WhatsApp should the Bill become an Act,” the letter said.

On Tuesday, India’s IT and Law Minister Ravi Shankar Prasad said, “Be it WhatsApp, be it Facebook, be it any digital platform. You are free to do business in India but do it in a manner without impinging upon the rights of Indians who operate there.”
