Facebook’s Flood of Languages Leaves It Struggling to Monitor Content

Facebook’s efforts to curb hate speech and other problematic content are hampered by the company’s inability to keep up with a flood of new languages as mobile phones bring social media to every corner of the globe.

The company offers its 2.3 billion users features such as menus and prompts in 111 different languages, which it deems officially supported. Reuters found another 31 widely spoken languages in use on Facebook that lack official support.

Detailed rules known as “community standards,” which bar users from posting offensive material including hate speech and celebrations of violence, had been translated into only 41 of the 111 supported languages as of early March, Reuters found.

Facebook’s 15,000-strong content moderation workforce speaks about 50 languages, though the company said it hires professional translators when needed. Its automated tools for identifying hate speech work in about 30 languages.

The language deficit complicates Facebook’s battle to rein in harmful content and the damage it can cause, including to the company itself. Countries including Australia, Singapore and the UK are now threatening harsh new regulations, punishable by steep fines or jail time for executives, if it fails to promptly remove objectionable posts.

The community standards are updated monthly and run to about 9,400 words in English.

Monika Bickert, the Facebook vice president in charge of the standards, has previously told Reuters that they were “a heavy lift to translate into all those different languages.”

A Facebook spokeswoman said this week the rules are translated case by case depending on whether a language has a critical mass of usage and whether Facebook is a primary information source for speakers. The spokeswoman said there was no specific number for critical mass.

She said among priorities for translations are Khmer, the official language in Cambodia, and Sinhala, the dominant language in Sri Lanka, where the government blocked Facebook this week to stem rumors about devastating Easter Sunday bombings.

A Reuters report found last year that hate speech on Facebook that helped foster ethnic cleansing in Myanmar went unchecked in part because the company was slow to add moderation tools and staff for the local language.

Facebook says it now offers the rules in Burmese and has more than 100 speakers of the language among its workforce.

The spokeswoman said Facebook’s efforts to protect people from harmful content had “a level of language investment that surpasses most any technology company.”

But human rights officials say Facebook risks a repeat of the Myanmar problems in other strife-torn nations where its language capabilities have not kept pace with the impact of social media.

“These are supposed to be the rules of the road and both customers and regulators should insist social media platforms make the rules known and effectively police them,” said Phil Robertson, deputy director of Human Rights Watch’s Asia Division. “Failure to do so opens the door to serious abuses.”

Abuse in Fijian
Mohammed Saneem, the supervisor of elections in Fiji, said he felt the impact of the language gap during elections in the South Pacific nation in November last year. Racist comments proliferated on Facebook in Fijian, which the social network does not support. Saneem said he dedicated a staffer to emailing posts and translations to a Facebook employee in Singapore to seek removals.

Facebook said it did not request translations, and it gave Reuters a post-election letter from Saneem praising its “timely and effective assistance.”

Saneem told Reuters that he valued the help but had expected proactive measures from Facebook.

“If they are allowing users to post in their language, there should be guidelines available in the same language,” he said.

Similar issues abound in African nations such as Ethiopia, where deadly ethnic clashes among a population of 107 million have been accompanied by ugly Facebook content. Much of it is in Amharic, a language supported by Facebook. But Amharic users looking up rules get them in English.

At least 652 million people worldwide speak languages that Facebook supports but into which the rules have not been translated, according to data from language encyclopedia Ethnologue. Another 230 million or more speak one of the 31 languages that lack official support.

Facebook uses automated software as a key defense against prohibited content. Developed using a type of artificial intelligence known as machine learning, these tools identify hate speech in about 30 languages and “terrorist propaganda” in 19, the company said.

Machine learning requires massive volumes of data to train computers, and a scarcity of text in other languages presents a challenge in rapidly growing the tools, Guy Rosen, the Facebook vice president who oversees automated policy enforcement, has told Reuters.
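
Rosen’s point about data scarcity can be seen even in a toy setup. The sketch below is not Facebook’s system; it is a minimal Python illustration using scikit-learn, with invented example posts and labels, meant only to show how a classifier trained on a handful of posts in a language has little to generalize from.

```python
# Minimal illustration (not Facebook's tooling): a bag-of-words text
# classifier trained on a tiny, invented set of labeled posts, to show
# why scarce training data in a language limits automated moderation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts: 0 = allowed, 1 = violates policy.
# A well-supported language would have vast numbers of such examples;
# a low-resource language may have almost none.
train_texts = [
    "have a great day everyone",
    "this group does not deserve to live",
    "lovely weather in town today",
    "they should all be driven out",
]
train_labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# On unseen phrasing the model is little better than a coin flip,
# because it has never seen most of the vocabulary it would need.
print(model.predict_proba(["drive them out of the country"]))
```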

Growth regions
Beyond the automation and a few official fact-checkers, Facebook relies on users to report problematic content. That creates a major problem in places where the community standards are not understood, or not even known to exist.

Ebele Okobi, Facebook’s director of public policy for Africa, told Reuters in March that the continent had the world’s lowest rates of user reporting.

“A lot of people don’t even know that there are community standards,” Okobi said.

Facebook has bought radio advertisements in Nigeria and worked with local organisations to change that, she said. It also has held talks with African education officials to introduce social media etiquette into the curriculum, she said.

Simultaneously, Facebook is partnering with wireless carriers and other groups to expand Internet access in countries including Uganda and the Democratic Republic of Congo, where it has yet to officially support widely used languages such as Luganda and Kituba. Asked this week about the expansions without language support, Facebook declined to comment.

The company announced in February it would soon have its first 100 sub-Saharan Africa-based content moderators at an outsourcing facility in Nairobi. They will join existing teams in reviewing content in Somali, Oromo and other languages.

But the community standards are not translated into Somali or Oromo. Posts in Somali from last year celebrating the al-Shabaab militant group remained on Facebook for months despite a ban on glorifying organisations or acts that Facebook designates as terrorist.

“Disbelievers and apostates, die with your anger,” read one post seen by Reuters this month that praised the killing of a Sufi cleric.

After Reuters inquired about the post, Facebook said it took down the author’s account because it violated policies.

Ability to derail
Posts in Amharic reviewed by Reuters this month attacked the Oromo and Tigray ethnic populations in vicious terms that clearly violated Facebook’s ban on discussing ethnic groups using “violent or dehumanising speech, statements of inferiority, or calls for exclusion.”

Facebook removed the two posts Reuters inquired about. The company added that it had erred in allowing one of them, from December 2017, to remain online following an earlier user report.

For officials such as Saneem in Fiji, Facebook’s efforts to improve content moderation and language support are painfully slow. Saneem said he warned Facebook months in advance of the election in the archipelago of 900,000 people. Most of them use Facebook, with half writing in English and half in Fijian, he estimated.

“Social media has the ability to completely derail an election,” Saneem said.

Other social media companies face the same problem to varying degrees.

Facebook-owned Instagram said its 1,179-word community guidelines are available in 30 of the 51 languages offered to users. WhatsApp, also owned by Facebook, has its terms in nine of 58 supported languages, Reuters found.

Alphabet’s YouTube presents community guidelines in 40 of 80 available languages, Reuters found. Twitter’s rules are in 37 of 47 supported languages, and Snap Inc’s in 13 out of 21.

“A lot of misinformation gets spread around and the problem with the content publishers is the reluctance to deal with it,” Saneem said. “They do owe a duty of care.”

© Thomson Reuters 2019

Pinterest tests online events with dedicated ‘class communities’ – TechCrunch

Pinterest is getting into online events. The company has been spotted testing a new feature that allows users to sign up for Zoom classes through Pinterest, while creators use Pinterest’s class boards to organize class materials, notes and other resources, or even connect with attendees through a group chat option. The company confirmed that online classes are an experiment now in development, but wouldn’t offer further details about its plans.

The feature itself was discovered on Tuesday by reverse engineer Jane Manchun Wong, who found details about the online classes by looking into the app’s code.

Currently, you can visit some of these “demo” profiles directly — like “@pinsmeditation” or “@pinzoom123,” for example — and view their listed Class Communities. However, these communities are empty when you click through. That’s because the feature is still unreleased, Wong says.

If and when the feature launches to the public, the communities would include dedicated sections where creators could organize their class materials, like lists of what to bring to class, notes, photos and more. They could also use these communities to offer a class overview and description, or to connect users to a related shop, a group chat and more.

Creators are also able to use the communities — which are basically enhanced Pinterest boards — to respond to questions from attendees, share photos from the class and otherwise interact with the participants.

When a user wants to join a class, they can click a “book” button to sign up, and are then emailed a confirmation with the meeting details. Other buttons direct attendees to download Zoom or copy the link to join the class.

It’s not surprising that Pinterest would expand into the online events space, given its platform has become a popular tool for organizing remote learning resources during the coronavirus pandemic. Teachers have turned to Pinterest to keep track of lesson plans, get inspiration, share educational activities and more. In the early days of the pandemic, Pinterest reported record usage when the company saw more searches and saves globally in a single March weekend than ever before in its history, as a result of its usefulness as an online organizational tool.

This growth has continued throughout the year. In October, Pinterest’s stock jumped on strong earnings after the company beat on revenue and user growth metrics. The company brought in $443 million in revenue, versus $383.5 million expected, and grew its monthly active users to 442 million, versus the 436.4 million expected. Outside of the coronavirus impacts, much of this growth was due to strong international adoption, increased ad spend from advertisers boycotting Facebook and a surge of interest from users looking for iOS 14 home screen personalization ideas.

Given that the U.S. has failed to get the COVID-19 pandemic under control, many classes, events and other activities will remain virtual even as we head into 2021. The online events market may continue to grow in the years that follow, too, thanks to the kickstart the pandemic provided the industry as a whole.

“We are experimenting with ways to help creators interact more closely with their audience,” a Pinterest spokesperson said, when asked for more information.

Pinterest wouldn’t confirm additional details about its plans for online events, but did say the feature was in development and the test would help to inform the product’s direction.

Pinterest often tries out new features before launching them to a wider audience. Earlier this summer, TechCrunch reported on a Story Pins feature the company had in the works. Pinterest then launched the feature in September. If the same time frame holds up for online events, we could potentially see the feature become more widely available sometime early next year.

Twitter will bring back verification – TechCrunch

Twitter prepares to hand out more blue checkmarks, YouTube suspends OANN and Discord is raising a big funding round. This is your Daily Crunch for November 24, 2020.

The big story: Twitter will bring back verification

Twitter paused its blue checkmark verification system in 2017 as it faced controversy over who gets verified — specifically over the decision to verify the organizer of the infamous and deadly white supremacist rally in Charlottesville.

Since then, Twitter has done occasional verifications for medical experts tweeting about COVID-19 and candidates running for public office, but it hasn’t brought back the program in a systematic way.

Now Twitter says it will relaunch verification in 2021, and that it’s currently soliciting feedback on the policy. Initially, verification will focus on six types of accounts: government officials, companies/brands/nonprofits, news, entertainment, sports and activists/organizers/other influential individuals.

The tech giants

YouTube suspends and demonetizes One America News Network over COVID-19 video — YouTube said, “After careful review, we removed a video from OANN and issued a strike on the channel for violating our COVID-19 misinformation policy.”

Instagram businesses and creators may be getting a Messenger-like ‘FAQ’ feature — This new feature will allow people to start conversations with businesses or creators’ accounts by tapping on a commonly asked question within a chat.

Fortnite adds a $12 monthly subscription bundle — The $11.99 monthly Fortnite Crew fee entitles players to a full season battle pass, 1,000 V-Bucks each month and a Crew Pack featuring an exclusive outfit bundle.

Startups, funding and venture capital

Discord is close to closing a round that would value the company at up to $7B — The new funding comes just months after a $100 million investment that gave the company a $3.5 billion valuation.

Dija, a new delivery startup from former Deliveroo employees, is closing in on a $20M round led by Blossom — Few details are public about Dija, except that it will offer convenience and fresh food delivery using a “dark” convenience store mode.

Marie Ekeland launches 2050, a new fund with radically ambitious, long-term goals —  Ekeland used to be an investor at French VC firm Elaia, where she backed adtech firm Criteo.

Advice and analysis from Extra Crunch

As edtech grows cash rich, some lessons for early stage — The valuation bumps for both Duolingo and Udemy underscore just how much investor confidence there is in edtech’s remote learning boom.

Working to understand C3.ai’s growth story — As its IPO looms, how quickly did C3.ai grow in its October quarter?

Decrypted: Apple and Facebook’s privacy feud, Twitter hires Mudge, mysterious zero-days — Zack Whittaker’s latest roundup of cybersecurity-related news.

(Extra Crunch is our membership program, which aims to democratize information about startups. And until November 30, you can get 25% off an annual membership.)

Everything else

Biden-Harris team finally get their transition .gov domain — This comes after the General Services Administration gave the green light for the Biden-Harris team to transition from political campaign to government administration.

India bans 43 more Chinese apps over cybersecurity concerns — India is not done banning Chinese apps.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Twitter to relaunch account verifications in early 2021, asks for feedback on policy – TechCrunch

Twitter announced today it’s planning to relaunch its verification system in 2021, and will now begin the process of soliciting public feedback on the new policy ahead of its implementation. Under the policy, Twitter will initially verify six types of accounts, including those belonging to government officials; companies, brands and nonprofit organizations; news; entertainment; sports; and activists, organizers and other influential individuals. The number of categories could expand in time.

Twitter’s verification system, which provides a blue checkmark to designate accounts belonging to public figures, was paused in 2017 as the company tried to address confusion over what it meant to be verified.

The issue at the time was that Twitter had verified the account belonging to Jason Kessler, the person who organized the deadly white supremacist rally in Charlottesville, Virginia. In response to the wave of criticism directed at Twitter over this action, the company defended its decision by pointing to its policies around account verification, which explained its blue badges were awarded to accounts of “public interest.”

Critics argued that genuinely noteworthy figures were still struggling to get their own accounts verified, and that verifying a known white supremacist was not something that should ever be in the “public interest.” As a result, Twitter in November 2017 decided to pause all account verifications.

The following year, the company announced work on the verification system would be placed on a longer, more indefinite hold, so Twitter could direct its resources to focus on election integrity. That proved to be a significant undertaking, as it turned out.

Though the company this year verified medical experts tweeting about COVID-19 and labeled candidates running for public office, these efforts were managed in more of a one-off fashion.

Now, with the 2020 U.S. presidential election having wrapped, and with a transition underway, Twitter says work on its new verification system will finally resume.

The company today shared a draft of its new verification policy in order to gain public feedback. The policy details more specifically which accounts can be verified and introduces additional guidelines that could limit some accounts from receiving the blue badge.

For example, Twitter says the account must be “notable and active,” and the badge won’t be awarded to any accounts with incomplete profiles. Twitter will also deny or remove verification badges from otherwise qualified individuals if their accounts are found to be in repeated violation of the Twitter Rules.

The company additionally admitted it had verified accounts over the years that, based on these guidelines, should not have been. To correct this, Twitter will begin to automatically remove badges from accounts that are inactive or have incomplete profiles, to help it streamline its work going forward.

The policy also lays out specifics about how it will determine whether an account in a supported category will qualify.

For example, news organizations will have to adhere to professional standards for journalism, and independent or freelance journalists will need to provide at least three bylines in qualifying organizations published in the last six months. Entertainers will need to be able to point to credits on their IMDb page or to references in verified news publications. Government officials will need to show a public reference on an official government website, party website or multiple references by news media. Sports figures will have to appear on team websites, rosters or in sports data services like Sportradar. There are a few other ways to be verified in these categories, too.

The guidelines for public figures are more detailed, as they must meet two different criteria for “notability” — one that quantifies their Twitter activity and another that highlights their off-Twitter notability, like a Wikipedia page, Google Trends profile, profile on an official advocacy site and more.

“We know we can’t solve verification with a new policy alone — and that this initial policy won’t cover every case for being verified — but it is a critical first step in helping us provide more transparency and fairer standards for verification on Twitter as we reprioritize this work,” a company announcement stated. “This version of the policy is a starting point, and we intend to expand the categories and criteria for verification significantly over the next year,” it noted.

Twitter users will be able to offer feedback on the new verification policy starting today, November 24, 2020, and continuing through December 8, 2020. The policy is being made available in English, Hindi, Arabic, Spanish, Portuguese and Japanese. Users can either respond to the survey Twitter has posted or they can choose to tweet their feedback publicly, using the hashtag #VerificationFeedback.

In addition, Twitter says it’s working with local non-governmental organizations and its Trust and Safety Council to gain a range of other perspectives.

After December 8, 2020, Twitter will train its team on the new policy and introduce the final version by December 17, 2020. The verification system itself, which will include a new public application process, will begin in early 2021.

Though Twitter is giving itself time to make policy changes based on public feedback, it had already begun to develop the underlying technology for the verification application process.

Twitter confirmed to TechCrunch this June it was in the process of building a new in-app system for requesting verification. The feature had been found buried in the app’s code by reverse engineer Jane Manchun Wong, who tweeted a screenshot of a new option, “Request Verification,” that appeared under Twitter’s account settings. At the time, Twitter wouldn’t confirm when the new system would go live.

Though not everyone will qualify for verification, Twitter says it’s working on other features that will help to better distinguish accounts on its platform. Also in 2021, the company will introduce new account types and labels that will help Twitter users identify themselves on their profiles. More details on these features will be announced in the weeks to come, Twitter says.
