Facebook Messenger Dark Mode Now Rolling Out, Can Be Manually Activated on Android, iOS

Facebook’s new “Dark Mode” feature for Messenger can now be enabled manually on Android and iOS devices. The social media giant announced last October that it would soon roll out the much-awaited feature, and now, four months later, it has arrived. Earlier this week, Messenger’s Dark Mode was spotted rolling out silently to users in several countries, including the Czech Republic, Indonesia, the Philippines, Portugal, and Saudi Arabia.

In a blog post on Monday, Facebook said, “As many may have discovered, dark mode can be accessed through a hidden, limited-time only experience. Simply send a crescent moon emoji in any Messenger chat to unlock the setting and prompt to turn on dark mode.”

Dark Mode on Facebook Messenger can be enabled by sending a moon emoji in a chat. As soon as users send the emoji, a message pops up at the top of the screen that reads “You Found Dark Mode!”. Once Dark Mode is on, Facebook displays a message saying it is still working on the feature, so you won’t see Dark Mode everywhere in Messenger, and it may appear broken in some places.

Facebook Messenger Dark Mode, as spotted by a Reddit user earlier this week (Photo Credit: Reddit/ Hegaton)

“Messenger’s dark mode provides lower brightness while maintaining contrast and vibrancy. Dark mode cuts down the glare from your phone for use in low light situations, so you can use the Messenger features you love no matter when or where you are,” the blog post added. Facebook also said that Dark Mode will roll out fully via Settings in the coming weeks.

Twitter unveils Birdwatch – TechCrunch

Twitter pilots a new tool to fight disinformation, Apple brings celebrity-guided walks to the Apple Watch, and Clubhouse raises funding. This is your Daily Crunch for January 25, 2021.

The big story: Twitter unveils Birdwatch

Twitter launched a new product today that it says will offer “a community-based approach to misinformation.”

With Birdwatch, users will be able to flag tweets that they find misleading, write notes to add context to those tweets and rate the notes written by others. This is meant to complement, rather than replace, the existing system in which Twitter removes or labels particularly problematic tweets.

What remains to be seen: How Twitter will handle it when two or more people get locked into a battle and post a flurry of conflicting notes about whether a tweet is misleading or not.

The tech giants

Walking with Dolly — Apple discusses how and why it brought Time to Walk to the Watch.

Google pledges grants and facilities for COVID-19 vaccine programs — The tech giant is one of several large corporations that have pledged support to local government agencies and medical providers to help increase vaccinations.

Facebook will give academic researchers access to 2020 election ad targeting data — Starting next month, Facebook will open up academic access to a data set of 1.3 million political and social issue ads.

Startups, funding and venture capital

Clubhouse announces plans for creator payments and raises new funding led by Andreessen Horowitz — While we try to track down the actual value of this round, Clubhouse has confirmed it will be introducing products to help creators on the platform get paid.

Taboola is going public via SPAC — The transaction is expected to close in the second quarter, and the combined company will trade on the New York Stock Exchange under the ticker symbol TBLA.

Wolt closes $530M round to continue expanding beyond restaurant delivery — The Helsinki-based online ordering and delivery company initially focused on restaurants but has since expanded to other verticals.

Advice and analysis from Extra Crunch

Qualtrics raises IPO pricing ahead of debut — After being acquired by SAP, Qualtrics announced it would spin out as its own public company.

Fintechs could see $100 billion of liquidity in 2021 — The Matrix Fintech Index weighs public markets, liquidity and a new e-commerce trend.

Unpacking Chamath Palihapitiya’s SPAC deals for Latch and Sunlight Financial — There’s no escaping SPACs, at least for a little while.

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Everything else

Moderna says it’s making variant-specific COVID-19 vaccines, but its existing vaccine should still work — Moderna has detailed some of the steps it’s taking to ensure that its vaccine remains effective against emerging strains of SARS-CoV-2, the virus that causes COVID-19.

Original Content podcast: ‘Bridgerton’ is an addictive reimagining of Jane Austen-style romance — Did I mention that the cast is insanely good-looking?

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Debunk, don’t ‘prebunk,’ and other psychology lessons for social media moderation – TechCrunch

If social networks and other platforms are to get a handle on disinformation, it’s not enough to know what it is — you have to know how people react to it. Researchers at MIT and Cornell have some surprising but subtle findings that may affect how Twitter and Facebook should go about treating this problematic content.

MIT’s contribution is a counter-intuitive one. When someone encounters a misleading headline in their timeline, the logical thing to do would be to put a warning before it so that the reader knows it’s disputed from the start. Turns out that’s not quite the case.

The researchers ran a study in which nearly 3,000 people evaluated the accuracy of headlines after receiving different (or no) warnings about them.

“Going into the project, I had anticipated it would work best to give the correction beforehand, so that people already knew to disbelieve the false claim when they came into contact with it. To my surprise, we actually found the opposite,” said study co-author David Rand in an MIT news article. “Debunking the claim after they were exposed to it was the most effective.”

When a person was warned beforehand that the headline was misleading, they improved in their classification accuracy by 5.7 percent. When the warning came simultaneously with the headline, that improvement grew to 8.6 percent. But if shown the warning afterwards, they were 25 percent better. In other words, debunking beat “prebunking” by a fair margin.

The team speculated as to the cause of this, suggesting that it fits with other indications that people are more likely to incorporate feedback into a preexisting judgment rather than alter that judgment as it’s being formed. They warned that the problem is far deeper than a tweak like this can fix.

“There is no single magic bullet that can cure the problem of misinformation,” said co-author Adam Berinsky. “Studying basic questions in a systematic way is a critical step toward a portfolio of effective solutions.”

The study from Cornell is equal parts reassuring and frustrating. People viewing potentially misleading information were reliably influenced by the opinions of large groups — whether or not those groups were politically aligned with the reader.

It’s reassuring because it suggests that people are willing to trust that if 80 out of 100 people thought a story was a little fishy, even if 70 of those 80 were from the other party, there might just be something to it. It’s frustrating because of how seemingly easy it is to sway an opinion simply by saying that a large group thinks it’s one way or the other.

“In a practical way, we’re showing that people’s minds can be changed through social influence independent of politics,” said graduate student Maurice Jakesch, lead author of the paper. “This opens doors to use social influence in a way that may de-polarize online spaces and bring people together.”

Partisanship still played a role, it must be said — people were about 21 percent less likely to have their view swayed if the group opinion was led by people belonging to the other party. But even so, people were very likely to be affected by the group’s judgment.

Part of why misinformation is so prevalent is that we don’t really understand why it appeals to people, or which measures reduce that appeal, among other basic questions. As long as social media companies are blundering around in the dark, they’re unlikely to stumble upon a solution, but every study like this sheds a little more light.

Twitter’s Birdwatch fights misinformation with community notes – TechCrunch

Twitter is launching what it calls “a community-based approach to misinformation.”

The Birdwatch project first came to light last fall thanks to product sleuth Jane Manchun Wong. Now Twitter has launched a pilot version via the Birdwatch website.

The goal, as explained in a blog post by Twitter’s Vice President of Product Keith Coleman, is to expand beyond the labels that the company already applies to controversial or potentially misleading tweets, which he suggested are limited to “circumstances where something breaks our rules or receives widespread public attention.”

Coleman wrote that the Birdwatch approach will “broaden the range of voices that are part of tackling this problem.” That brings a broader range of perspectives to these issues and goes beyond the simple question of “Is this tweet true or not?” It may also take some of the heat off Twitter for individual content moderation decisions.

Users can sign up on the Birdwatch site to flag tweets that they find misleading, add context via notes and rate the notes written by other contributors based on whether or not they’re helpful. These notes will only be visible on the Birdwatch site for now, but it sounds like the company’s goal is to incorporate them into the main Twitter experience.

“We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable,” Coleman said. “Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.”

Given the potential for plenty of argument and back-and-forth on contentious tweets, it remains to be seen how Twitter will present these notes in a way that isn’t confusing or overwhelming, or how it can avoid weighing in on some of these arguments. The company said Birdwatch will rank content using algorithmic “reputation and consensus systems,” with the code shared publicly. (All notes contributed to Birdwatch will also be available for download.) You can read more about the initial ranking system here.
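
Twitter’s post doesn’t spell out the ranking math, but as a rough illustration of what a consensus-based helpfulness ranking can look like, here is a minimal Python sketch. To be clear, this is not Birdwatch’s actual algorithm: the Note structure, the MIN_RATINGS threshold and the smoothed-ratio score are all assumptions invented for illustration.

```python
from dataclasses import dataclass

MIN_RATINGS = 5  # assumption: ignore notes until enough contributors have rated them


@dataclass
class Note:
    note_id: str
    text: str
    helpful: int = 0      # "helpful" ratings from other contributors
    not_helpful: int = 0  # "not helpful" ratings

    def score(self) -> float:
        # Laplace-smoothed helpful ratio, so a note with a single rating
        # does not outrank one endorsed by many contributors.
        total = self.helpful + self.not_helpful
        if total < MIN_RATINGS:
            return 0.0
        return (self.helpful + 1) / (total + 2)


def rank_notes(notes):
    # Order notes by consensus score, most broadly endorsed first.
    return sorted(notes, key=lambda n: n.score(), reverse=True)


notes = [
    Note("n1", "Cited source contradicts this claim", helpful=40, not_helpful=10),
    Note("n2", "Adds missing context", helpful=3, not_helpful=0),  # too few ratings yet
    Note("n3", "Misreads the original tweet", helpful=12, not_helpful=30),
]
for note in rank_notes(notes):
    print(f"{note.note_id}: {note.score():.2f} {note.text}")
```

The minimum-ratings cutoff and the smoothing are the sort of guardrails Coleman gestures at when he talks about requiring “consensus from a broad and diverse set of contributors”; whatever Twitter actually ships will presumably be far more sophisticated about rater reputation and diversity.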

“We know there are a number of challenges toward building a community-driven system like this — from making it resistant to manipulation attempts to ensuring it isn’t dominated by a simple majority or biased based on its distribution of contributors,” Coleman said. “We’ll be focused on these things throughout the pilot.”
