
Facebook changes algorithm to demote “borderline content” that almost violates its policy – TechCrunch


Facebook has changed its News Feed algorithm to demote content that comes close to violating its policies prohibiting misinformation, hate speech, violence, bullying, and clickbait, so it’s seen by fewer people even if it’s highly engaging. In a 5,000-word letter published today, Mark Zuckerberg explained the “basic incentive problem” behind the change: “when left unchecked, people will engage disproportionately with more sensationalist and provocative content. Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content.”

Without intervention, engagement with borderline content increases as it gets closer to the policy line, as the first graph in Zuckerberg’s statement below shows. So Facebook is intervening, artificially suppressing the News Feed distribution of this kind of content so that engagement instead declines as content approaches the line, as in the second graph.
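To make the idea concrete, here is a minimal sketch of what this kind of demotion could look like, assuming a hypothetical borderline_score between 0 (clearly fine) and 1 (right at the policy line) produced by a classifier. The function name, the scoring scale, and the quadratic penalty are all invented for illustration; Facebook’s actual ranking system is far more complex and has not been published in this form.

```python
def demoted_rank_score(engagement_score: float, borderline_score: float) -> float:
    """Toy illustration of down-ranking borderline content.

    engagement_score: the score a post would normally earn from engagement signals.
    borderline_score: hypothetical classifier output in [0, 1], where values near 1
                      mean the post sits close to the policy line without crossing it.
    """
    # The penalty grows sharply as content approaches the policy line, so the
    # distribution curve declines near the line instead of spiking.
    penalty = borderline_score ** 2
    return engagement_score * (1.0 - penalty)


# A highly engaging post sitting right next to the policy line is distributed
# far less than a similarly engaging post that is clearly within the rules.
print(demoted_rank_score(100.0, 0.1))   # ~99.0  -- barely affected
print(demoted_rank_score(100.0, 0.95))  # ~9.75  -- heavily demoted
```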

Facebook will apply these penalties not just in the News Feed but across all of its content and recommendation surfaces, including Groups and Pages themselves, to ensure it doesn’t radicalize people by recommending they join communities that are highly engaging thanks to toeing the policy line. “Divisive groups and pages can still fuel polarization,” Zuckerberg notes.

However, users who purposefully want to view borderline content will be given the chance to opt in. Zuckerberg writes that “For those who want to make these decisions themselves, we believe they should have that choice since this content doesn’t violate our standards.” For example, Facebook might create flexible standards for types of content like nudity, where cultural norms vary: some countries bar women from exposing much skin in photographs, while others allow nudity on network television. It may be some time until these opt-ins are available, though, as Zuckerberg says Facebook must first train its AI to reliably detect content that either crosses the line or purposefully approaches the borderline.

Facebook had previously changed the algorithm to demote clickbait. Starting in 2014 it downranked links that people clicked on but quickly bounced from without going back to Like the post on Facebook. By 2016, it was analyzing headlines for common clickbait phrases, and this year it banned clickbait rings for inauthentic behavior. But now it’s giving the demotion treatment to other types of sensational content. That could mean posts with violence that stop short of showing physical injury, or lewd images with genitalia barely covered, or posts that suggest people should commit violence for a cause without directly telling them to.
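Facebook hasn’t published how its headline analysis works, but as a purely illustrative sketch of the kind of signal described above, a naive version might match headlines against known clickbait phrasing. The phrase list and function below are invented for the example; the real system is a trained classifier, not a hand-written list.

```python
# Purely illustrative: a naive headline check in the spirit of the signal
# described above. The phrase list is invented; Facebook's real classifier
# is a trained model, not a hand-written lookup.
CLICKBAIT_PHRASES = [
    "you won't believe",
    "what happened next",
    "this one trick",
    "will blow your mind",
]

def looks_like_clickbait(headline: str) -> bool:
    """Return True if the headline contains a known clickbait phrase."""
    text = headline.lower()
    return any(phrase in text for phrase in CLICKBAIT_PHRASES)

print(looks_like_clickbait("You Won't Believe What Happened Next"))    # True
print(looks_like_clickbait("Facebook updates its News Feed ranking"))  # False
```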

Facebook could end up exposed to criticism, especially from fringe political groups who rely on borderline content to whip up their bases and spread their messages. But with polarization and sensationalism rampant and tearing apart society, Facebook has settled on a stance: it may try to uphold freedom of speech, but users are not entitled to the amplification of that speech.

Below is Zuckerberg’s full written statement on the borderline content:

One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content. This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services. 

[ Graph showing line with growing engagement leading up to the policy line, then blocked ] 

Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content.

This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement. By making the distribution curve look like the graph below where distribution declines as content gets more sensational, people are disincentivized from creating provocative content that is as close to the line as possible.

[ Graph showing line with declining engagement leading up to the policy line, then blocked ]

This process for adjusting this curve is similar to what I described above for proactively identifying harmful content, but is now focused on identifying borderline content instead. We train AI systems to detect borderline content so we can distribute that content less. 

The category we’re most focused on is click-bait and misinformation. People consistently tell us these types of content make our services worse — even though they engage with them. As I mentioned above, the most effective way to stop the spread of misinformation is to remove the fake accounts that generate it. The next most effective strategy is reducing its distribution and virality. (I wrote about these approaches in more detail in my note on [Preparing for Elections].)

Interestingly, our research has found that this natural pattern of borderline content getting more engagement applies not only to news but to almost every category of content. For example, photos close to the line of nudity, like with revealing clothing or sexually suggestive positions, got more engagement on average before we changed the distribution curve to discourage this. The same goes for posts that don’t come within our definition of hate speech but are still offensive.

This pattern may apply to the groups people join and pages they follow as well. This is especially important to address because while social networks in general expose people to more diverse views, and while groups in general encourage inclusion and acceptance, divisive groups and pages can still fuel polarization. To manage this, we need to apply these distribution changes not only to feed ranking but to all of our recommendation systems for things you should join.

One common reaction is that rather than reducing distribution, we should simply move the line defining what is acceptable. In some cases this is worth considering, but it’s important to remember that won’t address the underlying incentive problem, which is often the bigger issue. This engagement pattern seems to exist no matter where we draw the lines, so we need to change this incentive and not just remove content. 

I believe these efforts on the underlying incentives in our systems are some of the most important work we’re doing across the company. We’ve made significant progress in the last year, but we still have a lot of work ahead.

By fixing this incentive problem in our services, we believe it’ll create a virtuous cycle: by reducing sensationalism of all forms, we’ll create a healthier, less polarized discourse where more people feel safe participating.

 


SiriusXM’s new satellite radio plan is made for two-car households


Many cars now come with Bluetooth support, which means satellite radio service is less relevant than ever. Despite that, SiriusXM is still a thing, and for those who prefer the satellite radio experience while driving, it has a new plan called Platinum VIP. The new plan offers service for two cars, including streaming access, under a single rate.

SiriusXM is best known for its satellite radio service, which provided drivers in the pre-smartphone era with high-quality audio content, including music and radio shows, for their commutes. Though the company has since launched a streaming service that competes with alternatives like Spotify, it still offers satellite radio for those who have a car radio system that supports it.

There are some advantages that SiriusXM offers drivers, namely that you don’t need to mess with your phone at any point and can instead access and control it the way you would old-school radio. SiriusXM also has a large library of exclusive content and more niche offerings like talk shows and sports broadcasts.

If you’re someone who owns two cars — or lives with someone who has their own car — and you find value in satellite radio, SiriusXM has a new plan for you. Platinum VIP is priced at $35/month and provides simultaneous access to both the satellite radio service and the company’s streaming app for two cars/users.

As expected, customers get access to the ad-free experience, plus the Platinum VIP plan includes two Howard Stern channels, access to Pandora stations, the company’s original podcasts, a variety of sports content including play-by-play for major games, exclusive comedy bits, and more.

These offerings are joined by what SiriusXM calls “VIP perks,” including access to around 5,000 live concert recordings and videos via Nugs.net. These subscribers also get priority customer service. Platinum VIP joins the company’s other newly renamed plans, including Platinum, Music & Entertainment, Music Showcase, and Choose & Save.


Lucasfilm hires Star Wars fan behind Luke face fix


The creators of Star Wars at Industrial Light and Magic (ILM) and Lucasfilm aren’t shy about the fact that they employ fans of their most popular creations. After so many decades of making Star Wars, it was inevitable that Lucasfilm would eventually be working with professionals who grew up watching Star Wars as kids. This week, we have a friendly reminder that the dream is real: a film and visual effects creator who goes by the name “Shamook” was hired by Lucasfilm after he created his own take on a scene from The Mandalorian.

In the final (pre-credits) scene of The Mandalorian Season 2, we see a digitally retargeted Mark Hamill performance, de-aged and adjusted to look like Luke Skywalker a short period after the events of Return of the Jedi. The result was fantastic, amazing, mind-blowing, and all that good stuff. But it wasn’t perfect.

Shamook saw what they did with this scene and took it upon himself to digitally edit it to make it just a BIT better, publishing the result as a video on his YouTube channel.

This update to the scene makes Luke Skywalker look just a bit more like his Return of the Jedi self. It pushes the performance over the edge, to a place where it feels natural enough that it caught the eye of someone at Industrial Light and Magic. It’s now clear that Shamook was hired by Lucasfilm after the release of that video.

In comments on a different video on his deepfake-focused YouTube channel, Shamook revealed that he’d joined ILM/Lucasfilm “a few months ago” (a few months before he revealed the hiring in a comment made in early July 2021). He added, “now I’ve settled into my job, uploads should start increasing again.” He also revealed that his role with ILM is “Senior Facial Capture Artist.”

It’s quite likely that future Star Wars projects won’t shy away from using tech like this in the very near future. Imagine what we’ll see in the Obi-Wan Kenobi show on Disney+, or The Book of Boba Fett, or the Lando show – the possibilities are endless!


Android 12 Beta 3.1 released with major loopy issue fixes


There’s a new Android update available today, Beta style, just so long as you’re part of the Android Beta program with Google. If you are a part of that program – open to the public, mind you – you’ll likely see a software update to Android 12 Beta 3.1 today. The update ships as build SPB3.210618.016, with the usual x86 (64-bit) and ARM (v8-A) emulator support, the July 2021 security patch, Google Play services version 21.24.13, and API level 31 for developers.
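If you want to confirm what your device is actually running, here’s a quick sketch that reads the relevant system properties over adb. It assumes adb is installed and a device with USB debugging enabled is connected; the property names are standard Android build properties rather than anything specific to this beta.

```python
# Quick check of the installed build over adb. Assumes adb is on your PATH
# and a device with USB debugging enabled is connected.
import subprocess

def getprop(name: str) -> str:
    """Read a single Android system property from the connected device."""
    result = subprocess.run(
        ["adb", "shell", "getprop", name],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print("Build ID:       ", getprop("ro.build.id"))                      # SPB3.210618.016 on Beta 3.1
print("Security patch: ", getprop("ro.build.version.security_patch"))  # should show a July 2021 date
print("SDK / API level:", getprop("ro.build.version.sdk"))             # 31 on the Android 12 betas
```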

If you’re already using a device that’s running Android 12 Beta 3, you’ll more than likely see this update on your smartphone this afternoon. The update mainly includes bug fixes, but it also adds stability to the build in a wide variety of places. You won’t likely notice any major difference versus the previous release unless you’ve hit one of the series of bugs that have been fixed.

SEE ALSO: Android 12 Beta 3 released: Here’s what’s exciting

This update fixes an issue that caused the Android low memory killer daemon (lmkd) to kill processes like a wild maniac, as well as an issue “that sometimes caused the System UI to crash.” If you’ve noticed your device getting stuck in the dreaded boot loop of death since the most recent update, this update should… fix that… if you’ve found a way to get out of the loop, of course.

For those of you that’ve never gotten your phone stuck in a “boot loop”, it’s essentially this: the phone starts up, gets to the point where you’d expect to be able to interact with it, then oops! It starts again, gets to that point where you think you’ll be able to start using it… and so on. It’s an issue that occurs from time to time, and it doesn’t necessarily mean the device is broken or useless – but it’s not always easy to fix.

If you own a Google Pixel smartphone released in the last couple of years, there’s a good chance you’ll have access to this Android 12 Beta 3.1 build. Take a peek at the release notes for this build if you’re interested in getting far more in-depth as a developer or an Android enthusiast. A variety of phones from other brands can also access Android 12 Beta 3.1 now, just as they could with the earlier Android 12 Beta releases this year.
