Facebook Tool Lets Users Know if Their Photos Were Compromised in Latest Data Breach

Last week, Facebook reported a software bug that affected nearly 7 million users and may have exposed a broader set of their photos to app developers than those users intended. The company has confirmed that the faulty API has been fixed and that it has asked the affected third-party developers to delete the photos. Going forward, developers will only be able to access the set of photos users actually chose to share. Additionally, the company has now released a tool that will tell users whether their data was compromised.

The new help page published by Facebook will tell users whether their account was affected. The page requires you to be logged in to the social network. Users who were not affected by the breach will see the following message: “Your Facebook account has not been affected by this issue and the apps you use did not have access to your other photos.”

For those affected, the page will list the apps to which your photos were exposed. In any case, even if your account’s photos were not compromised, Facebook recommends logging into any apps you’ve shared your Facebook photos with, checking which photos they have access to, and revoking that access if more was shared than you intended.

The company had earlier said that only people who had granted third-party apps permission to access their photos were affected. Normally, that permission covers only photos posted to a person’s timeline; Facebook says the bug potentially gave developers access to other photos as well, such as those shared on Marketplace or in Facebook Stories. The bug also affected photos that people uploaded to Facebook but chose not to post, or could not post for technical reasons.

Facebook said these users’ photos may have been exposed over a 12-day window in September (between September 13 and 25), after which the bug was fixed.



Facebook launches a series of tests to inform future changes to its News Feed algorithms – TechCrunch


Facebook may be reconfiguring its News Feed algorithms. After being grilled by lawmakers about the role that Facebook played in the attack on the U.S. Capitol, the company announced this morning it will be rolling out a series of News Feed ranking tests that will ask users to provide feedback about the posts they’re seeing, which will later be incorporated into Facebook’s News Feed ranking process. Specifically, Facebook will be looking to learn which content people find inspirational, what content they want to see less of (like politics), and what other topics they’re generally interested in, among other things.

This will be done through a series of global tests. One will place a survey directly beneath a post, asking, “How much were you inspired by this post?”, with the goal of surfacing inspirational posts closer to the top of the News Feed.


Another test will work to tailor the Facebook News Feed experience to reflect what people want to see. Today, Facebook prioritizes content from the friends, Groups and Pages you’ve chosen to follow, but it decides algorithmically whose posts to show you, and when, based on a variety of signals. These include both implicit and explicit signals: how much you engage with a person’s content (or a Page’s or Group’s) on a regular basis, for example, or whether you’ve marked someone as a “Close Friend” or “Favorite” to indicate you want to see more of their content than others’.
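To make the distinction concrete, here is a minimal sketch of how implicit and explicit signals like these might combine into a single ranking score. Facebook’s actual model is not public, so the field names, weights and decay rate below are invented purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical ranking sketch: Facebook's real News Feed model is not
# public, so every field name and weight here is an assumption.

@dataclass
class Candidate:
    post_id: str
    engagement_rate: float   # implicit signal: how often you engage with this poster
    recency_hours: float     # implicit signal: age of the post in hours
    is_close_friend: bool    # explicit signal: "Close Friend" / "Favorite" flag

def rank_score(c: Candidate) -> float:
    score = 2.0 * c.engagement_rate      # reward accounts you interact with often
    if c.is_close_friend:
        score += 1.5                     # boost people you explicitly favored
    score -= 0.05 * c.recency_hours     # let older posts decay down the feed
    return score

candidates = [
    Candidate("post_a", engagement_rate=0.8, recency_hours=6.0, is_close_friend=False),
    Candidate("post_b", engagement_rate=0.2, recency_hours=1.0, is_close_friend=True),
]
feed = sorted(candidates, key=rank_score, reverse=True)  # highest score shows first
```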

However, just because you’re close to someone in real life doesn’t mean you like what they post to Facebook. This has driven families and friends apart in recent years, as people discovered by way of social media how the people they thought they knew really viewed the world. It’s been a painful reckoning for some, and Facebook hasn’t managed to fix the problem. Today, users still scroll through News Feeds that reinforce their views, no matter how problematic. And with the growing tide of misinformation, the News Feed has gone from merely placing users in a filter bubble to presenting some of them with a full alternate reality, often populated by conspiracy theories.

Facebook’s third test doesn’t tackle this problem head-on, but instead looks to gain feedback about what users want to see as a whole. Facebook says it will begin asking people whether they want to see more or fewer posts on certain topics, such as Cooking, Sports or Politics. Based on users’ collective feedback, Facebook will adjust its algorithms to show more content people say they’re interested in and fewer posts about topics they don’t want to see.
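Continuing the hypothetical sketch above, collective topic feedback could be folded in as a simple per-topic multiplier on the base score. The topic names and weights below are, again, invented for illustration.

```python
# Hypothetical per-topic adjustment: the weights stand in for aggregated
# survey feedback (under 1.0 = "show me less", over 1.0 = "show me more").
TOPIC_FEEDBACK = {"Cooking": 1.2, "Sports": 1.0, "Politics": 0.6}

def adjusted_score(base_score: float, topic: str) -> float:
    return base_score * TOPIC_FEEDBACK.get(topic, 1.0)  # unknown topics unchanged
```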

The area of politics, specifically, has been an issue for Facebook. The social network has for years been charged with fanning the flames of political discourse, polarizing and radicalizing users through its algorithms, distributing misinformation at scale, and encouraging an ecosystem of divisive clickbait, as publishers sought engagement instead of fairness and balance when reporting the news. In fact, there are now entirely biased and subjective outlets posing as news sources that benefit from algorithms like Facebook’s.

Shortly after the Capitol attack, Facebook announced it would try clamping down on political content in the News Feed for a small percentage of people in the U.S., Canada, Brazil and Indonesia, for a period of time during tests.

Now, the company says it will work to better understand what content is linked to negative News Feed experiences, including political content. In this case, Facebook may ask users, on posts with a lot of negative reactions, what sort of content they want to see less of.

It will also more prominently feature the option to hide posts you find “irrelevant, problematic or irritating.” Although this feature existed before, if you’re in the test group you’ll now be able to tap an X in the upper-right corner of a post to hide it from the News Feed and see fewer posts like it in the future, for a more personalized experience.

It’s not clear that allowing users to pick and choose their topics is the best way to solve the larger problems with negative posts, divisive content or misinformation, though this test is less about the latter and more about making the News Feed “feel” more positive.

As the data is collected from the tests, Facebook will incorporate the learnings into its News Feed ranking algorithms. But it’s not clear to what extent it will be adjusting the algorithm on a global basis versus simply customizing the experience for end users on a more individual basis over time.

The company says the tests will run over the next few months.


Instagram launches tools to filter out abusive DMs based on keywords and emojis, and to block people, even on new accounts – TechCrunch


Facebook and its family of apps have long grappled with the issue of how to better manage — and eradicate — bullying and other harassment on its platform, turning to both algorithms and humans in its efforts to tackle the problem. In the latest development, today Instagram is announcing some new tools of its own.

First, it’s introducing a new way for people to shield themselves from harassment in their direct messages, specifically in message requests: a filter built on a set of words, phrases and emojis that might signal abusive content, including common misspellings of those key terms, which are sometimes used to try to evade the filters. Second, it’s giving users the ability to proactively block people, even if they try to contact the user in question from a new account.

The account-blocking feature is going live globally in the next few weeks, Instagram said, and it confirmed to me that the feature to filter out abusive DMs will start rolling out in the UK, France, Germany, Ireland, Canada, Australia and New Zealand in a few weeks’ time before becoming available in more countries over the next few months.

Notably, these features are only being rolled out on Instagram — not Messenger, and not WhatsApp, Facebook’s other two hugely popular apps that enable direct messaging. The spokesperson confirmed that Facebook hopes to bring it to other apps in the stable later this year. (Instagram and others have regularly issued updates on single apps before considering how to roll them out more widely.)

Instagram said that the feature to scan DMs for abusive content — which will be based on a list of words and emojis that Facebook compiles with the help of anti-discrimination and anti-bullying organizations (it did not specify which), along with terms and emojis you can add yourself — has to be turned on proactively, rather than being enabled by default.

Why? More user license, it seems, and to keep conversations private if users want them to be. “We want to respect peoples’ privacy and give people control over their experiences in a way that works best for them,” a spokesperson said, pointing out that this is similar to how its comment filters work. The control will live in Settings > Privacy > Hidden Words for those who want to turn it on.
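As a rough illustration of how such a hidden-words filter could work, the sketch below matches an incoming message request against a block list after normalizing common evasion tricks like extra spacing and character substitution. The block list, normalization rules and function names are all assumptions; Instagram has not published its implementation.

```python
import re
import unicodedata

# Hypothetical hidden-words filter: the block list and normalization rules
# are invented; Instagram compiles its real list with anti-bullying groups.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    text = unicodedata.normalize("NFKC", text).lower()
    text = text.translate(LEET_MAP)            # undo common character swaps
    return re.sub(r"[\s.\-_]+", "", text)      # strip separators used to dodge filters

def should_hide(message: str, blocklist: set[str]) -> bool:
    msg = normalize(message)
    return any(normalize(term) in msg for term in blocklist)

blocklist = {"loser", "🤡"}                     # platform-supplied or user-added terms
print(should_hide("you l o s 3 r", blocklist))  # True: the misspelling is caught
```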

There are a number of third-party services out there in the wild now building content moderation tools that sniff out harassment and hate speech — they include the likes of Sentropy and Hive — but what has been interesting is that the larger technology companies up to now have opted to build these tools themselves. That is also the case here, the company confirmed.

The system is completely automated, although Facebook noted that it reviews any content that gets reported. And while it doesn’t keep data from those interactions, it confirmed that it will use reported words to continue building out its database of terms that trigger content getting blocked, and to delete, block and report the people sending it.

As for those people, it has taken a long time for Facebook to get smarter about the fact that users with truly ill intent waste no time building multiple accounts to pick up the slack when their primary profiles get blocked. People have been aggravated by this loophole for as long as DMs have been around, even though Facebook’s harassment policies already prohibited people from repeatedly contacting someone who doesn’t want to hear from them, and the company already prohibited recidivism, which, as Facebook describes it, means “if someone’s account is disabled for breaking our rules, we would remove any new accounts they create whenever we become aware of it.”

The company’s approach to Direct Messages has been something of a template for how other social media companies have built these out.

In essence, they are open-ended by default, with one inbox reserved for actual contacts and a second for anyone at all to contact you. While some people just ignore that second box altogether, Instagram is built for more contact with others, not less, which means people dip into those second inboxes more than they might, say, delve into a spam folder in email.

The bigger issue remains a game of whack-a-mole, however, and one that more than just Instagram’s users want solved. As Facebook continues to find itself under the scrutinizing eye of regulators, harassment, and the better management of it, has emerged as a key area the company will be required to solve before others do the solving for it.


Facebook is expanding Spotify partnership with new ‘Boombox’ project – TechCrunch


Facebook is deepening its relationship with music company Spotify and will allow users to listen to music hosted on Spotify while browsing through its apps as part of a new initiative called “Project Boombox,” Facebook CEO Mark Zuckerberg said Monday.

Facebook is building an in-line audio player that will let users listen to songs or playlists shared on its platforms without being sent out to Spotify’s app or website. Zuckerberg highlighted the feature as another product designed to improve the experience of creators on its platforms, specifically the ability of musicians to share their work, “basically making audio a first-class type of media,” he said.

We understand from sources familiar with the Spotify integration that this player will support both music and podcasts. It has already been tested in non-U.S. markets, including Mexico and Thailand, and it’s expected to arrive in about a week.

The news was revealed in a wide-ranging interview with reporter Casey Newton about the company’s future pursuits in audio, as Facebook aims to keep pace with upstart efforts like Clubhouse and increased activity in podcasting.

“We think that audio is going to be a first-class medium and that there are all these different products to be built across this whole spectrum,” said Zuckerberg. “Of course, it includes some areas that, that have been, you know, popular recently like podcasting and kind of live audio rooms like this, but I also think that there’s some interesting things that are under-explored in the area overall.”

Spotify already enjoys a fairly productive relationship with the Facebook and Instagram platforms. In recent years, the music and podcast platform has been integrated more deeply into Instagram Stories, where users can share content from the service, a feature that’s also available in Facebook Stories.
