Every picture posted to Facebook and Instagram gets a caption generated by an image analysis AI, and that AI just got a lot smarter. The improved system should be a treat for visually impaired users, and may help you find your photos faster in the future.
Alt text is a field in an image’s metadata that describes its contents: “A person standing in a field with a horse,” or “a dog on a boat.” This lets the image be understood by people who can’t see it.
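On the web, this description typically travels as the alt attribute on an img element, which screen readers announce in place of the image. A minimal sketch of building such a tag (the filename and description here are made up for illustration):

```python
from html import escape

def img_tag(src: str, alt: str) -> str:
    """Build an <img> element whose alt text describes the picture for screen readers."""
    return f'<img src="{escape(src, quote=True)}" alt="{escape(alt, quote=True)}">'

# Facebook-style generated descriptions hedge with "May be..."
print(img_tag("beach.jpg", "May be an image of a dog on a boat"))
# <img src="beach.jpg" alt="May be an image of a dog on a boat">
```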
These descriptions are often added manually by a photographer or publication, but people uploading photos to social media generally don’t bother, if they even have the option. So the relatively recent ability to automatically generate one — the technology has only just gotten good enough in the last couple years — has been extremely helpful in making social media more accessible in general.
Facebook created its Automatic Alt Text system in 2016, which is eons ago in the field of machine learning. The team has since cooked up many improvements to it, making it faster and more detailed, and the latest update adds an option to generate a more detailed description on demand.
The improved system recognizes 10 times more items and concepts than it did at the start, now around 1,200. And the descriptions include more detail. What was once “Two people by a building” may now be “A selfie of two people by the Eiffel Tower.” (The actual descriptions hedge with “may be…” and will avoid including wild guesses.)
But there’s more detail than that, even if it’s not always relevant. In one example image of people playing drums, the AI notes the relative positions of the people and objects.
Obviously the people are above the drums, and the hats are above the people, none of which really needs to be said for someone to get the gist. But consider an image described as “A house and some trees and a mountain.” Is the house on the mountain or in front of it? Are the trees in front of or behind the house, or maybe on the mountain in the distance?
In order to adequately describe the image, these details should be filled in, even if the general idea can be gotten across with fewer words. If a sighted person wants more detail they can look closer or click the image for a bigger version — someone who can’t do that now has a similar option with this “generate detailed image description” command. (Activate it with a long press in the Android app or a custom action in iOS.)
Perhaps the new description would be something like “A house and some trees in front of a mountain with snow on it.” That paints a better picture, right? (To be clear, these examples are made up, but it’s the sort of improvement that’s expected.)
The new detailed description feature will come to Facebook first for testing, though the improved vocabulary will appear on Instagram soon. The descriptions are also kept simple so they can be easily translated to other languages already supported by the apps, though the feature may not roll out in other countries simultaneously.
Facebook launches a series of tests to inform future changes to its News Feed algorithms – TechCrunch
Facebook may be reconfiguring its News Feed algorithms. After being grilled by lawmakers about the role that Facebook played in the attack on the U.S. Capitol, the company announced this morning it will be rolling out a series of News Feed ranking tests that will ask users to provide feedback about the posts they’re seeing, which will later be incorporated into Facebook’s News Feed ranking process. Specifically, Facebook will be looking to learn which content people find inspirational, what content they want to see less of (like politics), and what other topics they’re generally interested in, among other things.
This will be done through a series of global tests, one of which will involve a survey directly beneath a post asking, “How much were you inspired by this post?”, with the goal of surfacing inspirational posts closer to the top of the News Feed.
Another test will work to tailor the Facebook News Feed experience to reflect what people want to see. Today, Facebook prioritizes content from friends, Groups and Pages you’ve chosen to follow, but it algorithmically decides whose posts to show you and when, based on a variety of signals. These include both implicit and explicit signals: how regularly you engage with a person’s (or Page’s or Group’s) content, for example, or whether you’ve added them as a “Close Friend” or “Favorite” to indicate you want to see more of their content than others’.
However, just because you’re close to someone in real life, that doesn’t mean you like what they post to Facebook. This has driven families and friends apart in recent years, as people discovered by way of social media how people they thought they knew really viewed the world. It’s been a painful reckoning for some. Facebook hasn’t managed to fix the problem, either. Today, users still scroll News Feeds that reinforce their views, no matter how problematic. And with the growing tide of misinformation, the News Feed has gone from just placing users into a filter bubble to presenting a full alternate reality for some, often populated by conspiracy theories.
Facebook’s third test doesn’t necessarily tackle this problem head-on, but instead looks to gain feedback about what users want to see, as a whole. Facebook says that it will begin asking people whether they want to see more or fewer posts on certain topics, like Cooking, Sports, or Politics, and more. Based on users’ collective feedback, Facebook will adjust its algorithms to show more content people say they’re interested in, and fewer posts about topics they don’t want to see.
The area of politics, specifically, has been an issue for Facebook. The social network for years has been charged with helping to fan the flames of political discourse, polarizing and radicalizing users through its algorithms, distributing misinformation at scale, and encouraging an ecosystem of divisive clickbait, as publishers sought engagement instead of fairness and balance when reporting the news. There are now entirely biased and subjective outlets posing as news sources that benefit from algorithms like Facebook’s, in fact.
Shortly after the Capitol attack, Facebook announced it would try clamping down on political content in the News Feed for a small percentage of people in the U.S., Canada, Brazil and Indonesia, for a period of time during tests.
Now, the company says it will work to better understand what content is being linked to negative News Feed experiences, including political content. In this case, Facebook may ask users on posts with a lot of negative reactions what sort of content they want to see less of.
It will also more prominently feature the option to hide posts you find “irrelevant, problematic or irritating.” Although this option existed before, users in the test group will now be able to tap an X in the upper-right corner of a post to hide it from the News Feed and see fewer posts like it in the future, for a more personalized experience.
It’s not clear that allowing users to pick and choose their topics is the best way to solve the larger problems with negative posts, divisive content or misinformation, though this test is less about the latter and more about making the News Feed “feel” more positive.
As the data is collected from the tests, Facebook will incorporate the learnings into its News Feed ranking algorithms. But it’s not clear to what extent it will be adjusting the algorithm on a global basis versus simply customizing the experience for end users on a more individual basis over time.
The company says the tests will run over the next few months.
Instagram launches tools to filter out abusive DMs based on keywords and emojis, and to block people, even on new accounts – TechCrunch
Facebook and its family of apps have long grappled with the issue of how to better manage — and eradicate — bullying and other harassment on its platform, turning both to algorithms and humans in its efforts to tackle the problem better. In the latest development, today, Instagram is announcing some new tools of its own.
First, it’s introducing a new way for people to further shield themselves from harassment in their direct messages, specifically in message requests, by filtering on a set of words, phrases and emojis that might signal abusive content. The filter will also catch common misspellings of those key terms, which senders sometimes use to evade detection. Second, it’s giving users the ability to block people proactively, even if they try to contact the user in question over a new account.
The blocking account feature is going live globally in the next few weeks, Instagram said, and it confirmed to me that the feature to filter out abusive DMs will start rolling out in the UK, France, Germany, Ireland, Canada, Australia and New Zealand in a few weeks’ time before becoming available in more countries over the next few months.
Notably, these features are only being rolled out on Instagram — not Messenger, and not WhatsApp, Facebook’s other two hugely popular apps that enable direct messaging. The spokesperson confirmed that Facebook hopes to bring it to other apps in the stable later this year. (Instagram and others have regularly issued updates on single apps before considering how to roll them out more widely.)
Instagram said that the feature to scan DMs for abusive content has to be turned on proactively, rather than being on by default. The scan will be based on a list of words and emojis that Facebook compiles with the help of anti-discrimination and anti-bullying organizations (it did not specify which), along with any terms and emojis you add yourself.
Why? More user control, it seems, and to keep conversations private if users want them to be. “We want to respect people’s privacy and give people control over their experiences in a way that works best for them,” a spokesperson said, pointing out that this is similar to how its comment filters work. The control will live in Settings > Privacy > Hidden Words for those who want to turn it on.
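Instagram hasn’t published how its filter works, but keyword matching with misspelling variants is a well-understood technique. A purely illustrative sketch, with a hypothetical blocklist (the terms, including deliberate misspellings, stand in for whatever Facebook and its partner organizations compile):

```python
import re
import unicodedata

# Hypothetical blocklist; a real one would include emojis and many misspellings.
BLOCKED = {"loser", "l0ser", "looser"}

def normalize(text: str) -> str:
    """Fold accents and case so variants like 'LóSER' still match 'loser'."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return text.lower()

def should_hide(message: str) -> bool:
    """Return True if any word in the message matches a blocked term."""
    words = re.findall(r"\w+", normalize(message))
    return any(w in BLOCKED for w in words)

print(should_hide("what a L0SER"))  # True
print(should_hide("nice photo!"))   # False
```

A production system would go further (fuzzy matching, leetspeak expansion, per-user custom terms), but the principle of normalizing text before checking a term list is the same.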
There are a number of third-party services out there in the wild now building content moderation tools that sniff out harassment and hate speech — they include the likes of Sentropy and Hive — but what has been interesting is that the larger technology companies up to now have opted to build these tools themselves. That is also the case here, the company confirmed.
The system is completely automated, although Facebook noted that it reviews any content that gets reported. While it doesn’t keep data from those interactions, it confirmed that it will use reported words to keep building its larger database of terms that trigger blocking, and to delete, block and report the people sending that content.
On the subject of those people: it’s been a long time coming, but Facebook is finally getting smarter about how people with ill intent build multiple accounts to pick up the slack when their primary profiles get blocked. People have been aggravated by this loophole for as long as DMs have been around, even though Facebook’s harassment policies already prohibited repeatedly contacting someone who doesn’t want to hear from you, and the company already prohibited recidivism, which, as Facebook describes it, means “if someone’s account is disabled for breaking our rules, we would remove any new accounts they create whenever we become aware of it.”
The company’s approach to Direct Messages has been something of a template for how other social media companies have built these out.
In essence, they are open-ended by default, with one inbox reserved for actual contacts, but a second one for anyone at all to contact you. While some people just ignore that second box altogether, the nature of how Instagram works and is built is for more, not less, contact with others, and that means people will use those second inboxes for their DMs more than they might, for example, delve into their spam inboxes in email.
The bigger issue continues to be a game of whack-a-mole, however, and one that more than just its users want solved. As Facebook continues to find itself under the scrutinizing eye of regulators, harassment (and better management of it) has emerged as a key area the company will be required to solve before others do the solving for it.
Facebook is expanding Spotify partnership with new ‘Boombox’ project – TechCrunch
Facebook is deepening its relationship with music company Spotify and will allow users to listen to music hosted on Spotify while browsing through its apps as part of a new initiative called “Project Boombox,” Facebook CEO Mark Zuckerberg said Monday.
Facebook is building an in-line audio player that will allow users to listen to songs or playlists being shared on the platforms without being externally linked to Spotify’s app or website. Zuckerberg highlighted the feature as another product designed to improve the experience of creators on its platforms, specifically the ability of musicians to share their work, “basically making audio a first-class type of media,” he said.
We understand from sources familiar with the Spotify integration that this player will support both music and podcasts. It has already been tested in non-U.S. markets, including Mexico and Thailand, and it’s expected to arrive in about a week.
The news was revealed in a wide-ranging interview with reporter Casey Newton on the company’s future pursuits in the audio world as Facebook aims to keep pace with upstart efforts like Clubhouse and increased activity in the podcasting world.
“We think that audio is going to be a first-class medium and that there are all these different products to be built across this whole spectrum,” said Zuckerberg. “Of course, it includes some areas that, that have been, you know, popular recently like podcasting and kind of live audio rooms like this, but I also think that there’s some interesting things that are under-explored in the area overall.”
Spotify has already enjoyed a fairly productive relationship with the Facebook and Instagram platforms. In recent years the music and podcast platform has been integrated more deeply into Instagram Stories, where users can share content from the service, a feature that’s also available in Facebook Stories.