
WhatsApp Dark Mode Elements Spotted in Avatar Images, VoIP Screen on Android Beta


WhatsApp Dark Mode has been in development for the chat service's Android and iPhone apps for some time now. The latest WhatsApp beta for Android, however, indicates some notable tweaks related to that ongoing development. The beta update appears to include new avatar placeholders for WhatsApp Dark Mode, along with a new VoIP screen with dark elements that surfaces when a user receives a WhatsApp call. The new changes are part of the latest WhatsApp beta version for Android, though they aren't visible to the public just yet.

The fresh series of Dark Mode-focussed changes exists in WhatsApp beta version 2.19.354 for Android. The update includes avatar images (or placeholder icons) with a grey background for broadcasts, individual profiles, and groups. These would be visible to users once WhatsApp Dark Mode is enabled, reports WhatsApp beta watcher WABetaInfo.

WhatsApp Dark Mode elements now spotted in avatar images 
Photo Credit: WABetaInfo


By default, WhatsApp uses avatar images with a green background, matching the dark green ribbon at the top of the app. This seems to have changed in the latest WhatsApp beta with the new WhatsApp Dark Mode elements.

Alongside the new avatar images, the latest WhatsApp beta for Android devices appears to add dark elements to the VoIP screen to support Dark Mode. The new screen retains the green background of the current VoIP screen, but with a darker tint to reduce eyestrain to some extent. The app is likely to make further interface-level changes before bringing WhatsApp Dark Mode to the public.


WhatsApp could bring a new VoIP screen with dark elements
Photo Credit: WABetaInfo


That being said, the latest WhatsApp Dark Mode tweaks aren't visible publicly. You can, however, download the latest WhatsApp beta on your Android device directly via the Google Play beta programme or through its APK from APK Mirror.

Last week, WhatsApp beta version 2.19.348 for Android devices was released, and it was spotted adding a self-destructing ‘Delete messages’ feature. The addition could allow users to choose how long they want a new message to be visible to their contacts.

The anticipated WhatsApp Dark Mode has also apparently been in development for many months. It was spotted in testing on iPhone last month as well. Nonetheless, the Facebook-owned company has yet to formally showcase the feature.





TikTok calls in outside help with content moderation in Europe – TechCrunch


TikTok is bringing in external experts in Europe in fields such as child safety, young people’s mental health and extremism to form a Safety Advisory Council to help it with content moderation in the region.

The move, announced today, follows an emergency intervention by Italy's data protection authority in January, which ordered TikTok to block users whose age it cannot verify after the death of a girl who, according to local media reports, died of asphyxiation as a result of participating in a blackout challenge on the video-sharing platform.

The social media platform has also been targeted by a series of coordinated complaints by EU consumer protection agencies, which put out two reports last month detailing a number of alleged breaches of the bloc’s consumer protection and privacy rules — including child safety-specific concerns.

“We are always reviewing our existing features and policies, and innovating to take bold new measures to prioritise safety,” TikTok writes today, putting a positive spin on needing to improve safety on its platform in the region.

“The Council will bring together leaders from academia and civil society from all around Europe. Each member brings a different, fresh perspective on the challenges we face and members will provide subject matter expertise as they advise on our content moderation policies and practices. Not only will they support us in developing forward-looking policies that address the challenges we face today, they will also help us to identify emerging issues that affect TikTok and our community in the future.”

It’s not the first such advisory body TikTok has launched. A year ago it announced a US Safety Advisory Council, after coming under scrutiny from US lawmakers concerned about the spread of election disinformation and wider data security issues, including accusations the Chinese-owned app was engaging in censorship at the behest of the Chinese government.

But the initial appointees to TikTok’s European content moderation advisory body suggest its regional focus is more firmly on child safety/young people’s mental health and extremism and hate speech, reflecting some of the main areas where it’s come under the most scrutiny from European lawmakers, regulators and civil society so far.

TikTok has appointed nine individuals to its European Council (listed here) — initially bringing in external expertise in anti-bullying, youth mental health and digital parenting; online child sexual exploitation/abuse; extremism and deradicalization; anti-bias/discrimination and hate crimes — a cohort it says it will expand as it adds more members to the body (“from more countries and different areas of expertise to support us in the future”).

TikTok is also likely to have an eye on new pan-EU regulation that’s coming down the pipe for platforms operating in the region.

EU lawmakers recently put forward a legislative proposal that aims to dial up accountability for digital service providers over the content they push and monetize. The Digital Services Act, which is currently in draft, going through the bloc’s co-legislative process, will regulate how a wide range of platforms must act to remove explicitly illegal content (such as hate speech and child sexual exploitation).

The Commission's DSA proposal avoided setting specific rules for platforms to tackle a broader array of harms, such as youth mental health, which, by contrast, the UK proposes to address in its plan to regulate social media (aka the Online Safety Bill). However, the planned legislation is intended to drive accountability around digital services in a variety of ways.

For example, it contains provisions that would require larger platforms — a category TikTok would most likely fall into — to provide data to external researchers so they can study the societal impacts of services. It’s not hard to imagine that provision leading to some head-turning (independent) research into the mental health impacts of attention-grabbing services. So the prospect is platforms’ own data could end up translating into negative PR for their services — i.e. if they’re shown to be failing to create a safe environment for users.

Ahead of that oversight regime coming in, platforms have increased incentive to up their outreach to civil society in Europe so they’re in a better position to skate to where the puck is headed.



Facebook will pay $650 million to settle class action suit centered on Illinois privacy law – TechCrunch


Facebook was ordered to pay $650 million Friday for running afoul of an Illinois law designed to protect the state’s residents from invasive privacy practices.

That law, the Biometric Information Privacy Act (BIPA), is a powerful state measure that’s tripped up tech companies in recent years. The suit against Facebook was first filed in 2015, alleging that Facebook’s practice of tagging people in photos using facial recognition without their consent violated state law.

Indeed, 1.6 million Illinois residents will receive at least $345 under the final settlement ruling in California federal court. The final number is $100 million higher than the $550 million Facebook proposed in 2020, which a judge deemed inadequate. Facebook disabled the automatic facial recognition tagging features in 2019, making it opt-in instead and addressing some of the privacy criticisms echoed by the Illinois class action suit.

A cluster of lawsuits accused Microsoft, Google and Amazon of breaking the same law last year after Illinois residents’ faces were used to train their facial recognition systems without explicit consent.

The Illinois privacy law has tangled up some of tech’s giants, but BIPA has even more potential to impact smaller companies with questionable privacy practices. The controversial facial recognition software company Clearview AI now faces its own BIPA-based class action lawsuit in the state after the company failed to dodge the suit by pushing it out of state courts.

A $650 million settlement would be enough to crush any normal company, though Facebook can brush it off much like it did with the FTC’s record-setting $5 billion penalty in 2019. But the Illinois law isn’t without teeth. For Clearview, it was enough to make the company pull out of business in the state altogether.

The law can’t punish a behemoth like Facebook in the same way, but it is one piece in a regulatory puzzle that poses an increasing threat to the way tech’s data brokers have done business for years. With regulators at the federal, state and legislative level proposing aggressive measures to rein in tech, the landmark Illinois law provides a compelling framework that other states could copy and paste. And if big tech thinks navigating federal oversight will be a nightmare, a patchwork of aggressive state laws governing how tech companies do business on a state-by-state basis is an alternate regulatory future that could prove even less palatable.



Twitter rolls out vaccine misinformation warning labels and a strike-based system for violations – TechCrunch


Twitter announced Monday that it would begin injecting new labels into users’ timelines to push back against misinformation that could disrupt the rollout of COVID-19 vaccines. The labels, which will also appear as pop-up messages in the retweet window, are the company’s latest product experiment designed to shape behavior on the platform for the better.

The company will attach notices to tweeted misinformation warning users that the content “may be misleading” and linking out to vetted public health information. These initial vaccine misinformation sweeps, which begin today, will be conducted by human moderators at Twitter and not automated moderation systems.

Twitter says the goal is to use these initial determinations to train its AI systems so that down the road a blend of human and automated efforts will scan the site for vaccine misinformation. The latest misinformation measure will target tweets in English before expanding.

Twitter also introduced a new strike system for violations of its pandemic-related rules. The new system is modeled after a set of consequences it implemented for voter suppression and voting-related misinformation. Within that framework, a user with two or three “strikes” faces a 12-hour account lockout. With four violations, they lose account access for one week, with permanent suspension looming after five strikes.
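The escalating consequences described above can be sketched as a simple threshold function. This is an illustrative model only, not Twitter's actual implementation; in particular, it assumes a first strike carries no account-level action, which the announcement does not specify.

```python
def consequence_for_strikes(strikes: int) -> str:
    """Map an accumulated strike count to an enforcement action.

    Thresholds follow the reported policy: two or three strikes bring a
    12-hour lockout, four strikes a one-week lockout, and five or more
    strikes a permanent suspension.
    """
    if strikes <= 1:
        # Assumption: a single strike triggers no account-level action.
        return "no lockout"
    if strikes <= 3:
        return "12-hour account lockout"
    if strikes == 4:
        return "one-week account lockout"
    return "permanent suspension"
```
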

Twitter introduced its first pandemic-specific policies a year ago, banning tweets promoting false treatment or prevention claims along with any content that could put people at higher risk of spreading COVID-19. In December, Twitter added new rules focused on popular vaccine conspiracy theories and announced that warning labels were on the way.
