WhatsApp has an encrypted child porn problem – TechCrunch

WhatsApp chat groups are being used to spread illegal child pornography, cloaked by the app’s end-to-end encryption. Without the necessary number of human moderators, the disturbing content is slipping by WhatsApp’s automated systems. A report from two Israeli NGOs reviewed by TechCrunch details how third-party apps for discovering WhatsApp groups include “Adult” sections that offer invite links to join rings of users trading images of child exploitation. TechCrunch has reviewed materials showing many of these groups are currently active.

TechCrunch’s investigation shows that Facebook could do more to police WhatsApp and remove this kind of content. Even without technical solutions that would require a weakening of encryption, WhatsApp’s moderators should have been able to find these groups and put a stop to them. Groups with names like “child porn only no adv” and “child porn xvideos” found on the group discovery app “Group Links For Whats” by Lisa Studio don’t even attempt to hide their nature. And a screenshot provided by anti-exploitation startup AntiToxin reveals active WhatsApp groups with names like “Children 💋👙👙” or “videos cp” — a known abbreviation for ‘child pornography’.

A screenshot from today of active child exploitation groups on WhatsApp. Phone numbers and photos redacted. Provided by AntiToxin.

Better manual investigation of these group discovery apps and WhatsApp itself should have immediately led these groups to be deleted and their members banned. While Facebook doubled its moderation staff from 10,000 to 20,000 in 2018 to crack down on election interference, bullying and other policy violations, that staff does not moderate WhatsApp content. With just 300 employees, WhatsApp runs semi-independently, and the company confirms it handles its own moderation efforts. That’s proving inadequate for policing a 1.5 billion-user community.

The findings from the NGOs Screen Savers and Netivei Reshe were reported today by the Financial Times, but TechCrunch is publishing the full report, their translated letter to Facebook, translated emails with Facebook, their police report, plus the names of child pornography groups on WhatsApp and group discovery apps listed above. A startup called AntiToxin Technologies that researches the topic has backed up the report, providing the screenshot above and saying it’s identified more than 1,300 videos and photographs of minors involved in sexual acts on WhatsApp groups. Given that Tumblr’s app was recently temporarily removed from the Apple App Store for allegedly harboring child pornography, we’ve asked Apple if it will temporarily suspend WhatsApp, but have not heard back.

Uncovering a nightmare

In July 2018, the NGOs became aware of the issue after a man reported to one of their hotlines that he’d seen hardcore pornography on WhatsApp. In October, they spent 20 days cataloging more than 10 of the child pornography groups, their content and the apps that allow people to find them.

The NGOs began contacting Facebook’s head of Policy, Jordana Cutler, starting September 4th. They requested a meeting four times to discuss their findings. Cutler asked for email evidence but did not agree to a meeting, instead following Israeli law enforcement’s guidance to instruct researchers to contact the authorities. The NGOs reported their findings to Israeli police but declined to provide Facebook with their research. WhatsApp only received their report and the screenshot of active child pornography groups today from TechCrunch.

Listings from a group discovery app of child exploitation groups on WhatsApp. URLs and photos have been redacted.

WhatsApp tells me it’s now investigating the groups visible from the research we provided. A Facebook spokesperson tells TechCrunch, “Keeping people safe on Facebook is fundamental to the work of our teams around the world. We offered to work together with police in Israel to launch an investigation to stop this abuse.” A statement from the Israeli Police’s head of the Child Online Protection Bureau, Meir Hayoun, notes that: “In past meetings with Jordana, I instructed her to always tell anyone who wanted to report any pedophile content to contact the Israeli police to report a complaint.”

A WhatsApp spokesperson tells me that while legal adult pornography is allowed on WhatsApp, it banned 130,000 accounts in a recent 10-day period for violating its policies against child exploitation. In a statement, WhatsApp wrote that:

WhatsApp has a zero-tolerance policy around child sexual abuse. We deploy our most advanced technology, including artificial intelligence, to scan profile photos and images in reported content, and actively ban accounts suspected of sharing this vile content. We also respond to law enforcement requests around the world and immediately report abuse to the National Center for Missing and Exploited Children. Sadly, because both app stores and communications services are being misused to spread abusive content, technology companies must work together to stop it.

But it’s that over-reliance on technology and subsequent under-staffing that seems to have allowed the problem to fester. AntiToxin’s CEO Zohar Levkovitz tells me, “Can it be argued that Facebook has unwittingly growth-hacked pedophilia? Yes. As parents and tech executives we cannot remain complacent to that.”

Automated moderation doesn’t cut it

WhatsApp introduced an invite link feature for groups in late 2016, making it much easier to discover and join groups without knowing any members. Competitors like Telegram had benefited as engagement in their public group chats rose. WhatsApp likely saw group invite links as an opportunity for growth, but didn’t allocate enough resources to monitor groups of strangers assembling around different topics. Apps sprang up to allow people to browse different groups by category. Some usage of these apps is legitimate, as people seek communities to discuss sports or entertainment. But many of these apps now feature “Adult” sections that can include invite links to both legal pornography-sharing groups as well as illegal child exploitation content.

A WhatsApp spokesperson tells me that it scans all unencrypted information on its network — basically anything outside of chat threads themselves — including user profile photos, group profile photos and group information. It seeks to match content against the PhotoDNA banks of indexed child pornography that many tech companies use to identify previously reported inappropriate imagery. If it finds a match, that account, or that group and all of its members, receive a lifetime ban from WhatsApp.
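PhotoDNA itself is proprietary, so as a rough illustration of the general technique described above (compute a perceptual hash of an image, then compare it against a bank of indexed hashes), here is a minimal sketch using a simple “average hash”. The function names, the distance threshold and the banned_hashes set are hypothetical stand-ins, not WhatsApp’s actual code; the only dependency is the Pillow imaging library.

```python
# Minimal sketch of hash-bank matching. PhotoDNA is proprietary; this uses
# a basic "average hash" purely to illustrate the shape of the technique.
# All names and thresholds here are hypothetical.
from PIL import Image  # pip install Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Downscale to a size x size grayscale image; hash pixels against the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of bits by which two hashes differ."""
    return bin(a ^ b).count("1")


def matches_bank(path: str, banned_hashes: set[int], max_distance: int = 5) -> bool:
    """True if the image hash is within max_distance bits of any indexed hash."""
    h = average_hash(path)
    return any(hamming(h, banned) <= max_distance for banned in banned_hashes)
```

A match would then trigger the enforcement step WhatsApp describes; imagery that doesn’t match but still looks suspect falls through to the human review discussed below.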

A WhatsApp group discovery app’s listings of child exploitation groups on WhatsApp

If imagery doesn’t match the database but is suspected of showing child exploitation, it’s manually reviewed. If found to be illegal, WhatsApp bans the accounts and/or groups, prevents the imagery from being uploaded in the future and reports the content and accounts to the National Center for Missing and Exploited Children. The one example group reported to WhatsApp by the Financial Times was already flagged for human review by its automated system, and was then banned along with all 256 members.

To discourage abuse, WhatsApp says it limits groups to 256 members and purposefully does not provide a search function for people or groups within its app. It does not encourage the publication of group invite links and the vast majority of groups have six or fewer members. It’s already working with Google and Apple to enforce its terms of service against apps like the child exploitation group discovery apps that abuse WhatsApp. Those kinds of groups already can’t be found in Apple’s App Store, but remain available on Google Play. We’ve contacted Google Play to ask how it addresses illegal content discovery apps and whether Group Links For Whats by Lisa Studio will remain available, and will update if we hear back. [Update 3pm PT: Google has not provided a comment but the Group Links For Whats app by Lisa Studio has been removed from Google Play. That’s a step in the right direction.]

But the larger question is that if WhatsApp was already aware of these group discovery apps, why wasn’t it using them to track down and ban groups that violate its policies? A spokesperson claimed that group names with “CP” or other indicators of child exploitation are some of the signals it uses to hunt these groups, and that names in group discovery apps don’t necessarily correlate to the group names on WhatsApp. But TechCrunch then provided a screenshot showing active groups within WhatsApp as of this morning, with names like “Children 💋👙👙” or “videos cp”. That shows that WhatsApp’s automated systems and lean staff are not enough to prevent the spread of illegal imagery.

The situation also raises questions about the trade-offs of encryption as some governments like Australia seek to prevent its usage by messaging apps. The technology can protect free speech, improve the safety of political dissidents and prevent censorship by both governments and tech platforms. However, it also can make detecting crime more difficult, exacerbating the harm caused to victims.

WhatsApp’s spokesperson tells me that it stands behind strong end-to-end encryption that protects conversations with loved ones, doctors and more. They said there are plenty of good reasons for end-to-end encryption and it will continue to support it. Changing that in any way, even to aid in catching those who exploit children, would require a significant change to the privacy guarantees it’s given users. They suggested that on-device scanning for illegal content would have to be implemented by phone makers to prevent its spread without hampering encryption.

But for now, WhatsApp needs more human moderators willing to use proactive and unscalable manual investigation to address its child pornography problem. With Facebook earning billions in profit per quarter and staffing up its own moderation ranks, there’s no reason WhatsApp’s supposed autonomy should prevent it from applying adequate resources to the issue. WhatsApp sought to grow through big public groups, but failed to implement the necessary precautions to ensure they didn’t become havens for child exploitation. Tech companies like WhatsApp need to stop assuming cheap and efficient technological solutions are sufficient. If they want to make money off huge user bases, they must be willing to pay to protect and police them.



Google consolidates its Chrome and Android password managers – TechCrunch


Google today announced an update to its password manager that will finally introduce a consistent look-and-feel across the service’s Chrome and Android implementations. Users will soon see a new unified user experience that will automatically group multiple passwords for the same sites or apps together, as well as a new shortcut on the Android home screen to get access to these passwords.

In addition to this, Google is also now adding a new password-related feature to Chrome on iOS, which can now generate strong passwords for you (once you set Chrome as an autofill provider).


Meanwhile, on Android, Google’s password check can now also flag weak and reused passwords and help you change them automatically, while Chrome users across platforms will now see compromised password warnings.
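Google hasn’t detailed how these checks are implemented under the hood, but the underlying ideas are straightforward. Below is a minimal, hypothetical sketch (emphatically not Google’s code) of the three concepts described above: generating a strong password from a cryptographically secure source, flagging weak passwords, and detecting reuse across a vault modeled as a simple site-to-password mapping.

```python
# Hypothetical sketch of password-manager checks; not Google's implementation.
import secrets
import string
from collections import Counter

ALPHABET = string.ascii_letters + string.digits + string.punctuation


def generate_password(length: int = 16) -> str:
    """Generate a strong password by drawing characters from a CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))


def is_weak(password: str) -> bool:
    """Crude weakness check: too short, or too few character classes used."""
    classes = (string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation)
    used = sum(any(c in cls for c in password) for cls in classes)
    return len(password) < 12 or used < 3


def find_reused(vault: dict[str, str]) -> list[str]:
    """Return passwords stored under more than one site in the vault."""
    counts = Counter(vault.values())
    return [pw for pw, n in counts.items() if n > 1]
```

Compromised-password warnings are a different mechanism again: Chrome checks credentials against known breach data using a privacy-preserving hashed lookup, which is well beyond this sketch.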

With this release today, Google will now also finally let you manually add passwords to its password manager (“due to popular demand,” Google says) and the company is bringing Touch-to-Login to Chrome on Android to log you in to supported sites with a single tap.



TaskHuman lands $20M to expand its virtual coaching platform – TechCrunch


TaskHuman, a professional development platform focused on coaching, today announced that it raised $20 million in Series B funding led by Madrona with participation from Impact Venture Capital, RingCentral Ventures, Sure Ventures, USVP, Gaingels, PeopleTech Angels, Propel(x) and Zoom Ventures. The latest infusion brings the company’s total raised to $35 million, which CEO Ravi Swaminathan said is being put toward product development, marketing and sales efforts.

Swaminathan and Daniel Mazzella co-founded TaskHuman in 2017, with the goal of connecting users with specialists on topics related to their personal and professional lives. Swaminathan was previously a program and logistics manager at Dell and VP of software solutions at SanDisk, while Mazzella was a system admin at Stamps.com. The two met at Wizr, a startup developing AI systems to analyze security camera footage.

“When it comes to learning and personal development, no amount of generic articles or watching pre-recorded videos [can replace] a real person with experience in a given area. Creating TaskHuman was our response to solve this challenge,” Swaminathan told TechCrunch in an email Q&A. “We started by offering foundational needs, including health and wellness, physical fitness, mental, spiritual, emotional wellbeing, and more. Since then, we’ve continued to expand and support the entire needs of an individual for personal and professional growth, like financial wellbeing, sales and leadership coaching, pet training, travel planning, and more.”

TaskHuman users connect with experts over live video chats. The company claims to have a network of over 1,000 “coaches” across nearly 50 countries, each specializing in distinct areas. An AI-powered search feature lets users search for topics and coaches in natural language (e.g., “I want to lose weight”), while a recommendation engine attempts to personalize the browsing experience by suggesting, for example, similar coaches based on past sessions.

“TaskHuman has a direct relationship with each coach, and we pay them according to the terms of our relationship for their coaching contributions. They are all contractors globally,” Swaminathan said, when asked about the coaching payment structure.

Users can buy access to the TaskHuman network with “TaskHuman minutes,” which can be applied to a chat session with any specialist or topic, Swaminathan says. Alternatively, companies can subscribe to TaskHuman to offer unlimited access to their employees as well as in-app content and group sessions.


Swaminathan makes the case that the enterprise in particular stands to benefit from TaskHuman’s platform. It’s true that corporate training programs tend to be a mixed bag, with only 25% of respondents to a McKinsey survey saying that their company’s training improved their job performance. According to another survey, 75% of managers were dissatisfied with their company’s learning and development function in 2019.

“At the board and C-suite level, many companies view insufficient attention to employee well-being as a threat to productivity and, conversely, a strong commitment to each worker’s physical, mental, and spiritual prosperity as a competitive advantage for recruiting and retaining talent in a time of labor shortages and the ‘Great Resignation,’” Swaminathan said. “From case studies, we have found return on investment in four main areas: preventing burnout, reducing employee attrition, improving employee engagement and recruitment, and reducing medical cost claims.”

Competition in the crowded e-learning field spans BetterUp, CoachHub and Torch. Swaminathan argues that his company’s offering is broader in scope, however, and offers superior access to specialists because it doesn’t require scheduling sessions in advance.

“We have found that the pandemic really allowed people to go beyond their comfort zones and embrace video technologies like TaskHuman, Zoom, RingCentral, and others,” Swaminathan said. “We feel a need to accelerate our mission during these difficult times to help people in both their personal and professional lives, and we feel an urgency to combat the current mental health crisis and Great Resignation culture by fulfilling the dire craving for 1:1, personalized engagement for personal and professional growth.”

Certainly, TaskHuman has benefited from the pandemic, which spurred coaches of all types to move online. According to a 2021 survey by the International Coaching Federation, 83% of coaches increased their use of audio-video platforms for coaching during the health crisis while 82% saw a decrease for in-person sessions.

TaskHuman says that its customers include Zoom, Dr. Scholl’s, RingCentral and public and government institutions like Purdue University, Oakland Housing Authority and Job Corps centers run by the U.S. Department of Labor. While Swaminathan declined to disclose financials, he said that annual recurring revenue has grown by more than 5 times year over year.

“Our company is laser-focused on global expansion and scaling its network of coaches,” Swaminathan said. “We will be continually adding to the set of human experience and expertise that are available on the platform and expanding support for providers in even more languages and countries around the world.”


European Union keeps mobile roaming fees at bay for another decade – TechCrunch


Five years ago, the European Union passed rules which largely ended mobile roaming fees for citizens travelling with their devices across borders within the bloc. Today lawmakers are reupping the regulation that lets EU citizens “roam like at home” for a full decade, meaning European consumers can keep avoiding most extra fees when travelling within another of the 27 EU Member States (or the EEA) until at least 2032.

The updated regulation also brings some new additions — including a focus on quality of service, with a requirement that consumers have access to the same services abroad in the EU as at home when the same networks and technologies are available on the network in the visited Member State.

This means, for example, that a roaming customer who can use 5G services at home should also have 5G roaming services — where they are available — in the visited Member State.

The quality of service provision does not mean a guarantee of getting the same mobile network speed when roaming, since network speeds can vary, but the Commission says the new rules “aim to ensure that when similar quality or speeds are available in the visited network, the domestic operator should ensure the same quality of the roaming service”.

Operators are also required to inform their customers of the quality of services they can expect while roaming by stating this in the roaming contract and publishing information on their website.

The Commission argues that quality of service will be increasingly important as 5G rollouts expand and mobile network technology continues to evolve (its PR includes the phrase “future 6G” — alongside talk of the EU “investing in developing and using innovative digital solutions”).

“As concerns 5G services, it will become more and more important for consumers travelling abroad to know if they could be affected by limitations in available network quality when using certain applications and services,” it suggests. “The new roaming rules aim to enable innovation and business development, ensuring the widest use of innovative services and minimising the risk that citizens would not be able to use certain applications requiring the latest network technology, such as 5G, when crossing internal EU borders.”

The EU’s executive also frames the updated roaming regulation as a boon to digital innovation by reducing the risk of usage disruption since consumers can continuously use their apps and services as they travel across borders in the EU.

The Commission’s PR makes no mention of contrasting recent developments in the UK — which ceased to be an EU Member on January 31 2020, following the 2016 ‘Brexit’ referendum vote to leave the bloc — and where, since the EU roaming regulation ceased to apply, most of the big carriers have quietly announced they will be reintroducing roaming charges for their UK subscribers travelling in the EU.

But UK mobile users are unlikely to have missed the fact that Brexit has meant a return of roaming fees when they want to travel in Europe.

Some Brits may therefore detect a faint trace of trolling in this statement from Thierry Breton, the EU’s commissioner for the internal market, commenting on the extension of fee-free roaming inside the EU, who said: “Remember when we had to switch off mobile data when travelling in Europe — to avoid ending up with a massive roaming bill? Well this is history. And we intend to keep it this way for at least the next 10 years. Better speed, more transparency: We keep improving EU citizens’ lives.”

Transparency

Another focus of the EU’s updated regulation is increasing transparency about the types of services that can still bring additional costs when roaming, such as calling customer service numbers, helpdesks or insurance companies — to help travellers in the bloc avoid related ‘bill shocks’.

The Commission says consumers who are roaming should receive an SMS about “potential increased charges” from using such services.

“The SMS should include a link to a dedicated webpage providing additional information on the types of services and, if available, about the relevant phone numbering ranges,” it notes, suggesting operators may also include information about the types of services that may be subject to higher charges in roaming in their contracts with the consumers.

The updated rules are also intended to improve information provision about and access to emergency communications across the EU — such as via the single European emergency number, 112.

“Dialing the emergency numbers and transmitting information on the location of the caller while roaming should be seamless and for free. Likewise, citizens who cannot place a call to 112 should be able to access emergency services free of charge through alternative means when roaming, for example through real time text or a smartphone application,” says the Commission.

“The new roaming rules also reinforce access to emergency services, through calls and alternative means of communications in case of cross border use. It will also ensure that the transmission of caller location will be seamless and free of charge while using roaming services.”

The EU is continuing to regulate wholesale caps — controlling the maximum prices a visited operator may charge for the use of its network by another operator in order to provide roaming services — with the Commission describing this as “an essential element for the sustainability of ‘roam like at home’ for operators”. Its review of the roaming market concluded that wholesale caps should be further reduced.

“The co-legislators agreed on a gradual reduction of the wholesale caps from 2022 onwards,” it notes. “These caps reflect decreasing operators’ wholesale costs of providing roaming services, provide sufficient investment incentives and maximise sustainability for EU operators.”

The Commission expects these wholesale cost reductions to lead to benefits for consumers — such as more generous data allowances while roaming and less likelihood of consumers having to pay surcharges for data usage that exceeds contract allowances.

Operators will still be able to apply a ‘fair use’ policy — meaning that if a person moves to live in another EU country it will be better for them to move to a local contract, as permanent roaming is no longer considered ‘fair use’.
