Two-thirds of iOS apps disable ATS, an iOS security feature


Two-thirds of iOS apps do not use an Apple technology that can help them support and enforce encrypted communications, according to a report published today by cyber-security firm Wandera.

The company said it scanned over 30,000 iOS applications and found that 67.7% of them intentionally disable a default iOS security feature called ATS (App Transport Security).

ATS was introduced in iOS 9, released in September 2015, and works by blocking plaintext HTTP connections between an app and its remote servers.

At the WWDC 2016 conference, Apple announced plans to make ATS obligatory for all iOS apps starting January 2017, but the company abandoned those plans in December 2016, weeks before the enforcement was to take effect.

ATS still ships enabled by default for all iOS apps, and developers must explicitly disable it when building their apps if they need to talk to HTTP-only domains or to error-prone HTTPS sites that may fall back to HTTP, which would otherwise trigger an ATS block.
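For reference, an app-wide ATS opt-out is a single entry in the app's Info.plist. A minimal sketch, using Apple's documented NSAppTransportSecurity and NSAllowsArbitraryLoads keys:

<!-- Info.plist fragment: disables ATS for the entire app,
     permitting plaintext HTTP connections to any domain -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>

This is the blunt, all-or-nothing form of the opt-out, and the one Wandera found in roughly two-thirds of the apps it scanned.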

In its report, Wandera says that only 27% of the 30,000 apps it scanned are currently using ATS to enforce encrypted communications and block plaintext HTTP connections.

Around 5.3% of apps use ATS but disable it granularly, for certain domains.
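Those granular opt-outs use per-domain exceptions instead of the global switch, so ATS keeps protecting every other connection. A minimal sketch with Apple's documented exception keys; legacy.example.com is a placeholder host used here purely for illustration:

<!-- Info.plist fragment: ATS stays on app-wide, but plaintext HTTP
     is tolerated for one legacy domain and its subdomains -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>legacy.example.com</key>
        <dict>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
            <key>NSIncludesSubdomains</key>
            <true/>
        </dict>
    </dict>
</dict>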

Ad frameworks recommend disabling ATS

According to Wandera, the reason the ATS security feature has seen so little adoption three years after its debut appears to be that ad frameworks and networks often recommend in their documentation that iOS developers disable ATS inside their apps, to prevent iOS from blocking communications to ad servers in case of an error.

“Ad network operators are in a competitive space and want to streamline the process for developers to make their apps compatible,” Wandera said. “By removing ‘roadblocks’ such as encryption requirements, they make it easier for more developers to incorporate their ad networks into their applications.”

This seems to be the primary reason why ATS is so often disabled. For example, Wandera said that ATS is more widely used inside paid apps, which do not rely on advertising revenue, leaving their developers with no profit-driven reason to switch it off.

Chart: ATS usage in paid vs. free apps (Image: Wandera)

Some paid apps still disable ATS, but this appears to be related to content management, with some app makers wanting to make sure that remotely hosted content served over HTTP or error-prone HTTPS loads without glitches.

Furthermore, these percentages should be taken with a grain of salt, as ATS being disabled doesn’t necessarily mean that those iOS apps are not using HTTPS.

“It just means that system safeguards are disabled and hence there is much more room for [HTTPS] error,” the Wandera team said.



New privacy bill would put major limits on targeted advertising – TechCrunch


A new bill seeks to dramatically reshape the online advertising landscape to the detriment of companies like Facebook, Google and data brokers that leverage deep stores of personal information to make money from targeted ads.

The bill, the Banning Surveillance Advertising Act, introduced by Reps. Anna Eshoo (D-CA) and Jan Schakowsky (D-IL) in the House and Cory Booker (D-NJ) in the Senate, would dramatically limit the ways that tech companies serve ads to their users, banning the use of personal data altogether.

Any targeting based on “protected class information, such as race, gender, and religion, and personal data purchased from data brokers” would be off-limits were the bill to pass. Platforms could still target ads based on general location data at the city or state level, and “contextual advertising” based on the content a user is interacting with would still be allowed.

The bill would empower the FTC and state attorneys general to enforce it, with fines of up to $5,000 per incident for knowing violations.

“The ‘surveillance advertising’ business model is premised on the unseemly collection and hoarding of personal data to enable ad targeting,” Rep. Eshoo said. “This pernicious practice allows online platforms to chase user engagement at great cost to our society, and it fuels disinformation, discrimination, voter suppression, privacy abuses, and so many other harms.”

Sen. Booker called the targeted advertising model “predatory and invasive,” stressing how the practice exacerbates misinformation and extremism on social media platforms.

Privacy-minded companies including search engine maker DuckDuckGo and Proton, creator of ProtonMail, backed the legislation along with organizations including the Electronic Privacy Information Center (EPIC), the Anti-Defamation League, Accountable Tech and Common Sense Media.


Snapchat says it’s getting better at finding illicit drug dealers before users do – TechCrunch


Snapchat has faced increasing criticism in recent years as the opioid crisis plays out on social media, often with tragic results.

In October, an NBC investigation reported the stories of a number of young people aged 13 to 23 who died after purchasing fentanyl-laced pills on Snapchat. Snapchat parent company Snap responded by committing to improve its ability to detect and remove this kind of content and by directing users who search for drug-related content to an educational harm reduction portal.

Snapchat has provided a glimpse of its progress against illicit drug sales on the platform, noting that 88 percent of the drug-related content it finds is now identified proactively by automated systems, with community reporting accounting for the other 12 percent. Snap says this number is up by a third since its October update, indicating that more of this content is being detected up front, before users encounter it.

“Since this fall, we have also seen another important indicator of progress: a decline in community-reported content related to drug sales,” Snap wrote in a blog post. “In September, over 23% of drug-related reports from Snapchatters contained content specifically related to sales, and as a result of proactive detection work, we have driven that down to 16% as of this month. This marks a decline of 31% in drug-related reports. We will keep working to get this number as low as possible.”

The company says it also recently introduced a new safeguard that prevents 13-to-17-year-old users from showing up in its Quick Add search results unless they have friends in common with the person searching. That precaution is meant to discourage minors from connecting with users they don’t know, in this case to deter online drug transactions.

Snapchat is also adding information from the CDC on the dangers of fentanyl into its “Heads Up” harm reduction portal and partnering with the Community Anti-Drug Coalitions of America (CADCA), a global nonprofit working to “prevent substance misuse through collaborative community efforts.”

The company works with experts to identify new search terms that sellers use to get around its rules against selling illicit substances. Snapchat calls the work to keep its lexicon of drug sales jargon up to date “a constant, ongoing effort.”

The U.S. Drug Enforcement Administration published a warning last month about the dangers of pills purchased online that contain fentanyl, a synthetic opioid that is deadly in far smaller doses than heroin. Because fentanyl increasingly shows up in illicitly purchased drugs, including those bought online, it can prove fatal to users who believed they were ingesting other substances.

DEA Administrator Anne Milgram called Snapchat and other social media apps “haven[s] for drug traffickers” in a December interview with CBS. “Because drug traffickers are harnessing social media because it is accessible, they’re able to access millions of Americans and it is anonymous and they’re able to sell these fake pills that are not what they say they are,” Milgram said.

While social media platforms dragged their feet about investing in proactive, aggressive content moderation, online drug sales took root. Companies have sealed up some of the more obvious ways to find illicit drugs online (a few years ago it was as simple as searching #painpills on Instagram, for instance) but savvy sellers adapt their practices to get around new rules as they’re made.

The rise of fentanyl is a significant factor exacerbating the American opioid epidemic and the substance’s prevalence in online sales presents unique challenges. In an October hearing on children’s online safety, Snap called the issue the company’s “top priority,” but many lawmakers and families affected by online drug sales remain skeptical that social media companies are taking their role in the opioid crisis seriously.



Twitter expands misinformation reporting feature to more international markets – TechCrunch


Last August, Twitter introduced a new feature in select markets, including the U.S., that invited users to report misinformation they encountered on its platform — including things like election-related or Covid-19 misinformation, for example. Now the company is rolling out the feature to more markets as its test expands. In addition to the U.S., Australia, and South Korea, where the feature had already gone live, Twitter is rolling out the reporting option to users in Brazil, Spain, and the Philippines.

The company also offered an update on the feature’s traction, noting that the company has received more than 3.7 million user-submitted reports since its debut. For context, Twitter has around 211 million monetizable active daily users, as of its most recent earnings, 37 million of which are U.S.-based and 174 million based in international markets.

According to Yoel Roth, Twitter’s head of site integrity, the “vast majority” of content the company takes action on for misinformation is identified proactively through automation (which accounts for 50%+ of enforcements) or proactive monitoring. User-submitted reports via the new feature, however, help Twitter identify patterns of misinformation, the area where the feature has delivered the most success so far, Roth says. This is particularly true for non-text-based misinformation, such as media and URLs that link to content hosted off Twitter’s platform.

But he also noted that when Twitter reviewed a subset of individual reported tweets, only around 10% were considered “actionable” compared with 20-30% in other policy areas, as many tweets analyzed didn’t contain misinformation at all.

In markets where the feature is available, users can report misinformation by clicking the three-dot menu in the upper-right of a tweet, then choosing the “report tweet” option. From there, they’ll be able to click the option “it’s misleading.”

While Twitter already offered a way to report violating content on its platform before the addition of the new flagging option, its existing reporting flow didn’t offer a clear way to report tweets containing misinformation. Instead, users would have to pick from options like “it’s suspicious or spam” or “it’s abusive or harmful,” among others, before further narrowing down how the specific tweet was in violation of Twitter’s rules.

The ability to flag tweets as misinformation also gives users a quicker, more direct way to report content that may not fit into existing rules. The reports themselves are tied into Twitter’s existing enforcement flow, where a combination of human review and moderation is used to determine whether a punitive action should take place. Twitter had also said the reported tweets would be sorted for review based on priority, meaning tweets from accounts with a large following or those showing higher levels of engagement would be reviewed first.

The feature is rolling out at a time when social networks are being pressured to clean up the misinformation they’ve allowed to spread across their platforms, or risk regulation that will enforce such cleanups and perhaps even enact penalties for not doing so.

The flagging option is not the only way Twitter is working to fight misinformation. The company also runs an experiment called Birdwatch, which aims to crowdsource fact-checking by allowing Twitter users to annotate misleading tweets with factual information. This service is still in pilot testing and being updated based on user feedback.
