Biz & IT

Android users’ security and privacy at risk from shadowy ecosystem of pre-installed software, study warns

A large-scale independent study of pre-installed Android apps has cast a critical spotlight on the privacy and security risks that preloaded software poses to users of the Google-developed mobile platform.

The researchers behind the paper, which has been published in preliminary form ahead of a future presentation at the IEEE Symposium on Security and Privacy, unearthed a complex ecosystem of players with a primary focus on advertising and “data-driven services” — which they argue the average Android user is likely to be unaware of (while also likely lacking the ability to uninstall or evade the baked-in software’s privileged access to data and resources).

The study, which was carried out by researchers at the Universidad Carlos III de Madrid (UC3M) and the IMDEA Networks Institute, in collaboration with the International Computer Science Institute (ICSI) at Berkeley and Stony Brook University in New York, encompassed more than 82,000 pre-installed Android apps across more than 1,700 devices manufactured by 214 brands, according to the IMDEA institute.

“The study shows, on the one hand, that the permission model on the Android operating system and its apps allow a large number of actors to track and obtain personal user information,” it writes. “At the same time, it reveals that the end user is not aware of these actors in the Android terminals or of the implications that this practice could have on their privacy. Furthermore, the presence of this privileged software in the system makes it difficult to eliminate it if one is not an expert user.”

An example of a well-known app that can come pre-installed on certain Android devices is Facebook.

Earlier this year the social network giant was revealed to have inked an unknown number of agreements with device makers to preload its app. And while the company has claimed these pre-installs are just placeholders, dormant unless or until a user chooses to actively engage with and download the Facebook app, Android users essentially have to take those claims on trust — with no way to verify them (short of finding a friendly security researcher to conduct a traffic analysis) and no way to remove the app from their device themselves. Facebook pre-loads can only be disabled, not deleted entirely.

The company’s preloads also sometimes include a handful of other Facebook-branded system apps which are even less visible on the device and whose function is even more opaque.

Facebook previously confirmed to TechCrunch there’s no ability for Android users to delete any of its preloaded Facebook system apps either.

“Facebook uses Android system apps to ensure people have the best possible user experience including reliably receiving notifications and having the latest version of our apps. These system apps only support the Facebook family of apps and products, are designed to be off by default until a person starts using a Facebook app, and can always be disabled,” a Facebook spokesperson told us earlier this month.

But the social network is just one of scores of companies involved in a sprawling, opaque and seemingly interlinked data gathering and trading ecosystem that Android supports and which the researchers set out to shine a light into.

In all, 1,200 developers were identified behind the pre-installed software found in the dataset examined, as well as more than 11,000 third-party libraries (SDKs). Many of the preloaded apps were found to display what the researchers dub potentially dangerous or undesired behavior.

The dataset underpinning their analysis was collected via crowdsourcing methods — using a purpose-built app (called Firmware Scanner), and pulling data from the Lumen Privacy Monitor app. The latter gave the researchers visibility into mobile traffic flows, via anonymized network flow metadata obtained from its users.

They also crawled the Google Play Store to compare their findings on pre-installed apps with publicly available apps — and found that just 9% of the package names in their dataset were publicly indexed on Play. 
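At its core, that comparison is a set intersection between the crowd-sourced package names and those publicly crawlable on Play. A minimal sketch of the idea, using entirely invented package names:

```python
# Sketch of the dataset-vs-Play comparison: what share of observed
# pre-installed package names is publicly indexed? All names are invented.
preinstalled = {
    "com.android.dialer",       # also indexed on Play
    "com.oemvendor.telemetry",  # not publicly indexed
    "com.oemvendor.adservice",  # not publicly indexed
}
play_indexed = {"com.android.dialer", "com.example.game"}

share = len(preinstalled & play_indexed) / len(preinstalled)
print(f"{share:.0%} of pre-installed package names found on Play")
```

In the study's real dataset the equivalent figure was just 9%, meaning the vast majority of preloaded packages have no public listing to inspect.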

Another concerning finding relates to permissions. In addition to the standard permissions defined in Android (i.e. those that can be controlled by the user), the researchers say they identified more than 4,845 custom or “personalized” permissions introduced by different actors in the manufacture and distribution of devices.

So that means they found systematic workarounds of user permissions, enabled by scores of commercial deals cut in a non-transparent, data-driven background ecosystem of Android software.

“This type of permission allows the apps advertised on Google Play to evade Android’s permission model to access user data without requiring their consent upon installation of a new app,” the IMDEA institute writes.
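Flagging such non-standard permissions at scale can be sketched as a simple manifest scan. Everything below — the manifest snippet, the package and permission names, and the namespace heuristic — is illustrative, not the researchers’ actual tooling:

```python
# Sketch: flag custom (non-AOSP) permissions declared in an Android manifest.
# The manifest XML and heuristic are illustrative assumptions, not the
# study's actual pipeline.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def custom_permissions(manifest_xml: str) -> list[str]:
    """Return declared permission names outside the standard
    android.permission.* namespace -- a rough proxy for 'personalized'
    permissions introduced by OEMs or their partners."""
    root = ET.fromstring(manifest_xml)
    names = [el.get(f"{ANDROID_NS}name", "") for el in root.iter("permission")]
    return [n for n in names if n and not n.startswith("android.permission.")]

# Hypothetical manifest declaring one OEM-defined permission.
manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.oem.service">
  <permission android:name="com.oemvendor.permission.READ_USAGE"
              android:protectionLevel="signature"/>
</manifest>"""

print(custom_permissions(manifest))  # ['com.oemvendor.permission.READ_USAGE']
```

A `signature`-level custom permission like the hypothetical one above is only grantable to apps signed with the same key, which is exactly why opaque signing relationships between OEMs and partners matter for attribution.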

The top-line conclusion of the study is that the supply chain around Android’s open source model is characterized by a lack of transparency — which in turn has allowed an ecosystem rife with potentially harmful behaviors, and even backdoored access to sensitive data, to grow unchecked and become established, all without most Android users’ consent or awareness. (On the latter front the researchers carried out a small-scale survey of consent forms on some Android phones to examine user awareness.)

tl;dr the phrase ‘if it’s free you’re the product’ is a too-trite cherry atop a staggeringly large yet entirely submerged data-gobbling iceberg. (Not least because Android smartphones don’t tend to be entirely free.)

“Potential partnerships and deals — made behind closed doors between stakeholders — may have made user data a commodity before users purchase their devices or decide to install software of their own,” the researchers warn. “Unfortunately, due to a lack of central authority or trust system to allow verification and attribution of the self-signed certificates that are used to sign apps, and due to a lack of any mechanism to identify the purpose and legitimacy of many of these apps and custom permissions, it is difficult to attribute unwanted and harmful app behaviors to the party or parties responsible. This has broader negative implications for accountability and liability in this ecosystem as a whole.”

The researchers go on to make a series of recommendations intended to address the lack of transparency and accountability in the Android ecosystem — including suggesting the introduction and use of certificates signed by globally-trusted certificate authorities, or a certificate transparency repository “dedicated to providing details and attribution for certificates used to sign various Android apps, including pre-installed apps, even if self-signed”.

They also suggest Android devices should be required to document all pre-installed apps, plus their purpose, and name the entity responsible for each piece of software — and do so in a manner that is “accessible and understandable to users”.

“[Android] users are not clearly informed about third-party software that is installed on their devices, including third-party tracking and advertising services embedded in many pre-installed apps, the types of data they collect from them, the capabilities and the amount of control they have on their devices, and the partnerships that allow information to be shared and control to be given to various other companies through custom permissions, backdoors, and side-channels. This necessitates a new form of privacy policy suitable for preinstalled apps to be defined and enforced to ensure that private information is at least communicated to the user in a clear and accessible way, accompanied by mechanisms to enable users to make informed decisions about how or whether to use such devices without having to root their devices,” they argue, calling for overhaul of what’s long been a moribund T&Cs system, from a consumer rights point of view.

In conclusion they couch the study as merely scratching the surface of “a much larger problem”, saying their hope for the work is to bring more attention to the pre-installed Android software ecosystem and encourage more critical examination of its impact on users’ privacy and security.

They also write that they intend to continue improving the tools used to gather the dataset, and say they plan to “gradually” make the dataset itself available to the research community and regulators, to encourage others to dive in.

Google has responded to the paper with the following statement — attributed to a spokesperson:

We appreciate the work of the researchers and have been in contact with them regarding concerns we have about their methodology. Modern smartphones include system software designed by their manufacturers to ensure their devices run properly and meet user expectations. The researchers’ methodology is unable to differentiate pre-installed system software — such as diallers, app stores and diagnostic tools — from malicious software that has accessed the device at a later time, making it difficult to draw clear conclusions. We work with our OEM partners to help them ensure the quality and security of all apps they decide to pre-install on devices, and provide tools and infrastructure to our partners to help them scan their software for behavior that violates our standards for privacy and security. We also provide our partners with clear policies regarding the safety of pre-installed apps, and regularly give them information about potentially dangerous pre-loads we’ve identified.

This report was updated with comment from Google.


A new app helps Iranians hide messages in plain sight

Anti-government graffiti reading in Farsi “Death to the dictator,” sprayed on a wall north of Tehran on September 30, 2009. (Getty Images)

Amid ever-increasing government Internet control, surveillance, and censorship in Iran, a new Android app aims to give Iranians a way to speak freely.

Nahoft, which means “hidden” in Farsi, is an encryption tool that turns up to 1,000 characters of Farsi text into a jumble of random words. You can send this mélange to a friend over any communication platform—Telegram, WhatsApp, Google Chat, etc.—and then they run it through Nahoft on their device to decipher what you’ve said.

Released last week on Google Play by United for Iran, a San Francisco–based human rights and civil liberties group, Nahoft is designed to address multiple aspects of Iran’s Internet crackdown. In addition to generating coded messages, the app can also encrypt communications and embed them imperceptibly in image files, a technique known as steganography. Recipients then use Nahoft to inspect the image file on their end and extract the hidden message.
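As an illustration of the general technique only — Nahoft’s actual, audited embedding scheme is assumed to differ — the classic least-significant-bit (LSB) approach hides message bits in the low bits of pixel data, where the change is imperceptible to the eye:

```python
# Minimal LSB steganography sketch: hide a message in the low bits of
# pixel bytes. This illustrates imperceptible embedding in general; it is
# NOT Nahoft's scheme, and a real tool would encrypt the message first.
def embed(pixels: bytearray, message: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "cover image too small"
    out = bytearray(pixels)  # leave the original cover untouched
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite least-significant bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length)
    )

cover = bytearray(range(256))  # stand-in for raw image pixel data
stego = embed(cover, b"hidden")
print(extract(stego, 6))       # b'hidden'
```

Each carrier byte changes by at most 1, which is why the alteration is invisible in an ordinary photo.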

Iranians can use end-to-end encrypted apps like WhatsApp for secure communications, but Nahoft, which is open source, has a crucial feature in its back pocket for when those aren’t accessible. The Iranian regime has repeatedly imposed near-total Internet blackouts in particular regions or across the entire country, including for a full week in November 2019. Even without connectivity, though, if you already have Nahoft downloaded, you can still use it locally on your device. Enter the message you want to encrypt, and the app spits out the coded Farsi message. From there you can write that string of seemingly random words in a letter, or read it to another Nahoft user over the phone, and they can enter it into their app manually to see what you were really trying to say.
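The offline encode-and-read-aloud flow boils down to a bytes-to-words codebook. The sketch below uses an invented stand-in wordlist and a naive one-word-per-byte mapping; Nahoft’s real codebook, language, and cipher differ, and a real scheme encrypts before encoding:

```python
# Sketch of encoding bytes as innocuous-looking words, as Nahoft does with
# Farsi text. The codebook here is an invented stand-in; a real scheme
# would encrypt the message before encoding it.
WORDS = [f"word{i:03d}" for i in range(256)]  # 256-entry stand-in codebook
INDEX = {w: i for i, w in enumerate(WORDS)}

def encode(message: bytes) -> str:
    # One codebook word per byte of (ideally already-encrypted) input.
    return " ".join(WORDS[b] for b in message)

def decode(text: str) -> bytes:
    return bytes(INDEX[w] for w in text.split())

coded = encode("salam".encode())
print(decode(coded).decode())  # prints: salam
```

Because the output is a string of ordinary dictionary words, it can be written in a letter or read over the phone and re-entered by hand on the recipient’s device.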

“When the Internet goes down in Iran, people can’t communicate with their families inside and outside the country, and for activists everything comes to a screeching halt,” says Firuzeh Mahmoudi, United for Iran’s executive director, who lived through the 1979 Iranian revolution and left the country when she was 12. “And more and more the government is moving toward layered filtering, banning different digital platforms, and trying to come up with alternatives for international services like social media. This is not looking great; it’s the direction that we definitely don’t want to see. So this is where the app comes in.”

Iran is a highly connected country. More than 57 million of its 83 million citizens use the Internet. But in recent years the country’s government has been extremely focused on developing a massive state-controlled network, or intranet, known as the “National Information Network” or SHOMA. This increasingly gives the government the ability to filter and censor data, and to block specific services, from social networks to circumvention tools like proxies and VPNs.

This is why Nahoft was intentionally designed as an app that functions locally on your device rather than as a communication platform. In the case of a full Internet shutdown, users will need to have already downloaded the app to use it. But in general, it will be difficult for the Iranian government to block Nahoft as long as Google Play is still accessible there, according to United for Iran strategic adviser Reza Ghazinouri. Since Google Play traffic is encrypted, Iranian surveillance can’t see which apps users download. So far, Nahoft has been downloaded 4,300 times. It’s possible, Ghazinouri says, that the government will eventually develop its own app store and block international offerings, but for now that capability seems far off. In China, for example, Google Play is banned in favor of offerings from Chinese tech giants like Huawei and a curated version of the iOS App Store.

Ghazinouri and journalist Mohammad Heydari came up with the idea for Nahoft in 2012 and submitted it as part of United for Iran’s second “Irancubator” tech accelerator, which started last year. Operator Foundation, a Texas nonprofit development group focused on Internet freedom, engineered the Nahoft app. And the German penetration testing firm Cure53 conducted two security audits of the app and its encryption scheme, which draws from proven protocols. United for Iran has published the findings from these audits along with detailed reports about how it fixed the problems Cure53 found. In the original app review from December 2020, for example, Cure53 found some major issues, including critical weaknesses in the steganographic technique used to embed messages in photo files. All of these vulnerabilities were fixed before the second audit, which turned up more moderate issues like Android denial-of-service vulnerabilities and a bypass for the in-app auto-delete passcode. Those issues were also fixed before launch, and the app’s GitHub repository contains notes about the improvements.

The stakes are extremely high for an app that Iranians could rely on to circumvent government surveillance and restrictions. Any flaws in the cryptography’s implementation could put people’s secret communications, and potentially their safety, at risk. Ghazinouri says the group took every precaution it could think of. For example, the random word jumbles the app produces are specifically designed to seem inconspicuous and benign. Using real words makes it less likely that a content scanner will flag the coded messages. And United for Iran researchers worked with Operator Foundation to confirm that current off-the-shelf scanning tools don’t detect the encryption algorithm used to generate the coded words. That makes it less likely that censors will be able to detect encoded messages and create a filter to block them.

You can set a passcode needed to open Nahoft and set an additional “destruction code” that will wipe all data from the app when entered.

“There has always been a gap between communities in need and the people who claim to work for them and develop tools for them,” Ghazinouri says. “We’re trying to shrink that gap. And the app is open source, so experts can audit the code for themselves. Encryption is an area where you can’t just ask people to trust you, and we don’t expect anyone to trust us blindly.”

In a 2020 academic keynote, “Crypto for the People,” Brown University cryptographer Seny Kamara made a similar point. The forces and incentives that typically guide cryptographic inquiry and creation of encryption tools, he argued, overlook and dismiss the specific community needs of marginalized people.

Kamara has not audited the code or cryptographic design of Nahoft, but he told WIRED that the goals of the project fit with his ideas about encryption tools made by the people, for the people.

“In terms of what the app is trying to accomplish, I think this is a good example of an important security and privacy problem that the tech industry and academia have no incentive to solve,” he says.

With Iran’s Internet freedom rapidly deteriorating, Nahoft could become a vital lifeline to keep open communication going within the country and beyond.

This story originally appeared on wired.com.

SpaceX Starlink will come out of beta next month, Elon Musk says

Screenshot from the Starlink order page, with the street address blotted out.

SpaceX’s Starlink satellite-broadband service will emerge from beta in October, CEO Elon Musk said last night. Musk answered “next month” in response to a Twitter user who asked when Starlink will come out of beta.

SpaceX began sending email invitations to Starlink’s public beta in October 2020. The service is far from perfect: trees can disrupt the line-of-sight connections to satellites, and the satellite dishes go into “thermal shutdown” in hot areas. But for people in areas where wired ISPs have never deployed cable or fiber, Starlink is still a promising alternative, and service should improve as SpaceX launches more satellites and refines its software.

SpaceX has said it is serving over 100,000 Starlink users in a dozen countries from more than 1,700 satellites. The company has been taking preorders for post-beta service and said in May that “over half a million people have placed an order or put down a deposit for Starlink.”

It is still possible to place pre-orders and submit $99 deposits at the Starlink website, but the site notes that “Depending on location, some orders may take 6 months or more to fulfill.” The deposits are fully refundable.

First 500,000 to order will “likely” get service

There are capacity limits imposed by the laws of physics, and SpaceX hasn’t guaranteed that every person who pre-ordered will actually get Starlink. Musk said in May that the first 500,000 people will “most likely” get service, but that SpaceX will face “[m]ore of a challenge when we get into the several million user range.”

We asked Musk today how many orders will be fulfilled by the end of 2021 and will update this article if we get a response. Musk has said the capacity limits will primarily be a problem in densely populated urban areas, so rural people should have a good chance at getting service.

SpaceX has US permission to deploy 1 million user terminals across the country and is seeking a license to deploy up to 5 million terminals. The number of Starlink pre-orders is up to 600,000 and SpaceX is reportedly speeding up its production of dishes to meet demand, as PCMag wrote last week. 

No changes to pricing yet

In beta, SpaceX has been charging a one-time fee of $499 for the user terminal, mounting tripod, and router, plus $99 per month for service. SpaceX hasn’t announced any changes to the pricing, but that could change when it moves from beta to commercial availability.

In April, SpaceX president and COO Gwynne Shotwell said that Starlink will likely avoid “tiered pricing” and “try to keep [pricing] as simple as possible and transparent as possible.” Shotwell said that SpaceX would keep Starlink in beta “until the network is reliable and great and something we’d be proud of.” SpaceX is also working on ruggedized user terminals for aircraft, ships, large trucks, and RVs.

SpaceX has a Federal Communications Commission license to launch nearly 12,000 low-Earth orbit satellites and is seeking permission to launch an additional 30,000. Amazon, which plans its own satellite constellation, has been urging the FCC to reject the current version of SpaceX’s next-generation Starlink plan. Satellite operator Viasat supported Amazon’s protest and separately urged a federal appeals court to halt SpaceX launches, but judges rejected Viasat’s request for a stay.

Telegram emerges as new dark web for cyber criminals

Telegram has exploded as a hub for cybercriminals looking to buy, sell, and share stolen data and hacking tools, new research shows, as the messaging app emerges as an alternative to the dark web.

An investigation by cyber intelligence group Cyberint, together with the Financial Times, found a ballooning network of hackers sharing data leaks on the popular messaging platform, sometimes in channels with tens of thousands of subscribers, lured by its ease of use and light-touch moderation.

In many cases, the content resembled that of the marketplaces found on the dark web, a group of hidden websites that are popular among hackers and accessed using specific anonymizing software.

“We have recently been witnessing a 100 per cent-plus rise in Telegram usage by cybercriminals,” said Tal Samra, cyber threat analyst at Cyberint.

“Its encrypted messaging service is increasingly popular among threat actors conducting fraudulent activity and selling stolen data… as it is more convenient to use than the dark web.”

The rise in nefarious activity comes as users flocked to the encrypted chat app earlier this year after changes to the privacy policy of Facebook-owned rival WhatsApp prompted many to seek out alternatives.

Launched in 2013, Telegram allows users to broadcast messages to a following via “channels” or create public and private groups that are simple for others to access. Users can also send and receive large data files, including text and zip files, directly via the app.

The platform said it has more than 500 million active users and topped 1 billion downloads in August, according to data from SensorTower.

But its use by the cyber criminal underworld could increase pressure on the Dubai-headquartered platform to bolster its content moderation as it plans a future initial public offering and explores introducing advertising to its service.

According to Cyberint, the number of mentions in Telegram of “Email:pass” and “Combo”—hacker parlance used to indicate that stolen email and passwords lists are being shared—rose fourfold over the past year, to nearly 3,400.

In one public Telegram channel called “combolist,” which had more than 47,000 subscribers, hackers sell or simply circulate large data dumps of hundreds of thousands of leaked usernames and passwords.

Ad for data posted on Telegram.

A post titled “Combo List Gaming HQ” offered 300,000 emails and passwords that it claimed were useful for hacking video game platforms such as Minecraft, Origin, or Uplay. Another purported to have 600,000 logins for users of the services of Russian Internet group Yandex, others for Google and Yahoo.

Telegram removed the channel on Thursday after it was contacted by the Financial Times for comment.

Yet email password leaks account for only a fraction of the worrisome activity on the Telegram marketplace. Other types of data traded include financial data such as credit card information, copies of passports and credentials for bank accounts and sites such as Netflix, the research found. Online criminals also share malicious software, exploits and hacking guides via the app, Cyberint said.

Meanwhile, links to Telegram groups or channels shared inside forums on the dark web jumped to more than 1 million in 2021, from 172,035 the previous year, as hackers increasingly direct users to the platform as an easier-to-use alternative or parallel information center.

The research follows a separate report earlier this year by vpnMentor, which found data dumps circulating on Telegram from previous hacks and data leaks of companies including Facebook, marketing software provider Click.org, and dating site Meet Mindful, among others.

“In general, it appears that most data leaks and hacks are only shared on Telegram after being sold on the dark web—or the hacker failed to find a buyer and decided to share the information publicly and move on,” vpnMentor said.

Still, it dubbed the trend “a serious escalation in the ongoing surge of cyber crime,” noting that some users in these groups appeared less tech savvy than a typical dark web user.

Telegram said it was unable to verify the vpnMentor findings because the researchers had not shared details identifying which channels these alleged leaks were in.

Samra said the transition for cybercriminals from the dark web to Telegram was taking place in part because of the anonymity afforded by encryption—but noted that many of these groups were also public.

Post from a Telegram channel called “combolist.”

Telegram is also more accessible, provides better functionality, and is generally less likely to be tracked by law enforcement when compared to dark web forums, he added.

“In some cases, it’s easier to find buyers on Telegram rather than a forum because everything is smoother and quicker. Access is easier… and data can be shared much more openly.”

Hackers are less inclined to use WhatsApp both for privacy reasons and because it displays users’ numbers in group chats, unlike Telegram, Cyberint said. Encrypted app Signal remains smaller and tends to be used for more general messaging among people who know each other rather than forum-style groups, it added.

Telegram has long taken a more lax approach to content moderation than larger social media apps such as Facebook and Twitter, attracting scrutiny for allowing hate groups and conspiracy theories to flourish. In January, it began shutting down public extremist and white supremacist groups—for the first time—in the wake of the Capitol riots amid concerns it was being used to promote violence.

The Cyberint research—particularly the uncovering of public, searchable groups for cybercriminals—raises further questions about Telegram’s content moderation policies and enforcement at a time when chief executive Pavel Durov has said the company is preparing to sell advertisements in public Telegram channels.

It also comes as the company prepares to head for public markets after raising more than $1 billion through bond sales in March to investors including Mubadala Investment Company, the Gulf emirate’s large sovereign wealth fund, and Abu Dhabi Catalyst Partners, a joint venture between Mubadala and the $4 billion New York hedge fund Falcon Edge Capital.

Telegram said in a statement that it “has a policy for removing personal data shared without consent.” It added that each day, its “ever growing force of professional moderators” removes more than 10,000 public communities for terms of service violations following user reports.

© 2021 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
