
Android users’ security and privacy at risk from shadowy ecosystem of pre-installed software, study warns


A large-scale independent study of pre-installed Android apps has cast a critical spotlight on the privacy and security risks that preloaded software poses to users of the Google-developed mobile platform.

The researchers behind the paper, which has been published in preliminary form ahead of a future presentation at the IEEE Symposium on Security and Privacy, unearthed a complex ecosystem of players with a primary focus on advertising and “data-driven services” — which they argue the average Android user is unlikely to be aware of (while also likely lacking the ability to uninstall the baked-in software or evade its privileged access to data and resources).

The study, which was carried out by researchers at the Universidad Carlos III de Madrid (UC3M) and the IMDEA Networks Institute, in collaboration with the International Computer Science Institute (ICSI) at Berkeley and Stony Brook University in New York, encompassed more than 82,000 pre-installed Android apps across more than 1,700 devices manufactured by 214 brands, according to the IMDEA institute.

“The study shows, on the one hand, that the permission model on the Android operating system and its apps allow a large number of actors to track and obtain personal user information,” it writes. “At the same time, it reveals that the end user is not aware of these actors in the Android terminals or of the implications that this practice could have on their privacy. Furthermore, the presence of this privileged software in the system makes it difficult to eliminate it if one is not an expert user.”

An example of a well-known app that can come pre-installed on certain Android devices is Facebook.

Earlier this year the social network giant was revealed to have inked an unknown number of agreements with device makers to preload its app. And while the company has claimed these pre-installs are just placeholders unless or until a user chooses to actively engage with and download the Facebook app, Android users essentially have to take those claims on trust, with no ability to verify them (short of finding a friendly security researcher to conduct a traffic analysis) nor to remove the app from their devices themselves. Facebook pre-loads can only be disabled, not deleted entirely.

The company’s preloads also sometimes include a handful of other Facebook-branded system apps which are even less visible on the device and whose function is even more opaque.

Facebook previously confirmed to TechCrunch there’s no ability for Android users to delete any of its preloaded Facebook system apps either.

“Facebook uses Android system apps to ensure people have the best possible user experience including reliably receiving notifications and having the latest version of our apps. These system apps only support the Facebook family of apps and products, are designed to be off by default until a person starts using a Facebook app, and can always be disabled,” a Facebook spokesperson told us earlier this month.

But the social network is just one of scores of companies involved in a sprawling, opaque and seemingly interlinked data gathering and trading ecosystem that Android supports and which the researchers set out to shine a light into.

In all, 1,200 developers were identified behind the pre-installed software found in the data-set they examined, along with more than 11,000 third-party libraries (SDKs). Many of the preloaded apps were found to display what the researchers dub potentially dangerous or undesired behavior.

The data-set underpinning their analysis was collected via crowd-sourcing methods — using a purpose-built app (called Firmware Scanner), and pulling data from the Lumen Privacy Monitor app. The latter provided the researchers with visibility on mobile traffic flow — via anonymized network flow metadata obtained from its users. 
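For readers curious what such a scan actually involves: the basic enumeration relies on Android’s own PackageManager API. The Kotlin sketch below is a minimal illustration of our own, not the researchers’ Firmware Scanner code; it lists packages that ship in the system image, and it assumes the scanning app can see other packages at all (on Android 11 and later that requires a <queries> declaration or the QUERY_ALL_PACKAGES permission).

```kotlin
import android.content.Context
import android.content.pm.ApplicationInfo

// Hypothetical helper, not the study's tooling: list packages installed in
// the system partition, i.e. software that shipped with the device, including
// preloads the user has since updated from Google Play.
fun listPreinstalledPackages(context: Context): List<String> {
    val pm = context.packageManager
    return pm.getInstalledPackages(0)
        .filter { pkg ->
            val app = pkg.applicationInfo ?: return@filter false
            // FLAG_SYSTEM marks apps in the system image; FLAG_UPDATED_SYSTEM_APP
            // marks preloads that have later been updated by the user.
            (app.flags and (ApplicationInfo.FLAG_SYSTEM or
                            ApplicationInfo.FLAG_UPDATED_SYSTEM_APP)) != 0
        }
        .map { it.packageName }
        .sorted()
}
```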

They also crawled the Google Play Store to compare their findings on pre-installed apps with publicly available apps — and found that just 9% of the package names in their dataset were publicly indexed on Play. 

Another concerning finding relates to permissions. In addition to the standard permissions defined in Android (i.e. those that can be controlled by the user), the researchers say they identified more than 4,845 owner or “personalized” permissions defined by different actors in the manufacture and distribution of devices.

That means they found systematic workarounds of Android’s user-facing permission model, enabled by scores of commercial deals cut in an opaque, data-driven background software ecosystem.

“This type of permission allows the apps advertised on Google Play to evade Android’s permission model to access user data without requiring their consent upon installation of a new app,” the IMDEA institute writes.
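To make that concrete, custom permissions are ordinary manifest declarations, and any on-device app can enumerate the ones other packages define. The Kotlin sketch below is our illustration rather than the paper’s tooling: it dumps permission names declared by system packages that fall outside the standard android.permission.* namespace, which is roughly what the researchers count as custom or “personalized” permissions. The same package-visibility caveat as above applies.

```kotlin
import android.content.Context
import android.content.pm.ApplicationInfo
import android.content.pm.PackageManager

// Hypothetical helper: map each system package to the permission names it
// defines itself (anything outside the android.permission.* namespace).
fun listCustomPermissions(context: Context): Map<String, List<String>> {
    val pm = context.packageManager
    return pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)
        .filter { pkg ->
            val app = pkg.applicationInfo ?: return@filter false
            (app.flags and ApplicationInfo.FLAG_SYSTEM) != 0
        }
        .associate { pkg ->
            val custom = pkg.permissions          // permissions *declared* by this package
                ?.map { it.name }
                ?.filterNot { it.startsWith("android.permission.") }
                ?: emptyList()
            pkg.packageName to custom
        }
        .filterValues { it.isNotEmpty() }
}
```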

The top-line conclusion of the study is that the supply chain around Android’s open source model is characterized by a lack of transparency, which in turn has enabled an ecosystem rife with potentially harmful behaviors and even backdoored access to sensitive data to grow unchecked and become established, all without most Android users’ consent or awareness. (On the latter front, the researchers carried out a small-scale survey of the consent forms on some Android phones to examine user awareness.)

tl;dr: the phrase ‘if it’s free you’re the product’ is too trite a cherry atop a staggeringly large yet entirely submerged data-gobbling iceberg. (Not least because Android smartphones don’t tend to be entirely free.)

“Potential partnerships and deals — made behind closed doors between stakeholders — may have made user data a commodity before users purchase their devices or decide to install software of their own,” the researchers warn. “Unfortunately, due to a lack of central authority or trust system to allow verification and attribution of the self-signed certificates that are used to sign apps, and due to a lack of any mechanism to identify the purpose and legitimacy of many of these apps and custom permissions, it is difficult to attribute unwanted and harmful app behaviors to the party or parties responsible. This has broader negative implications for accountability and liability in this ecosystem as a whole.”

The researchers go on to make a series of recommendations intended to address the lack of transparency and accountability in the Android ecosystem — including suggesting the introduction and use of certificates signed by globally-trusted certificate authorities, or a certificate transparency repository “dedicated to providing details and attribution for certificates used to sign various Android apps, including pre-installed apps, even if self-signed”.
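The attribution problem the researchers describe ultimately comes down to signing certificates: today, about the best an auditor can do is fingerprint whatever (often self-signed) certificate an APK carries and try to match it to a vendor. The Kotlin sketch below shows that step as a rough illustration of our own, not a tool from the study; the package name passed in would be a placeholder, and PackageManager.GET_SIGNING_CERTIFICATES requires API level 28 or higher.

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import java.security.MessageDigest

// Hypothetical helper: return the SHA-256 fingerprint(s) of the certificate(s)
// used to sign a given package. With self-signed certificates this is the only
// identity the platform records, which is why attribution is so hard.
fun signingCertSha256(context: Context, packageName: String): List<String> {
    val info = context.packageManager
        .getPackageInfo(packageName, PackageManager.GET_SIGNING_CERTIFICATES)
    val signers = info.signingInfo?.apkContentsSigners ?: return emptyList()
    val sha256 = MessageDigest.getInstance("SHA-256")
    return signers.map { sig ->
        sha256.digest(sig.toByteArray()).joinToString(":") { "%02X".format(it) }
    }
}
```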

They also suggest Android devices should be required to document all pre-installed apps, plus their purpose, and name the entity responsible for each piece of software — and do so in a manner that is “accessible and understandable to users”.

“[Android] users are not clearly informed about third-party software that is installed on their devices, including third-party tracking and advertising services embedded in many pre-installed apps, the types of data they collect from them, the capabilities and the amount of control they have on their devices, and the partnerships that allow information to be shared and control to be given to various other companies through custom permissions, backdoors, and side-channels. This necessitates a new form of privacy policy suitable for preinstalled apps to be defined and enforced to ensure that private information is at least communicated to the user in a clear and accessible way, accompanied by mechanisms to enable users to make informed decisions about how or whether to use such devices without having to root their devices,” they argue, calling for overhaul of what’s long been a moribund T&Cs system, from a consumer rights point of view.

In conclusion they couch the study as merely scratching the surface of “a much larger problem”, saying their hope for the work is to bring more attention to the pre-installed Android software ecosystem and encourage more critical examination of its impact on users’ privacy and security.

They also write that they intend to continue to work on improving the tools used to gather the data-set, as well as saying their plan is to “gradually” make the data-set itself available to the research community and regulators to encourage others to dive in.  

Google has responded to the paper with the following statement — attributed to a spokesperson:

We appreciate the work of the researchers and have been in contact with them regarding concerns we have about their methodology. Modern smartphones include system software designed by their manufacturers to ensure their devices run properly and meet user expectations. The researchers’ methodology is unable to differentiate pre-installed system software — such as diallers, app stores and diagnostic tools — from malicious software that has accessed the device at a later time, making it difficult to draw clear conclusions. We work with our OEM partners to help them ensure the quality and security of all apps they decide to pre-install on devices, and provide tools and infrastructure to our partners to help them scan their software for behavior that violates our standards for privacy and security. We also provide our partners with clear policies regarding the safety of pre-installed apps, and regularly give them information about potentially dangerous pre-loads we’ve identified.

This report was updated with comment from Google.


Deepfake celebrities begin shilling products on social media, causing alarm


A cropped portion of the AI-generated version of Hanks that the actor shared on his Instagram feed.

Tom Hanks

News of AI deepfakes spread quickly when you’re Tom Hanks. On Sunday, the actor posted a warning on Instagram about an unauthorized AI-generated version of himself being used to sell a dental plan. Hanks’ warning spread in the media, including The New York Times. The next day, CBS anchor Gayle King warned of a similar scheme using her likeness to sell a weight-loss product. The now widely reported incidents have raised new concerns about the use of AI in digital media.

“BEWARE!! There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it,” wrote Hanks on his Instagram feed. Similarly, King shared an AI-augmented video with the words “Fake Video” stamped across it, stating, “I’ve never heard of this product or used it! Please don’t be fooled by these AI videos.”

Also on Monday, YouTube celebrity MrBeast posted on social media network X about a similar scam that features a modified video of him with manipulated speech and lip movements promoting a fraudulent iPhone 15 giveaway. “Lots of people are getting this deepfake scam ad of me,” he wrote. “Are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem.”

A screenshot of Tom Hanks’ Instagram post warning of an AI-generated version of him selling a dental plan.

Tom Hanks / Instagram

We have not seen the original Hanks video, but from examples provided by King and MrBeast, it appears the scammers likely took existing videos of the celebrities and used software to change lip movements to match AI-generated voice clones of them that had been trained on vocal samples pulled from publicly available work.

The news comes amid a larger debate on the ethical and legal implications of AI in the media and entertainment industry. The recent Writers Guild of America strike featured concerns about AI as a significant point of contention. SAG-AFTRA, the union representing Hollywood actors, has expressed worries that AI could be used to create digital replicas of actors without proper compensation or approval. And recently, Robin Williams’ daughter, Zelda Williams, made the news when she complained about people cloning her late father’s voice without permission.

As we’ve warned, convincing AI deepfakes are an increasingly pressing issue that may undermine shared trust and threaten the reliability of communications technologies by casting doubt on someone’s identity. Dealing with it is a tricky problem. Currently, companies like Google and OpenAI have plans to watermark AI-generated content and add metadata to track provenance. But historically, those watermarks have been easily defeated, and open source AI tools that do not add watermarks are available.

A screenshot of Gayle King’s Instagram post warning of an AI-modified video of the CBS anchor.

Gayle King / Instagram

Similarly, attempts at restricting AI software through regulation may remove generative AI tools from legitimate researchers while keeping them in the hands of those who may use them for fraud. Meanwhile, social media networks will likely need to step up moderation efforts, reacting quickly when suspicious content is flagged by users.

As we wrote last December in a feature on the spread of easy-to-make deepfakes, “The provenance of each photo we see will become that much more important; much like today, we will need to completely trust who is sharing the photos to believe any of them. But during a transition period before everyone is aware of this technology, synthesized fakes might cause a measure of chaos.”

Almost a year later, with technology advancing rapidly, a small taste of that chaos is arguably descending upon us, and our advice could just as easily be applied to video and photos. Whether attempts at regulation currently underway in many countries will have any effect is an open question.


Researchers show how easy it is to defeat AI watermarks


James Marshall/Getty Images

Soheil Feizi considers himself an optimistic person. But the University of Maryland computer science professor is blunt when he sums up the current state of watermarking AI images. “We don’t have any reliable watermarking at this point,” he says. “We broke all of them.”

For one of the two types of AI watermarking he tested for a new study—“low perturbation” watermarks, which are invisible to the naked eye—he’s even more direct: “There’s no hope.”

Feizi and his coauthors looked at how easy it is for bad actors to evade watermarking attempts. (He calls it “washing out” the watermark.) In addition to demonstrating how attackers might remove watermarks, the study shows how it’s possible to add watermarks to human-generated images, triggering false positives. Released online this week, the preprint paper has yet to be peer-reviewed; Feizi has been a leading figure examining how AI detection might work, so it is research worth paying attention to, even in this early stage.
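As a toy illustration of why invisible marks are so fragile (this is a deliberate simplification of our own, not one of the schemes Feizi’s team actually attacked), consider a naive watermark that hides payload bits in the least significant bit of each pixel’s blue channel, as in the Kotlin sketch below. The embedding is imperceptible, but a single JPEG re-encode, resize, or dusting of noise rewrites those low-order bits, which is essentially the kind of “washing out” the paper describes against far more sophisticated schemes.

```kotlin
import android.graphics.Bitmap
import android.graphics.Color

// Toy example only: embed payload bits into the least significant bit of the
// blue channel, one bit per pixel, scanning row by row. Invisible to the eye,
// but destroyed by almost any re-encoding or resampling of the image.
fun embedLsbWatermark(src: Bitmap, payload: BooleanArray): Bitmap {
    val out = src.copy(Bitmap.Config.ARGB_8888, true)
    var i = 0
    outer@ for (y in 0 until out.height) {
        for (x in 0 until out.width) {
            if (i >= payload.size) break@outer
            val c = out.getPixel(x, y)
            val blue = (Color.blue(c) and 0xFE) or (if (payload[i]) 1 else 0)
            out.setPixel(x, y, Color.argb(Color.alpha(c), Color.red(c), Color.green(c), blue))
            i++
        }
    }
    return out
}
```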

It’s timely research. Watermarking has emerged as one of the more promising strategies to identify AI-generated images and text. Just as physical watermarks are embedded on paper money and stamps to prove authenticity, digital watermarks are meant to trace the origins of images and text online, helping people spot deepfaked videos and bot-authored books. With the US presidential elections on the horizon in 2024, concerns over manipulated media are high—and some people are already getting fooled. Former US President Donald Trump, for instance, shared a fake video of Anderson Cooper on his social platform Truth Social; Cooper’s voice had been AI-cloned.

This summer, OpenAI, Alphabet, Meta, Amazon, and several other major AI players pledged to develop watermarking technology to combat misinformation. In late August, Google’s DeepMind released a beta version of its new watermarking tool, SynthID. The hope is that these tools will flag AI content as it’s being generated, in the same way that physical watermarking authenticates dollars as they’re being printed.

It’s a solid, straightforward strategy, but it might not be a winning one. This study is not the only work pointing to watermarking’s major shortcomings. “It is well established that watermarking can be vulnerable to attack,” says Hany Farid, a professor at the UC Berkeley School of Information.

This August, researchers at the University of California, Santa Barbara and Carnegie Mellon coauthored another paper outlining similar findings, after conducting their own experimental attacks. “All invisible watermarks are vulnerable,” it reads. This newest study goes even further. While some researchers have held out hope that visible (“high perturbation”) watermarks might be developed to withstand attacks, Feizi and his colleagues say that even this more promising type can be manipulated.

The flaws in watermarking haven’t dissuaded tech giants from offering it up as a solution, but people working within the AI detection space are wary. “Watermarking at first sounds like a noble and promising solution, but its real-world applications fail from the onset when they can be easily faked, removed, or ignored,” Ben Colman, the CEO of AI-detection startup Reality Defender, says.

“Watermarking is not effective,” adds Bars Juhasz, the cofounder of Undetectable, a startup devoted to helping people evade AI detectors. “Entire industries, such as ours, have sprang up to make sure that it’s not effective.” According to Juhasz, companies like his are already capable of offering quick watermark-removal services.

Others do think that watermarking has a place in AI detection—as long as we understand its limitations. “It is important to understand that nobody thinks that watermarking alone will be sufficient,” Farid says. “But I believe robust watermarking is part of the solution.” He thinks that improving upon watermarking and then using it in combination with other technologies will make it harder for bad actors to create convincing fakes.

Some of Feizi’s colleagues think watermarking has its place, too. “Whether this is a blow to watermarking depends a lot on the assumptions and hopes placed in watermarking as a solution,” says Yuxin Wen, a PhD student at the University of Maryland who coauthored a recent paper suggesting a new watermarking technique. For Wen and his co-authors, including computer science professor Tom Goldstein, this study is an opportunity to reexamine the expectations placed on watermarking, rather than reason to dismiss its use as one authentication tool among many.

“There will always be sophisticated actors who are able to evade detection,” Goldstein says. “It’s ok to have a system that can only detect some things.” He sees watermarks as a form of harm reduction, and worthwhile for catching lower-level attempts at AI fakery, even if they can’t prevent high-level attacks.

This tempering of expectations may already be happening. In its blog post announcing SynthID, DeepMind is careful to hedge its bets, noting that the tool “isn’t foolproof” and “isn’t perfect.”

Feizi is largely skeptical that watermarking is a good use of resources for companies like Google. “Perhaps we should get used to the fact that we are not going to be able to reliably flag AI-generated images,” he says.

Still, his paper is slightly sunnier in its conclusions. “Based on our results, designing a robust watermark is a challenging but not necessarily impossible task,” it reads.

This story originally appeared on wired.com.


Vulnerable Arm GPU drivers under active exploitation. Patches may not be available


Getty Images

Arm warned on Monday of active ongoing attacks targeting a vulnerability in device drivers for its Mali line of GPUs, which run on a host of devices, including Google Pixels and other Android handsets, Chromebooks, and hardware running Linux.

“A local non-privileged user can make improper GPU memory processing operations to gain access to already freed memory,” Arm officials wrote in an advisory. “This issue is fixed in Bifrost, Valhall and Arm 5th Gen GPU Architecture Kernel Driver r43p0. There is evidence that this vulnerability may be under limited, targeted exploitation. Users are recommended to upgrade if they are impacted by this issue.”

The advisory continued: “A local non-privileged user can make improper GPU processing operations to access a limited amount outside of buffer bounds or to exploit a software race condition. If the system’s memory is carefully prepared by the user, then this in turn could give them access to already freed memory.”

Getting access to system memory that’s no longer in use is a common mechanism for loading malicious code into a location an attacker can then execute. This code often allows them to exploit other vulnerabilities or to install malicious payloads for spying on the phone user. Attackers often gain local access to a mobile device by tricking users into downloading malicious applications from unofficial repositories. The advisory mentions drivers for the affected GPUs being vulnerable but makes no mention of microcode that runs inside the chips themselves.

The most prevalent platform affected by the vulnerability is Google’s Pixel line, one of the only Android models to receive security updates on a timely basis. Google patched Pixels against the vulnerability, which is tracked as CVE-2023-4211, in its September update, and it has also patched Chromebooks that use the vulnerable GPUs. Any device that shows a patch level of 2023-09-01 or later is immune to attacks that exploit the vulnerability. The device driver on patched devices will show as version r44p1 or r45p0.
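For developers who want a rough first-pass check across their own devices, the patch level quoted above is exposed programmatically: Build.VERSION.SECURITY_PATCH (available since API level 23) returns the same YYYY-MM-DD string. The Kotlin sketch below is a hedged illustration, not an official detection tool, and it says nothing about the installed Mali driver version itself.

```kotlin
import android.os.Build

// Rough heuristic only: Google says devices at security patch level 2023-09-01
// or later carry the fix, and ISO-formatted dates compare correctly as strings.
fun likelyPatchedAgainstCve20234211(): Boolean {
    val patchLevel = Build.VERSION.SECURITY_PATCH   // e.g. "2023-10-05"
    return patchLevel >= "2023-09-01"
}
```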

CVE-2023-4211 is present in a range of Arm GPUs released over the past decade. The Arm chips affected are:

  • Midgard GPU Kernel Driver: All versions from r12p0 – r32p0
  • Bifrost GPU Kernel Driver: All versions from r0p0 – r42p0
  • Valhall GPU Kernel Driver: All versions from r19p0 – r42p0
  • Arm 5th Gen GPU Architecture Kernel Driver: All versions from r41p0 – r42p0

Devices believed to use the affected chips include the Google Pixel 7, Samsung S20 and S21, Motorola Edge 40, OnePlus Nord 2, Asus ROG Phone 6, Redmi Note 11 and 12, Honor 70 Pro, Realme GT, Xiaomi 12 Pro, Oppo Find X5 Pro and Reno 8 Pro, and some phones from MediaTek.

Arm also makes drivers for the affected chips available for Linux devices.

Little is currently known about the vulnerability, other than that Arm credited discovery of the active exploitations to Maddie Stone, a researcher in Google’s Project Zero team. Project Zero tracks vulnerabilities in widely used devices, particularly when they’re subjected to zero-day or n-day attacks, which refer to those targeting vulnerabilities for which there are no patches available or those that have very recently been patched.

Arm’s Monday advisory disclosed two additional vulnerabilities that have also received patches. CVE-2023-33200 and CVE-2023-34970 both allow a non-privileged user to exploit a race condition to perform improper GPU operations to access already freed memory.

All three vulnerabilities are exploitable by an attacker with local access to the device, which is typically achieved by tricking users into downloading applications from unofficial repositories.

It’s currently unknown what other platforms, if any, have patches available. Until this information can be tracked down, people should check with the manufacturer of their device. Sadly, many vulnerable Android devices receive patches months or even years after becoming available, if at all.
