Android users’ security and privacy at risk from shadowy ecosystem of pre-installed software, study warns


A large-scale independent study of pre-installed Android apps has cast a critical spotlight on the privacy and security risks that preloaded software poses to users of the Google-developed mobile platform.

The researchers behind the paper, which has been published in preliminary form ahead of a future presentation at the IEEE Symposium on Security and Privacy, unearthed a complex ecosystem of players with a primary focus on advertising and “data-driven services” — which they argue the average Android user is unlikely to be aware of (and likely lacks the ability to uninstall or evade, given the baked-in software’s privileged access to data and resources).

The study, which was carried out by researchers at the Universidad Carlos III de Madrid (UC3M) and the IMDEA Networks Institute, in collaboration with the International Computer Science Institute (ICSI) at Berkeley and Stony Brook University in New York, encompassed more than 82,000 pre-installed Android apps across more than 1,700 devices manufactured by 214 brands, according to the IMDEA institute.

“The study shows, on the one hand, that the permission model on the Android operating system and its apps allow a large number of actors to track and obtain personal user information,” it writes. “At the same time, it reveals that the end user is not aware of these actors in the Android terminals or of the implications that this practice could have on their privacy. Furthermore, the presence of this privileged software in the system makes it difficult to eliminate it if one is not an expert user.”

An example of a well-known app that can come pre-installed on certain Android devices is Facebook.

Earlier this year the social network giant was revealed to have inked an unknown number of agreements with device makers to preload its app. The company has claimed these pre-installs are just placeholders unless or until a user chooses to actively engage with and download the Facebook app. But Android users essentially have to take those claims on trust, with no ability to verify them (short of finding a friendly security researcher to conduct a traffic analysis) and no way to remove the app from their devices themselves. Facebook pre-loads can only be disabled, not deleted entirely.

The company’s preloads also sometimes include a handful of other Facebook-branded system apps which are even less visible on the device and whose function is even more opaque.

Facebook previously confirmed to TechCrunch there’s no ability for Android users to delete any of its preloaded Facebook system apps either.

“Facebook uses Android system apps to ensure people have the best possible user experience including reliably receiving notifications and having the latest version of our apps. These system apps only support the Facebook family of apps and products, are designed to be off by default until a person starts using a Facebook app, and can always be disabled,” a Facebook spokesperson told us earlier this month.

But the social network is just one of scores of companies involved in a sprawling, opaque and seemingly interlinked data gathering and trading ecosystem that Android supports and which the researchers set out to shine a light into.

In all, 1,200 developers were identified behind the pre-installed software found in the data-set the researchers examined, along with more than 11,000 third-party libraries (SDKs). Many of the preloaded apps were found to display what the researchers dub potentially dangerous or undesired behavior.

The data-set underpinning their analysis was collected via crowd-sourcing methods — using a purpose-built app (called Firmware Scanner), and pulling data from the Lumen Privacy Monitor app. The latter provided the researchers with visibility on mobile traffic flow — via anonymized network flow metadata obtained from its users. 
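Lumen’s internal design isn’t described in the paper’s preliminary write-up, but a rough sense of what “anonymized network flow metadata” can mean is easy to sketch in Python. Everything below (the field names, the salted-hash scheme) is an illustrative assumption, not Lumen’s actual implementation:

```python
import hashlib
import os

# Assumed scheme: one random salt per install, so a device hashes the same
# endpoint consistently, but hashes can't be reversed or linked across users.
SALT = os.urandom(16)

def anonymize_flow(flow: dict) -> dict:
    """Strip direct identifiers from a flow record, keeping only metadata."""
    return {
        # Replace the raw destination IP with a salted, truncated hash.
        "dst": hashlib.sha256(SALT + flow["dst_ip"].encode()).hexdigest()[:16],
        "dst_port": flow["dst_port"],
        "proto": flow["proto"],
        "bytes_up": flow["bytes_up"],
        "bytes_down": flow["bytes_down"],
        # The requesting app is kept so flows can be attributed to packages.
        "package": flow["package"],
    }

example = {"dst_ip": "93.184.216.34", "dst_port": 443, "proto": "tcp",
           "bytes_up": 1204, "bytes_down": 58310,
           "package": "com.example.preload"}  # hypothetical package name
print(anonymize_flow(example))
```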

They also crawled the Google Play Store to compare their findings on pre-installed apps with publicly available apps — and found that just 9% of the package names in their dataset were publicly indexed on Play. 
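The paper’s crawler isn’t described in detail, but the core check is simple: a package name counts as publicly indexed if its Play Store details page exists. A minimal sketch of that check (the second package name below is a made-up placeholder):

```python
import requests

PLAY_URL = "https://play.google.com/store/apps/details?id={}"

def is_indexed_on_play(package_name: str) -> bool:
    """Return True if the package has a public Google Play listing."""
    resp = requests.get(PLAY_URL.format(package_name), timeout=10)
    # Play returns HTTP 404 for package names with no public listing.
    return resp.status_code == 200

for pkg in ["com.facebook.katana", "com.vendor.hidden.preload"]:
    print(pkg, is_indexed_on_play(pkg))
```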

Another concerning finding relates to permissions. In addition to the standard permissions defined in Android (i.e. those that can be controlled by the user), the researchers say they identified more than 4,845 owner or “personalized” permissions defined by different actors in the manufacture and distribution of devices.

That means they found systematic workarounds of Android’s user-facing permission model, enabled by scores of commercial deals cut in a non-transparent, data-driven background software ecosystem.

“This type of permission allows the apps advertised on Google Play to evade Android’s permission model to access user data without requiring their consent upon installation of a new app,” the IMDEA institute writes.
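You can get a feel for these vendor-defined permissions on any handset with adb alone. A minimal sketch, assuming a connected device and adb on the PATH; the prefix filter below is a crude heuristic, not the researchers’ methodology:

```python
import subprocess

# Permission prefixes that ship with stock Android (AOSP) rather than a vendor.
AOSP_PREFIXES = ("android.permission.", "com.android.")

def list_custom_permissions() -> list[str]:
    """List permissions declared on the device that stock Android doesn't define."""
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "permissions"],
        capture_output=True, text=True, check=True,
    ).stdout
    perms = [line.removeprefix("permission:").strip()
             for line in out.splitlines() if line.startswith("permission:")]
    return [p for p in perms if not p.startswith(AOSP_PREFIXES)]

if __name__ == "__main__":
    for perm in list_custom_permissions():
        print(perm)
```

On retail handsets the output typically includes vendor and carrier entries that never appear on a stock emulator image.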

The top-line conclusion of the study is that the supply chain around Android’s open source model is characterized by a lack of transparency — which in turn has enabled an ecosystem rife with potentially harmful behaviors and even backdoored access to sensitive data to become established and grow unchecked, all without most Android users’ consent or awareness. (On the latter front the researchers carried out a small-scale survey of the consent forms shown on some Android phones to examine user awareness.)

tl;dr the phrase ‘if it’s free you’re the product’ is too trite a cherry atop a staggeringly large yet entirely submerged data-gobbling iceberg. (Not least because Android smartphones don’t tend to be entirely free.)

“Potential partnerships and deals — made behind closed doors between stakeholders — may have made user data a commodity before users purchase their devices or decide to install software of their own,” the researchers warn. “Unfortunately, due to a lack of central authority or trust system to allow verification and attribution of the self-signed certificates that are used to sign apps, and due to a lack of any mechanism to identify the purpose and legitimacy of many of these apps and custom permissions, it is difficult to attribute unwanted and harmful app behaviors to the party or parties responsible. This has broader negative implications for accountability and liability in this ecosystem as a whole.”

The researchers go on to make a series of recommendations intended to address the lack of transparency and accountability in the Android ecosystem — including suggesting the introduction and use of certificates signed by globally-trusted certificate authorities, or a certificate transparency repository “dedicated to providing details and attribution for certificates used to sign various Android apps, including pre-installed apps, even if self-signed”.
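No such log exists today, but the raw material for one is already in every APK: the v1 (JAR) signing certificate travels inside the archive under META-INF/. A minimal sketch using the pyca/cryptography package that prints the SHA-256 certificate fingerprints such a transparency repository would index (APK Signature Scheme v2/v3 blocks live outside the zip entries and are not handled here):

```python
import sys
import zipfile

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.serialization.pkcs7 import (
    load_der_pkcs7_certificates,
)

def signing_cert_fingerprints(apk_path: str) -> list[str]:
    """SHA-256 fingerprints of the v1 (JAR) signing certificates in an APK."""
    fingerprints = []
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            # v1 signature blocks are DER-encoded PKCS#7 files under META-INF/.
            if name.startswith("META-INF/") and name.endswith((".RSA", ".DSA", ".EC")):
                for cert in load_der_pkcs7_certificates(apk.read(name)):
                    fingerprints.append(cert.fingerprint(hashes.SHA256()).hex())
    return fingerprints

if __name__ == "__main__":
    for fp in signing_cert_fingerprints(sys.argv[1]):
        print(fp)
```

A self-signed certificate still yields a stable fingerprint; what’s missing, as the researchers note, is any trusted mapping from that fingerprint to a real-world entity.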

They also suggest Android devices should be required to document all pre-installed apps, plus their purpose, and name the entity responsible for each piece of software — and do so in a manner that is “accessible and understandable to users”.

“[Android] users are not clearly informed about third-party software that is installed on their devices, including third-party tracking and advertising services embedded in many pre-installed apps, the types of data they collect from them, the capabilities and the amount of control they have on their devices, and the partnerships that allow information to be shared and control to be given to various other companies through custom permissions, backdoors, and side-channels. This necessitates a new form of privacy policy suitable for preinstalled apps to be defined and enforced to ensure that private information is at least communicated to the user in a clear and accessible way, accompanied by mechanisms to enable users to make informed decisions about how or whether to use such devices without having to root their devices,” they argue, calling for an overhaul of what’s long been a moribund T&Cs system, from a consumer rights point of view.

In conclusion they couch the study as merely scratching the surface of “a much larger problem”, saying their hope for the work is to bring more attention to the pre-installed Android software ecosystem and encourage more critical examination of its impact on users’ privacy and security.

They also write that they intend to continue improving the tools used to gather the data-set, and say they plan to “gradually” make the data-set itself available to the research community and regulators to encourage others to dive in.

Google has responded to the paper with the following statement — attributed to a spokesperson:

We appreciate the work of the researchers and have been in contact with them regarding concerns we have about their methodology. Modern smartphones include system software designed by their manufacturers to ensure their devices run properly and meet user expectations. The researchers’ methodology is unable to differentiate pre-installed system software — such as diallers, app stores and diagnostic tools — from malicious software that has accessed the device at a later time, making it difficult to draw clear conclusions. We work with our OEM partners to help them ensure the quality and security of all apps they decide to pre-install on devices, and provide tools and infrastructure to our partners to help them scan their software for behavior that violates our standards for privacy and security. We also provide our partners with clear policies regarding the safety of pre-installed apps, and regularly give them information about potentially dangerous pre-loads we’ve identified.

This report was updated with comment from Google.



Tesla shows off underwhelming humanoid robot prototype at AI Day 2022


The walking Optimus prototype demonstrated at the AI Day 2022 event. (Image: Tesla)

Today at Tesla’s “AI Day” press event, Tesla CEO Elon Musk unveiled an early prototype of its Optimus humanoid robot, which emerged from behind a curtain, walked around, waved, and “raised the roof” with its hands to the beat of techno music.

It was a risky reveal for the prototype, which seemed somewhat unsteady on its feet. “Literally the first time the robot has operated without a tether was on stage tonight,” said Musk. Shortly afterward, Tesla employees rolled onto the stage a sleeker-looking Optimus model that could not yet stand on its own and was supported by a stand. It waved and lifted its legs. Later, it slumped over while Musk spoke.

Video of Tesla AI Day 2022

The entire live robot demonstration lasted roughly seven minutes, and the firm also played a demonstration video of the walking Optimus prototype slowly picking up a box and putting it down, slowly watering a plant, and slowly moving metal parts in a factory-like setting—all while tethered to an overhead cable. The video also showed a 3D-rendered view of the world that represents what the Optimus robot can see.

Three stages of the Tesla Optimus robot so far, presented at AI Day 2022. (Image: Tesla)

Tesla first announced its plans to build a humanoid robot during its AI Day event in August of last year. At that earlier event, a human dressed in a spandex suit resembling a robot danced the Charleston on stage, which prompted skepticism in the press.

At the AI Day event today, Musk and his team emphasized that the walking prototype was an early demo developed in roughly six months using “semi-off the shelf actuators,” and that the sleeker model much more closely resembled the “Version 1” unit they wanted to ship. Musk said it would probably be able to walk in a few weeks.

Goals of the Optimus project include high-volume production (possibly “millions of units sold,” said Musk), low cost (“probably less than $20,000”), and high reliability. Comparing the plans for Optimus to existing humanoid robots from competitors, Musk also emphasized that the Optimus robot should have the brains on board to work autonomously, citing Tesla’s work on its automotive Autopilot system.

Tesla shared some specifications of its “Latest Generation” prototype Optimus robot. (Image: Tesla)

Shortly afterward, Musk handed over the stage to Tesla engineers who gave jargon-heavy overviews of developing the power systems, actuators, and joint mechanisms that would make Optimus possible, replete with fancy graphs but few concrete specifics about how they would apply to a shipping product. “We are carrying over most of our design experience from the car to the robot,” said one engineer, while another said they drew much of their inspiration from human biology, especially in joint design.

Earlier in the demonstration, Musk said that they were having the event to “convince some of the most talented people in the world to come to Tesla and help bring this to fruition.” Musk also emphasized the public nature of Tesla several times, mentioning that if the public doesn’t like what Tesla is doing they could purchase stock and vote against it. “If I go crazy, you can fire me,” he said.

[This is a developing story and will be updated as new information comes in.]


High-severity Microsoft Exchange 0-day under attack threatens 220,000 servers


Microsoft late Thursday confirmed the existence of two critical vulnerabilities in its Exchange application that have already compromised multiple servers and pose a serious risk to an estimated 220,000 more around the world.

The currently unpatched security flaws have been under active exploit since early August, when Vietnam-based security firm GTSC discovered customer networks had been infected with malicious webshells and that the initial entry point was some sort of Exchange vulnerability. The mystery exploit looked almost identical to an Exchange zero-day from 2021 called ProxyShell, but the customers’ servers had all been patched against the vulnerability, which is tracked as CVE-2021-34473. Eventually, the researchers discovered the unknown hackers were exploiting a new Exchange vulnerability.

Webshells, backdoors, and fake sites

“After successfully mastering the exploit, we recorded attacks to collect information and create a foothold in the victim’s system,” the researchers wrote in a post published on Wednesday. “The attack team also used various techniques to create backdoors on the affected system and perform lateral movements to other servers in the system.”

On Thursday evening, Microsoft confirmed that the vulnerabilities were new and said it was scrambling to develop and release a patch. The new vulnerabilities are: CVE-2022-41040, a server-side request forgery vulnerability, and CVE-2022-41082, which allows remote code execution when PowerShell is accessible to the attacker.

“At this time, Microsoft is aware of limited targeted attacks using the two vulnerabilities to get into users’ systems,” members of the Microsoft Security Response Center team wrote. “In these attacks, CVE-2022-41040 can enable an authenticated attacker to remotely trigger CVE-2022-41082.” Team members stressed that successful attacks require valid credentials for at least one email user on the server.

The vulnerabilities affect on-premises Exchange servers and, strictly speaking, not Microsoft’s hosted Exchange service. The huge caveat is that many organizations using Microsoft’s cloud offering choose an option that uses a mix of on-premises and cloud hardware. These hybrid environments are as vulnerable as standalone on-premises ones.

Searches on Shodan indicate there are currently more than 200,000 on-premises Exchange servers exposed to the Internet and more than 1,000 hybrid configurations.
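The article doesn’t say which queries produced those figures, but Shodan counts are easy to reproduce with its official Python library. The query below is an illustrative guess at how one might spot Internet-facing Exchange servers, not the search behind the 200,000 figure:

```python
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder; requires a real Shodan key

# Illustrative query: Exchange's Outlook Web App login page is one common
# marker of an Internet-facing Exchange server.
result = api.count('http.title:"Outlook Web App"')
print("Servers matching query:", result["total"])
```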

Wednesday’s GTSC post said the attackers are exploiting the zero-day to infect servers with webshells, text interfaces that allow them to issue commands. These webshells contain simplified Chinese characters, leading the researchers to speculate the hackers are fluent in Chinese. Commands issued also bear the signature of the China Chopper, a webshell commonly used by Chinese-speaking threat actors, including several advanced persistent threat groups known to be backed by the People’s Republic of China.

GTSC went on to say that the malware the threat actors eventually install emulates Microsoft’s Exchange Web Service. It also makes a connection to the IP address 137[.]184[.]67[.]33, which is hardcoded in the binary. Independent researcher Kevin Beaumont said the address hosts a fake website with only a single user with one minute of login time and has been active only since August.


The malware then sends and receives data that’s encrypted with an RC4 encryption key that’s generated at runtime. Beaumont went on to say that the backdoor malware appears to be novel, meaning this is the first time it has been used in the wild.
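RC4 is a decades-old stream cipher, simple enough that malware authors still embed it directly. For readers unfamiliar with it, here is a minimal Python implementation of the general scheme Beaumont describes (a runtime-generated key, then symmetric encryption of traffic); this illustrates RC4 itself, not the malware’s actual code:

```python
import os

def rc4(key: bytes, data: bytes) -> bytes:
    """RC4: key-scheduling algorithm (KSA) followed by keystream XOR (PRGA)."""
    # Key-scheduling algorithm: permute the state array using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm: XOR the keystream with the data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = os.urandom(16)                 # key generated at runtime
ciphertext = rc4(key, b"beacon data")
assert rc4(key, ciphertext) == b"beacon data"  # the same function decrypts
```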

People running on-premises Exchange servers should take immediate action. Specifically, they should apply a blocking rule that prevents servers from accepting known attack patterns. The rule can be applied by going to “IIS Manager -> Default Web Site -> URL Rewrite -> Actions.” For the time being, Microsoft also recommends people block HTTP port 5985 and HTTPS port 5986, which attackers need to exploit CVE-2022-41082.
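The port blocks can be scripted on Windows; here is a minimal sketch shelling out to the built-in netsh tool (rule names are arbitrary, and interim mitigations like this should be checked against Microsoft’s current guidance before use):

```python
import subprocess

# Block the Remote PowerShell ports Microsoft flagged: HTTP 5985, HTTPS 5986.
RULES = [(5985, "Block WinRM HTTP 5985"), (5986, "Block WinRM HTTPS 5986")]

for port, name in RULES:
    subprocess.run(
        ["netsh", "advfirewall", "firewall", "add", "rule",
         f"name={name}", "dir=in", "action=block",
         "protocol=TCP", f"localport={port}"],
        check=True,  # raise if netsh reports an error (requires admin rights)
    )
```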

Microsoft’s advisory contains a host of other suggestions for detecting infections and preventing exploits until a patch is available.


Bruce Willis sells deepfake rights to his likeness for commercial use


Deepfake Bruce Willis as he appeared in a 2021 commercial for Russian mobile company MegaFon. (Image: MegaFon)

Bruce Willis has sold the “digital twin” rights to his likeness for commercial video production use, according to a report by The Telegraph. This move allows the Hollywood actor to digitally appear in future commercials and possibly even films, and he has already appeared in a Russian commercial using the technology.

Willis, who has been diagnosed with a language disorder called aphasia, announced that he would be “stepping away” from acting earlier this year. Instead, he will license his digital rights through a company called Deepcake. The company is based in Tbilisi, Georgia, and is doing business in America while being registered as a corporation in Delaware.

In 2021, a deepfake Bruce Willis appeared in a Russian cell phone commercial for MegaFon.

Deepcake obtained Willis’ likeness by training a deep learning neural network model on his appearances in blockbuster action films from the 1990s. With his facial appearance known, the model can then apply Willis’ head to another actor with a similar build in a process commonly called a deepfake. Deepfakes have become popular in recent years on TikTok, with unauthorized deepfakes of Tom Cruise and Keanu Reeves gathering large followings.
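The architecture behind most face-swap deepfakes is surprisingly compact: one shared encoder learns a generic face representation, and a separate decoder per identity reconstructs it. A heavily simplified PyTorch sketch of that idea (64x64 face crops, untrained toy networks; real pipelines add face alignment, adversarial losses, and far more capacity):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses any aligned face crop into a latent vector."""
    def __init__(self, latent: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(), nn.Linear(256 * 8 * 8, latent),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: rebuilds one person's face from the latent."""
    def __init__(self, latent: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decode_target = Decoder()    # would be trained on the target's film footage
decode_stand_in = Decoder()  # would be trained on the stand-in actor

# Each identity is trained to reconstruct itself through the shared encoder.
# The swap at inference time: encode the stand-in, decode as the target.
frame = torch.rand(1, 3, 64, 64)  # placeholder for an aligned face crop
swapped = decode_target(encoder(frame))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```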

In a statement on Deepcake’s website, Bruce Willis reportedly said:

I liked the precision of my character. It’s a great opportunity for me to go back in time. The neural network was trained on [the] content of Die Hard and Fifth Element, so my character is similar to the images of that time.

With the advent of the modern technology, I could communicate, work, and participate in filming, even being on another continent. It’s a brand new and interesting experience for me, and I [am] grateful to our team.

According to Deepcake’s website, the firm aims to disrupt the traditional casting process by undercutting it in price, saying that its method “allows us to succeed in tasks minus travel expenses, expensive filming days, insurance, and other costs. You pay for endorsement contract with the celeb’s agent, and a fee for Deepcake’s services. This is game-changingly low.”

While the Telegraph report claims that Willis is “the first Hollywood star to sell his rights to allow a ‘digital twin’ of himself to be created for use on screen,” Ars Technica could not verify that claim outside of the context of a first license with the firm Deepcake. Evidence suggests that a similar licensing precedent exists—in Hollywood, deepfakes have already been used in several Star Wars films and TV shows, for example.

Looking deeper, the concept of having a “digital twin” isn’t new to Willis, either. In 1998, he starred in a PlayStation video game called Apocalypse that involved digitizing his face and capturing the motion of his body as he acted out scenes. Still, it’s notable to see an aging actor sidelined by illness who is willing to volunteer a digital double to work for him. James Earl Jones did so recently with his voice for Darth Vader. It’s possible we’ll see much more of this as deepfake technology improves.
