
Biz & IT

Samsung spilled SmartThings app source code and secret keys


A development lab used by Samsung engineers was leaking highly sensitive source code, credentials and secret keys for several internal projects — including its SmartThings platform, a security researcher found.

The electronics giant left dozens of internal coding projects exposed on a GitLab instance hosted on a Samsung-owned domain, Vandev Lab. The instance, used by staff to share and contribute code to various Samsung apps, services and projects, was spilling data because the projects were set to “public” and not protected with a password, allowing anyone to view each project and to access and download its source code.

Mossab Hussein, a security researcher at Dubai-based cybersecurity firm SpiderSilk who discovered the exposed files, said one project contained credentials that allowed access to the entire AWS account that was being used, including more than 100 S3 storage buckets that contained logs and analytics data.

Many of the folders, he said, contained logs and analytics data for Samsung’s SmartThings and Bixby services, as well as several employees’ private GitLab tokens stored in plaintext, which expanded his access from the 42 public projects to 135 projects in total, including many private ones.

Samsung told him some of the files were for testing but Hussein challenged the claim, saying source code found in the GitLab repository contained the same code as the Android app, published in Google Play on April 10.

The app, which has since been updated, has more than 100 million installs to date.

“I had the private token of a user who had full access to all 135 projects on that GitLab,” he said, which could have allowed him to make code changes using a staffer’s own account.

Hussein shared several screenshots and a video of his findings for TechCrunch to examine and verify.

The exposed GitLab instance also contained private certificates for Samsung’s SmartThings iOS and Android apps.

Hussein also found several internal documents and slideshows among the exposed files.

“The real threat lies in the possibility of someone acquiring this level of access to the application source code, and injecting it with malicious code without the company knowing,” he said.

Through the exposed private keys and tokens, Hussein documented a vast amount of access that, in the hands of a malicious actor, could have been “disastrous,” he said.

A screenshot of the exposed AWS credentials, allowing access to buckets with GitLab private tokens (Image: supplied)

Hussein, a white-hat hacker and data breach discoverer, reported the findings to Samsung on April 10. In the days following, Samsung began revoking the AWS credentials, but it’s not known if the remaining secret keys and certificates were revoked.

Samsung still hasn’t closed the case on Hussein’s vulnerability report, close to a month after he first disclosed the issue.

“Recently, an individual security researcher reported a vulnerability through our security rewards program regarding one of our testing platforms,” Samsung spokesperson Zach Dugan told TechCrunch when reached prior to publication. “We quickly revoked all keys and certificates for the reported testing platform and while we have yet to find evidence that any external access occurred, we are currently investigating this further.”

Hussein said Samsung took until April 30 to revoke the GitLab private keys. Samsung also declined to answer specific questions we had and provided no evidence that the Samsung-owned development environment was for testing.

Hussein is no stranger to reporting security vulnerabilities. He recently disclosed a vulnerable back-end database at Blind, an anonymous social networking site popular among Silicon Valley employees — and found a server leaking a rolling list of user passwords for scientific journal giant Elsevier.

Samsung’s data leak, he said, was his biggest find to date.

“I haven’t seen a company this big handle their infrastructure using weird practices like that,” he said.




How law enforcement gets around your smartphone’s encryption


Surveillance, symbolic image: data security and data sovereignty. (Image: Westend61 | Getty Images)

Lawmakers and law enforcement agencies around the world, including in the United States, have increasingly called for backdoors in the encryption schemes that protect your data, arguing that national security is at stake. But new research indicates governments already have methods and tools that, for better or worse, let them access locked smartphones thanks to weaknesses in the security schemes of Android and iOS.

Cryptographers at Johns Hopkins University used publicly available documentation from Apple and Google as well as their own analysis to assess the robustness of Android and iOS encryption. They also studied more than a decade’s worth of reports about which of these mobile security features law enforcement and criminals have previously bypassed, or can currently, using special hacking tools. The researchers have dug into the current mobile privacy state of affairs and provided technical recommendations for how the two major mobile operating systems can continue to improve their protections.

“It just really shocked me, because I came into this project thinking that these phones are really protecting user data well,” says Johns Hopkins cryptographer Matthew Green, who oversaw the research. “Now I’ve come out of the project thinking almost nothing is protected as much as it could be. So why do we need a backdoor for law enforcement when the protections that these phones actually offer are so bad?”

Before you delete all your data and throw your phone out the window, though, it’s important to understand the types of privacy and security violations the researchers were specifically looking at. When you lock your phone with a passcode, fingerprint lock, or face recognition lock, it encrypts the contents of the device. Even if someone stole your phone and pulled the data off it, they would only see gibberish. Decoding all the data would require a key that only regenerates when you unlock your phone with a passcode, or face or finger recognition. And smartphones today offer multiple layers of these protections and different encryption keys for different levels of sensitive data. Many keys are tied to unlocking the device, but the most sensitive require additional authentication. The operating system and some special hardware are in charge of managing all of those keys and access levels so that, for the most part, you never even have to think about it.
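The passcode-to-key step described above can be sketched with a standard key-derivation function. This is an illustration only, not Apple's or Google's actual scheme: real devices run the derivation inside dedicated security hardware and entangle it with a device-unique secret, which is what prevents the passcode from being brute-forced off-device.

```python
import hashlib
import os

def derive_unlock_key(passcode: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a short passcode into a 256-bit key with PBKDF2-HMAC-SHA256.
    Illustrative sketch: on a real phone this runs in secure hardware and is
    mixed with a device-unique secret, so it can't be replayed off-device."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)                  # per-device random salt
key = derive_unlock_key("1234", salt)  # the same passcode + salt regenerates this key
assert len(key) == 32
```

The iteration count slows down guessing, but a four- or six-digit passcode has so little entropy that hardware-enforced guess limits, not the derivation itself, do most of the protective work.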

With all of that in mind, the researchers assumed it would be extremely difficult for an attacker to unearth any of those keys and unlock some amount of data. But that’s not what they found.

“On iOS in particular, the infrastructure is in place for this hierarchical encryption that sounds really good,” says Maximilian Zinkus, a PhD student at Johns Hopkins who led the analysis of iOS. “But I was definitely surprised to see then how much of it is unused.” Zinkus says that the potential is there, but the operating systems don’t extend encryption protections as far as they could.

When an iPhone has been off and boots up, all the data is in a state Apple calls “Complete Protection.” The user must unlock the device before anything else can really happen, and the device’s privacy protections are very high. You could still be forced to unlock your phone, of course, but existing forensic tools would have a difficult time pulling any readable data off it. Once you’ve unlocked your phone that first time after reboot, though, a lot of data moves into a different mode—Apple calls it “Protected Until First User Authentication,” but researchers often simply call it “After First Unlock.”

If you think about it, your phone is almost always in the AFU state. You probably don’t restart your smartphone for days or weeks at a time, and most people certainly don’t power it down after each use. (For most, that would mean hundreds of times a day.) So how effective is AFU security? That’s where the researchers started to have concerns.

The main difference between Complete Protection and AFU relates to how quick and easy it is for applications to access the keys to decrypt data. When data is in the Complete Protection state, the keys to decrypt it are stored deep within the operating system and encrypted themselves. But once you unlock your device the first time after reboot, lots of encryption keys start getting stored in quick access memory, even while the phone is locked. At this point an attacker could find and exploit certain types of security vulnerabilities in iOS to grab encryption keys that are accessible in memory and decrypt big chunks of data from the phone.
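The Complete Protection versus AFU distinction can be modeled in a few lines. The class names and behavior below are a simplified sketch of the reporting above, not Apple's implementation:

```python
class KeyStore:
    """Toy model of two data-protection classes: 'complete' keys are evicted
    whenever the device locks; 'afu' keys stay resident in memory after the
    first unlock, even while the screen is locked."""

    def __init__(self):
        self.in_memory = {}
        self.unlocked_once = False

    def unlock(self):
        # Unlocking regenerates all keys and loads them into memory.
        self.unlocked_once = True
        self.in_memory = {"complete": b"key-1", "afu": b"key-2"}

    def lock(self):
        # Only Complete Protection keys are purged when the device locks.
        self.in_memory.pop("complete", None)

    def keys_exposed_while_locked(self):
        # What a memory-reading exploit could reach on a locked device.
        return set(self.in_memory)

ks = KeyStore()
ks.unlock()
ks.lock()
assert ks.keys_exposed_while_locked() == {"afu"}
```

The point of the sketch: after that first unlock, an exploit that can read memory on a locked phone still recovers the AFU-class keys, which is exactly the window forensic tools target.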

Based on available reports about smartphone access tools, like those from the Israeli law enforcement contractor Cellebrite and US-based forensic access firm Grayshift, the researchers realized that this is how almost all smartphone access tools likely work right now. It’s true that you need a specific type of operating system vulnerability to grab the keys—and both Apple and Google patch as many of those flaws as possible—but if you can find it, the keys are available, too.

The researchers found that Android has a similar setup to iOS with one crucial difference. Android has a version of “Complete Protection” that applies before the first unlock. After that, the phone data is essentially in the AFU state. But where Apple provides the option for developers to keep some data under the more stringent Complete Protection locks all the time—something a banking app, say, might take them up on—Android doesn’t have that mechanism after first unlocking. Forensic tools exploiting the right vulnerability can grab even more decryption keys, and ultimately access even more data, on an Android phone.

Tushar Jois, another Johns Hopkins PhD candidate who led the analysis of Android, notes that the Android situation is even more complex because of the many device makers and Android implementations in the ecosystem. There are more versions and configurations to defend, and across the board users are less likely to be getting the latest security patches than iOS users.

“Google has done a lot of work on improving this, but the fact remains that a lot of devices out there aren’t receiving any updates,” Jois says. “Plus different vendors have different components that they put into their final product, so on Android you can not only attack the operating system level, but other different layers of software that can be vulnerable in different ways and incrementally give attackers more and more data access. It makes an additional attack surface, which means there are more things that can be broken.”

The researchers shared their findings with the Android and iOS teams ahead of publication. An Apple spokesperson told WIRED that the company’s security work is focused on protecting users from hackers, thieves, and criminals looking to steal personal information. The types of attacks the researchers are looking at are very costly to develop, the spokesperson pointed out; they require physical access to the target device and only work until Apple patches the vulnerabilities they exploit. Apple also stressed that its goal with iOS is to balance security and convenience.

“Apple devices are designed with multiple layers of security in order to protect against a wide range of potential threats, and we work constantly to add new protections for our users’ data,” the spokesperson said in a statement. “As customers continue to increase the amount of sensitive information they store on their devices, we will continue to develop additional protections in both hardware and software to protect their data.”

Similarly, Google stressed that these Android attacks depend on physical access and the existence of the right type of exploitable flaws. “We work to patch these vulnerabilities on a monthly basis and continually harden the platform so that bugs and vulnerabilities do not become exploitable in the first place,” a spokesperson said in a statement. “You can expect to see additional hardening in the next release of Android.”

To understand the difference in these encryption states, you can do a little demo for yourself on iOS or Android. When your best friend calls your phone, their name usually shows up on the call screen because it’s in your contacts. But if you restart your device, don’t unlock it, and then have your friend call you, only their number will show up, not their name. That’s because the keys to decrypt your address book data aren’t in memory yet.

The researchers also dove deep into how both Android and iOS handle cloud backups—another area where encryption guarantees can erode.

“It’s the same type of thing where there’s great crypto available, but it’s not necessarily in use all the time,” Zinkus says. “And when you back up, you also expand what data is available on other devices. So if your Mac is also seized in a search, that potentially increases law enforcement access to cloud data.”

Though the smartphone protections that are currently available are adequate for a number of “threat models” or potential attacks, the researchers have concluded that they fall short on the question of specialized forensic tools that governments can easily buy for law enforcement and intelligence investigations. A recent report from researchers at the nonprofit Upturn found nearly 50,000 examples of US police in all 50 states using mobile device forensic tools to get access to smartphone data between 2015 and 2019. And while citizens of some countries may think it is unlikely that their devices will ever specifically be subject to this type of search, mobile surveillance is ubiquitous in many regions of the world and at a growing number of border crossings. The tools are also proliferating in other settings like US schools.

As long as mainstream mobile operating systems have these privacy weaknesses, though, it’s even more difficult to explain why governments around the world—including the US, UK, Australia, and India—have mounted major calls for tech companies to undermine the encryption in their products.

This story originally appeared on wired.com.



The NSA warns enterprises to beware of third-party DNS resolvers


(Image: Getty Images)

DNS over HTTPS is a new protocol that protects domain-lookup traffic from eavesdropping and manipulation by malicious parties. Rather than an end-user device communicating with a DNS server over a plaintext channel—as DNS has done for more than three decades—DoH, as DNS over HTTPS is known, encrypts requests and responses using the same encryption websites rely on to send and receive HTTPS traffic.
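Under the hood, a DoH request is just an ordinary DNS wire-format message carried over HTTPS. The sketch below builds the RFC 8484-style GET URL for an A-record lookup; the Cloudflare endpoint is only one example of a public DoH resolver, and you would substitute your own.

```python
import base64
import struct

def build_doh_url(domain: str,
                  resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    """Build an RFC 8484 DoH GET URL carrying a wire-format A query."""
    # 12-byte DNS header: id=0, flags=0x0100 (recursion desired), QDCOUNT=1
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode("ascii")
                     for p in domain.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    query = header + question
    # RFC 8484: base64url without padding, in the "dns" query parameter
    encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={encoded}"

url = build_doh_url("example.com")
```

Fetching that URL over HTTPS returns a binary DNS answer, so an on-path observer sees only a TLS connection to the resolver, not which domain was asked about.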

Using DoH or a similar protocol known as DoT—short for DNS over TLS—is a no-brainer in 2021, since DNS traffic can be every bit as sensitive as any other data sent over the Internet. On Thursday, however, the National Security Agency said that in some cases Fortune 500 companies, large government agencies, and other enterprise users are better off not using it. The reason: the same encryption that thwarts malicious third parties can hamper engineers’ efforts to secure their networks.

“DoH provides the benefit of encrypted DNS transactions, but it can also bring issues to enterprises, including a false sense of security, bypassing of DNS monitoring and protections, concerns for internal network configurations and information, and exploitation of upstream DNS traffic,” NSA officials wrote in published recommendations. “In some cases, individual client applications may enable DoH using external resolvers, causing some of these issues automatically.”

DNS refresher

More about the potential pitfalls of DoH later. First, a quick refresher on how the DNS—short for domain name system—works.

When people send emails, browse a website, or do just about anything else on the Internet, their devices need a way to translate a domain name into the numerical IP address servers use to locate other servers. For this, the devices send a domain lookup request to a DNS resolver, which is a server or group of servers that typically belong to the ISP or the enterprise organization the user is connected to.

If the DNS resolver already knows the IP address for the requested domain, it will immediately send it back to the end user. If not, the resolver forwards the request to an external DNS server and waits for a response. Once the DNS resolver has the answer, it sends the corresponding IP address to the client device.
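The resolve-or-forward logic just described amounts to a cache in front of an upstream lookup. A toy sketch, where the upstream callable stands in for a real forwarder:

```python
import time

class CachingResolver:
    """Minimal model of a caching DNS resolver."""

    def __init__(self, upstream, ttl=300):
        self.upstream = upstream   # callable: domain -> IP address string
        self.ttl = ttl             # how long answers stay cached, in seconds
        self.cache = {}            # domain -> (ip, expiry timestamp)

    def resolve(self, domain):
        entry = self.cache.get(domain)
        if entry and entry[1] > time.time():
            return entry[0]        # cache hit: answer immediately
        ip = self.upstream(domain) # cache miss: forward to an external server
        self.cache[domain] = (ip, time.time() + self.ttl)
        return ip
```

The cache is also why bypassing the enterprise resolver costs performance, a point the NSA's recommendations return to later.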

The image below shows a setup that’s typical in many enterprise networks:

(Image: NSA)

Astonishingly, this process is by default unencrypted. That means that anyone who happens to have the ability to monitor the connection between an organization’s end users and the DNS resolver—say, a malicious insider or a hacker who already has a toehold in the network—can build a comprehensive log of every site and IP address these people connect to. More worrying still, this malicious party might also be able to send users to malicious sites by replacing a domain’s correct IP address with a malicious one.

A double-edged sword

DoH and DoT were created to fix all of this. Just as transport layer security encryption authenticates Web traffic and hides it from prying eyes, DoH and DoT do the same thing for DNS traffic. For now, DoH and DoT are add-on protections that require extra work on the part of end users or the administrators who serve them.

The easiest way for people to get these protections now is to configure their operating system (for instance Windows 10 or macOS), browser (such as Firefox or Chrome), or another app that supports either DoH or DoT.

Thursday’s recommendations from the NSA warn that these types of setups can put enterprises at risk—particularly when the protection involves DoH. The reason: device-enabled DoH bypasses network defenses such as DNS inspection, which monitors domain lookups and IP address responses for signs of malicious activity. Instead of the traffic passing through the enterprise’s fortified DNS resolver, DoH configured on the end-user device bundles the packets in an encrypted envelope and sends it to an off-premises DoH resolver.

NSA officials wrote:

Many organizations use enterprise DNS resolvers or specific external DNS providers as a key element in the overall network security architecture. These protective DNS services may filter domains and IP addresses based on known malicious domains, restricted content categories, reputation information, typosquatting protections, advanced analysis, DNS Security Extensions (DNSSEC) validation, or other reasons. When DoH is used with external DoH resolvers and the enterprise DNS service is bypassed, the organization’s devices can lose these important defenses. This also prevents local-level DNS caching and the performance improvements it can bring.

Malware can also leverage DoH to perform DNS lookups that bypass enterprise DNS resolvers and network monitoring tools, often for command and control or exfiltration purposes.
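The filtering that protective DNS services perform, matching lookups against known-bad domains before answering, reduces to a suffix check. A minimal sketch, with made-up blocklist entries:

```python
def allow_query(domain: str, blocklist: set) -> bool:
    """Return False if the domain or any parent domain is on the blocklist,
    the way a protective DNS resolver might refuse to answer the lookup."""
    labels = domain.lower().rstrip(".").split(".")
    # Check "c2.evil.example", then "evil.example", then "example"
    return not any(".".join(labels[i:]) in blocklist for i in range(len(labels)))

blocklist = {"evil.example"}  # hypothetical known-malicious domain
assert not allow_query("c2.evil.example", blocklist)
```

When a device sends its lookups to an external DoH resolver instead, this check simply never runs, which is the defense-bypass the NSA is warning about.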

There are other risks as well. For instance, when an end-user device with DoH enabled tries to connect to a domain inside the enterprise network, it will first send a DNS query to the external DoH resolver. Even if the request eventually fails over to the enterprise DNS resolver, it can still divulge internal network information in the process. What’s more, funneling lookups for internal domains to an outside resolver can create network performance problems.

The image immediately below shows how DoH with an external resolver can completely bypass the enterprise DNS resolver and the many security defenses it may provide.

(Image: NSA)

Bring your own DoH

The answer, Thursday’s recommendations said, is for enterprises that want DoH to rely on their own DoH-enabled resolvers, which, besides decrypting requests and returning answers, also provide inspection, logging, and other protections.

The recommendations go on to say that enterprises should configure network security devices to block all known external DoH servers. Blocking outgoing DoT traffic is more straightforward, since it always travels on port 853, which enterprises can block wholesale. That option isn’t available for curbing outgoing DoH traffic, because DoH uses port 443, the same port as all other HTTPS traffic, and blocking it wholesale would break ordinary Web browsing.

The image below shows the recommended enterprise set up.

(Image: NSA)

DoH from external resolvers is fine for people connecting from home or from small offices, Thursday’s recommendations said. I’d go a step further and say that it’s nothing short of crazy for people to use unencrypted DNS in 2021, after all the revelations over the past decade.

For enterprises, things are more nuanced.



AT&T kills off the failed TV service formerly known as DirecTV Now


AT&T corporate offices on November 10, 2020, in El Segundo, California.

AT&T is killing off the online-video service formerly known as DirecTV Now and introducing a no-contract option for the newer online service that replaced it.

AT&T unveiled DirecTV Now late in 2016, the year after AT&T bought the DirecTV satellite company. Prices originally started at $35 a month for the live-TV online service, and it had signed up 1.86 million subscribers by Q3 2018. But customers quickly fled as AT&T repeatedly raised prices and cut down on the use of promotional deals, leaving the service with just 683,000 subscribers at the end of Q3 2020.

In 2019, AT&T changed the name from DirecTV Now to AT&T TV Now, creating confusion among customers and its own employees because the company simultaneously unveiled another online streaming service called AT&T TV.

AT&T TV was pitched as a more robust replacement for satellite TV, and it even mimicked cable and satellite by imposing contracts, hidden fees, and a big second-year price hike. Going forward, AT&T TV Now will no longer be offered to new customers, and AT&T TV will be the flagship for AT&T’s live-TV streaming business. “AT&T TV Now has merged with AT&T TV,” the service’s website says in an update flagged in a news article by TV Answer Man yesterday.

For existing users, “AT&T TV Now customers’ service and plans remain in effect” without any changes, an AT&T spokesperson told Ars. “We have no other price changes to announce at this time.”

Convoluted pricing, an AT&T tradition

Previously, AT&T TV was only available with a contract. There is now a no-contract option that costs more in the first year but could be cheaper in the long run if customers use it for multiple years.

The no-contract AT&T TV prices are $69.99 per month for 65 channels; $84.99 for 90 channels and one year of HBO Max; $94.99 for 130 channels and one year of HBO Max; and $139.99 for 140 channels and HBO Max without the one-year time limit. There’s no regional sports network fee in these packages.

The first-year prices for contract plans range from $59.99 to $129.99, plus a regional sports network fee of up to $8.49 for all packages except the cheapest one. Including the sports fee, the first-year prices on most of the contract plans are $10 or so cheaper than the equivalent no-contract options. An exception is the “premier” package with 140 channels and HBO Max, which costs about $140 the first year regardless of whether you have a contract or not.

Customers who select the two-year contract will get a big price hike the second year, with base prices ranging from $93 to $183 per month plus the sports fee. The second-year prices could actually be higher than that, since they’re based on the “then-prevailing rate,” which AT&T could change. The contract option also requires a $19.95 activation fee and charges an early termination fee of $15 for each month remaining on the contract.
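Using the advertised figures above for the entry package (the cheapest tier, which carries no sports fee), a quick two-year comparison shows why the no-contract option can come out ahead. Note the second-year contract rate here is the quoted $93 base price and could end up higher:

```python
# Entry-tier (65-channel) figures quoted above; amounts are per month except fees.
no_contract_total = 24 * 69.99      # flat no-contract rate for two years

contract_total = (
    12 * 59.99                      # first-year promotional contract rate
    + 12 * 93.00                    # second-year "then-prevailing" base rate
    + 19.95                         # one-time activation fee
)

assert no_contract_total < contract_total  # no-contract wins over two years
```

The gap is roughly $176 over 24 months, before accounting for the contract plan's extra cloud-DVR hours and free streaming device, which narrow it for customers who want those.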


There is no automatic price increase after 12 months for the no-contract option, but that is not a guarantee that prices won’t rise. AT&T’s fine print says that “pricing, channels, features, and terms are subject to change and may be modified or discontinued at any time without notice.”

There’s another factor that makes the no-contract price $10 higher if you want a lot of cloud-DVR storage. While the contract option comes with 500 hours of cloud DVR storage, the no-contract option only comes with 20 hours unless you pay an extra $10 per month to upgrade to 500 hours. The contract option also comes with one free AT&T TV device, which costs $5 per month for 24 months on the no-contract plan. Third-party streaming devices also work with the service, so there’s no requirement to buy this.

There’s no price change right now for existing AT&T TV customers. Despite the new no-contract option, the contracts for existing AT&T TV customers “remain in effect,” AT&T told Ars. As is always the case with AT&T TV services, the pricing tiers are convoluted, so new customers should examine them carefully before signing up. This table provides a breakdown of key differences between contract and no-contract options:

(Image: AT&T)

Multimillion-customer exodus

For financial reporting purposes, AT&T TV is part of a category AT&T calls “Premium TV” services, which also includes DirecTV satellite and U-verse wireline TV. AT&T has lost nearly 8 million customers from the category in the past few years, dropping from over 25 million in early 2017 to 17.1 million at the end of September 2020.

More customer losses could be on the way, as AT&T is raising prices on both DirecTV and U-verse effective January 17. AT&T is trying to sell DirecTV, but offers so far have reportedly valued the satellite provider at about a third of the $49 billion AT&T paid in 2015.
