GM is sprucing up its smartphone app for owners of the all-electric Chevrolet Bolt through a collaboration with charging network companies EVgo, ChargePoint and Greenlots.
The idea is to take aggregate dynamic data from each of the EV charging networks so owners can have a “more seamless charging experience.” In short: GM wants to make it easier and more intuitive for Bolt EV owners to find and access charging. Removing hurdles from the charging experience can go a long way in convincing more people to buy the Bolt EV, or any EV for that matter.
The partnership with EVgo, ChargePoint and Greenlots is a notable start: together, the three networks account for more than 31,000 charging ports.
“GM believes in an all-electric future, and this is a significant step to make charging easier for our customers,” said Doug Parks, General Motors vice president of Autonomous and Electric Vehicle Programs. “By collaborating with these three companies, we expect to reduce barriers to create a stronger EV infrastructure for the future. This is an important step toward achieving GM’s vision of a world with zero emissions.”
GM plans to take the aggregate charging data from EVgo, ChargePoint and Greenlots and use it to improve the myChevrolet app. For instance, owners will be able to see whether a charging station is available and compatible with the Bolt EV. The app will also use real-time data to report whether a station is working.
GM plans to create an app interface that will streamline the enrollment process for each of these networks. The automaker wants owners to be able to activate a charging session using the app instead of a membership card, but didn’t say when that feature would be rolled out.
GM recently made a few updates to the myChevrolet app that let owners project the app’s Energy Assist feature to the vehicle’s infotainment system via Apple CarPlay and Android Auto on model year 2017 or newer Bolt EVs.
This means Bolt EV drivers can access information through their infotainment system, like vehicle range, charging station locations and search, as well as route planning that takes into consideration charging stops along the way if the destination is out of range.
Original purchasers of new Bolt EVs will have access to these features at no additional cost for five years from the vehicle delivery date, according to GM.
GM doesn’t provide updates about the Bolt EV, and more broadly its electric vehicle program, at the same pace and frequency as, say, Tesla. But the company is still ramping up and expanding. GM recently expanded a battery lab, and a new LG Electronics plant in Michigan has come online.
The LG Electronics facility in Hazel Park started making battery packs this fall to supply GM’s Orion Assembly Plant, where the automaker builds the all-electric Chevrolet Bolt.
GM plans to launch 20 new all-electric vehicles globally by 2023 and to increase production of the Chevy Bolt.
Security firm Malwarebytes said it was breached by the same nation-state-sponsored hackers who compromised a dozen or more US government agencies and private companies.
The attackers are best known for first hacking into Austin, Texas-based SolarWinds, compromising its software-distribution system, and using it to infect the networks of customers who used SolarWinds’ network management software. In an online notice, however, Malwarebytes said the attackers used a different vector.
“While Malwarebytes does not use SolarWinds, we, like many other companies were recently targeted by the same threat actor,” the notice stated. “We can confirm the existence of another intrusion vector that works by abusing applications with privileged access to Microsoft Office 365 and Azure environments.”
Investigators have determined the attacker gained access to a limited subset of internal company emails. So far, the investigators have found no evidence of unauthorized access or compromise in any Malwarebytes production environments.
The notice isn’t the first time investigators have said the SolarWinds software supply chain attack wasn’t the sole means of infection.
When the mass compromise came to light last month, Microsoft said the hackers also stole signing certificates that allowed them to impersonate any of a target’s existing users and accounts through the Security Assertion Markup Language. Typically abbreviated as SAML, the XML-based language provides a way for identity providers to exchange authentication and authorization data with service providers.
Twelve days ago, the Cybersecurity & Infrastructure Security Agency said the attackers may have obtained initial access by using password guessing or password spraying or by exploiting administrative or service credentials.
“In our particular instance, the threat actor added a self-signed certificate with credentials to the service principal account,” Malwarebytes researcher Marcin Kleczynski wrote. “From there, they can authenticate using the key and make API calls to request emails via MSGraph.”
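To make the technique concrete, here is a minimal sketch of the token request such a certificate-bearing service principal would send under the standard Microsoft identity platform client-credentials flow. The tenant, client ID, and JWT assertion below are hypothetical placeholders, and the actual signing of the assertion (done with the certificate’s private key) is omitted:

```python
from urllib.parse import urlencode

# Hypothetical placeholders -- not real identifiers or credentials.
TENANT_ID = "contoso.onmicrosoft.com"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"

def build_token_request(signed_jwt: str) -> tuple[str, str]:
    """Build the OAuth 2.0 client-credentials request a service principal
    with a certificate credential would send to get a Graph access token."""
    token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        # The JWT assertion is signed with the certificate's private key.
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": signed_jwt,
        # .default requests whatever app permissions (e.g. Mail.Read)
        # have been granted to the service principal.
        "scope": "https://graph.microsoft.com/.default",
    })
    return token_url, body

url, body = build_token_request("<signed-jwt-placeholder>")
# The returned bearer token would then authorize Microsoft Graph calls
# such as GET https://graph.microsoft.com/v1.0/users/{id}/messages
```

The key point is that once the attacker can attach their own certificate to the service principal, nothing in this flow looks anomalous: the token endpoint simply sees a validly signed assertion.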
Last week, email management provider Mimecast also said that hackers compromised a digital certificate it issued and used it to target select customers who rely on the certificate to encrypt data sent and received through the company’s cloud-based service. While Mimecast didn’t say the certificate compromise was related to the ongoing attack, the similarities make it likely the two attacks are related.
Because the attackers used their access to the SolarWinds network to compromise the company’s software build system, Malwarebytes researchers investigated the possibility that they too were being used to infect their customers. So far, Malwarebytes said it has no evidence of such an infection. The company has also inspected its source code repositories for signs of malicious changes.
Malwarebytes said it first learned of the infection from Microsoft on December 15, two days after the SolarWinds hack was first disclosed. Microsoft identified the network compromise through suspicious activity from a third-party application in Malwarebytes’ Microsoft Office 365 tenant. The tactics, techniques, and procedures in the Malwarebytes attack were similar in key ways to those of the threat actor involved in the SolarWinds attacks.
Malwarebytes’ notice marks the fourth time a company has disclosed it was targeted by the SolarWinds hackers. Microsoft and security firms FireEye and CrowdStrike have also been targeted, although CrowdStrike has said the attempt to infect its network was unsuccessful. Government agencies reported to be affected include the Departments of Defense, Justice, Treasury, Commerce, and Homeland Security as well as the National Institutes of Health.
If you’re in IT, you probably remember the first time you walked into a real data center—not just a server closet, but an actual raised-floor data center, where the door whooshes open in a blast of cold air and noise and you’re confronted with rows and rows of racks, monolithic and gray, stuffed full of servers with cooling fans screaming and blinkenlights blinking like mad. The data center is where the cool stuff is—the pizza boxes, the blade servers, the NASes and the SANs. Some of its residents are more exotic—the Big Iron in all its massive forms, from Z-series to Superdome and all points in between.
For decades, data centers have been the beating hearts of many businesses—the fortified secret rooms where huge amounts of capital sit, busily transforming electricity into revenue. And they’re sometimes a place for IT to hide, too—it’s kind of a standing joke that whenever a user you don’t want to see is stalking around the IT floor, your best bet to avoid contact is just to badge into the data center and wait for them to go away. (But, uh, I never did that ever. I promise.)
But the last few years have seen a massive shift in the relationship between companies and their data—and the places where that data lives. Sure, it’s always convenient to own your own servers and storage, but why tie up all that capital when you don’t have to? Why not just go to the cloud buffet and pay for what you want to eat and nothing more?
There will always be some reason for some companies to have data centers—the cloud, for all its attractiveness, can’t quite do everything. (Not yet, at least.) But the list of objections to going off-premises for your computing needs is rapidly shrinking—and we’re going to talk a bit about what comes next.
Join us for a chat!
We’ll be holding a livestreamed discussion on the future of the data center on Tuesday, January 20, at 3:15pm Eastern Time (that’s 12:15pm Pacific Time, and 8:15pm UTC). On the panel will be Ars Infosec Editor Emeritus Sean Gallagher and myself, along with special guest Ivan Nekrasov, data center demand generation manager and field marketing consultant for Dell Technologies.
If you’d like to pitch us questions during the event, please feel free to register here and join us during the meeting tomorrow on Zoom. For folks who just want to watch, the live conversation will be available on Twitter, and we’ll embed the finished version (with transcript) on this story page like we did with our last livestream. Register and join in, or check back here after the event to watch!
Lawmakers and law enforcement agencies around the world, including in the United States, have increasingly called for backdoors in the encryption schemes that protect your data, arguing that national security is at stake. But new research indicates governments already have methods and tools that, for better or worse, let them access locked smartphones thanks to weaknesses in the security schemes of Android and iOS.
Cryptographers at Johns Hopkins University used publicly available documentation from Apple and Google as well as their own analysis to assess the robustness of Android and iOS encryption. They also studied more than a decade’s worth of reports about which of these mobile security features law enforcement and criminals have previously bypassed, or can currently, using special hacking tools. The researchers have dug into the current mobile privacy state of affairs and provided technical recommendations for how the two major mobile operating systems can continue to improve their protections.
“It just really shocked me, because I came into this project thinking that these phones are really protecting user data well,” says Johns Hopkins cryptographer Matthew Green, who oversaw the research. “Now I’ve come out of the project thinking almost nothing is protected as much as it could be. So why do we need a backdoor for law enforcement when the protections that these phones actually offer are so bad?”
Before you delete all your data and throw your phone out the window, though, it’s important to understand the types of privacy and security violations the researchers were specifically looking at. When you lock your phone with a passcode, fingerprint lock, or face recognition lock, it encrypts the contents of the device. Even if someone stole your phone and pulled the data off it, they would only see gibberish. Decoding all the data would require a key that only regenerates when you unlock your phone with a passcode, or face or finger recognition. And smartphones today offer multiple layers of these protections and different encryption keys for different levels of sensitive data. Many keys are tied to unlocking the device, but the most sensitive require additional authentication. The operating system and some special hardware are in charge of managing all of those keys and access levels so that, for the most part, you never even have to think about it.
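The central idea, that the decryption key is never stored but re-derived from the passcode each time you unlock, can be illustrated with a toy sketch. Real devices also entangle a hardware-bound secret and use tuned key-derivation parameters, which this stdlib-only example omits:

```python
import hashlib

def derive_key(passcode: str, salt: bytes) -> bytes:
    """Re-derive the data-protection key from the passcode.
    Real phones also mix in a per-device hardware secret; omitted here."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

salt = b"per-device-salt"                    # stored on the device, not secret
key_at_setup = derive_key("123456", salt)
key_at_unlock = derive_key("123456", salt)   # same passcode -> same key
key_wrong = derive_key("654321", salt)       # wrong passcode -> useless key

assert key_at_setup == key_at_unlock
assert key_at_setup != key_wrong
```

Because only the correct passcode regenerates the key, an attacker who copies the encrypted storage gets nothing readable without it.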
With all of that in mind, the researchers assumed it would be extremely difficult for an attacker to unearth any of those keys and unlock some amount of data. But that’s not what they found.
“On iOS in particular, the infrastructure is in place for this hierarchical encryption that sounds really good,” says Maximilian Zinkus, a PhD student at Johns Hopkins who led the analysis of iOS. “But I was definitely surprised to see then how much of it is unused.” Zinkus says that the potential is there, but the operating systems don’t extend encryption protections as far as they could.
When an iPhone has been off and boots up, all the data is in a state Apple calls “Complete Protection.” The user must unlock the device before anything else can really happen, and the device’s privacy protections are very high. You could still be forced to unlock your phone, of course, but existing forensic tools would have a difficult time pulling any readable data off it. Once you’ve unlocked your phone that first time after reboot, though, a lot of data moves into a different mode—Apple calls it “Protected Until First User Authentication,” but researchers often simply call it “After First Unlock.”
If you think about it, your phone is almost always in the AFU state. You probably don’t restart your smartphone for days or weeks at a time, and most people certainly don’t power it down after each use. (For most, that would mean hundreds of times a day.) So how effective is AFU security? That’s where the researchers started to have concerns.
The main difference between Complete Protection and AFU relates to how quick and easy it is for applications to access the keys to decrypt data. When data is in the Complete Protection state, the keys to decrypt it are stored deep within the operating system and encrypted themselves. But once you unlock your device the first time after reboot, lots of encryption keys start getting stored in quick access memory, even while the phone is locked. At this point an attacker could find and exploit certain types of security vulnerabilities in iOS to grab encryption keys that are accessible in memory and decrypt big chunks of data from the phone.
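The Complete Protection versus AFU distinction can be modeled as a toy key cache. All names here are hypothetical; on a real iPhone this logic lives in the kernel and the Secure Enclave, not in application code:

```python
import hashlib

class ToyPhone:
    """Toy model of Complete Protection vs. After First Unlock (AFU)."""

    def __init__(self, passcode: str):
        self._check = hashlib.sha256(passcode.encode()).digest()
        self._cached_key = None   # nothing in memory before the first unlock
        self.locked = True

    def unlock(self, passcode: str) -> bool:
        if hashlib.sha256(passcode.encode()).digest() != self._check:
            return False
        # After the first unlock, the decryption key stays resident in memory...
        self._cached_key = hashlib.sha256(b"key:" + passcode.encode()).digest()
        self.locked = False
        return True

    def lock(self) -> None:
        # ...even once the screen locks again (the AFU state).
        self.locked = True

    def key_in_memory(self) -> bool:
        return self._cached_key is not None

phone = ToyPhone("123456")
assert not phone.key_in_memory()   # before first unlock: Complete Protection
phone.unlock("123456")
phone.lock()
assert phone.key_in_memory()       # locked again, but key still in RAM: AFU
```

This is exactly the window forensic tools exploit: they don’t break the encryption, they use an OS vulnerability to read keys that are already sitting in memory.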
Based on available reports about smartphone access tools, like those from the Israeli law enforcement contractor Cellebrite and US-based forensic access firm Grayshift, the researchers realized that this is how almost all smartphone access tools likely work right now. It’s true that you need a specific type of operating system vulnerability to grab the keys—and both Apple and Google patch as many of those flaws as possible—but if you can find it, the keys are available, too.
The researchers found that Android has a similar setup to iOS with one crucial difference. Android has a version of “Complete Protection” that applies before the first unlock. After that, the phone data is essentially in the AFU state. But where Apple provides the option for developers to keep some data under the more stringent Complete Protection locks all the time—something a banking app, say, might take them up on—Android offers no equivalent after the first unlock. Forensic tools exploiting the right vulnerability can grab even more decryption keys, and ultimately access even more data, on an Android phone.
Tushar Jois, another Johns Hopkins PhD candidate who led the analysis of Android, notes that the Android situation is even more complex because of the many device makers and Android implementations in the ecosystem. There are more versions and configurations to defend, and across the board users are less likely to be getting the latest security patches than iOS users.
“Google has done a lot of work on improving this, but the fact remains that a lot of devices out there aren’t receiving any updates,” Jois says. “Plus different vendors have different components that they put into their final product, so on Android you can not only attack the operating system level, but other different layers of software that can be vulnerable in different ways and incrementally give attackers more and more data access. It makes an additional attack surface, which means there are more things that can be broken.”
The researchers shared their findings with the Android and iOS teams ahead of publication. An Apple spokesperson told WIRED that the company’s security work is focused on protecting users from hackers, thieves, and criminals looking to steal personal information. The types of attacks the researchers are looking at are very costly to develop, the spokesperson pointed out; they require physical access to the target device and only work until Apple patches the vulnerabilities they exploit. Apple also stressed that its goal with iOS is to balance security and convenience.
“Apple devices are designed with multiple layers of security in order to protect against a wide range of potential threats, and we work constantly to add new protections for our users’ data,” the spokesperson said in a statement. “As customers continue to increase the amount of sensitive information they store on their devices, we will continue to develop additional protections in both hardware and software to protect their data.”
Similarly, Google stressed that these Android attacks depend on physical access and the existence of the right type of exploitable flaws. “We work to patch these vulnerabilities on a monthly basis and continually harden the platform so that bugs and vulnerabilities do not become exploitable in the first place,” a spokesperson said in a statement. “You can expect to see additional hardening in the next release of Android.”
To understand the difference in these encryption states, you can do a little demo for yourself on iOS or Android. When your best friend calls your phone, their name usually shows up on the call screen because it’s in your contacts. But if you restart your device, don’t unlock it, and then have your friend call you, only their number will show up, not their name. That’s because the keys to decrypt your address book data aren’t in memory yet.
The researchers also dove deep into how both Android and iOS handle cloud backups—another area where encryption guarantees can erode.
“It’s the same type of thing where there’s great crypto available, but it’s not necessarily in use all the time,” Zinkus says. “And when you back up, you also expand what data is available on other devices. So if your Mac is also seized in a search, that potentially increases law enforcement access to cloud data.”
Though the smartphone protections that are currently available are adequate for a number of “threat models” or potential attacks, the researchers have concluded that they fall short on the question of specialized forensic tools that governments can easily buy for law enforcement and intelligence investigations. A recent report from researchers at the nonprofit Upturn found nearly 50,000 examples of US police in all 50 states using mobile device forensic tools to get access to smartphone data between 2015 and 2019. And while citizens of some countries may think it is unlikely that their devices will ever specifically be subject to this type of search, mobile surveillance is ubiquitous in many regions of the world and at a growing number of border crossings. The tools are also proliferating in other settings like US schools.
As long as mainstream mobile operating systems have these privacy weaknesses, though, it’s even more difficult to explain why governments around the world—including the US, UK, Australia, and India—have mounted major calls for tech companies to undermine the encryption in their products.