WarGames for real: How one 1983 exercise nearly triggered WWIII

Update, 11/29/20: It’s a very different Thanksgiving weekend here in 2020, but even if tables were smaller and travel non-existent, Ars staff is off for the holiday in order to recharge, take a mental afk break, and maybe stream a movie or five. But five years ago around this time, we were following a newly declassified government report from 1990 that outlined a KGB computer model… one that almost pulled a WarGames, just IRL. With the film now streaming on Netflix (thus setting our off day schedule), we thought we’d resurface this story for an accompanying Sunday read. This piece first published on November 25, 2015, and it appears unchanged below.

“Let’s play Global Thermonuclear War.”

Thirty-two years ago, just months after the release of the movie WarGames, the world came the closest it ever has to nuclear Armageddon. In the movie version of a global near-death experience, a teenage hacker messing around with an artificial intelligence program that just happened to control the American nuclear missile force unleashes chaos. In reality, a very different computer program run by the Soviets fed growing paranoia about the intentions of the United States, very nearly triggering a nuclear war.

The software in question was a KGB computer model constructed as part of Operation RYAN (РЯН), details of which were obtained from Oleg Gordievsky, the KGB’s London section chief who was at the same time spying for Britain’s MI6. An acronym for “Nuclear Missile Attack” (Ракетное Ядерное Нападение), RYAN was an intelligence operation started in 1981 to help the agency forecast whether the US and its allies were planning a nuclear strike. The KGB believed that by analyzing quantitative data from intelligence on US and NATO activities relative to the Soviet Union, it could predict when a sneak attack was most likely.

As it turned out, Exercise Able Archer ’83 triggered that forecast. The war game, which was staged over two weeks in November of 1983, simulated the procedures that NATO would go through prior to a nuclear launch. Many of these procedures and tactics were things the Soviets had never seen, and the whole exercise came after a series of feints by US and NATO forces to size up Soviet defenses, as well as the Soviet downing of Korean Air Lines Flight 007 on September 1, 1983. So as Soviet leaders monitored the exercise and considered the current climate, they put two and two together. Able Archer, according to Soviet leadership at least, must have been a cover for a genuine surprise attack planned by the US, then led by a president possibly insane enough to do it.

While some studies, including an analysis some 12 years ago by historian Fritz Ermarth, have downplayed the actual Soviet response to Able Archer, a newly published declassified 1990 report from the President’s Foreign Intelligence Advisory Board (PFIAB) to President George H. W. Bush, obtained by the National Security Archive, suggests that the danger was all too real. The document was classified as Top Secret with the code word UMBRA, denoting the most sensitive compartment of classified material, and it cites data from sources that to this day remain highly classified. When combined with previously released CIA, National Security Agency (NSA), and Defense Department documents, this PFIAB report shows that only the illness of Soviet leader Yuri Andropov—and the instincts of one mid-level Soviet officer—may have prevented a nuclear launch.

The balance of paranoia

As Able Archer ’83 was getting underway, the US defense and intelligence community believed the Soviet Union was strategically secure. A top-secret Defense Department-CIA Joint Net Assessment published in November of 1983 stated, “The Soviets, in our view, have some clear advantages today, and these advantages are projected to continue, although differences may narrow somewhat in the next 10 years. It is likely, however, that the Soviets do not see their advantage as being as great as we would assess.”

The assessment was spot on—the Soviets certainly did not see it this way. In 1981, the KGB foreign intelligence directorate ran a computer analysis using an early version of the RYAN system, seeking the “correlation of world forces” between the USSR and the United States. The numbers suggested one thing: the Soviet Union was losing the Cold War, and the US might soon be in a strategically dominant position. And if that happened, the Soviets believed, their adversary would strike to destroy them and their Warsaw Pact allies.

This data confirmed everything the leadership expected given the intransigence of the Reagan administration. The US’ aggressive foreign policy in the late 1970s and early 1980s confused and worried the USSR. Soviet leaders didn’t understand the Western reaction to the invasion of Afghanistan, which they thought the US would simply recognize as a vital security operation.

The US was even funding the mujaheddin fighting them, “training and sending armed terrorists,” as Communist Party Secretary Mikhail Suslov put it in a 1980 speech (among those trainees was a young Saudi, inspired to jihad, named Osama bin Laden). And in Nicaragua, the US was funneling arms to the Contras fighting the Sandinista government of Daniel Ortega. All the while, Reagan was refusing to engage the Soviets on arms control. This mounting evidence convinced some in the Soviet leadership that Reagan was willing to go even further in his efforts to destroy what he would soon describe as the “evil empire.”

The USSR had plenty of reason to think the US also believed it could win a nuclear war. The rhetoric of the Reagan administration was backed up by a surge in military capabilities, and much of the Soviet military’s nuclear capabilities were vulnerable to surprise attack. In 1983, the United States was in the midst of its biggest military buildup in decades. And thanks to a direct line into some of the US’ most sensitive communications, the KGB had plenty of bad news to share about that with the Kremlin.

The seaborne leg of the Soviet strategic force was especially vulnerable. The US Navy’s SOSUS (sound surveillance system), a network of hydrophone arrays, tracked nearly every Russian submarine that entered the Atlantic and much of the Pacific, and US antisubmarine forces (P-3 Orion patrol planes, fast attack subs, and destroyers and frigates) were practically on top of, or in the wake of, Soviet ballistic missile subs during their patrols. The US had mapped out the “Yankee Patrol Boxes” where Soviet Navaga-class (NATO designation “Yankee”) ballistic missile subs stationed themselves off the US’ east and west coasts. Again, the Soviets knew all of this thanks to the spy John Walker, so confidence in their sub fleet’s survivability was likely low.

The air-based leg of the Soviet triad was no better off.  By the 1980s, the Soviet Union had the largest air force in the world. But the deployment of the Tomahawk cruise missile, initial production of the US Air Force’s AGM-86 Air Launched Cruise Missile, and the pending deployment of Pershing II intermediate range ballistic missiles to Europe meant that NATO could strike at Soviet air fields with very little warning. Unfortunately, the Soviet strategic air force needed as much warning as it could get. Soviet long-range bombers were “kept at a low state of readiness,” the advisory board report noted. Hours or days would have been required to get bombers ready for an all-out war. In all likelihood, the Soviet leadership assumed their entire bomber force would be caught on the ground in a sneak attack and wiped out.

Even theater nuclear forces like the RSD-10 Pioneer—one of the weapons systems that prompted the deployment of the Pershing II to Europe—were vulnerable. They generally didn’t have warheads or missiles loaded into their mobile launcher systems when not on alert. The only leg not overly vulnerable to a first strike by NATO was the Soviets’ intermediate and intercontinental ballistic missile (ICBM) force. Its readiness was in question, however. According to the 1990 briefing paper by the PFIAB, about 95 percent of the Soviet ICBM force was ready to respond to an attack alert within 15 minutes during the early 1980s. The silo-based missiles were out of range of anything but US submarine-launched and land-based ballistic missiles.

The viability of the ICBM force as a response to sneak attack was based entirely on how much warning time the Soviets had. In 1981, they brought a new over-the-horizon ballistic missile early warning (BMEW) radar system on-line. One year later, the Soviets activated the US-KS nuclear launch warning satellite network, known as “Oko” (Russian for “eye”). These two measures gave the Soviet command and control structure about 30 minutes’ warning of any US ICBM launch. But the deployment of Pershing II missiles to Europe could cut warning time to less than eight minutes, and attacks from US sub-launched missiles would have warning times in some cases of less than five minutes.

And then, President Ronald Reagan announced the Strategic Defense Initiative (SDI) or “Star Wars” program—the predecessor to the current Missile Defense Agency efforts to counter limited ballistic missile attacks. While SDI was presented as defensive, it would likely be effective only if a US first strike had already dramatically reduced the number of Soviet ICBMs that could be launched in response. More than ever before, SDI convinced the Soviet leadership that Reagan was aiming to make a nuclear war against them winnable.

Combined with his ongoing anti-Soviet rhetoric, this convinced the Soviet leadership that Reagan was an existential threat to the country on par with Hitler. In fact, they publicly made that comparison, accusing the Reagan administration of pushing the world closer to another global war. And maybe, they thought, the US president already believed it was possible to defeat the Soviets with a surprise attack.

Security firm Malwarebytes was infected by same hackers who hit SolarWinds

Security firm Malwarebytes said it was breached by the same nation-state-sponsored hackers who compromised a dozen or more US government agencies and private companies.

The attackers are best known for first hacking into Austin, Texas-based SolarWinds, compromising its software-distribution system, and using it to infect the networks of customers who used SolarWinds’ network management software. In an online notice, however, Malwarebytes said the attackers used a different vector.

“While Malwarebytes does not use SolarWinds, we, like many other companies were recently targeted by the same threat actor,” the notice stated. “We can confirm the existence of another intrusion vector that works by abusing applications with privileged access to Microsoft Office 365 and Azure environments.”

Investigators have determined the attacker gained access to a limited subset of internal company emails. So far, the investigators have found no evidence of unauthorized access or compromise in any Malwarebytes production environments.

The notice isn’t the first time investigators have said the SolarWinds software supply chain attack wasn’t the sole means of infection.

When the mass compromise came to light last month, Microsoft said the hackers also stole signing certificates that allowed them to impersonate any of a target’s existing users and accounts through the Security Assertion Markup Language. Typically abbreviated as SAML, the XML-based language provides a way for identity providers to exchange authentication and authorization data with service providers.
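
To make the risk concrete, here is a deliberately stripped-down sketch of that trust model in Python. It is not real SAML or XML-DSig, and the keys, subject, and function names are invented for illustration; the point is only that whoever holds the identity provider's private signing key can mint assertions the relying service will accept.

```python
# Simplified sketch (not real XML-DSig/SAML): a stolen identity-provider
# signing key lets an attacker mint assertions for any user, and the
# service provider's signature check will happily accept them.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The identity provider's key pair (the attackers stole the private half).
idp_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
idp_public_key = idp_private_key.public_key()

def mint_assertion(private_key, subject: str) -> tuple[bytes, bytes]:
    """Build a SAML-like assertion naming `subject` and sign it."""
    assertion = f"<Assertion><Subject>{subject}</Subject></Assertion>".encode()
    signature = private_key.sign(assertion, padding.PKCS1v15(), hashes.SHA256())
    return assertion, signature

def service_provider_accepts(public_key, assertion: bytes, signature: bytes) -> bool:
    """The relying service only checks that the signature verifies."""
    try:
        public_key.verify(signature, assertion, padding.PKCS1v15(), hashes.SHA256())
        return True
    except Exception:
        return False

# With the private key, the attacker can impersonate any account.
assertion, sig = mint_assertion(idp_private_key, "admin@victim.example")
print(service_provider_accepts(idp_public_key, assertion, sig))  # True
```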

Twelve days ago, the Cybersecurity & Infrastructure Security Agency said the attackers may have obtained initial access by using password guessing or password spraying or by exploiting administrative or service credentials.

“In our particular instance, the threat actor added a self-signed certificate with credentials to the service principal account,” Malwarebytes researcher Marcin Kleczynski wrote. “From there, they can authenticate using the key and make API calls to request emails via MSGraph.”
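
For readers who want to picture that access pattern, the following is a rough sketch of a certificate-based client-credentials flow that yields app-only Microsoft Graph tokens able to read mail. The tenant ID, client ID, key file, and mailbox address are placeholders, and this is not Malwarebytes' or the attackers' actual code.

```python
# Hedged sketch of the technique Kleczynski describes: a certificate
# credential attached to a privileged service principal is used in a
# client-credentials flow, and the resulting app-only token can query
# mailboxes through Microsoft Graph. All identifiers are hypothetical.
import msal
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder tenant
CLIENT_ID = "11111111-1111-1111-1111-111111111111"   # the abused service principal

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential={
        "private_key": open("sp_key.pem").read(),     # key of the added self-signed cert
        "thumbprint": "ABCDEF0123456789ABCDEF0123456789ABCDEF01",
    },
)

# Client-credentials flow: no user involved, only the certificate.
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# With an app-only token and Mail.Read application permission, mailboxes
# in the tenant can be requested via MSGraph.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/users/someone@victim.example/messages",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
print(resp.status_code)
```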

Last week, email management provider Mimecast also said that hackers compromised a digital certificate it issued and used that certificate to target select customers, who use it to encrypt data sent and received through the company’s cloud-based service. While Mimecast didn’t say the certificate compromise was related to the ongoing attack, the similarities make it likely the two attacks are related.

Because the attackers used their access to the SolarWinds network to compromise the company’s software build system, Malwarebytes researchers investigated the possibility that they too were being used to infect their customers. So far, Malwarebytes said it has no evidence of such an infection. The company has also inspected its source code repositories for signs of malicious changes.

Malwarebytes said it first learned of the infection from Microsoft on December 15, two days after the SolarWinds hack was first disclosed. Microsoft identified the network compromise through suspicious activity from a third-party application in Malwarebytes’ Microsoft Office 365 tenant. The tactics, techniques, and procedures in the Malwarebytes attack were similar in key ways to those of the threat actor involved in the SolarWinds attacks.

Malwarebytes’ notice marks the fourth time a company has disclosed it was targeted by the SolarWinds hackers. Microsoft and security firms FireEye and CrowdStrike have also been targeted, although CrowdStrike has said the attempt to infect its network was unsuccessful. Government agencies reported to be affected include the Departments of Defense, Justice, Treasury, Commerce, and Homeland Security as well as the National Institutes of Health.

Ars online IT roundtable tomorrow: What’s the future of the data center?

If you’re in IT, you probably remember the first time you walked into a real data center—not just a server closet, but an actual raised-floor data center, where the door whooshes open in a blast of cold air and noise and you’re confronted with rows and rows of racks, monolithic and gray, stuffed full of servers with cooling fans screaming and blinkenlights blinking like mad. The data center is where the cool stuff is—the pizza boxes, the blade servers, the NASes and the SANs. Some of its residents are more exotic—the Big Iron in all its massive forms, from Z-series to Superdome and all points in between.

For decades, data centers have been the beating hearts of many businesses—the fortified secret rooms where huge amounts of capital sit, busily transforming electricity into revenue. And they’re sometimes a place for IT to hide, too—it’s kind of a standing joke that whenever a user you don’t want to see is stalking around the IT floor, your best bet to avoid contact is just to badge into the data center and wait for them to go away. (But, uh, I never did that ever. I promise.)

But the last few years have seen a massive shift in the relationship between companies and their data—and the places where that data lives. Sure, it’s always convenient to own your own servers and storage, but why tie up all that capital when you don’t have to? Why not just go to the cloud buffet and pay for what you want to eat and nothing more?

There will always be some reason for some companies to have data centers—the cloud, for all its attractiveness, can’t quite do everything. (Not yet, at least.) But the list of objections to going off-premises for your computing needs is rapidly shrinking—and we’re going to talk a bit about what comes next.

Join us for a chat!

We’ll be holding a livestreamed discussion on the future of the data center on Tuesday, January 20, at 3:15pm Eastern Time (that’s 12:15pm Pacific Time, and 8:15pm UTC). On the panel will be Ars Infosec Editor Emeritus Sean Gallagher and myself, along with special guest Ivan Nekrasov, data center demand generation manager and field marketing consultant for Dell Technologies.

If you’d like to pitch us questions during the event, please feel free to register here and join us during the meeting tomorrow on Zoom. For folks who just want to watch, the live conversation will be available on Twitter, and we’ll embed the finished version (with transcript) on this story page like we did with our last livestream. Register and join in, or check back here after the event to watch!

How law enforcement gets around your smartphone’s encryption

Image credit: Westend61 | Getty Images

Lawmakers and law enforcement agencies around the world, including in the United States, have increasingly called for backdoors in the encryption schemes that protect your data, arguing that national security is at stake. But new research indicates governments already have methods and tools that, for better or worse, let them access locked smartphones thanks to weaknesses in the security schemes of Android and iOS.

Cryptographers at Johns Hopkins University used publicly available documentation from Apple and Google as well as their own analysis to assess the robustness of Android and iOS encryption. They also studied more than a decade’s worth of reports about which of these mobile security features law enforcement and criminals have bypassed in the past, or can currently bypass, using special hacking tools. The researchers have dug into the current mobile privacy state of affairs and provided technical recommendations for how the two major mobile operating systems can continue to improve their protections.

“It just really shocked me, because I came into this project thinking that these phones are really protecting user data well,” says Johns Hopkins cryptographer Matthew Green, who oversaw the research. “Now I’ve come out of the project thinking almost nothing is protected as much as it could be. So why do we need a backdoor for law enforcement when the protections that these phones actually offer are so bad?”

Before you delete all your data and throw your phone out the window, though, it’s important to understand the types of privacy and security violations the researchers were specifically looking at. When you lock your phone with a passcode, fingerprint lock, or face recognition lock, it encrypts the contents of the device. Even if someone stole your phone and pulled the data off it, they would only see gibberish. Decoding all the data would require a key that only regenerates when you unlock your phone with a passcode, or face or finger recognition. And smartphones today offer multiple layers of these protections and different encryption keys for different levels of sensitive data. Many keys are tied to unlocking the device, but the most sensitive require additional authentication. The operating system and some special hardware are in charge of managing all of those keys and access levels so that, for the most part, you never even have to think about it.
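
The sketch below is a generic, much-simplified model of that kind of key hierarchy, written in Python purely for illustration. Real devices also mix in hardware-bound secrets and strict rate limiting, so this is not Apple's or Google's implementation; the names and parameters are assumptions.

```python
# Generic sketch of a passcode-derived key hierarchy: a key-encryption
# key derived from the passcode wraps a per-class key, and file data is
# encrypted under that class key. Not a real device implementation.
import os
import hashlib
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

salt = os.urandom(16)
class_key = AESGCM.generate_key(bit_length=256)      # protects one class of files
file_data = b"contacts database"

# 1. Derive a key-encryption key from the passcode (real devices also
#    fold in a secret fused into the hardware).
def kek_from_passcode(passcode: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000)

# 2. The class key is stored only in wrapped (encrypted) form.
wrapped_class_key = aes_key_wrap(kek_from_passcode("1234"), class_key)

# 3. File contents are encrypted under the class key.
nonce = os.urandom(12)
ciphertext = AESGCM(class_key).encrypt(nonce, file_data, None)

# Without the passcode, the wrapped class key cannot be opened, so data
# pulled straight off storage is just gibberish.
unwrapped = aes_key_unwrap(kek_from_passcode("1234"), wrapped_class_key)
print(AESGCM(unwrapped).decrypt(nonce, ciphertext, None))  # b'contacts database'
```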

With all of that in mind, the researchers assumed it would be extremely difficult for an attacker to unearth any of those keys and unlock some amount of data. But that’s not what they found.

“On iOS in particular, the infrastructure is in place for this hierarchical encryption that sounds really good,” says Maximilian Zinkus, a PhD student at Johns Hopkins who led the analysis of iOS. “But I was definitely surprised to see then how much of it is unused.” Zinkus says that the potential is there, but the operating systems don’t extend encryption protections as far as they could.

When an iPhone has been off and boots up, all the data is in a state Apple calls “Complete Protection.” The user must unlock the device before anything else can really happen, and the device’s privacy protections are very high. You could still be forced to unlock your phone, of course, but existing forensic tools would have a difficult time pulling any readable data off it. Once you’ve unlocked your phone that first time after reboot, though, a lot of data moves into a different mode—Apple calls it “Protected Until First User Authentication,” but researchers often simply call it “After First Unlock.”

If you think about it, your phone is almost always in the AFU state. You probably don’t restart your smartphone for days or weeks at a time, and most people certainly don’t power it down after each use. (For most, that would mean hundreds of times a day.) So how effective is AFU security? That’s where the researchers started to have concerns.

The main difference between Complete Protection and AFU relates to how quick and easy it is for applications to access the keys to decrypt data. When data is in the Complete Protection state, the keys to decrypt it are stored deep within the operating system and encrypted themselves. But once you unlock your device the first time after reboot, lots of encryption keys start getting stored in quick access memory, even while the phone is locked. At this point an attacker could find and exploit certain types of security vulnerabilities in iOS to grab encryption keys that are accessible in memory and decrypt big chunks of data from the phone.
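
Here is a toy model of that difference. It is not how iOS actually stores keys; it only illustrates why a key that stays cached in memory after the first unlock is reachable by a memory-reading exploit, while a key that is never cached is not.

```python
# Toy model of "Complete Protection" vs. "After First Unlock" (AFU):
# after the first unlock, the AFU class key stays cached in memory even
# while the screen is locked, so a memory-reading exploit can recover
# AFU-protected data but not Complete Protection data.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

keys_in_memory = {}                               # what a memory exploit could see
NONCE = os.urandom(12)

cp_key = AESGCM.generate_key(bit_length=256)      # "Complete Protection" class
afu_key = AESGCM.generate_key(bit_length=256)     # "After First Unlock" class

banking_token = AESGCM(cp_key).encrypt(NONCE, b"banking session token", None)
photos = AESGCM(afu_key).encrypt(NONCE, b"photo library index", None)

def first_unlock():
    # Once the user unlocks, the AFU class key stays cached so background
    # apps keep working, even after the phone locks again.
    keys_in_memory["afu"] = afu_key

def memory_exploit(label: str, blob: bytes) -> bytes:
    key = keys_in_memory.get(label)
    return AESGCM(key).decrypt(NONCE, blob, None) if key else b"<still encrypted>"

first_unlock()                                    # phone unlocked once, then locked
print(memory_exploit("afu", photos))              # b'photo library index'
print(memory_exploit("cp", banking_token))        # b'<still encrypted>'
```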

Based on available reports about smartphone access tools, like those from the Israeli law enforcement contractor Cellebrite and US-based forensic access firm Grayshift, the researchers realized that this is how almost all smartphone access tools likely work right now. It’s true that you need a specific type of operating system vulnerability to grab the keys—and both Apple and Google patch as many of those flaws as possible—but if you can find it, the keys are available, too.

The researchers found that Android has a similar setup to iOS with one crucial difference. Android has a version of “Complete Protection” that applies before the first unlock. After that, the phone data is essentially in the AFU state. But where Apple provides the option for developers to keep some data under the more stringent Complete Protection locks all the time—something a banking app, say, might take them up on—Android doesn’t have that mechanism after first unlocking. Forensic tools exploiting the right vulnerability can grab even more decryption keys, and ultimately access even more data, on an Android phone.

Tushar Jois, another Johns Hopkins PhD candidate who led the analysis of Android, notes that the Android situation is even more complex because of the many device makers and Android implementations in the ecosystem. There are more versions and configurations to defend, and across the board users are less likely to be getting the latest security patches than iOS users.

“Google has done a lot of work on improving this, but the fact remains that a lot of devices out there aren’t receiving any updates,” Jois says. “Plus different vendors have different components that they put into their final product, so on Android you can not only attack the operating system level, but other different layers of software that can be vulnerable in different ways and incrementally give attackers more and more data access. It makes an additional attack surface, which means there are more things that can be broken.”

The researchers shared their findings with the Android and iOS teams ahead of publication. An Apple spokesperson told WIRED that the company’s security work is focused on protecting users from hackers, thieves, and criminals looking to steal personal information. The types of attacks the researchers are looking at are very costly to develop, the spokesperson pointed out; they require physical access to the target device and only work until Apple patches the vulnerabilities they exploit. Apple also stressed that its goal with iOS is to balance security and convenience.

“Apple devices are designed with multiple layers of security in order to protect against a wide range of potential threats, and we work constantly to add new protections for our users’ data,” the spokesperson said in a statement. “As customers continue to increase the amount of sensitive information they store on their devices, we will continue to develop additional protections in both hardware and software to protect their data.”

Similarly, Google stressed that these Android attacks depend on physical access and the existence of the right type of exploitable flaws. “We work to patch these vulnerabilities on a monthly basis and continually harden the platform so that bugs and vulnerabilities do not become exploitable in the first place,” a spokesperson said in a statement. “You can expect to see additional hardening in the next release of Android.”

To understand the difference in these encryption states, you can do a little demo for yourself on iOS or Android. When your best friend calls your phone, their name usually shows up on the call screen because it’s in your contacts. But if you restart your device, don’t unlock it, and then have your friend call you, only their number will show up, not their name. That’s because the keys to decrypt your address book data aren’t in memory yet.

The researchers also dove deep into how both Android and iOS handle cloud backups—another area where encryption guarantees can erode.

“It’s the same type of thing where there’s great crypto available, but it’s not necessarily in use all the time,” Zinkus says. “And when you back up, you also expand what data is available on other devices. So if your Mac is also seized in a search, that potentially increases law enforcement access to cloud data.”

Though the smartphone protections that are currently available are adequate for a number of “threat models” or potential attacks, the researchers have concluded that they fall short on the question of specialized forensic tools that governments can easily buy for law enforcement and intelligence investigations. A recent report from researchers at the nonprofit Upturn found nearly 50,000 examples of US police in all 50 states using mobile device forensic tools to get access to smartphone data between 2015 and 2019. And while citizens of some countries may think it is unlikely that their devices will ever specifically be subject to this type of search, mobile surveillance is ubiquitous in many regions of the world and at a growing number of border crossings. The tools are also proliferating in other settings like US schools.

As long as mainstream mobile operating systems have these privacy weaknesses, though, it’s even more difficult to explain why governments around the world—including the US, UK, Australia, and India—have mounted major calls for tech companies to undermine the encryption in their products.

This story originally appeared on wired.com.
