After launching on iOS, Twitter is giving Android users the ability to easily switch between seeing the reverse-chronological “latest tweets” and the algorithmic “top tweets” feeds on their home page. The company announced the rollout at a media event in New York.
The “sparkle button” is a way for Twitter to appease long-time power tweeters while also shifting more of its user base to the algorithmic feed, which the company says has served to increase the number of conversations happening on the platform.
You can read more about the company’s algorithmic feed thinking here.
Update, 11/29/20: It’s a very different Thanksgiving weekend here in 2020, but even if tables were smaller and travel non-existent, Ars staff is off for the holiday in order to recharge, take a mental afk break, and maybe stream a movie or five. But five years ago around this time, we were following a newly declassified government report from 1990 that outlined a KGB computer model… one that almost pulled a WarGames, just IRL. With the film now streaming on Netflix (thus setting our off day schedule), we thought we’d resurface this story for an accompanying Sunday read. This piece first published on November 25, 2015, and it appears unchanged below.
“Let’s play Global Thermonuclear War.”
Thirty-two years ago, just months after the release of the movie WarGames, the world came the closest it ever has to nuclear Armageddon. In the movie version of a global near-death experience, a teenage hacker messing around with an artificial intelligence program that just happened to control the American nuclear missile force unleashes chaos. In reality, a very different computer program run by the Soviets fed growing paranoia about the intentions of the United States, very nearly triggering a nuclear war.
The software in question was a KGB computer model constructed as part of Operation RYAN (РЯН), details of which were obtained from Oleg Gordievsky, the KGB’s London section chief who was at the same time spying for Britain’s MI6. Named for an acronym for “Nuclear Missile Attack” (Ракетное Ядерное Нападение), RYAN was an intelligence operation started in 1981 to help the intelligence agency forecast if the US and its allies were planning a nuclear strike. The KGB believed that by analyzing quantitative data from intelligence on US and NATO activities relative to the Soviet Union, they could predict when a sneak attack was most likely.
As it turned out, Exercise Able Archer ’83 triggered that forecast. The war game, which was staged over two weeks in November of 1983, simulated the procedures that NATO would go through prior to a nuclear launch. Many of these procedures and tactics were things the Soviets had never seen, and the whole exercise came after a series of feints by US and NATO forces to size up Soviet defenses and the downing of Korean Air Lines Flight 007 on September 1, 1983. So as Soviet leaders monitored the exercise and considered the current climate, they put two and two together. Able Archer, according to Soviet leadership at least, must have been a cover for a genuine surprise attack planned by the US, then led by a president possibly insane enough to do it.
While some studies, including an analysis some 12 years ago by veteran CIA Soviet analyst Fritz Ermarth, have downplayed the actual Soviet response to Able Archer, a newly published declassified 1990 report from the President’s Foreign Intelligence Advisory Board (PFIAB) to President George H. W. Bush obtained by the National Security Archive suggests that the danger was all too real. The document was classified as Top Secret with the code word UMBRA, denoting the most sensitive compartment of classified material, and it cites data from sources that to this day remain highly classified. When combined with previously released CIA, National Security Agency (NSA), and Defense Department documents, this PFIAB report shows that only the illness of Soviet leader Yuri Andropov—and the instincts of one mid-level Soviet officer—may have prevented a nuclear launch.
The balance of paranoia
As Able Archer ’83 was getting underway, the US defense and intelligence community believed the Soviet Union was strategically secure. A top-secret Defense Department-CIA Joint Net Assessment published in November of 1983 stated, “The Soviets, in our view, have some clear advantages today, and these advantages are projected to continue, although differences may narrow somewhat in the next 10 years. It is likely, however, that the Soviets do not see their advantage as being as great as we would assess.”
The assessment was spot on—the Soviets certainly did not see it this way. In 1981, the KGB foreign intelligence directorate ran a computer analysis using an early version of the RYAN system, seeking the “correlation of world forces” between the USSR and the United States. The numbers suggested one thing: the Soviet Union was losing the Cold War, and the US might soon be in a strategically dominant position. And if that happened, the Soviets believed their adversary would strike to destroy them and their Warsaw Pact allies.
This data was everything the leadership expected given the intransigence of the Reagan administration. The US’ aggressive foreign policy in the late 1970s and early 1980s confused and worried the USSR. Soviet leaders didn’t understand the Western reaction to the invasion of Afghanistan, which they expected the US to recognize as a vital security operation.
The US was even funding the mujaheddin fighting them, “training and sending armed terrorists,” as Communist Party Secretary Mikhail Suslov put it in a 1980 speech (those trainees included a young, jihad-inspired Saudi by the name of Osama bin Laden). And in Nicaragua, the US was funneling arms to the Contras fighting the Sandinista government of Daniel Ortega. All the while, Reagan was refusing to engage the Soviets on arms control. This mounting evidence convinced some in the Soviet leadership that Reagan was willing to go even further in his efforts to destroy what he would soon describe as the “evil empire.”
The USSR had plenty of reason to think the US also believed it could win a nuclear war. The rhetoric of the Reagan administration was backed up by a surge in military capabilities, and much of the Soviet military’s nuclear force was vulnerable to surprise attack. In 1983, the United States was in the midst of its biggest military buildup in decades. And thanks to a direct line into some of the US’ most sensitive communications, the KGB had plenty of bad news to share about that with the Kremlin.
The seaborne leg of the Soviet strategic force was especially vulnerable. The US Navy’s SOSUS (sound surveillance system), a network of hydrophone arrays, tracked nearly every Russian submarine that entered the Atlantic and much of the Pacific, and US antisubmarine forces (P-3 Orion patrol planes, fast attack subs, and destroyers and frigates) were practically on top of, or in the wake of, Soviet ballistic missile subs during their patrols. The US had mapped out the “Yankee Patrol Boxes” where Soviet Navaga-class (NATO designation “Yankee”) ballistic missile subs stationed themselves off the US’ east and west coasts. Again, the Soviets knew all of this thanks to the spy John Walker, so confidence in their sub fleet’s survivability was likely low.
The air-based leg of the Soviet triad was no better off. By the 1980s, the Soviet Union had the largest air force in the world. But the deployment of the Tomahawk cruise missile, initial production of the US Air Force’s AGM-86 Air Launched Cruise Missile, and the pending deployment of Pershing II intermediate range ballistic missiles to Europe meant that NATO could strike at Soviet air fields with very little warning. Unfortunately, the Soviet strategic air force needed as much warning as it could get. Soviet long-range bombers were “kept at a low state of readiness,” the advisory board report noted. Hours or days would have been required to get bombers ready for an all-out war. In all likelihood, the Soviet leadership assumed their entire bomber force would be caught on the ground in a sneak attack and wiped out.
Even theater nuclear forces like the RSD-10 Pioneer—one of the weapons systems that prompted the deployment of the Pershing II to Europe—were vulnerable. They generally didn’t have warheads or missiles loaded into their mobile launcher systems when not on alert. The only leg not overly vulnerable to a first strike by NATO was the Soviets’ intermediate and intercontinental ballistic missile (ICBM) force. Whether it could launch in time, however, was another question. According to the 1990 briefing paper by the PFIAB, about 95 percent of the Soviet ICBM force was ready to respond to an attack alert within 15 minutes during the early 1980s. The silo-based missiles were out of range of anything but US submarine-launched and land-based ballistic missiles.
The viability of the ICBM force as a response to sneak attack was based entirely on how much warning time the Soviets had. In 1981, they brought a new over-the-horizon ballistic missile early warning (BMEW) radar system on-line. One year later, the Soviets activated the US-KS nuclear launch warning satellite network, known as “Oko” (Russian for “eye”). These two measures gave the Soviet command and control structure about 30 minutes’ warning of any US ICBM launch. But the deployment of Pershing II missiles to Europe could cut warning time to less than eight minutes, and attacks from US sub-launched missiles would have warning times in some cases of less than five minutes.
And then, President Ronald Reagan announced the Strategic Defense Initiative (SDI) or “Star Wars” program—the predecessor to the current Missile Defense Agency efforts to counter limited ballistic missile attacks. While SDI was presented as defensive, it would likely be effective only against a Soviet force already thinned by a US first strike. More than ever before, SDI convinced the Soviet leadership that Reagan was aiming to make a nuclear war against them winnable.
Combined with his ongoing anti-Soviet rhetoric, these moves led Soviet leadership to see Reagan as an existential threat to the country on par with Hitler. In fact, they publicly made that comparison, accusing the Reagan administration of pushing the world closer to another global war. And maybe, they thought, the US president already believed it was possible to defeat the Soviets with a surprise attack.
Julian Green was explaining the big problem with meetings when our meeting started to glitch. The pixels of his face rearranged themselves. A sentence came out as hiccups. Then he sputtered, froze, and ghosted.
Green and I had been chatting on Headroom, a new video conferencing platform he and cofounder Andrew Rabinovich launched this fall. The glitch, they assured me, was not caused by their software, but by Green’s Wi-Fi connection. “I think the rest of my street is on homeschool,” he said, a problem that Headroom was not built to solve. It was built instead for other issues: the tedium of taking notes, the coworkers who drone on and on, and the difficulty in keeping everyone engaged. As we spoke, software tapped out a real-time transcription in a window next to our faces. It kept a running tally of how many words each person had said (Rabinovich dominated). Once our meeting was over, Headroom’s software would synthesize the concepts from the transcript; identify key topics, dates, ideas, and action items; and, finally, spit out a record that could be searched at a later time. It would even try to measure how much each participant was paying attention.
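Headroom hasn’t said exactly how its per-speaker tally works, but the bookkeeping it describes is simple to sketch. Below is a minimal illustration in Python, assuming the transcription service emits speaker-labeled text segments; the `segments` structure and the `talk_time_tally` helper are hypothetical, not Headroom’s actual API.

```python
from collections import Counter

def talk_time_tally(segments):
    """Count words spoken per participant.

    `segments` is assumed to be a list of (speaker, text) pairs, e.g. the
    output of a speech-to-text service that attaches speaker labels.
    """
    tally = Counter()
    for speaker, text in segments:
        tally[speaker] += len(text.split())
    return tally

# A short mock transcript (illustrative only)
segments = [
    ("Rabinovich", "The goal is to map facial expressions captured by landmarks."),
    ("Rabinovich", "Each movement can be represented as data."),
    ("Green", "You can't tell if it's crickets or if people are loving it."),
]
print(talk_time_tally(segments))  # Counter({'Rabinovich': 17, 'Green': 12})
```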
Meetings have become the necessary evil of the modern workplace, spanning an elaborate taxonomy: daily stand-ups, sit-downs, all-hands, one-on-ones, brown-bags, status checks, brainstorms, debriefs, design reviews. But as time spent in these corporate conclaves goes up, work seems to suffer. Researchers have found that meetings correlate with a decline in workplace happiness, productivity, and even company market share. And in a year when so many office interactions have gone digital, the usual tedium of meeting culture is compounded by the fits and starts of teleconferencing.
Recently, a new wave of startups has emerged to optimize those meetings with, what else, technology. Macro (“give your meeting superpowers”) makes a collaborative interface for Zoom. Mmhmm offers interactive backgrounds and slide-share tools for presenters. Fireflies, an AI transcription tool, integrates with popular video conferencing platforms to create a searchable record of each meeting. And Sidekick (“make your remote team feel close again”) sells a dedicated tablet for video calls.
The idea behind Headroom, which was conceived pre-pandemic, is to use AI to address the problems with both in-person and virtual meetings. (Rabinovich used to head AI at Magic Leap.) The use of video conferencing was already on the rise before 2020; this year it exploded, and Green and Rabinovich are betting that the format is here to stay as more companies grow accustomed to having remote employees. Over the last nine months, though, many people have learned firsthand that virtual meetings bring new challenges, like interpreting body language from other people on-screen or figuring out whether anyone is actually listening.
“One of the hard things in a videoconference is when someone is speaking and I want to tell them that I like it,” says Green. In person, he says, “you might head nod or make a small aha.” But on a video chat, the speaker might not see if they’re presenting slides, or if the meeting is crowded with too many squares, or if everyone who’s making verbal cues is on mute. “You can’t tell if it’s crickets or if people are loving it.”
Headroom aims to tackle the social distance of virtual meetings in a few ways. First, it uses computer vision to translate approving gestures into digital icons, amplifying each thumbs up or head nod with little emojis that the speaker can see. Those emojis also get added to the official transcript, which is automatically generated by software to spare someone the task of taking notes. Green and Rabinovich say this type of monitoring is made clear to all participants at the start of every meeting, and teams can opt out of features if they choose.
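Neither the gesture model nor the transcript format is public, but the “amplify a nod into an emoji” step could plausibly look something like the sketch below; the gesture labels, the emoji mapping, and the `record_reaction` helper are all illustrative assumptions, not Headroom’s interface.

```python
# Illustrative mapping from a vision model's gesture label to a reaction emoji.
GESTURE_EMOJI = {
    "thumbs_up": "👍",
    "head_nod": "🙂",
    "hand_raise": "✋",
}

def record_reaction(transcript, timestamp, participant, gesture):
    """Show the speaker a reaction and append it to the running transcript."""
    emoji = GESTURE_EMOJI.get(gesture)
    if emoji is None:
        return None  # unrecognized gesture: do nothing
    transcript.append({"time": timestamp, "who": participant, "reaction": emoji})
    return emoji

transcript = []
record_reaction(transcript, 312.4, "Green", "head_nod")
print(transcript)  # [{'time': 312.4, 'who': 'Green', 'reaction': '🙂'}]
```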
More unusually, Headroom’s software uses emotion recognition to take the temperature of the room periodically and to gauge how much attention participants are paying to whoever’s speaking. Those metrics are displayed in a window on-screen, designed mostly to give the speaker the real-time feedback that can otherwise disappear in the virtual context. “If five minutes ago everyone was super into what I’m saying and now they’re not, maybe I should think about shutting up,” says Green.
Emotion recognition is still a nascent field of AI. “The goal is to basically try to map the facial expressions as captured by facial landmarks: the rise of the eyebrow, the shape of the mouth, the opening of the pupils,” says Rabinovich. Each of these facial movements can be represented as data, which in theory can then be translated into an emotion: happy, sad, bored, confused. In practice, the process is rarely so straightforward. Emotion recognition software has a history of mislabeling people of color; one program, used by airport security, overestimated how often Black men showed negative emotions, like “anger.” Affective computing also often fails to take cultural context into account, like whether someone is averting their eyes out of respect, shame, or shyness.
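Rabinovich’s description matches a common pipeline: detect a handful of facial landmarks, reduce them to numeric features, and feed those features to a classifier. A rough sketch of that idea, with made-up landmark names, distance formulas, and thresholds rather than Headroom’s trained model, might look like this:

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def landmark_features(lm):
    """Turn a few named landmarks (normalized image coordinates) into scalars."""
    return {
        "eye_openness": dist(lm["left_eye_top"], lm["left_eye_bottom"]),
        "brow_raise":   dist(lm["left_brow"], lm["left_eye_top"]),
        "mouth_open":   dist(lm["mouth_top"], lm["mouth_bottom"]),
    }

def crude_engagement(f):
    """Rule-of-thumb engagement score in [0, 1]; real systems train this."""
    score  = 0.5 * min(f["eye_openness"] / 0.03, 1.0)  # eyes open
    score += 0.3 * min(f["brow_raise"]   / 0.05, 1.0)  # brows active
    score += 0.2 * min(f["mouth_open"]   / 0.04, 1.0)  # reacting or speaking
    return round(score, 2)
```

The hard-coded thresholds above are exactly where the inaccuracies discussed below can creep in: a cutoff tuned on one population will systematically misread faces it wasn’t tuned for.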
For Headroom’s purposes, Rabinovich argues that these inaccuracies aren’t as important. “We care less if you’re happy or super happy, so long as we’re able to tell if you’re involved,” says Rabinovich. But Alice Xiang, the head of fairness, transparency, and accountability research at the Partnership on AI, says even basic facial recognition still has problems—like failing to detect when Asian individuals have their eyes open—because such systems are often trained on white faces. “If you have smaller eyes, or hooded eyes, it might be the case that the facial recognition concludes you are constantly looking down or closing your eyes when you’re not,” says Xiang. These sorts of disparities can have real-world consequences as facial recognition software gains more widespread use in the workplace. Headroom is not the first to bring such software into the office. HireVue, a recruiting technology firm, recently introduced emotion recognition software that scores a job candidate’s “employability” based on factors like facial movements and speaking voice.
Constance Hadley, a researcher at Boston University’s Questrom School of Business, says that gathering data on people’s behavior during meetings can reveal what is and isn’t working within that setup, which could be useful for employers and employees alike. But when people know their behavior is being monitored, it can change how they act in unintended ways. “If the monitoring is used to understand patterns as they exist, that’s great,” says Hadley. “But if it’s used to incentivize certain types of behavior, then it can end up triggering dysfunctional behavior.” In Hadley’s classes, when students know that 25 percent of their grade is participation, they raise their hands more often, but they don’t necessarily say more interesting things. When Green and Rabinovich demonstrated their software to me, I found myself raising my eyebrows, widening my eyes, and grinning maniacally to change my levels of perceived emotion.
In Hadley’s estimation, when meetings are conducted is just as important as how. Poorly scheduled meetings can rob workers of the time to do their own tasks, and a deluge of meetings can make people feel like they’re wasting time while drowning in work. Naturally, there are software solutions to this, too. Clockwise, an AI time management platform launched in 2019, uses an algorithm to optimize the timing of meetings. “Time has become a shared asset inside a company, not a personal asset,” says Matt Martin, the founder of Clockwise. “People are balancing all these different threads of communication, the velocity has gone up, the demands of collaboration are more intense. And yet, at the core of all of that, there’s not a tool for anyone to express, ‘This is the time I need to actually get my work done. Do not distract me!’”
Clockwise syncs with someone’s Google calendar to analyze how they’re spending their time and how they could spend it better. The software adds protective time blocks based on an individual’s stated preferences. It might reserve a chunk of “do not disturb” time for getting work done in the afternoons. (It also automatically blocks off time for lunch. “As silly as that sounds, it makes a big difference,” says Martin.) And by analyzing multiple calendars within the same workforce or team, the software can automatically move meetings like a “team sync” or a “weekly 1:1” into time slots that work for everyone. The software optimizes for creating more uninterrupted blocks of time, in which workers can get into “deep work” without distraction.
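Clockwise hasn’t published its scheduler, but the core idea, protecting the longest uninterrupted gap in a day as focus time, can be sketched in a few lines of Python. The `largest_free_block` helper below is an illustrative stand-in, not Clockwise’s algorithm; the real product also weighs preferences, lunch, and whole teams’ calendars.

```python
from datetime import datetime, timedelta

def largest_free_block(meetings, day_start, day_end):
    """Return (duration, start, end) of the longest gap between meetings.

    `meetings` is a list of (start, end) datetimes. A scheduling assistant
    could reserve this gap as a "do not disturb" focus block.
    """
    best = (timedelta(0), None, None)
    cursor = day_start
    for start, end in sorted(meetings):
        if start > cursor and (start - cursor) > best[0]:
            best = (start - cursor, cursor, start)
        cursor = max(cursor, end)
    if day_end > cursor and (day_end - cursor) > best[0]:
        best = (day_end - cursor, cursor, day_end)
    return best

day = datetime(2020, 12, 1)
meetings = [(day.replace(hour=9, minute=30), day.replace(hour=10)),
            (day.replace(hour=13), day.replace(hour=14))]
print(largest_free_block(meetings, day.replace(hour=9), day.replace(hour=17)))
# longest gap is 10:00 to 13:00, i.e. three hours of protected focus time
```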
Clockwise just closed an $18 million funding round and says it’s gaining traction in Silicon Valley. So far, it has 200,000 users, most of whom work for companies like Uber, Netflix, and Twitter; about half of its users are engineers. Headroom is similarly courting clients in the tech industry, where Green and Rabinovich feel they best understand the problems with meetings. But it’s not hard to imagine similar software creeping beyond the Silicon Valley bubble. Green, who has school-age children, has been exasperated by parts of their remote learning experience. There are two dozen students in their classes, and the teacher can’t see all of them at once. “If the teacher is presenting slides, they actually can see none of them,” he says. “They don’t even see if the kids have their hands up to ask a question.”
Indeed, the pains of teleconferencing aren’t limited to offices. As more and more interaction is mediated by screens, more software tools will surely try to optimize the experience. Other problems, like laggy Wi-Fi, will be someone else’s to solve.
Comcast is raising prices for cable TV and Internet service on January 1, 2021, with price hikes coming both to standard monthly rates and to hidden fees that aren’t included in advertised prices.
TV customers are getting an especially raw deal, as Comcast is adding up to $4.50 a month to the “Broadcast TV” fee and $2 to the Regional Sports Network (RSN) fee. That’s an increase of up to $78 a year solely from two fees that aren’t included in advertised rates.
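That annual figure is just the two monthly fee increases added together and annualized:

\[
(\$4.50 + \$2.00) \times 12 \text{ months} = \$78 \text{ per year}
\]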
As in past years, even customers who are still on promotional pricing will not be spared from the Broadcast TV and RSN fee increases. “Customers on promotional pricing will not see that pricing change until the end of the promotion, but the RSN and Broadcast TV fees will increase because they’re not part of the promotional pricing,” a Comcast spokesperson told Ars.
Without the upcoming increase, the Broadcast TV fee currently ranges from $7.90 to $14.95 depending on the market, the spokesperson said. The RSN fee maxes out at $8.75 a month in most of Comcast’s territory, but Comcast said this fee is $14.45 for Chicago-area customers with access to the Sinclair-owned Marquee Sports Network that airs Chicago Cubs games. The RSN fee is not charged in some markets that don’t have RSNs.
Six Internet-only packages that cost $53 to $113 a month will all rise by $3 a month, and the price for professional installations or in-home service visits is rising from $70 to $100. Comcast revealed the price increases in a notice that has been shared on Reddit.
While that price-increase notice is for Chicago only, a Comcast spokesperson confirmed to Ars that the price hikes will be nationwide. The Chicago price-change list doesn’t include the Regional Sports Network fee “because their RSN fee increased on October 1, 2020 with the addition of the Marquee Sports Network. The RSN Fee will increase by $2 in all other markets effective January 1, 2021,” Comcast told Ars.
“Other changes for 2021 include a Broadcast TV Fee increase of up to $4.50 depending on the market; $3 increase for Internet-only service; and up to a $2.50 increase for TV boxes on the primary outlet, with a decrease of up to $2.45 for TV boxes on additional outlets,” the Comcast spokesperson added. The fee for a customer’s primary TV box is rising from $5 to $7.50, while the fee for additional boxes is being lowered from $9.95 to $7.50.
While the Chicago price list says the base price of the Choice TV package is rising from $25 to $30 a month, it’s not clear which TV packages will get price increases in other areas. Comcast told us that changes to base TV prices will vary by market.
Comcast charges a $30 monthly fee to upgrade from the 1.2TB plan to unlimited data, or $25 a month for customers who purchase xFi Complete, which includes unlimited data and rental of the Comcast gateway modem/router. The xFi Complete fee is only $20 in some markets, but Comcast told Ars it is raising the price in those markets to $25 to match what’s charged in the rest of the country.
Comcast blames programmers
Comcast defended the price increases with this statement:
Rising programming costs—most notably for broadcast TV and sports—continue to be the biggest factors driving price increases for all content distributors and their customers, not just Comcast. We’re continuing to work hard to manage these costs for our customers while investing in our network to provide the best, most reliable broadband service in the country and the flexibility to choose our industry-leading video platform with X1 or the highest quality streaming product with Flex, the only free streaming TV device with voice remote that’s included with broadband service.
But Comcast can’t solely blame other programmers for price hikes because Comcast itself owns NBCUniversal and thus determines the price of all NBCUniversal content, including the national channels and eight RSNs in major markets. Despite Comcast owning NBC, the cable company recently warned customers that they could lose NBC channels if Comcast is unable to reach a new carriage contract with… NBC. The absurd situation was summarized by TechDirt in an article aptly titled, “Comcast Tells Customers They May Lose Access To Comcast Channels If Comcast Can’t Agree With Comcast.”
On the broadband side, Comcast seems to be justifying price hikes based on the company’s investment in improving its network. But Comcast reduced capital spending on its cable division in 2019 and reduced cable-division capital spending again in the first nine months of 2020.
As we reported Monday, Comcast will also be enforcing the 1.2TB monthly data cap throughout its entire 39-state territory in 2021. Currently, Comcast enforces the cap in 27 states.
Comcast is the largest cable company and broadband provider in the US, followed by Charter, which has also raised prices on a regular basis. The companies do not compete against each other and each has a virtual monopoly over high-speed wired broadband in large portions of the US. Charter is raising prices on its Spectrum service in December. Charter is prohibited from imposing data caps until May 2023 thanks to a merger condition, but has petitioned the Federal Communications Commission to drop the data-cap ban in May 2021 instead.
Disclosure: The Advance/Newhouse Partnership, which owns 13 percent of Charter, is part of Advance Publications. Advance Publications owns Condé Nast, which owns Ars Technica.