Biz & IT

Google and Qualcomm launch a dev kit for building Assistant-enabled headphones


Qualcomm today announced that it has partnered with Google to create a reference design and development kit for building Assistant-enabled Bluetooth headphones. Traditionally, building these headphones wasn’t straightforward: it meant developing much of the hardware and software stack from scratch, something top-tier manufacturers could afford to do but that kept second- and third-tier headphone makers from adding voice assistant capabilities to their devices.

“As wireless Bluetooth devices like headphones and earbuds become more popular, we need to make it easier to have the same great Assistant experience across many headsets,” Google’s Tomer Amarilio writes in today’s announcement.

The aptly named “Qualcomm Smart Headset Development Kit” is powered by a Qualcomm QCC5100-series Bluetooth audio chip and provides a full reference board for developing new headsets and interacting with the Assistant. What’s interesting — and somewhat unusual for Qualcomm — is that the company also built its own Bluetooth earbuds as a full reference design. These feature the ability to hold down a button to start an Assistant session, for example, as well as volume buttons. They are definitely not stylish headphones you’d want to use on your commute, given that they are bulky enough to feature a USB port. But they are meant to provide manufacturers with a design they can then use to build their own devices.

In addition to making it easier for developers to integrate the Assistant, the reference design also supports Google’s Fast Pair technology, which lets users connect a new headset to an Android phone without the usual hassle that comes with pairing a headset for the first time.

“Demand for voice control and assistance on-the-go is rapidly gaining traction across the consumer landscape,” said Chris Havell, senior director, product marketing, voice and music at Qualcomm. “Combined with our Smart Headset Platform, this reference design offers flexibility for manufacturers wanting to deliver highly differentiated user experiences that take advantage of the power and popularity of Google cloud-based services.”




Does Tor provide more benefit or harm? New paper says it depends


The Tor anonymity network has generated controversy almost constantly since its inception almost two decades ago. Supporters say it’s a vital service for protecting online privacy and circumventing censorship, particularly in countries with poor human rights records. Critics, meanwhile, argue that Tor shields criminals distributing child-abuse images, trafficking in illegal drugs, and engaging in other illicit activities.

Researchers on Monday unveiled new estimates that attempt to measure the potential harms and benefits of Tor. They found that, worldwide, almost 7 percent of Tor users connect to hidden services, which the researchers contend are disproportionately more likely to offer illicit services or content compared with normal Internet sites. Connections to hidden services were significantly higher in countries rated as more politically “free” relative to those that are “partially free” or “not free.”

Licit versus illicit

Specifically, 6.7 percent of Tor users globally access hidden sites, a relatively small proportion. Those users, however, aren’t evenly distributed geographically. In countries with regimes rated “not free” by the organization Freedom House, access to hidden services accounted for just 4.8 percent of connections. In “free” countries, the proportion jumped to 7.8 percent.

Here’s a graph of the breakdown:

Enlarge / More politically “free” countries have higher proportions of Hidden Services traffic than either “partially free” or “not free” nations. Each point indicates the average daily percentage of anonymous services accessed in a given country. The white regions represent the kernel density distributions for each ordinal category of political freedom (“free,” “partially free,” and “not free”).

In a paper, the researchers wrote:

The Tor anonymity network can be used for both licit and illicit purposes. Our results provide a clear, if probabilistic, estimation of the extent to which users of Tor engage in either form of activity. Generally, users of Tor in politically “free” countries are significantly more likely to be using the network in likely illicit ways. A host of additional questions remain, given the anonymous nature of Tor and other similar systems such as I2P and Freenet. Our results narrowly suggest, however, users of Tor in more repressive “not free” regimes tend to be far more likely to venture via the Tor network to Clear Web content and so are comparatively less likely to be engaged in activities that would be widely deemed malicious.

The estimates are based on a sample comprising 1 percent of Tor entry nodes, which the researchers monitored from December 31, 2018, to August 18, 2019, with an interruption to data collection from May 4 to May 13. By analyzing directory lookups and other unique signatures in the traffic, the researchers distinguished when a Tor client was visiting normal Internet websites or anonymous (or Dark Web) services.
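The headline numbers are simple per-country proportions over classified connections. A minimal sketch of that tabulation step (the data shape and function name here are invented for illustration, not the paper's actual pipeline):

```python
from collections import defaultdict

def hidden_service_share(connections):
    """Per-country fraction of Tor connections that target onion
    (hidden) services rather than the clear web.

    `connections` is an iterable of (country, is_onion) pairs, e.g.
    records derived from entry-node directory-lookup signatures.
    """
    totals = defaultdict(int)  # all connections per country
    onion = defaultdict(int)   # hidden-service connections per country
    for country, is_onion in connections:
        totals[country] += 1
        if is_onion:
            onion[country] += 1
    return {c: onion[c] / totals[c] for c in totals}

# Toy sample: one of three US connections is to a hidden service
shares = hidden_service_share(
    [("US", True), ("US", False), ("US", False), ("IR", False), ("IR", False)]
)
```

The real study aggregates these daily per-country percentages before comparing them across Freedom House categories.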

The researchers—from Virginia Tech in Blacksburg, Virginia; Skidmore College in Saratoga Springs, New York; and Cyber Espion in Portsmouth, United Kingdom—acknowledged that the estimates aren’t perfect. In part, that’s because the estimates are based on the unprovable assumption that the overwhelming majority of Dark Web sites provide illicit content or services.

The paper, however, argues that the findings can be useful for policymakers who are trying to gauge the benefits of Tor relative to the harms it creates. The researchers view the results through the lenses of the 2015 paper titled The Dark Web Dilemma: Tor, Anonymity and Online Policing and On Liberty, the essay published by English philosopher John Stuart Mill in 1859.

Dark Web dilemma

The researchers in Monday’s paper wrote:

These results have a number of consequences for research and policy. First, the results suggest that anonymity-granting technologies such as Tor present a clear public policy challenge and include clear political context and geographical components. This policy challenge is referred to in the literature as the “Dark Web dilemma.” At the root of the dilemma is the so-called “harm principle” proposed in On Liberty by John Stuart Mill. In this principle, it is morally permissible to undertake any action so long as it does not cause someone else harm.

The challenge of the Tor anonymity network, as intimated by its dual use nature, is that maximal policy solutions all promise to cause harm to some party. Leaving the Tor network up and free from law enforcement investigation is likely to lead to direct and indirect harms that result from the system being used by those engaged in child exploitation, drug exchange, and the sale of firearms, although these harms are of course highly heterogeneous in terms of their potential negative social impacts and some, such as personal drug use, might also have predominantly individual costs in some cases.

Conversely, simply working to shut down Tor would cause harm to dissidents and human rights activists, particularly, our results suggest, in more repressive, less politically free regimes where technological protections are often needed the most.

Our results showing the uneven distribution of likely licit and illicit users of Tor across countries also suggest that there may be a looming public policy conflagration on the horizon. The Tor network, for example, runs on ∼6,000–6,500 volunteer nodes. While these nodes are distributed across a number of countries, it is plausible that many of these infrastructural points cluster in politically free liberal democratic countries. Additionally, the Tor Project, which manages the code behind the network, is an incorporated not for profit in the United States and traces both its intellectual origins and a large portion of its financial resources to the US government.

In other words, much of the physical and protocol infrastructure of the Tor anonymity network is clustered disproportionately in free regimes, especially the United States. Linking this trend with a strict interpretation of our current results suggests that the harms from the Tor anonymity network cluster in free countries hosting the infrastructure of Tor and that the benefits cluster in disproportionately highly repressive regimes.

A “flawed” assumption

It didn’t take long for people behind the Tor Project to question the findings and the assumptions that led to them. In an email, Isabela Bagueros, executive director of the Tor Project, wrote:

The authors of this research paper have chosen to categorize all .onion sites and all traffic to these sites as “illicit” and all traffic on the “Clear Web” as ‘licit.’

This assumption is flawed. Many popular websites, tools, and services use onion services to offer privacy and censorship-circumvention benefits to their users. For example, Facebook offers an onion service. Global news organizations, including The New York Times, BBC, Deutsche Welle, Mada Masr, and Buzzfeed, offer onion services.

Whistleblowing platforms, filesharing tools, messaging apps, VPNs, browsers, email services, and free software projects also use onion services to offer privacy protections to their users, including Riseup, OnionShare, SecureDrop, GlobaLeaks, ProtonMail, Debian, Mullvad VPN, Ricochet Refresh, Briar, and Qubes OS.

(For even more examples, and quotes from website admins that use onion services on why they use Tor: https://blog.torproject.org/more-onions-end-of-campaign)

Writing off traffic to these widely-used sites and services as “illicit” is a generalization that demonizes people and organizations who choose technology that allows them to protect their privacy and circumvent censorship. In a world of increasing surveillance capitalism and internet censorship, online privacy is necessary for many of us to exercise our human rights to freely access information, share our ideas, and communicate with one another. Incorrectly identifying all onion service traffic as “illicit” harms the fight to protect encryption and benefits the powers that be that are trying to weaken or entirely outlaw strong privacy technology.

Secondly, we look forward to hearing the researchers describe their methodology in more detail, so the scientific community has the possibility to assess whether their approach is accurate and safe. The copy of the paper provided does not outline their methodology, so there is no way for the Tor Project or other researchers to assess the accuracy of their findings.

The paper is unlikely to convert Tor supporters to critics or vice versa. It does, however, provide a timely estimate of overall Tor usage and geographic breakdown that will be of interest to many policymakers.



WarGames for real: How one 1983 exercise nearly triggered WWIII


Update, 11/29/20: It’s a very different Thanksgiving weekend here in 2020, but even if tables were smaller and travel non-existent, Ars staff is off for the holiday in order to recharge, take a mental afk break, and maybe stream a movie or five. But five years ago around this time, we were following a newly declassified government report from 1990 that outlined a KGB computer model… one that almost pulled a WarGames, just IRL. With the film now streaming on Netflix (thus setting our off day schedule), we thought we’d resurface this story for an accompanying Sunday read. This piece first published on November 25, 2015, and it appears unchanged below.

“Let’s play Global Thermonuclear War.”

Thirty-two years ago, just months after the release of the movie WarGames, the world came the closest it ever has to nuclear Armageddon. In the movie version of a global near-death experience, a teenage hacker messing around with an artificial intelligence program that just happened to control the American nuclear missile force unleashes chaos. In reality, a very different computer program run by the Soviets fed growing paranoia about the intentions of the United States, very nearly triggering a nuclear war.

The software in question was a KGB computer model constructed as part of Operation RYAN (РЯН), details of which were obtained from Oleg Gordievsky, the KGB’s London section chief who was at the same time spying for Britain’s MI6. Named for an acronym for “Nuclear Missile Attack” (Ракетное Ядерное Нападение), RYAN was an intelligence operation started in 1981 to help the intelligence agency forecast if the US and its allies were planning a nuclear strike. The KGB believed that by analyzing quantitative data from intelligence on US and NATO activities relative to the Soviet Union, they could predict when a sneak attack was most likely.

As it turned out, Exercise Able Archer ’83 triggered that forecast. The war game, which was staged over two weeks in November of 1983, simulated the procedures that NATO would go through prior to a nuclear launch. Many of these procedures and tactics were things the Soviets had never seen, and the whole exercise came after a series of feints by US and NATO forces to size up Soviet defenses and the downing of Korean Air Lines Flight 007 on September 1, 1983. So as Soviet leaders monitored the exercise and considered the current climate, they put two and two together. Able Archer, according to Soviet leadership at least, must have been a cover for a genuine surprise attack planned by the US, then led by a president possibly insane enough to do it.

While some studies, including an analysis some 12 years ago by historian Fritz Earth, have downplayed the actual Soviet response to Able Archer, a newly published declassified 1990 report from the President’s Foreign Intelligence Advisory Board (PFIAB) to President George H. W. Bush obtained by the National Security Archive suggests that the danger was all too real. The document was classified as Top Secret with the code word UMBRA, denoting the most sensitive compartment of classified material, and it cites data from sources that to this day remain highly classified. When combined with previously released CIA, National Security Agency (NSA), and Defense Department documents, this PFIAB report shows that only the illness of Soviet leader Yuri Andropov—and the instincts of one mid-level Soviet officer—may have prevented a nuclear launch.

The balance of paranoia

As Able Archer ’83 was getting underway, the US defense and intelligence community believed the Soviet Union was strategically secure. A top-secret Defense Department-CIA Joint Net Assessment published in November of 1983 stated, “The Soviets, in our view, have some clear advantages today, and these advantages are projected to continue, although differences may narrow somewhat in the next 10 years. It is likely, however, that the Soviets do not see their advantage as being as great as we would assess.”

The assessment was spot on—the Soviets certainly did not see it this way. In 1981, the KGB foreign intelligence directorate ran a computer analysis using an early version of the RYAN system, seeking the “correlation of world forces” between the USSR and the United States. The numbers suggested one thing: the Soviet Union was losing the Cold War, and the US might soon be in a strategically dominant position. And if that happened, the Soviets believed its adversary would strike to destroy them and their Warsaw Pact allies.

This data was everything the leadership expected given the intransigence of the Reagan administration. The US’ aggressive foreign policy in the late 1970s and early 1980s confused and worried the USSR. They didn’t understand the reaction to the invasion of Afghanistan, which they thought the US would just recognize as a vital security operation.

The US was even funding the mujaheddin fighting them, “training and sending armed terrorists,” as Communist Party Secretary Mikhail Suslov put it in a 1980 speech (those trainees including a young Saudi inspired to jihad by the name of Osama bin Laden). And in Nicaragua, the US was funneling arms to the Contras fighting the Sandinista government of Daniel Ortega. All the while, Reagan was refusing to engage the Soviets on arms control. This mounting evidence convinced some in the Soviet leadership that Reagan was willing to go even further in his efforts to destroy what he would soon describe as the “evil empire.”

The USSR had plenty of reason to think the US also believed it could win a nuclear war. The rhetoric of the Reagan administration was backed up by a surge in military capabilities, and much of the Soviet military’s nuclear capabilities were vulnerable to surprise attack. In 1983, the United States was in the midst of its biggest military buildup in decades. And thanks to a direct line into some of the US’ most sensitive communications, the KGB had plenty of bad news to share about that with the Kremlin.

The seaborne leg of the Soviet strategic force was especially vulnerable. The US Navy’s SOSUS (sound surveillance system), a network of hydrophone arrays, tracked nearly every Russian submarine that entered the Atlantic and much of the Pacific, and US antisubmarine forces (P-3 Orion patrol planes, fast attack subs, and destroyers and frigates) were practically on top of, or in the wake of, Soviet ballistic missile subs during their patrols. The US had mapped out the “Yankee Patrol Boxes” where Soviet Navaga-class (NATO designation “Yankee”) ballistic missile subs stationed themselves off the US’ east and west coasts. Again, the Soviets knew all of this thanks to the spy John Walker, so confidence in their sub fleet’s survivability was likely low.

The air-based leg of the Soviet triad was no better off.  By the 1980s, the Soviet Union had the largest air force in the world. But the deployment of the Tomahawk cruise missile, initial production of the US Air Force’s AGM-86 Air Launched Cruise Missile, and the pending deployment of Pershing II intermediate range ballistic missiles to Europe meant that NATO could strike at Soviet air fields with very little warning. Unfortunately, the Soviet strategic air force needed as much warning as it could get. Soviet long-range bombers were “kept at a low state of readiness,” the advisory board report noted. Hours or days would have been required to get bombers ready for an all-out war. In all likelihood, the Soviet leadership assumed their entire bomber force would be caught on the ground in a sneak attack and wiped out.

Even theater nuclear forces like the RSD-10 Pioneer—one of the weapons systems that prompted the deployment of the Pershing II to Europe—were vulnerable. They generally didn’t have warheads or missiles loaded into their mobile launcher systems when not on alert. The only leg not overly vulnerable to a first strike by NATO was the Soviets’ intermediate and intercontinental ballistic missile (ICBM) force. Its readiness was in question, however. According to the 1990 briefing paper by the PFIAB, about 95 percent of the Soviet ICBM force was ready to respond to an attack alert within 15 minutes during the early 1980s. The silo-based missiles were out of range of anything but US submarine-launched and land-based ballistic missiles.

The viability of the ICBM force as a response to sneak attack was based entirely on how much warning time the Soviets had. In 1981, they brought a new over-the-horizon ballistic missile early warning (BMEW) radar system on-line. One year later, the Soviets activated the US-KS nuclear launch warning satellite network, known as “Oko” (Russian for “eye”). These two measures gave the Soviet command and control structure about 30 minutes’ warning of any US ICBM launch. But the deployment of Pershing II missiles to Europe could cut warning time to less than eight minutes, and attacks from US sub-launched missiles would have warning times in some cases of less than five minutes.

And then, President Ronald Reagan announced the Strategic Defense Initiative (SDI) or “Star Wars” program—the predecessor to the current Missile Defense Agency efforts to counter limited ballistic missile attacks. While SDI was presented as defensive, it would likely be effective only if the US first dramatically reduced the number of Soviet ICBMs that could launch—that is, by striking first. More than ever before, SDI convinced the Soviet leadership that Reagan was aiming to make a nuclear war against them winnable.

Combined with his ongoing anti-Soviet rhetoric, this led USSR leadership to see Reagan as an existential threat to the country on par with Hitler. In fact, they publicly made that comparison, accusing the Reagan administration of pushing the world closer to another global war. And maybe, they thought, the US president already believed it was possible to defeat the Soviets with a surprise attack.



AI can run your work meetings now


Headroom is one of several apps advertising AI as the solution for your messy virtual/video meetings.

Julian Green was explaining the big problem with meetings when our meeting started to glitch. The pixels of his face rearranged themselves. A sentence came out as hiccups. Then he sputtered, froze, and ghosted.

Green and I had been chatting on Headroom, a new video conferencing platform he and cofounder Andrew Rabinovich launched this fall. The glitch, they assured me, was not caused by their software, but by Green’s Wi-Fi connection. “I think the rest of my street is on homeschool,” he said, a problem that Headroom was not built to solve. It was built instead for other issues: the tedium of taking notes, the coworkers who drone on and on, and the difficulty in keeping everyone engaged. As we spoke, software tapped out a real-time transcription in a window next to our faces. It kept a running tally of how many words each person had said (Rabinovich dominated). Once our meeting was over, Headroom’s software would synthesize the concepts from the transcript; identify key topics, dates, ideas, and action items; and, finally, spit out a record that could be searched at a later time. It would even try to measure how much each participant was paying attention.
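The running word tally described above is a simple reduction over transcript segments. A minimal sketch, assuming a (speaker, text) record shape that the article doesn't specify:

```python
from collections import Counter

def speaker_word_tally(segments):
    """Running per-speaker word counts from transcript segments.

    `segments` is a list of (speaker, text) pairs, like the segments
    of a real-time meeting transcript. The tally idea comes from the
    article; this data shape and function name are assumptions.
    """
    tally = Counter()
    for speaker, text in segments:
        tally[speaker] += len(text.split())
    return tally

# Toy transcript: who has said more so far?
tally = speaker_word_tally(
    [("Green", "hello there"), ("Rabinovich", "one two three")]
)
```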

Meetings have become the necessary evil of the modern workplace, spanning an elaborate taxonomy: daily stand-ups, sit-downs, all-hands, one-on-ones, brown-bags, status checks, brainstorms, debriefs, design reviews. But as time spent in these corporate conclaves goes up, work seems to suffer. Researchers have found that meetings correlate with a decline in workplace happiness, productivity, and even company market share. And in a year when so many office interactions have gone digital, the usual tedium of meeting culture is compounded by the fits and starts of teleconferencing.

Recently, a new wave of startups has emerged to optimize those meetings with, what else, technology. Macro (“give your meeting superpowers”) makes a collaborative interface for Zoom. Mmhmm offers interactive backgrounds and slide-share tools for presenters. Fireflies, an AI transcription tool, integrates with popular video conferencing platforms to create a searchable record of each meeting. And Sidekick (“make your remote team feel close again”) sells a dedicated tablet for video calls.

The idea behind Headroom, which was conceived pre-pandemic, is to improve on both the in-person and virtual problems with meetings, using AI. (Rabinovich used to head AI at Magic Leap.) The use of video conferencing was already on the rise before 2020; this year it exploded, and Green and Rabinovich are betting that the format is here to stay as more companies grow accustomed to having remote employees. Over the last nine months, though, many people have learned firsthand that virtual meetings bring new challenges, like interpreting body language from other people on-screen or figuring out if anyone is actually listening.

“One of the hard things in a videoconference is when someone is speaking and I want to tell them that I like it,” says Green. In person, he says, “you might head nod or make a small aha.” But on a video chat, the speaker might not see if they’re presenting slides, or if the meeting is crowded with too many squares, or if everyone who’s making verbal cues is on mute. “You can’t tell if it’s crickets or if people are loving it.”

Headroom aims to tackle the social distance of virtual meetings in a few ways. First, it uses computer vision to translate approving gestures into digital icons, amplifying each thumbs up or head nod with little emojis that the speaker can see. Those emojis also get added to the official transcript, which is automatically generated by software to spare someone the task of taking notes. Green and Rabinovich say this type of monitoring is made clear to all participants at the start of every meeting, and teams can opt out of features if they choose.

More unusually, Headroom’s software uses emotion recognition to take the temperature of the room periodically, and to gauge how much attention participants are paying to whoever’s speaking. Those metrics are displayed in a window on-screen, designed mostly to give the speaker real-time feedback that can sometimes disappear in the virtual context. “If five minutes ago everyone was super into what I’m saying and now they’re not, maybe I should think about shutting up,” says Green.

Emotion recognition is still a nascent field of AI. “The goal is to basically try to map the facial expressions as captured by facial landmarks: the rise of the eyebrow, the shape of the mouth, the opening of the pupils,” says Rabinovich. Each of these facial movements can be represented as data, which in theory can then be translated into an emotion: happy, sad, bored, confused. In practice, the process is rarely so straightforward. Emotion recognition software has a history of mislabeling people of color; one program, used by airport security, overestimated how often Black men showed negative emotions, like “anger.” Affective computing also fails to take cultural cues into context, like whether someone is averting their eyes out of respect, shame, or shyness.
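The landmark-to-emotion mapping Rabinovich describes can be caricatured in a few lines. The feature names and thresholds below are invented for illustration; production systems learn this mapping from thousands of annotated faces rather than hard-coding rules, which is exactly where the biases discussed above creep in:

```python
def classify_expression(features):
    """Toy rule-based mapping from landmark-derived features to an
    emotion label. `features` holds hypothetical scalars in roughly
    [-1, 1]: brow_raise (how far the eyebrows are lifted) and
    mouth_curve (positive = smiling, negative = frowning).
    """
    brow_raise = features["brow_raise"]
    mouth_curve = features["mouth_curve"]
    if mouth_curve > 0.3:       # upturned mouth -> happy
        return "happy"
    if brow_raise > 0.5:        # lifted brows -> surprised
        return "surprised"
    if mouth_curve < -0.3:      # downturned mouth -> sad
        return "sad"
    return "neutral"
```

Even this caricature shows the fragility: a fixed threshold on a geometric feature silently encodes assumptions about what a "neutral" face looks like.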

For Headroom’s purposes, Rabinovich argues that these inaccuracies aren’t as important. “We care less if you’re happy or super happy, so long that we’re able to tell if you’re involved,” says Rabinovich. But Alice Xiang, the head of fairness, transparency, and accountability research at the Partnership on AI, says even basic facial recognition still has problems—like failing to detect when Asian individuals have their eyes open—because they are often trained on white faces. “If you have smaller eyes, or hooded eyes, it might be the case that the facial recognition concludes you are constantly looking down or closing your eyes when you’re not,” says Xiang. These sorts of disparities can have real-world consequences as facial recognition software gains more widespread use in the workplace. Headroom is not the first to bring such software into the office. HireVue, a recruiting technology firm, recently introduced an emotion recognition software that suggests a job candidate’s “employability,” based on factors like facial movements and speaking voice.

Constance Hadley, a researcher at Boston University’s Questrom School of Business, says that gathering data on people’s behavior during meetings can reveal what is and isn’t working within that setup, which could be useful for employers and employees alike. But when people know their behavior is being monitored, it can change how they act in unintended ways. “If the monitoring is used to understand patterns as they exist, that’s great,” says Hadley. “But if it’s used to incentivize certain types of behavior, then it can end up triggering dysfunctional behavior.” In Hadley’s classes, when students know that 25 percent of the grade is participation, students raise their hands more often, but they don’t necessarily say more interesting things. When Green and Rabinovich demonstrated their software to me, I found myself raising my eyebrows, widening my eyes, and grinning maniacally to change my levels of perceived emotion.

In Hadley’s estimation, when meetings are conducted is just as important as how. Poorly scheduled meetings can rob workers of the time to do their own tasks, and a deluge of meetings can make people feel like they’re wasting time while drowning in work. Naturally, there are software solutions to this, too. Clockwise, an AI time management platform launched in 2019, uses an algorithm to optimize the timing of meetings. “Time has become a shared asset inside a company, not a personal asset,” says Matt Martin, the founder of Clockwise. “People are balancing all these different threads of communication, the velocity has gone up, the demands of collaboration are more intense. And yet, the core of all of that, there’s not a tool for anyone to express, ‘This is the time I need to actually get my work done. Do not distract me!’”

Clockwise syncs with someone’s Google calendar to analyze how they’re spending their time, and how they could do so more optimally. The software adds protective time blocks based on an individual’s stated preferences. It might reserve a chunk of “do not disturb” time for getting work done in the afternoons. (It also automatically blocks off time for lunch. “As silly as that sounds, it makes a big difference,” says Martin.) And by analyzing multiple calendars within the same workforce or team, the software can automatically move meetings like a “team sync” or a “weekly 1×1” into time slots that work for everyone. The software optimizes for creating more uninterrupted blocks of time, when workers can get into “deep work” without distraction.
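The calendar-analysis idea behind this kind of scheduling reduces to a classic interval merge: pool everyone's busy blocks, merge the overlaps, and the gaps left over are mutually free. A minimal sketch with times as fractional hours (the function and data shapes are assumptions for illustration, not Clockwise's actual API):

```python
def free_slots(busy_by_person, day_start, day_end, min_len):
    """Return (start, end) gaps within the workday that every
    person is free for, each at least `min_len` hours long.

    `busy_by_person` is a list of per-person lists of (start, end)
    busy intervals, in hours.
    """
    # Pool and sort every busy interval across all calendars.
    busy = sorted(iv for cal in busy_by_person for iv in cal)

    # Merge overlapping/adjacent busy intervals into blocks.
    merged = []
    for s, e in busy:
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))

    # Gaps between merged blocks are free for everyone.
    slots, cursor = [], day_start
    for s, e in merged:
        if s - cursor >= min_len:
            slots.append((cursor, s))
        cursor = max(cursor, e)
    if day_end - cursor >= min_len:
        slots.append((cursor, day_end))
    return slots

# Two calendars, 9:00-17:00 day, one-hour minimum slot
slots = free_slots([[(9, 10), (13, 14)], [(9.5, 11)]], 9, 17, 1)
```

A real scheduler would additionally score candidate slots against stated preferences (lunch, focus blocks), but the interval merge is the core primitive.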

Clockwise just closed an $18 million funding round and says it’s gaining traction in Silicon Valley. So far, it has 200,000 users, most of whom work for companies like Uber, Netflix, and Twitter; about half of its users are engineers. Headroom is similarly courting clients in the tech industry, where Green and Rabinovich feel they best understand the problems with meetings. But it’s not hard to imagine similar software creeping beyond the Silicon Valley bubble. Green, who has school-age children, has been exasperated by parts of their remote learning experience. There are two dozen students in their classes, and the teacher can’t see all of them at once. “If the teacher is presenting slides, they actually can see none of them,” he says. “They don’t even see if the kids have their hands up to ask a question.”

Indeed, the pains of teleconferencing aren’t limited to offices. As more and more interaction is mediated by screens, more software tools will surely try to optimize the experience. Other problems, like laggy Wi-Fi, will be someone else’s to solve.

This story first appeared on wired.com
