Biz & IT

AI can run your work meetings now

Headroom is one of several apps advertising AI as the solution for your messy virtual/video meetings.

Julian Green was explaining the big problem with meetings when our meeting started to glitch. The pixels of his face rearranged themselves. A sentence came out as hiccups. Then he sputtered, froze, and ghosted.

Green and I had been chatting on Headroom, a new video conferencing platform he and cofounder Andrew Rabinovich launched this fall. The glitch, they assured me, was not caused by their software, but by Green’s Wi-Fi connection. “I think the rest of my street is on homeschool,” he said, a problem that Headroom was not built to solve. It was built instead for other issues: the tedium of taking notes, the coworkers who drone on and on, and the difficulty in keeping everyone engaged. As we spoke, software tapped out a real-time transcription in a window next to our faces. It kept a running tally of how many words each person had said (Rabinovich dominated). Once our meeting was over, Headroom’s software would synthesize the concepts from the transcript; identify key topics, dates, ideas, and action items; and, finally, spit out a record that could be searched at a later time. It would even try to measure how much each participant was paying attention.

Meetings have become the necessary evil of the modern workplace, spanning an elaborate taxonomy: daily stand-ups, sit-downs, all-hands, one-on-ones, brown-bags, status checks, brainstorms, debriefs, design reviews. But as time spent in these corporate conclaves goes up, work seems to suffer. Researchers have found that meetings correlate with a decline in workplace happiness, productivity, and even company market share. And in a year when so many office interactions have gone digital, the usual tedium of meeting culture is compounded by the fits and starts of teleconferencing.

Recently, a new wave of startups has emerged to optimize those meetings with, what else, technology. Macro (“give your meeting superpowers”) makes a collaborative interface for Zoom. Mmhmm offers interactive backgrounds and slide-share tools for presenters. Fireflies, an AI transcription tool, integrates with popular video conferencing platforms to create a searchable record of each meeting. And Sidekick (“make your remote team feel close again”) sells a dedicated tablet for video calls.

The idea behind Headroom, which was conceived pre-pandemic, is to improve on both the in-person and virtual problems with meetings, using AI. (Rabinovich used to head AI at Magic Leap.) The use of video conferencing was already on the rise before 2020; this year it exploded, and Green and Rabinovich are betting that the format is here to stay as more companies grow accustomed to having remote employees. Over the last nine months, though, many people have learned firsthand that virtual meetings bring new challenges, like interpreting body language from other people on-screen or figuring out if anyone is actually listening.

“One of the hard things in a videoconference is when someone is speaking and I want to tell them that I like it,” says Green. In person, he says, “you might head nod or make a small aha.” But on a video chat, the speaker might not see if they’re presenting slides, or if the meeting is crowded with too many squares, or if everyone who’s making verbal cues is on mute. “You can’t tell if it’s crickets or if people are loving it.”

Headroom aims to tackle the social distance of virtual meetings in a few ways. First, it uses computer vision to translate approving gestures into digital icons, amplifying each thumbs up or head nod with little emojis that the speaker can see. Those emojis also get added to the official transcript, which is automatically generated by software to spare someone the task of taking notes. Green and Rabinovich say this type of monitoring is made clear to all participants at the start of every meeting, and teams can opt out of features if they choose.

More uniquely, Headroom’s software uses emotion recognition to take the temperature of the room periodically, and to gauge how much attention participants are paying to whoever’s speaking. Those metrics are displayed in a window on-screen, designed mostly to give the speaker real-time feedback that can sometimes disappear in the virtual context. “If five minutes ago everyone was super into what I’m saying and now they’re not, maybe I should think about shutting up,” says Green.

Emotion recognition is still a nascent field of AI. “The goal is to basically try to map the facial expressions as captured by facial landmarks: the rise of the eyebrow, the shape of the mouth, the opening of the pupils,” says Rabinovich. Each of these facial movements can be represented as data, which in theory can then be translated into an emotion: happy, sad, bored, confused. In practice, the process is rarely so straightforward. Emotion recognition software has a history of mislabeling people of color; one program, used by airport security, overestimated how often Black men showed negative emotions, like “anger.” Affective computing also fails to take cultural context into account, like whether someone is averting their eyes out of respect, shame, or shyness.
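The landmark-to-emotion mapping Rabinovich describes can be caricatured as a rule over a few measurements. Everything here is hypothetical: the feature names, the thresholds, and the labels. Production systems train classifiers on labeled data rather than hand-writing rules, and, as the passage above notes, they inherit the biases of that training data.

```python
def classify_engagement(features):
    """Map facial-landmark measurements to a coarse engagement label.

    A toy sketch of the landmark-to-emotion pipeline: each feature is a
    number derived from landmark positions. The names and thresholds
    below are invented for illustration only.
    """
    eyebrow_raise = features.get("eyebrow_raise", 0.0)  # 0 flat .. 1 fully raised
    mouth_curve = features.get("mouth_curve", 0.0)      # -1 frown .. +1 smile
    eye_openness = features.get("eye_openness", 0.0)    # 0 closed .. 1 wide open

    # Exactly the failure mode Xiang warns about: a miscalibrated
    # eye-openness estimate flips the label for an attentive person.
    if eye_openness < 0.2:
        return "disengaged"
    if mouth_curve > 0.4 or eyebrow_raise > 0.6:
        return "engaged"
    return "neutral"
```

Note how brittle the first branch is: if the landmark detector systematically underestimates `eye_openness` for some faces, those participants read as "disengaged" no matter what they do.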

For Headroom’s purposes, Rabinovich argues that these inaccuracies aren’t as important. “We care less if you’re happy or super happy, so long that we’re able to tell if you’re involved,” says Rabinovich. But Alice Xiang, the head of fairness, transparency, and accountability research at the Partnership on AI, says even basic facial recognition still has problems—like failing to detect when Asian individuals have their eyes open—because such systems are often trained on white faces. “If you have smaller eyes, or hooded eyes, it might be the case that the facial recognition concludes you are constantly looking down or closing your eyes when you’re not,” says Xiang. These sorts of disparities can have real-world consequences as facial recognition software gains more widespread use in the workplace. Headroom is not the first to bring such software into the office. HireVue, a recruiting technology firm, recently introduced emotion recognition software that scores a job candidate’s “employability” based on factors like facial movements and speaking voice.

Constance Hadley, a researcher at Boston University’s Questrom School of Business, says that gathering data on people’s behavior during meetings can reveal what is and isn’t working within that setup, which could be useful for employers and employees alike. But when people know their behavior is being monitored, it can change how they act in unintended ways. “If the monitoring is used to understand patterns as they exist, that’s great,” says Hadley. “But if it’s used to incentivize certain types of behavior, then it can end up triggering dysfunctional behavior.” In Hadley’s classes, when students know that 25 percent of the grade is participation, they raise their hands more often, but they don’t necessarily say more interesting things. When Green and Rabinovich demonstrated their software to me, I found myself raising my eyebrows, widening my eyes, and grinning maniacally to change my levels of perceived emotion.

In Hadley’s estimation, when meetings are conducted is just as important as how. Poorly scheduled meetings can rob workers of the time to do their own tasks, and a deluge of meetings can make people feel like they’re wasting time while drowning in work. Naturally, there are software solutions to this, too. Clockwise, an AI time management platform launched in 2019, uses an algorithm to optimize the timing of meetings. “Time has become a shared asset inside a company, not a personal asset,” says Matt Martin, the founder of Clockwise. “People are balancing all these different threads of communication, the velocity has gone up, the demands of collaboration are more intense. And yet, the core of all of that, there’s not a tool for anyone to express, ‘This is the time I need to actually get my work done. Do not distract me!’”

Clockwise syncs with someone’s Google calendar to analyze how they’re spending their time, and how they could do so more optimally. The software adds protective time blocks based on an individual’s stated preferences. It might reserve a chunk of “do not disturb” time for getting work done in the afternoons. (It also automatically blocks off time for lunch. “As silly as that sounds, it makes a big difference,” says Martin.) And by analyzing multiple calendars within the same workforce or team, the software can automatically move meetings like a “team sync” or a “weekly 1×1” into time slots that work for everyone. The software optimizes for creating more uninterrupted blocks of time, when workers can get into “deep work” without distraction.
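The core of that scheduling step is an intersection over everyone's busy blocks. Here is a minimal sketch of the idea, assuming times are expressed as minutes from midnight; real schedulers like Clockwise additionally score candidate slots against stated preferences and optimize for long uninterrupted focus blocks, which this sketch omits.

```python
def free_slots(busy_by_person, day_start, day_end, length):
    """Return start times (minutes from midnight) at which every
    participant is free for `length` minutes.

    `busy_by_person` maps a name to a list of (start, end) busy
    intervals. Candidate starts are tried on half-hour boundaries.
    A toy sketch of calendar intersection, not Clockwise's algorithm.
    """
    slots = []
    t = day_start
    while t + length <= day_end:
        # Standard interval-overlap test against every busy block.
        conflict = any(
            t < b_end and b_start < t + length
            for busy in busy_by_person.values()
            for b_start, b_end in busy
        )
        if not conflict:
            slots.append(t)
        t += 30
    return slots

# One person busy 9:00-10:00, another 10:30-11:00; meet between 9:00 and 12:00.
calendars = {"ana": [(540, 600)], "ben": [(630, 660)]}
options = free_slots(calendars, day_start=540, day_end=720, length=30)
```

From here, "protecting" a do-not-disturb block is just a matter of adding it to a person's busy list before running the intersection.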

Clockwise just closed an $18 million funding round and says it’s gaining traction in Silicon Valley. So far, it has 200,000 users, most of whom work for companies like Uber, Netflix, and Twitter; about half of its users are engineers. Headroom is similarly courting clients in the tech industry, where Green and Rabinovich feel they best understand the problems with meetings. But it’s not hard to imagine similar software creeping beyond the Silicon Valley bubble. Green, who has school-age children, has been exasperated by parts of their remote learning experience. There are two dozen students in their classes, and the teacher can’t see all of them at once. “If the teacher is presenting slides, they actually can see none of them,” he says. “They don’t even see if the kids have their hands up to ask a question.”

Indeed, the pains of teleconferencing aren’t limited to offices. As more and more interaction is mediated by screens, more software tools will surely try to optimize the experience. Other problems, like laggy Wi-Fi, will be someone else’s to solve.

This story first appeared on wired.com

Security firm Malwarebytes was infected by same hackers who hit SolarWinds

Security firm Malwarebytes said it was breached by the same nation-state-sponsored hackers who compromised a dozen or more US government agencies and private companies.

The attackers are best known for first hacking into Austin, Texas-based SolarWinds, compromising its software-distribution system, and using it to infect the networks of customers who used SolarWinds’ network management software. In an online notice, however, Malwarebytes said the attackers used a different vector.

“While Malwarebytes does not use SolarWinds, we, like many other companies were recently targeted by the same threat actor,” the notice stated. “We can confirm the existence of another intrusion vector that works by abusing applications with privileged access to Microsoft Office 365 and Azure environments.”

Investigators have determined the attacker gained access to a limited subset of internal company emails. So far, the investigators have found no evidence of unauthorized access or compromise in any Malwarebytes production environments.

The notice isn’t the first time investigators have said the SolarWinds software supply chain attack wasn’t the sole means of infection.

When the mass compromise came to light last month, Microsoft said the hackers also stole signing certificates that allowed them to impersonate any of a target’s existing users and accounts through the Security Assertion Markup Language. Typically abbreviated as SAML, the XML-based language provides a way for identity providers to exchange authentication and authorization data with service providers.

Twelve days ago, the Cybersecurity and Infrastructure Security Agency said the attackers may have obtained initial access by using password guessing or password spraying or by exploiting administrative or service credentials.

“In our particular instance, the threat actor added a self-signed certificate with credentials to the service principal account,” Malwarebytes researcher Marcin Kleczynski wrote. “From there, they can authenticate using the key and make API calls to request emails via MSGraph.”
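The flow Kleczynski describes maps onto the Microsoft identity platform's client-credentials grant: once an attacker has added their own certificate to a service principal, they can sign a JWT with its private key and trade it for a Graph access token. The sketch below only builds the token request (it doesn't sign a JWT or send anything); `signed_jwt` stands in for the certificate-signed assertion, and the tenant and client IDs are placeholders.

```python
def graph_token_request(tenant_id, client_id, signed_jwt):
    """Build the OAuth2 client-credentials request a service principal
    uses to obtain a Microsoft Graph token with a certificate credential.

    `signed_jwt` is a JWT signed by the certificate's private key --
    in the Malwarebytes incident, a self-signed certificate the
    attacker had added to the service principal.
    """
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "scope": "https://graph.microsoft.com/.default",
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": signed_jwt,
    }
    # POSTing `body` to `url` returns a bearer token; with Mail.Read-style
    # application permissions, that token authorizes MSGraph mail requests.
    return url, body

url, body = graph_token_request("contoso-tenant-id", "app-client-id", "<signed-jwt>")
```

The takeaway for defenders: any credential added to a service principal, certificate or secret, is a standing key to whatever API permissions that principal holds.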

Last week, email management provider Mimecast also said that hackers compromised a digital certificate it issued and used it to target select customers, who rely on the certificate to encrypt data sent and received through the company’s cloud-based service. While Mimecast didn’t say the certificate compromise was related to the ongoing attack, the similarities make it likely the two attacks are related.

Because the attackers used their access to the SolarWinds network to compromise that company’s software build system, Malwarebytes researchers investigated the possibility that their own build system had likewise been tampered with to infect customers. So far, Malwarebytes said it has no evidence of such an infection. The company has also inspected its source code repositories for signs of malicious changes.

Malwarebytes said it first learned of the infection from Microsoft on December 15, two days after the SolarWinds hack was first disclosed. Microsoft identified the network compromise through suspicious activity from a third-party application in Malwarebytes’ Microsoft Office 365 tenant. The tactics, techniques, and procedures in the Malwarebytes attack were similar in key ways to those of the threat actor behind the SolarWinds attacks.

Malwarebytes’ notice marks the fourth time a company has disclosed it was targeted by the SolarWinds hackers. Microsoft and security firms FireEye and CrowdStrike have also been targeted, although CrowdStrike has said the attempt to infect its network was unsuccessful. Government agencies reported to be affected include the Departments of Defense, Justice, Treasury, Commerce, and Homeland Security as well as the National Institutes of Health.

Ars online IT roundtable tomorrow: What’s the future of the data center?

If you’re in IT, you probably remember the first time you walked into a real data center—not just a server closet, but an actual raised-floor data center, where the door whooshes open in a blast of cold air and noise and you’re confronted with rows and rows of racks, monolithic and gray, stuffed full of servers with cooling fans screaming and blinkenlights blinking like mad. The data center is where the cool stuff is—the pizza boxes, the blade servers, the NASes and the SANs. Some of its residents are more exotic—the Big Iron in all its massive forms, from Z-series to Superdome and all points in between.

For decades, data centers have been the beating hearts of many businesses—the fortified secret rooms where huge amounts of capital sit, busily transforming electricity into revenue. And they’re sometimes a place for IT to hide, too—it’s kind of a standing joke that whenever a user you don’t want to see is stalking around the IT floor, your best bet to avoid contact is just to badge into the data center and wait for them to go away. (But, uh, I never did that ever. I promise.)

But the last few years have seen a massive shift in the relationship between companies and their data—and the places where that data lives. Sure, it’s always convenient to own your own servers and storage, but why tie up all that capital when you don’t have to? Why not just go to the cloud buffet and pay for what you want to eat and nothing more?

There will always be some reason for some companies to have data centers—the cloud, for all its attractiveness, can’t quite do everything. (Not yet, at least.) But the list of objections to going off-premises for your computing needs is rapidly shrinking—and we’re going to talk a bit about what comes next.

Join us for a chat!

We’ll be holding a livestreamed discussion on the future of the data center on Tuesday, January 20, at 3:15pm Eastern Time (that’s 12:15pm Pacific Time, and 8:15pm UTC). On the panel will be Ars Infosec Editor Emeritus Sean Gallagher and myself, along with special guest Ivan Nekrasov, data center demand generation manager and field marketing consultant for Dell Technologies.

If you’d like to pitch us questions during the event, please feel free to register here and join us during the meeting tomorrow on Zoom. For folks who just want to watch, the live conversation will be available on Twitter, and we’ll embed the finished version (with transcript) on this story page like we did with our last livestream. Register and join in, or check back here after the event to watch!

How law enforcement gets around your smartphone’s encryption


Lawmakers and law enforcement agencies around the world, including in the United States, have increasingly called for backdoors in the encryption schemes that protect your data, arguing that national security is at stake. But new research indicates governments already have methods and tools that, for better or worse, let them access locked smartphones thanks to weaknesses in the security schemes of Android and iOS.

Cryptographers at Johns Hopkins University used publicly available documentation from Apple and Google as well as their own analysis to assess the robustness of Android and iOS encryption. They also studied more than a decade’s worth of reports about which of these mobile security features law enforcement and criminals have previously bypassed, or can currently, using special hacking tools. The researchers have dug into the current mobile privacy state of affairs and provided technical recommendations for how the two major mobile operating systems can continue to improve their protections.

“It just really shocked me, because I came into this project thinking that these phones are really protecting user data well,” says Johns Hopkins cryptographer Matthew Green, who oversaw the research. “Now I’ve come out of the project thinking almost nothing is protected as much as it could be. So why do we need a backdoor for law enforcement when the protections that these phones actually offer are so bad?”

Before you delete all your data and throw your phone out the window, though, it’s important to understand the types of privacy and security violations the researchers were specifically looking at. When you lock your phone with a passcode, fingerprint lock, or face recognition lock, it encrypts the contents of the device. Even if someone stole your phone and pulled the data off it, they would only see gibberish. Decoding all the data would require a key that only regenerates when you unlock your phone with a passcode, or face or finger recognition. And smartphones today offer multiple layers of these protections and different encryption keys for different levels of sensitive data. Many keys are tied to unlocking the device, but the most sensitive require additional authentication. The operating system and some special hardware are in charge of managing all of those keys and access levels so that, for the most part, you never even have to think about it.
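The reason the stolen data reads as gibberish is that the decryption key is never stored; it is regenerated from your passcode each time you unlock. A minimal software-only sketch of that derivation, using PBKDF2 from Python's standard library, looks like this. Real phones go further: the derivation is entangled with a secret fused into dedicated hardware (Apple's Secure Enclave, for instance), so the key cannot be brute-forced off the device. That hardware binding is exactly what this sketch omits.

```python
import hashlib

def derive_key(passcode, salt, iterations=100_000):
    """Regenerate a 256-bit encryption key from a passcode.

    Software-only PBKDF2 sketch: the same passcode and salt always
    yield the same key, so the key itself never needs to be stored.
    Real devices mix in a hardware-bound secret, which this omits.
    """
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, iterations)

salt = b"per-device-salt!"
key_right = derive_key("1234", salt)   # correct passcode -> usable key
key_again = derive_key("1234", salt)   # deterministic: same key again
key_wrong = derive_key("9999", salt)   # wrong passcode -> useless key
```

With the wrong passcode you still get 32 bytes, just not the 32 bytes that decrypt anything, which is why stolen ciphertext alone is worthless.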

With all of that in mind, the researchers assumed it would be extremely difficult for an attacker to unearth any of those keys and unlock some amount of data. But that’s not what they found.

“On iOS in particular, the infrastructure is in place for this hierarchical encryption that sounds really good,” says Maximilian Zinkus, a PhD student at Johns Hopkins who led the analysis of iOS. “But I was definitely surprised to see then how much of it is unused.” Zinkus says that the potential is there, but the operating systems don’t extend encryption protections as far as they could.

When an iPhone has been off and boots up, all the data is in a state Apple calls “Complete Protection.” The user must unlock the device before anything else can really happen, and the device’s privacy protections are very high. You could still be forced to unlock your phone, of course, but existing forensic tools would have a difficult time pulling any readable data off it. Once you’ve unlocked your phone that first time after reboot, though, a lot of data moves into a different mode—Apple calls it “Protected Until First User Authentication,” but researchers often simply call it “After First Unlock.”

If you think about it, your phone is almost always in the AFU state. You probably don’t restart your smartphone for days or weeks at a time, and most people certainly don’t power it down after each use. (For most, that would mean hundreds of times a day.) So how effective is AFU security? That’s where the researchers started to have concerns.

The main difference between Complete Protection and AFU relates to how quick and easy it is for applications to access the keys to decrypt data. When data is in the Complete Protection state, the keys to decrypt it are stored deep within the operating system and encrypted themselves. But once you unlock your device the first time after reboot, lots of encryption keys start getting stored in quick access memory, even while the phone is locked. At this point an attacker could find and exploit certain types of security vulnerabilities in iOS to grab encryption keys that are accessible in memory and decrypt big chunks of data from the phone.
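That difference in key residency can be modeled with a toy keystore. Everything here is illustrative (real key hierarchies are hardware-backed and the "unwrapping" below is a stand-in XOR), but it captures the point the researchers make: after the first unlock, the unwrapped keys stay in memory even when the screen locks, and that cached copy is what memory-reading forensic tools go after.

```python
class ToyKeystore:
    """Toy model of Complete Protection vs. After First Unlock (AFU).

    Before the first unlock, class keys exist only in wrapped
    (encrypted) form. The first unlock unwraps them into memory;
    locking the screen again does NOT evict them. Illustrative only.
    """

    def __init__(self, wrapped_keys):
        self.wrapped = wrapped_keys  # survives reboot, useless on its own
        self.in_memory = {}          # empty until first unlock

    def first_unlock(self, passcode_key):
        # "Unwrap" every class key with the passcode-derived key
        # (a stand-in XOR; real unwrapping is authenticated crypto).
        for name, blob in self.wrapped.items():
            self.in_memory[name] = bytes(a ^ b for a, b in zip(blob, passcode_key))

    def lock_screen(self):
        # AFU state: the screen locks, but unwrapped keys stay resident.
        pass

store = ToyKeystore({"messages": bytes(16)})
before_unlock = dict(store.in_memory)  # Complete Protection: nothing usable yet
store.first_unlock(b"K" * 16)
store.lock_screen()                    # AFU: "messages" key is still in memory
```

An attacker with a memory-read vulnerability gets nothing from `before_unlock` but everything from `store.in_memory` afterward, which is why the researchers focus on how long phones sit in AFU.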

Based on available reports about smartphone access tools, like those from the Israeli law enforcement contractor Cellebrite and US-based forensic access firm Grayshift, the researchers realized that this is how almost all smartphone access tools likely work right now. It’s true that you need a specific type of operating system vulnerability to grab the keys—and both Apple and Google patch as many of those flaws as possible—but if you can find one, the keys are available, too.

The researchers found that Android has a similar setup to iOS with one crucial difference. Android has a version of “Complete Protection” that applies before the first unlock. After that, the phone data is essentially in the AFU state. But where Apple provides the option for developers to keep some data under the more stringent Complete Protection locks all the time—something a banking app, say, might take them up on—Android doesn’t have that mechanism after first unlocking. Forensic tools exploiting the right vulnerability can grab even more decryption keys, and ultimately access even more data, on an Android phone.

Tushar Jois, another Johns Hopkins PhD candidate who led the analysis of Android, notes that the Android situation is even more complex because of the many device makers and Android implementations in the ecosystem. There are more versions and configurations to defend, and across the board users are less likely to be getting the latest security patches than iOS users.

“Google has done a lot of work on improving this, but the fact remains that a lot of devices out there aren’t receiving any updates,” Jois says. “Plus different vendors have different components that they put into their final product, so on Android you can not only attack the operating system level, but other different layers of software that can be vulnerable in different ways and incrementally give attackers more and more data access. It makes an additional attack surface, which means there are more things that can be broken.”

The researchers shared their findings with the Android and iOS teams ahead of publication. An Apple spokesperson told WIRED that the company’s security work is focused on protecting users from hackers, thieves, and criminals looking to steal personal information. The types of attacks the researchers are looking at are very costly to develop, the spokesperson pointed out; they require physical access to the target device and only work until Apple patches the vulnerabilities they exploit. Apple also stressed that its goal with iOS is to balance security and convenience.

“Apple devices are designed with multiple layers of security in order to protect against a wide range of potential threats, and we work constantly to add new protections for our users’ data,” the spokesperson said in a statement. “As customers continue to increase the amount of sensitive information they store on their devices, we will continue to develop additional protections in both hardware and software to protect their data.”

Similarly, Google stressed that these Android attacks depend on physical access and the existence of the right type of exploitable flaws. “We work to patch these vulnerabilities on a monthly basis and continually harden the platform so that bugs and vulnerabilities do not become exploitable in the first place,” a spokesperson said in a statement. “You can expect to see additional hardening in the next release of Android.”

To understand the difference in these encryption states, you can do a little demo for yourself on iOS or Android. When your best friend calls your phone, their name usually shows up on the call screen because it’s in your contacts. But if you restart your device, don’t unlock it, and then have your friend call you, only their number will show up, not their name. That’s because the keys to decrypt your address book data aren’t in memory yet.

The researchers also dove deep into how both Android and iOS handle cloud backups—another area where encryption guarantees can erode.

“It’s the same type of thing where there’s great crypto available, but it’s not necessarily in use all the time,” Zinkus says. “And when you back up, you also expand what data is available on other devices. So if your Mac is also seized in a search, that potentially increases law enforcement access to cloud data.”

Though the smartphone protections that are currently available are adequate for a number of “threat models” or potential attacks, the researchers have concluded that they fall short on the question of specialized forensic tools that governments can easily buy for law enforcement and intelligence investigations. A recent report from researchers at the nonprofit Upturn found nearly 50,000 examples of US police in all 50 states using mobile device forensic tools to get access to smartphone data between 2015 and 2019. And while citizens of some countries may think it unlikely that their devices will ever specifically be subject to this type of search, mobile surveillance is routine in many regions of the world and at a growing number of border crossings. The tools are also proliferating in other settings like US schools.

As long as mainstream mobile operating systems have these privacy weaknesses, though, it’s even more difficult to explain why governments around the world—including the US, UK, Australia, and India—have mounted major calls for tech companies to undermine the encryption in their products.

This story originally appeared on wired.com.
