

How Russia’s online influence campaign engaged with millions for years – TechCrunch


Russian efforts to influence U.S. politics and sway public opinion were consistent and, as far as engaging with target audiences, largely successful, according to a report from Oxford’s Computational Propaganda Project published today. Based on data provided to Congress by Facebook, Instagram, Google and Twitter, the study paints a portrait of the years-long campaign that’s less than flattering to the companies.

The report, which you can read here, was published today but given to some outlets over the weekend; it summarizes the work of the Internet Research Agency, Moscow’s online influence factory and troll farm. The data cover various periods for different companies, but 2016 and 2017 showed by far the most activity.

A clearer picture

If you’ve only checked into this narrative occasionally during the last couple of years, the Comprop report is a great way to get a bird’s-eye view of the whole thing, with no “we take this very seriously” palaver interrupting the facts.

If you’ve been following the story closely, the value of the report is mostly in deriving specifics and some new statistics from the data, which Oxford researchers were provided some seven months ago for analysis. The numbers, predictably, all seem to be a bit higher or more damning than those provided by the companies themselves in their voluntary reports and carefully practiced testimony.

Previous estimates have focused on the rather nebulous metric of “encountering” or “seeing” IRA content on these social networks. This had the dual effect of increasing the affected number — to over 100 million on Facebook alone — but “seeing” could easily be downplayed in importance; after all, how many things do you “see” on the internet every day?

The Oxford researchers better quantify the engagement, on Facebook first, with more specific and consequential numbers. For instance, in 2016 and 2017, nearly 30 million people on Facebook actually shared Russian propaganda content, with similar numbers of likes garnered, and millions of comments generated.

Note that these aren’t ads that Russian shell companies were paying to shove into your timeline — these were pages and groups with thousands of users on board who actively engaged with and spread posts, memes and disinformation on captive news sites linked to by the propaganda accounts.

The content itself was, of course, carefully curated to touch on a number of divisive issues: immigration, gun control, race relations and so on. Many different groups (i.e. black Americans, conservatives, Muslims, LGBT communities) were targeted; all generated significant engagement, as the report’s breakdown of these stats shows.

Although the targeted communities were surprisingly diverse, the intent was highly focused: stoke partisan divisions, suppress left-leaning voters and activate right-leaning ones.

Black voters in particular were a popular target across all platforms, and a great deal of content was posted both to keep racial tensions high and to interfere with their actual voting. Memes were posted suggesting followers withhold their votes, or with deliberately incorrect instructions on how to vote. These efforts were among the most numerous and popular of the IRA’s campaign; it’s difficult to judge their effectiveness, but certainly they had reach.

Examples of posts targeting black Americans.

In a statement, Facebook said that it was cooperating with officials and that “Congress and the intelligence community are best placed to use the information we and others provide to determine the political motivations of actors like the Internet Research Agency.” It also noted that it has “made progress in helping prevent interference on our platforms during elections, strengthened our policies against voter suppression ahead of the 2018 midterms, and funded independent research on the impact of social media on democracy.”

Instagram on the rise

Based on the narrative thus far, one might expect that Facebook — being the focus for much of it — was the biggest platform for this propaganda, and that it would have peaked around the 2016 election, when the evident goal of helping Donald Trump get elected had been accomplished.

In fact, Instagram was receiving as much or more content than Facebook, and it was being engaged with on a similar scale. Previous reports disclosed that around 120,000 IRA-related posts on Instagram had reached several million people in the run-up to the election. The Oxford researchers conclude, however, that 40 accounts received in total some 185 million likes and 4 million comments during the period covered by the data (2015-2017).

A partial explanation for these rather high numbers may be that, also counter to the most obvious narrative, IRA posting in fact increased following the election — for all platforms, but particularly on Instagram.

IRA-related Instagram posts jumped from an average of 2,611 per month in 2016 to 5,956 in 2017; note that the numbers don’t match the above table exactly because the time periods differ slightly.
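Those averages imply a sharp acceleration after the election; as a quick check, the year-over-year increase in the monthly posting rate works out to roughly 128 percent:

```python
# Monthly averages of IRA-related Instagram posts, as reported by the study.
posts_2016 = 2611
posts_2017 = 5956

# Year-over-year growth in the monthly average.
increase = (posts_2017 - posts_2016) / posts_2016
print(f"{increase:.0%}")  # → 128%
```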

Twitter posts, while extremely numerous, are quite steady at just under 60,000 per month, totaling around 73 million engagements over the period studied. To be perfectly frank, this kind of voluminous bot and sock puppet activity is so commonplace on Twitter, and the company seems to have done so little to thwart it, that it hardly bears mentioning. But it was certainly there, and often reused existing bot nets that previously had chimed in on politics elsewhere and in other languages.

In a statement, Twitter said that it has “made significant strides since 2016 to counter manipulation of our service, including our release of additional data in October related to previously disclosed activities to enable further independent academic research and investigation.”

Google too is somewhat hard to find in the report, though not necessarily because it has a handle on Russian influence on its platforms. Oxford’s researchers complain that Google and YouTube have been not just stingy, but appear to have actively attempted to stymie analysis.

Google chose to supply the Senate committee with data in a non-machine-readable format. The evidence that the IRA had bought ads on Google was provided as images of ad text and in PDF format whose pages displayed copies of information previously organized in spreadsheets. This means that Google could have provided the useable ad text and spreadsheets—in a standard machine-readable file format, such as CSV or JSON, that would be useful to data scientists—but chose to turn them into images and PDFs as if the material would all be printed out on paper.
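The complaint here is purely about machine readability. As a rough sketch using made-up ad records (not the actual IRA data), the same information is trivially usable by researchers when delivered as CSV or JSON rather than as page images:

```python
import csv
import io
import json

# Hypothetical ad records, standing in for the kind of data the report
# says Google delivered only as images and PDFs.
ads = [
    {"ad_id": "1", "text": "Example ad A", "impressions": 1200},
    {"ad_id": "2", "text": "Example ad B", "impressions": 340},
]

# JSON: one call to serialize, one to parse back losslessly.
as_json = json.dumps(ads)
assert json.loads(as_json) == ads

# CSV: a header row plus one row per record.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["ad_id", "text", "impressions"])
writer.writeheader()
writer.writerows(ads)

# Every field is directly addressable, unlike text locked in a PDF image.
rows = list(csv.DictReader(io.StringIO(buf.getvalue())))
print(rows[0]["text"])  # → Example ad A
```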

This forced the researchers to collect their own data via citations and mentions of YouTube content. As a consequence, their conclusions are limited. Generally speaking, when a tech company does this, it means that the data they could provide would tell a story they don’t want heard.

For instance, one interesting point brought up by a second report published today, by New Knowledge, concerns the 1,108 videos uploaded by IRA-linked accounts on YouTube. These videos, a Google statement explained, “were not targeted to the U.S. or to any particular sector of the U.S. population.”

In fact, all but a few dozen of these videos concerned police brutality and Black Lives Matter, which, as you’ll recall, were among the most popular topics on the other platforms. It seems reasonable to expect that this extremely narrow targeting would have been mentioned by YouTube in some way. Unfortunately, it was left to be discovered by a third party, which gives one an idea of just how far a statement from the company can be trusted. (Google did not immediately respond to a request for comment.)

Desperately seeking transparency

In its conclusion, the Oxford researchers — Philip N. Howard, Bharath Ganesh and Dimitra Liotsiou — point out that although the Russian propaganda efforts were (and remain) disturbingly effective and well organized, the country is not alone in this.

“During 2016 and 2017 we saw significant efforts made by Russia to disrupt elections around the world, but also political parties in these countries spreading disinformation domestically,” they write. “In many democracies it is not even clear that spreading computational propaganda contravenes election laws.”

“It is, however, quite clear that the strategies and techniques used by government cyber troops have an impact,” the report continues, “and that their activities violate the norms of democratic practice… Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement, to being a computational tool for social control, manipulated by canny political consultants, and available to politicians in democracies and dictatorships alike.”

Predictably, even social networks’ moderation policies became targets for propagandizing.

Waiting on politicians is, as usual, something of a long shot, and the onus is squarely on the providers of social media and internet services to create an environment in which malicious actors are less likely to thrive.

Specifically, this means that these companies need to embrace researchers and watchdogs in good faith instead of freezing them out in order to protect some internal process or embarrassing misstep.

“Twitter used to provide researchers at major universities with access to several APIs, but has withdrawn this and provides so little information on the sampling of existing APIs that researchers increasingly question its utility for even basic social science,” the researchers point out. “Facebook provides an extremely limited API for the analysis of public pages, but no API for Instagram.” (And we’ve already heard what they think of Google’s submissions.)

If the companies exposed in this report truly take these issues seriously, as they tell us time and again, perhaps they should implement some of these suggestions.




This is the real voice behind Google Assistant


When using Google Assistant, most of us don’t even consider who the voice is coming from — after all, it’s artificial intelligence, not a real person. Our virtual assistants, be it Siri, Alexa, or Google Assistant, are always at our beck and call, but we (for the most part) remain well-aware of the fact that they’re just lines of code and intricate algorithms. But how would you feel if you knew that Google Assistant has a very human backstory?


In an interview with The Atlantic, James Giangola, the lead conversation and persona designer at Google, spoke about the Assistant at great length. When the team set out to create its AI-based assistant, they knew that the line between a cool, futuristic feature and a mildly creepy, uncanny voice bot is very, very thin. Google Assistant was never meant to seem human — that would just be disturbing — but she was meant to be just human enough to make us feel comfortable. To achieve that elusive feeling of somewhat reserved comfort, Giangola and his team went to great lengths to perfect the Assistant.

You’d think that just hiring a skilled voice actor would be enough, but there was much more to consider than just finding a pleasant voice. James Giangola set out on a quest to make the Google Assistant sound normal and to hide that alien feeling of speaking to a robot. In order to do this, he made up a lengthy backstory for the Assistant.

A robot with an extensive backstory


The Atlantic notes that when searching for the right voice actress, and later when training her, James Giangola came up with a very specific backstory for the AI. He did so because he wanted Google Assistant to appear real, and in order to give it a distinct personality, he gave the voice actress a lengthy background on the Assistant. First and foremost, the Assistant comes from Colorado, which gives her a neutral accent.

She comes from a well-read family and is the youngest daughter of a physics professor (who has a B.A. in art history from Northwestern University, no less) and a research librarian. She once worked for “a very popular late-night-TV satirical pundit” as a personal assistant. She was always a smart kid; she won $100,000 on the Kids Edition of “Jeopardy.” Oh, and she also likes kayaking. Let’s not forget: She’s not real.

The need to create such a specific backstory may seem questionable, and it actually was questioned by James Giangola’s colleagues. However, Giangola was able to prove his point during the audition process. When a colleague asked him how anyone could even sound like they’re into kayaking, Giangola fired back: “The candidate who just gave an audition — do you think she sounded energetic, like she’s up for kayaking?” And she didn’t, which to Giangola meant that she wasn’t the right voice.

Google aimed for ‘upbeat geekiness’


Aside from nailing the exact tone of her voice, which The Atlantic described as “upbeat geekiness,” the Assistant had to be trained to sound human not just by voice, but also by speech patterns and rhythms. In the interview, James Giangola talks about some of the different small changes that were made to take the Assistant from robotic to almost natural.

To illustrate, Giangola played a recording in which the AI had to contradict a user who wanted to book something on June 31. It had to be done in a delicate, natural-sounding manner that still delivered the required information. When prompted, the Assistant replied: “Actually, June has only 30 days,” achieving the level of vocal realism Giangola was looking for.
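Google hasn’t published how the Assistant performs this check, but the validation underneath the “June 31” correction is simple calendar arithmetic. A minimal sketch in Python (the function name and reply wording here are illustrative, not Google’s actual code):

```python
import calendar

def check_day(year: int, month: int, day: int) -> str:
    """Return a gentle correction if the requested day doesn't exist."""
    days_in_month = calendar.monthrange(year, month)[1]
    if day > days_in_month:
        month_name = calendar.month_name[month]
        return f"Actually, {month_name} has only {days_in_month} days"
    return "Booked"

print(check_day(2016, 6, 31))  # → Actually, June has only 30 days
```

The hard part, as Giangola describes it, was never the check itself but delivering the contradiction in a natural, non-robotic way.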

Although the Assistant’s intricate backstory may seem overkill, it seems to have helped Google find the right voice actress. According to Tech Bezeer, the main voice of the Assistant is Antonia Flynn, who was cast back in 2016. However, Google is not very forthcoming with information about who exactly voices each version of the Assistant, so this needs to be taken with a grain of salt. The information originates from Reddit, where a user was able to track Flynn down based on her voice, but only Google knows whether she really is the friendly AI inside our mobile devices.



Microsoft’s post-Windows Phone vision leaks, but don’t get your hopes up


While Microsoft’s Windows Phone ambitions are well and truly dead at this point, there was a time when the company was plotting a follow-up to the ill-fated mobile operating system. That follow-up was known internally as Andromeda OS, and it was being developed as the operating system for the Surface Duo. Sadly, Microsoft’s plan to create a version of Windows for dual-screen devices never saw the light of day, but today we’re getting a look at an internal build of Andromeda OS and what could have been.


That look comes from Zac Bowden at Windows Central, who managed to get a build of Andromeda OS up and running on a Lumia 950. Even though Andromeda OS was intended for the Surface Duo, Microsoft apparently conducted internal testing on Lumia 950 devices, making it a solid choice for this hands-on.

In both his write-up and the video you see embedded below, Bowden is very clear that this is not some leak of a work-in-progress mobile operating system. Andromeda OS is dead and not in active development, so there’s no real hope of seeing a more fully-featured version launch on Microsoft’s mobile hardware at any point in the future. Despite that rather grim reality, this is a good look at the progress Microsoft made before it ultimately decided to ship the Surface Duo with Android.

Though the hands-on shows us an operating system that is very rough around the edges and somewhat clunky, it’s immediately obvious that Microsoft planned Andromeda OS with inking capabilities at the center. For instance, the lock screen doubles as an inking space, allowing users to jot quick notes down on it that persist until they’re erased or the lock screen is cleared entirely.

Likewise, unlocking the device takes you to a home screen that also doubles as a journal. As with the lock screen, you can use this page to take notes, but you can also do things like paste stuff from the clipboard or insert an image for markup. Having the phone unlock to what is essentially a blank canvas instead of a home screen full of app icons is an interesting idea and one that we’re probably never going to see on other devices.

Andromeda OS also features a Start menu reminiscent of Windows Phone, which means that it has those familiar Live Tiles. Bowden also shows off the various gesture controls included in Andromeda OS, swiping from the left to summon the aforementioned Start menu and from the right to bring up Cortana and notifications. Swiping down pulls up the Control Center, which will look familiar to those who are currently using Windows 11.


We’re also given a brief demo of what Andromeda OS might have looked like on an actual dual-screen device, but since that demo is also on a Lumia 950, we sadly don’t get the full experience. Still, it’s interesting to see what might have been before Microsoft decided to can Andromeda OS entirely and switch to Android for the Surface Duo.

While there’s no chance we’ll see this project revived for future Microsoft hardware, there is always the chance that some individual features could make their way to the Surface Duo. Even then, it’s probably best to appreciate this as a relic of the past rather than something that might inform Microsoft’s future efforts, as disappointing as that may be for those who miss Windows Phone and Windows 10 Mobile.



Google just got terrible news in Europe – and it could get much worse


Google was just hit by some very bad news coming from Europe, but the news may be even worse for website owners than for Google itself. In an unprecedented decision, Austria’s data protection authority has ruled that Google Analytics violates European data protection law. As a result, Google Analytics has been made illegal in Austria.


It all comes back to the General Data Protection Regulation (GDPR) observed in Europe. Implemented in 2018, GDPR was created to give European citizens more control over their personal data, both online and offline. Unfortunately, the GDPR and US surveillance laws just do not mix.

According to a decision made in 2020 by the Court of Justice of the European Union (CJEU), policies that force website providers in the US to provide personal user data to authorities are against the GDPR. While this may not seem that related to Google Analytics at first glance, it very much is. Some of the information readily collected by US providers is in direct violation of the GDPR, which in theory means that these websites would have to stop collecting private information in order to legally operate within Europe. In practice, it seems that not much has changed since 2018.

Google Analytics is now completely illegal in Austria

Prior to 2020, a framework called the Privacy Shield was in place that allowed European data to be transferred to the United States. However, the shield was invalidated by the CJEU on July 16, 2020. Since then, US-based websites have not been allowed to transfer the data of European citizens to the US. Of course, this only applies to data that falls under the GDPR, which covers only identifiable information about any given person. However, according to FieldFisher, this also includes IP addresses, as an IP address is regarded as an “online identifier.”
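Because a full IP address counts as an online identifier, one common mitigation analytics operators use is to truncate the address before it is stored, so it no longer points to a single host. A minimal sketch of that idea for IPv4 (the function name is illustrative, not any vendor’s API):

```python
import ipaddress

def anonymize_ipv4(addr: str) -> str:
    """Zero the last octet so the address identifies a /24 network, not a host."""
    ip = ipaddress.ip_address(addr)
    masked = int(ip) & 0xFFFFFF00  # keep the network part, drop the host part
    return str(ipaddress.ip_address(masked))

print(anonymize_ipv4("203.0.113.77"))  # → 203.0.113.0
```

Whether such truncation is sufficient under the GDPR is a legal question, not a technical one; it reduces, rather than eliminates, identifiability.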

Regardless of the 2020 ruling made by the CJEU, many providers continued to send personal data to the US — including Google Analytics. As stated by Max Schrems, honorary chair of NOYB, a European non-profit focused on digital rights, “Instead of actually adapting services to be GDPR compliant, US companies have tried to simply add some text to their privacy policies and ignore the Court of Justice. Many EU companies have followed the lead instead of switching to legal options.”

The Austrian Data Protection Authority has now followed up on what the CJEU ruled back in 2020 and made the use of Google Analytics completely illegal. The ruling comes into effect immediately, so all the websites that service Austrian citizens need to act quickly in order to not be fined for violating the local laws.

What will the new court ruling change?


Many companies that operate in Europe will now have to decide between continuing to use Google Analytics and switching to an alternative website traffic tool. Refusing to comply may result in hefty fines. However, it could be that providers will continue to ignore the European laws and risk the fines: after all, not every such business will be caught or reported. If caught, the price could be high: NOYB has described a case in which the Irish Data Protection Commission imposed a fine of 225 million euros on WhatsApp for violating data protection laws.

Ultimately, US-based companies will have to think of workarounds for European privacy laws. Simply hosting customer data in Europe would be helpful, although this would of course limit the type of data that can be freely collected and distributed. For the time being, websites that continue to use Google Analytics will need to obtain consent from each visitor prior to collecting any data.

The choice to ban Google Analytics in Austria may be the first step in a larger revolution. Other countries in the European Union are likely to follow, so while Austria may be the first bit of bad news for Google, there is likely much more to come.

