Italy fires Meta urgent request for info re: election interference measures

Days ahead of the Italian general election, the country’s privacy watchdog has sent Facebook’s parent (Meta) an urgent request for information, asking the social media giant to clarify measures it’s taking around Sunday’s election.

The risk of election interference via social media remains a major concern for regulators after years of rising awareness of how disinformation is seeded, spread and amplified on algorithmic platforms like Facebook, with democratic processes still considered core targets for malicious influence ops.

Privacy regulators in the European Union are also watchful of how platforms are processing personal data — with data protection laws in place that regulate the processing of sensitive data such as political opinions.

In a press release about its request yesterday, the Garante points back to a previous $1.1M sanction it imposed on Facebook for the Cambridge Analytica scandal, and for the “Candidates” project Facebook launched for Italy’s 2018 general election, writing [in Italian; translated here using machine translation] that it’s “necessary to pay particular attention to the processing of data suitable for revealing the political opinions of the interested parties and to respect the free expression of thought”.

“Facebook will have to provide timely information on the initiative undertaken; on the nature and methods of data processing; on any agreements aimed at sending reminders and the publication of information ‘stickers’ (also published on Instagram — part of the Meta Group); on the measures taken to ensure, as announced, that the initiative is brought to the attention only of persons of legal age,” the watchdog adds.

The move follows what it describes as an “information campaign” by Meta, targeted at Italian users, which is said to be aimed at countering interference and removing content that discourages voting — and which involves the use of a virtual Operations Center to identify potential threats in real time, as well as collaboration with independent fact-checking organizations.

The Garante said the existence of this campaign was made public by Meta publishing “promemoria” (memos). However, a page on Meta’s website that provides an overview of its preparations for upcoming elections currently offers downloadable documents detailing its approach only for the US midterms and for Brazil’s elections. There is nothing there about Meta’s approach to Italy’s general election, nor any detail about the campaign it is (apparently) running locally.

A separate page on Meta’s website, entitled “election integrity”, includes a number of additional articles about its preparations for elections elsewhere, including Kenya’s 2022 general election; the Philippines’ 2022 general election; and Ethiopia’s 2021 general election. Plus earlier articles on state elections in India and an update on the Georgia runoff elections from the end of 2020, among others.

But, again, Meta does not appear to have provided any information here about its preparations for Italy’s General Election.

The reason for this oversight — which is presumably what it is — could be that the Italian election is a snap election, called following a government crisis and the resignation of prime minister Mario Draghi, rather than a long-programmed and timetabled general election.

However, the gap in Meta’s election integrity information hub regarding measures it’s taking to protect Italy’s general election from disinformation points to limitations in its transparency in this crucial area — it appears unable to provide consistent transparency in response to what can often be dynamically changing democratic timelines.

The Italian parliament was dissolved on July 21, which was when the president called for new elections. That means Meta, a company with a market cap of hundreds of billions of dollars, has had two months to upload details of the election integrity measures it’s taking in the country to the relevant hubs on its website — yet it does not appear to have done so.

We reached out to Meta yesterday with questions about what it’s doing in Italy to protect the election from interference but at the time of writing the company had not responded.

It will of course have to respond to Italy’s watchdog’s request for information. We’ve reached out to the regulator with questions.

The Garante continues to be an active privacy watchdog in policing tech giants operating on its turf in spite of not being the lead supervisor for such companies under the one-stop-shop (OSS) mechanism in the EU’s General Data Protection Regulation (GDPR), which has otherwise led to bottlenecks around GDPR enforcement. But the regulation provides some wiggle room for concerned DPAs to act on pressing matters on their own turf without having to submit to the OSS.

Yesterday’s urgent request to Meta for information by Italy’s watchdog follows a number of other proactive interventions in recent years — including a warning to TikTok this summer over a controversial privacy policy switch (which TikTok ‘paused’ soon after); a warning to WhatsApp in January 2021 over another controversial privacy policy and T&Cs update (and, while it stemmed from a wider complaint, WhatsApp went on to be fined $267M later that year over GDPR transparency breaches); and a warning to TikTok over underage users, also in January 2021 (TikTok went on to remove over half a million accounts that it was unable to confirm did not belong to children and commit to other measures).

So a comprehensive answer to the question of whether the GDPR is working to regulate Big Tech requires a broader view than totting up fines or even fixing on final GDPR enforcement decisions.


Cambridge Analytica’s former boss gets 7-year ban on being a business director

The former CEO of Cambridge Analytica, the disgraced data company that worked for the 2016 Trump campaign and shut down in 2018 over a voter manipulation scandal involving masses of Facebook data, has been banned from running limited companies for seven years.

Alexander Nix signed a disqualification undertaking earlier this month which the UK government said yesterday it had accepted. The ban commences on October 5.

“Within the undertaking, Alexander Nix did not dispute that he caused or permitted SCL Elections Ltd or associated companies to market themselves as offering potentially unethical services to prospective clients; demonstrating a lack of commercial probity,” the UK’s Insolvency Service wrote in a press release.

Nix was suspended as CEO of Cambridge Analytica at the peak of the Facebook data scandal after footage emerged of him, filmed by undercover reporters, boasting of spreading disinformation and entrapping politicians to meet clients’ needs.

Cambridge Analytica was a subsidiary of the SCL Group, which included the division SCL Elections, while Nix was one of the key people in the group — being a director for SCL Group Ltd, SCL Social Ltd, SCL Analytics Ltd, SCL Commercial Ltd, SCL Elections and Cambridge Analytica (UK) Ltd. All six companies entered into administration in May 2018, going into compulsory liquidation in April 2019.

The “potentially unethical” activities that Nix does not dispute the companies offered, per the undertaking, are:

  • bribery stings and honey trap stings designed to uncover corruption
  • voter disengagement campaigns
  • the obtaining of information to discredit political opponents
  • the anonymous spreading of information

Last year the FTC also settled with Nix over the data misuse scandal — with the former Cambridge Analytica boss agreeing to an administrative order restricting how he conducts business in the future. The order also required the deletion/destruction of any personal information collected via the business.

Back in 2018 Nix was also grilled by the UK parliament’s DCMS committee — and in a second hearing he claimed Cambridge Analytica had licensed “millions of data points on American individuals from very large reputable data aggregators and data vendors such as Acxiom, Experian, Infogroup”, arguing the Facebook data had not been its “foundational data-set”.

It’s fair to say there are still many unanswered questions attached to the data misuse scandal. Last month, for example, the UK’s data watchdog — which raided Cambridge Analytica’s UK offices in 2018, seizing evidence, before going on to fine and then settle with Facebook (which did not admit any liability) over the scandal — said it would no longer be publishing a final report on its data analytics investigation.

Asked about the fate of the final report on Cambridge Analytica, an ICO spokesperson told us: “As part of the conclusion to our data analytics investigation we will be writing to the DCMS select committee to answer the outstanding questions from April 2019. We have committed to updating the select committee on our final findings but this will not be in the form of a further report.”

It’s not clear whether the DCMS committee — which has re-formed with a different chair from the one who in 2018 led the charge to dig into the Cambridge Analytica scandal as part of an enquiry into the impact of online disinformation — will publish the ICO’s written answers. Last year its final report called for Facebook’s business to be investigated over data protection and competition concerns.

You can read a TechCrunch interview with Nix here, from 2017 before the Facebook data scandal broke, in which he discusses how his company helped the Trump campaign.

Microsoft launches a deepfake detector tool ahead of US election

Microsoft has added to the slowly growing pile of technologies aimed at spotting synthetic media (aka deepfakes) with the launch of a tool for analyzing videos and still photos to generate a manipulation score.

The tool, called Video Authenticator, provides what Microsoft calls “a percentage chance, or confidence score” that the media has been artificially manipulated.

“In the case of a video, it can provide this percentage in real-time on each frame as the video plays,” it writes in a blog post announcing the tech. “It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
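Microsoft hasn’t published Video Authenticator’s internals, but the per-frame scoring loop it describes can be sketched in a few lines. The following Python sketch (using OpenCV) shows the general shape only; `score_frame` is a hypothetical stand-in for a trained classifier, not Microsoft’s actual model:

```python
# Minimal sketch of per-frame manipulation scoring, in the spirit of what
# Microsoft describes. The classifier itself is a placeholder.
import cv2  # pip install opencv-python


def score_frame(frame) -> float:
    # Stand-in for a trained deepfake detector. A real model would look for
    # artifacts such as blending boundaries and subtle fading/greyscale
    # inconsistencies. This dummy always returns 0.0; swap in a real model
    # to get meaningful confidence scores.
    return 0.0


def score_video(path: str):
    """Yield (frame_index, manipulation_confidence) as the video plays."""
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        yield idx, score_frame(frame)
        idx += 1
    cap.release()
```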

If a piece of online content looks real but ‘smells’ wrong, chances are it’s a high-tech manipulation trying to pass as real — perhaps with a malicious intent to misinform people.

And while plenty of deepfakes are created with a very different intent — to be funny or entertaining — taken out of context such synthetic media can still take on a life of its own as it spreads, meaning it can also end up tricking unsuspecting viewers.

While AI tech is used to generate realistic deepfakes, identifying visual disinformation using technology is still a hard problem — and a critically thinking mind remains the best tool for spotting high tech BS.

Nonetheless, technologists continue to work on deepfake spotters — including this latest offering from Microsoft.

Although its blog post warns the tech may offer only passing utility in the AI-fuelled disinformation arms race: “The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology. However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.”

This summer a competition kicked off by Facebook to develop a deepfake detector served up results that were better than guessing — but only just in the case of a data-set the researchers hadn’t had prior access to.

Microsoft, meanwhile, says its Video Authenticator tool was created using a public dataset from FaceForensics++ and tested on the DeepFake Detection Challenge Dataset, which it notes are “both leading models for training and testing deepfake detection technologies”.

It’s partnering with the San Francisco-based AI Foundation to make the tool available to organizations involved in the democratic process this year — including news outlets and political campaigns.

“Video Authenticator will initially be available only through RD2020 [Reality Defender 2020], which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology. Campaigns and journalists interested in learning more can contact RD2020 here,” Microsoft adds.

The tool has been developed by its R&D division, Microsoft Research, in coordination with its Responsible AI team and an internal advisory body, the AI, Ethics and Effects in Engineering and Research (Aether) Committee — as part of a wider program Microsoft is running aimed at defending democracy from threats posed by disinformation.

“We expect that methods for generating synthetic media will continue to grow in sophistication,” it continues. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”

On the latter front, Microsoft has also announced a system that will enable content producers to add digital hashes and certificates to media that remain in their metadata as the content travels online — providing a reference point for authenticity.

The second component of the system is a reader tool, which can be deployed as a browser extension, for checking certificates and matching the hashes to offer the viewer what Microsoft calls “a high degree of accuracy” that a particular piece of content is authentic/hasn’t been changed.

The certification will also provide the viewer with details about who produced the media.
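Microsoft hasn’t detailed the format of those hashes and certificates, but the basic publish-then-verify loop is simple to sketch. Here’s a minimal, standard-library-only Python illustration; the certificate-signing step a real provenance system would need (so the manifest itself can’t be forged) is deliberately omitted:

```python
# Minimal sketch of hash-based media provenance. A real system would sign the
# manifest with the producer's certificate; that step is omitted here.
import hashlib
import json


def make_manifest(media_bytes: bytes, producer: str) -> str:
    """Producer side: record who made the media plus a digest of its bytes."""
    return json.dumps({
        "producer": producer,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    })


def verify(media_bytes: bytes, manifest: str) -> bool:
    """Reader side (e.g. a browser extension): recompute the digest and
    compare it against the one the producer published."""
    recorded = json.loads(manifest)
    return hashlib.sha256(media_bytes).hexdigest() == recorded["sha256"]


if __name__ == "__main__":
    media = b"...raw image or video bytes..."
    manifest = make_manifest(media, producer="Example Newsroom")
    print(verify(media, manifest))                # True: content untouched
    print(verify(media + b"tampered", manifest))  # False: content was altered
```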

Microsoft is hoping this digital watermarking authenticity system will end up underpinning a Trusted News Initiative announced last year by UK publicly funded broadcaster, the BBC — specifically for a verification component, called Project Origin, which is led by a coalition of the BBC, CBC/Radio-Canada, Microsoft and The New York Times.

It says the digital watermarking tech will be tested by Project Origin with the aim of developing it into a standard that can be adopted broadly.

“The Trusted News Initiative, which includes a range of publishers and social media companies, has also agreed to engage with this technology. In the months ahead, we hope to broaden work in this area to even more technology companies, news publishers and social media companies,” Microsoft adds.

While work on technologies to identify deepfakes continues, its blog post also emphasizes the importance of media literacy — flagging a partnership with the University of Washington, Sensity and USA Today aimed at boosting critical thinking ahead of the US election.

This partnership has launched a Spot the Deepfake Quiz for voters in the US to “learn about synthetic media, develop critical media literacy skills and gain awareness of the impact of synthetic media on democracy”, as it puts it.

The interactive quiz will be distributed across web and social media properties owned by USA Today, Microsoft and the University of Washington and through social media advertising, per the blog post.

The tech giant also notes that it’s supporting a public service announcement (PSA) campaign in the US encouraging people to take a “reflective pause” and check to make sure information comes from a reputable news organization before they share or promote it on social media ahead of the upcoming election.

“The PSA campaign will help people better understand the harm misinformation and disinformation have on our democracy and the importance of taking the time to identify, share and consume reliable information. The ads will run across radio stations in the United States in September and October,” it adds.

Bracing for election day, Facebook rolls out voting resources to U.S. users

Eager to avoid a repeat of its disastrous role as a super-spreader of misinformation during the 2016 election cycle, Facebook is getting its ducks in a row.

Following an announcement earlier this summer, the company is now launching a voting information hub that will centralize election resources for U.S. users and ideally inoculate at least some of them against the platform’s ongoing misinformation epidemic.

The voting information center will appear in the menu on both Facebook and Instagram. As part of the same effort, Facebook will also target U.S. users with notifications based on location and age, displaying relevant information about voting in their state. The info center will help users check their state-specific vote-by-mail options, request mail-in ballots and provide voting-related deadlines.

Facebook’s election information center (Image credit: Facebook)

Facebook is also expanding the labels it uses to attach verified election resources to posts by political figures. The labels will now appear on voting-related posts from all users across its main platform and Instagram, a way for the platform to avoid taking actions against specific political figures while still directing its users toward verified information about U.S. elections.

Along with other facets of its pre-election push, Facebook will roll out previously announced “voting alerts,” a feature that will allow state election officials to communicate election-related updates to users through the platform. “This will be increasingly critical as we get closer to the election, with potential late-breaking changes to the voting process that could impact voters,” Facebook Vice President of Product Management and Social Impact Naomi Gleit wrote in a blog post about the feature. According to the company, voting alerts will only be available to government accounts and not personal pages belonging to state or local election administrators.

The company cites the complexity of conducting state elections in the midst of the pandemic in its decision to launch the info center, which is also modeled after the COVID-19 info center that it created in the early days of the crisis. While the COVID-19 info hub initially appeared at the top of users’ Facebook feeds, it’s now only surfaced in searches related to the virus.

Election night nightmare

Uncomfortable as it is with the idea, Facebook seems to be aware that it could very well become the “arbiter of truth” on election night. With 2020’s unprecedented circumstances leading to a record number of ballots cast through the mail, it’s possible that the election’s outcome could be delayed or otherwise confusing. Without clear cut results, conspiracy theories, opportunism and other forms of misinformation are likely to explode on social platforms — a nightmare scenario that social networks seem to be preemptively dreading.

“A prolonged ballot process has the potential to be exploited in order to sow distrust in the election outcome,” Gleit wrote in Facebook’s post detailing the election tools.

Facebook was one of nine tech companies that met with federal officials on Wednesday to discuss how they will handle misinformation concerns on their platforms around election day.

The group of companies now includes Facebook, Google, Reddit, Twitter, Microsoft, Pinterest, Verizon Media, LinkedIn and the Wikimedia Foundation. Some of the group’s members had met previously to discuss efforts ahead of U.S. elections, but the expanded coalition of companies formally working with federal officials to prepare for the U.S. election appears to be new.

Facebook fights order to globally block accounts linked to Brazilian election meddling

Facebook has branded a legal order to globally block a number of Brazilian accounts linked to the spread of political disinformation targeting the country’s 2018 election as “extreme”, claiming it poses a threat to freedom of expression outside the country.

The tech giant is simultaneously complying with the block order — beginning Saturday after it was fined by a Supreme Court judge for non-compliance — citing the risk of criminal liability for a local employee were it not to do so.

However it is appealing to the Supreme Court to try to overturn the order.

A spokesperson for the tech giant sent us this statement on the matter:

Facebook complied with the order of blocking these accounts in Brazil by restricting the ability for the target Pages and Profiles to be seen from IP locations in Brazil. People from IP locations in Brazil were not capable of seeing these Pages and Profiles even if the targets had changed their IP location. This new legal order is extreme, posing a threat to freedom of expression outside of Brazil’s jurisdiction and conflicting with laws and jurisdictions worldwide. Given the threat of criminal liability to a local employee, at this point we see no other alternative than complying with the decision by blocking the accounts globally, while we appeal to the Supreme Court.

On Friday a judge ordered Facebook to pay a 1.92 million reais (~$367k) fine for non-compliance, per Reuters, which says the company had been facing further daily fines of 100,000 reais (~$19k) had it not applied a global block.

Before the fine was announced, Facebook had said it would appeal the global block order, adding that while it respects the laws of countries where it operates, “Brazilian law recognizes the limits of its jurisdiction”.

Reuters reports that the accounts in question were controlled by supporters of the Brazilian president, Jair Bolsonaro, and had been implicated in the spread of political disinformation during the country’s 2018 election with the aim of boosting support for the right wing populist.

Last month the news agency reported Facebook had suspended a network of social media accounts used to spread divisive political messages online which the company had linked to employees of Bolsonaro and two of his sons.

In a blog post at the time, Facebook’s head of security policy, Nathaniel Gleicher, wrote: “Although the people behind this activity attempted to conceal their identities and coordination, our investigation found links to individuals associated with the Social Liberal Party and some of the employees of the offices of Anderson Moraes, Alana Passos, Eduardo Bolsonaro, Flavio Bolsonaro and Jair Bolsonaro.”

In all Facebook said it removed 33 Facebook accounts, 14 Pages, 1 Group and 37 Instagram accounts that it identified as involved in the “coordinated inauthentic behavior”.

It also disclosed that around 883,000 accounts followed one or more of the offending Pages; while the Group had around 350 accounts signed up; and 918,000 people followed one or more of the Instagram accounts.

The political disops effort had spent around $1,500 on Facebook ads, paid for in Brazilian reais, per its account of the investigation.

Facebook said it had identified a network of “clusters” of “connected activity”, with those involved using duplicate and fake accounts to “evade enforcement, create fictitious personas posing as reporters, post content, and manage Pages masquerading as news outlets”.

An example of removed content that was being spread by the disops network identified by Facebook (Image credit: Facebook)

The network posted about “local news and events including domestic politics and elections, political memes, criticism of the political opposition, media organizations and journalists”; and, more recently, about the coronavirus pandemic, it added.

In May a judge in Brazil had ordered Facebook to block a number of accounts belonging to Bolsonaro supporters who had been implicated in the election meddling. But Facebook only applied the block in Brazil — hence the court order for a global block.

While the tech giant was willing to remove access to the inauthentic content locally, after it had identified a laundry list of policy contraventions, it’s taking a ‘speech’ stance over purging the fake content and associated accounts internationally — arguing such an order risks overreach that could damage freedom of expression online.

The unstated implication is that authoritarian states or less progressive regimes could seek similar orders to force platforms to apply national laws prohibiting content that’s legal and freely available elsewhere, forcing takedowns in jurisdictions beyond their own.

That said, it’s not entirely clear in this specific case why Facebook would not simply bring down its own banhammer on accounts that it has found to have so flagrantly violated its own policies on coordinated inauthentic behavior. But the company has at times treated political ‘speech’ as somehow exempt from its usual content standards — leading to operating policies that tie themselves in contradictory knots.

Its blog post further notes that some of the content posted by the Brazilian election interference operation had previously been taken down for violating its Community Standards, including hate speech.

The case doesn’t just affect Facebook. In May, Twitter was also ordered to block a number of accounts linked to the probe into political disops. It’s not clear what action Twitter is taking.

We’ve reached out to the company for comment.

How will EC plans to reboot rules for digital services impact startups?

A framework for ensuring fairness in digital marketplaces and tackling abusive behavior online is brewing in Europe, fed by a smorgasbord of issues and ideas, from online safety and the spread of disinformation, to platform accountability, data portability and the fair functioning of digital markets.

European Commission lawmakers are even turning their eye to labor rights, spurred by regional concern over unfair conditions for platform workers.

On the content side, the core question is how to balance individual freedom of expression online against threats to public discourse, safety and democracy from illegal or junk content that can be deployed cheaply, anonymously and at massive scale to pollute genuine public debate.

The age-old conviction that the cure for bad speech is more speech can stumble in the face of such scale. And while illegal or harmful content can be a money spinner for platforms, outrage-driven engagement is an economic incentive that often gets overlooked or edited out of this policy debate.

Certainly the platform giants — whose business models depend on background data-mining of internet users in order to program their content-sorting and behavioral ad-targeting (activity that, notably, remains under regulatory scrutiny in relation to EU data protection law) — prefer to frame what’s at stake as a matter of free speech, rather than bad business models.

But with EU lawmakers opening a wide-ranging consultation about the future of digital regulation, there’s a chance for broader perspectives on platform power to shape the next decades online, and much more besides.

In search of cutting-edge standards

For the past two decades, the EU’s legal framework for regulating digital services has been the e-commerce Directive — a cornerstone law that harmonizes basic principles and bakes in liability exemptions, greasing the groove of cross-border e-commerce.

In recent years, the Commission has supplemented this by applying pressure on big platforms to self-regulate certain types of content, via a voluntary Code of Conduct on illegal hate speech takedowns — and another on disinformation. However, the codes lack legal bite, and lawmakers continue to chastise platforms both for not doing enough and for not being transparent enough about what they are doing.

Reddit links UK-US trade talk leak to Russian influence campaign

Reddit has linked account activity involving the leak and amplification of sensitive UK-US trade talks on its platform during the ongoing UK election campaign to a suspected Russian political influence operation.

Or, to put it more plainly, the social network suspects that Russian operatives are behind the leak of sensitive trade data — likely with the intention of impacting the UK’s General Election campaign.

The country goes to the polls next week, on December 12.

The minority Conservative government has struggled to negotiate a brexit deal that parliament backs. The UK has been politically deadlocked since mid-2016 over how to implement the result of the referendum to leave the European Union. Another hung parliament or minority government would likely result in continued uncertainty.

In a post discussing the “Suspected campaign from Russia on Reddit”, the company writes:

We were recently made aware of a post on Reddit that included leaked documents from the UK. We investigated this account and the accounts connected to it, and today we believe this was part of a campaign that has been reported as originating from Russia.

Earlier this year Facebook discovered a Russian campaign on its platform, which was further analyzed by the Atlantic Council and dubbed “Secondary Infektion.” Suspect accounts on Reddit were recently reported to us, along with indicators from law enforcement, and we were able to confirm that they did indeed show a pattern of coordination. We were then able to use these accounts to identify additional suspect accounts that were part of the campaign on Reddit. This group provides us with important attribution for the recent posting of the leaked UK documents, as well as insights into how adversaries are adapting their tactics.

Reddit says that an account, called gregoratior, originally posted the leaked trade talks document. Later a second account, ostermaxnn, reposted it. The platform also found a “pocket of accounts” that worked together to manipulate votes on the original post in an attempt to amplify it. Though fairly fruitlessly, as it turned out; the leak gained little attention on Reddit, per the company.

As a result of the investigation Reddit says it has banned 1 subreddit and 61 accounts — under policies against vote manipulation and misuse of its platform.

The story doesn’t end there, though, because whoever was behind the trade talk leak appears to have resorted to additional tactics to draw attention to it — including emailing campaign groups and political activists directly.

This activity did bear fruit this month when the opposition Labour party got hold of the leak and made it into a major campaign issue, claiming the 451-page document shows the Conservative party, led by Boris Johnson, is plotting to sell off the country’s free-at-the-point-of-use National Health Service (NHS) to US private health insurance firms and drug companies.

Labour party leader, Jeremy Corbyn, showed a heavily redacted version of the document during a TV leaders debate earlier this month, later calling a press conference to reveal a fully unredacted version of the document — arguing it proves the NHS is in grave danger if the Conservatives are re-elected.

Johnson has denied Labour’s accusation that the NHS will be carved up as the price of a Trump trade deal. But the leaked document itself is genuine. It details preliminary meetings between UK and US trade negotiators, which took place between July 2017 and July 2019, in which the NHS is discussed, along with other issues such as food standards — although the document does not confirm what position the UK might seek to adopt in any future trade talks with the US.

The source of the heavily redacted version of the document appears to be a Freedom of Information (FOI) request by campaigning organisation, Global Justice Now — which told Vice it made an FOI request to the UK’s Department for International Trade around 18 months ago.

The group said it was subsequently emailed a fully unredacted version of the document by an unknown source which also appears to have sent the data directly to the Labour party. So while the influence operation looks to have originated on Reddit, the agents behind it seem to have resorted to more direct means of data dissemination in order for the leak to gain the required attention to become an election-influencing issue.

Experts in online influence operations had already suggested similarities between the trade talks leak and an earlier Russian operation, dubbed Secondary Infektion, which involved the leak of fake documents on multiple online platforms. Facebook identified and took down that operation in May.

In a report analysing the most recent leak, social network mapping and analysis firm Graphika says the key question is how the trade document came to be disseminated online a few weeks before the election.

“The mysterious [Reddit] user seemingly originated the leak of a diplomatic document by posting it around online, just six weeks before the UK elections. This raises the question of how the user got hold of the document in the first place,” it writes. “This is the single most pressing question that arises from this report.”

Graphika’s analysis concludes that the manner of leaking and amplifying the trade talks data “closely resembles” the known Russian information operation, Secondary Infektion.

“The similarities to Secondary Infektion are not enough to provide conclusive attribution but are too close to be simply a coincidence. They could indicate a return of the actors behind Secondary Infektion or a sophisticated attempt by unknown actors to mimic it,” it adds.

Internet-enabled Russian influence operations that feature hacking and strategically timed data dumps of confidential/sensitive information, as well as the seeding and amplification of political disinformation which is intended to polarize, confuse and/or disengage voters, have become a regular feature of Western elections in recent years.

The most high profile example of Russian election interference remains the 2016 hack of documents and emails from Hillary Clinton’s presidential campaign and Democratic National Committee — which went on to be confirmed by US investigators as an operation by the country’s GRU intelligence agency.

In 2017 emails were also leaked from French president Emmanuel Macron’s campaign shortly before the election — although with apparently minimal impact in that case. (Attribution is also less clear-cut.)

Russian activity targeting UK elections and referendums remains a matter of intense interest and investigation — and had been raised publicly as a concern by former prime minister, Theresa May, in 2017.

Her government, though, failed to act on recommendations to strengthen UK election and data laws to respond to the risks posed by Internet-enabled interference. Nor did she do anything to investigate questions over the extent of foreign interference in the 2016 brexit referendum.

May was finally unseated by the ongoing political turmoil around brexit this summer, when Johnson took over as prime minister. But he has also turned a wilfully blind eye to the risks around foreign election interference — while fully availing himself of data-fuelled digital campaign methods whose ethics have been questioned by multiple UK oversight bodies.

A report into Russian interference in UK politics which was compiled by the UK’s intelligence and security parliamentary committee — and had been due to be published ahead of the general election — was also personally blocked from publication by the prime minister.

Voters won’t now get to see that information until after the election. Or, well, barring another strategic leak…

Elizabeth Warren bites back at Zuckerberg’s leaked threat to K.O. the government

Presidential candidate Senator Elizabeth Warren has responded publicly to a leaked attack on her by Facebook CEO Mark Zuckerberg, saying she won’t be bullied out of taking big tech to task for anticompetitive practices.

Warren’s subtweeting of the Facebook founder follows a leak in which The Verge obtained two hours of audio from an internal Q&A session with Zuckerberg — publishing a series of snippets today.

In one snippet the Facebook leader can be heard opining on how Warren’s plan to break up big tech would “suck”.

“You have someone like Elizabeth Warren who thinks that the right answer is to break up the companies … if she gets elected president, then I would bet that we will have a legal challenge, and I would bet that we will win the legal challenge,” he can be heard saying. “Does that still suck for us? Yeah. I mean, I don’t want to have a major lawsuit against our own government. … But look, at the end of the day, if someone’s going to try to threaten something that existential, you go to the mat and you fight.”

Warren responded soon after publication with a pithy zinger, writing on Twitter: “What would really ‘suck’ is if we don’t fix a corrupt system that lets giant companies like Facebook engage in illegal anticompetitive practices, stomp on consumer privacy rights, and repeatedly fumble their responsibility to protect our democracy.”

In a follow up tweet she added that she would not be afraid to “hold Big Tech companies like Facebook, Google and Amazon accountable”.

The Verge claims it did not obtain the leaked audio from Facebook’s PR machine. But in a public Facebook post following its publication of the audio snippets Zuckerberg links to their article — and doesn’t exactly sound mad to have what he calls his “unfiltered” views put right out there…

Whether the audio was leaked intentionally or not, as many commentators have been quick to point out — Warren principal among them — the fact that a company has gotten so vastly powerful it feels able to threaten to fight and defeat its own government should give pause for civilized thought.

Someone high up in Facebook’s PR department might want to pull Zuckerberg aside and make a major wincing gesture right in his face.

In another of the audio snippets Zuckerberg extends the threat — arguing that breaking up tech giants would threaten the integrity of elections.

“It’s just that breaking up these companies, whether it’s Facebook or Google or Amazon, is not actually going to solve the issues,” he is heard saying. “And, you know, it doesn’t make election interference less likely. It makes it more likely because now the companies can’t coordinate and work together.”

Elections such as the one Warren hopes to be running in as a US presidential candidate… so er… again this argument is a very strange one to be making when the critics you’re railing against are calling you an overbearing, oversized democracy-denting beast.

Zuckerberg’s remarks also contain the implied threat that a failure to properly police elections, by Facebook, could result in someone like Warren not actually getting elected in the first place.

Given, y’know, the vast power Facebook wields with its content-shaping algorithms which amplify narratives and shape public opinion at cheap, factory farm scale.

Reading between the lines, then, presidential hopefuls should be really careful what they say about important technology companies — or, er, else!

How times change.

Just a few short years ago Zuckerberg was the guy telling everyone that election interference via algorithmically amplified social media fakes was “a pretty crazy idea”.

Now he’s saying only tech behemoths like Facebook can save democracy from, uh, tech behemoths like Facebook…

For more on where Zuckerberg’s self-servingly circular logic leads, let’s refer to another of his public talking points: That only Facebook’s continued use of powerful, privacy-hostile AI technologies such as facial recognition can save Western society from a Chinese-style state dystopia in which the presence of your face broadcasts a social credit score for others to determine what you get to access.

This equally uncompelling piece of ‘Zuckerlogic’ sums to: ‘Don’t regulate our privacy hostile shit — or China will get to do worse shit before we can!’

So um… yeah but no.

Facebook found hosting masses of far right EU disinformation networks

A multi-month hunt for political disinformation spreading on Facebook in Europe suggests there are concerted efforts to use the platform to spread bogus far right propaganda to millions of voters ahead of a key EU vote which kicks off tomorrow.

Following the independent investigation, Facebook has taken down a total of 77 pages and 230 accounts from Germany, UK, France, Italy, Spain and Poland — which had been followed by an estimated 32 million people and generated 67 million ‘interactions’ (i.e. comments, likes, shares) in the last three months alone.

The bogus, mainly far-right disinformation networks were not identified by Facebook itself but were reported to it by campaign group Avaaz — which says the fake pages had more Facebook followers and interactions than all the main EU far right and anti-EU parties combined.

“The results are overwhelming: the disinformation networks upon which Facebook acted had more interactions (13 million) in the past three months than the main party pages of the League, AfD, VOX, Brexit Party, Rassemblement National and PiS combined (9 million),” it writes in a new report.

“Although interactions is the figure that best illustrates the impact and reach of these networks, comparing the number of followers of the networks taken down reveals an even clearer image. The Facebook networks taken down had almost three times (5.9 million) the number of followers as AfD, VOX, Brexit Party, Rassemblement National and PiS’s main Facebook pages combined (2 million).”

Avaaz has previously found and announced far right disinformation networks operating in Spain, Italy and Poland — and a spokesman confirmed to us it’s re-reporting some of its findings now (such as the ~30 pages and groups in Spain that had racked up 1.7M followers and 7.4M interactions, which we covered last month) to highlight an overall total for the investigation.

“Our report contains new information for France, United Kingdom and Germany,” the spokesman added.

Examples of politically charged disinformation being spread via Facebook by the bogus networks it found include a fake viral video, seen by 10 million people, that supposedly shows migrants in Italy destroying a police car (it was actually from a movie — a fake Avaaz notes had been “debunked years ago”); a story in Poland claiming that migrant taxi drivers rape European women, including a fake image; and fake news about a child cancer center being closed down by Catalan separatists in Spain.

There’s lots more country-specific detail in its full report.

In all, Avaaz reported more than 500 suspicious pages and groups to Facebook related to the three-month investigation of Facebook disinformation networks in Europe. Though Facebook only took down a subset of the far right muck-spreaders — around 15% of the suspicious pages reported to it.

“The networks were either spreading disinformation or using tactics to amplify their mainly anti-immigration, anti-EU, or racist content, in a way that appears to breach Facebook’s own policies,” Avaaz writes of what it found.

It estimates that content posted by all the suspicious pages it reported had been viewed some 533 million times over the pre-election period. Albeit, there’s no way to know whether or not everything it judged suspicious actually was.

In a statement responding to Avaaz’s findings, Facebook told us:

We thank Avaaz for sharing their research for us to investigate. As we have said, we are focused on protecting the integrity of elections across the European Union and around the world. We have removed a number of fake and duplicate accounts that were violating our authenticity policies, as well as multiple Pages for name change and other violations. We also took action against some additional Pages that repeatedly posted misinformation. We will take further action if we find additional violations.

The company did not respond to our question asking why it failed to unearth this political disinformation itself.

Ahead of the EU parliament vote, which begins tomorrow, Facebook invited a select group of journalists to tour a new Dublin-based election security ‘war room’ — where it talked about a “five pillars of countering disinformation” strategy to prevent cynical attempts to manipulate voters’ views.

But as Avaaz’s investigation shows there’s plenty of political disinformation flying by entirely unchecked.

One major ongoing issue where political disinformation on Facebook’s platform is concerned is that how the company enforces its own rules remains entirely opaque.

We don’t get to see all the detail — so can’t judge and assess all its decisions. Yet Facebook has been known to shut down swathes of accounts deemed fake ahead of elections, while apparently failing entirely to find other fakes (such as in this case).

It’s a situation that does not look compatible with the continued functioning of democracy given Facebook’s massive reach and power to influence.

Nor is the company under an obligation to report every fake account it confirms. Instead, Facebook gets to control the timing and flow of any official announcements it chooses to make about “coordinated inauthentic behaviour” — dropping these self-selected disclosures as and when it sees fit, and making them sound as routine as possible by cloaking them in its standard, dryly worded newspeak.

Back in January, Facebook COO Sheryl Sandberg admitted publicly that the company is blocking more than 1M fake accounts every day. If Facebook was reporting every fake it finds it would therefore need to do so via a real-time dashboard — not sporadic newsroom blog posts that inherently play down the scale of what is clearly embedded into its platform, and may be so massive and ongoing that it’s not really possible to know where Facebook stops and ‘Fakebook’ starts.

The suspicious behaviors Avaaz attached to the pages and groups it found, which appeared to be in breach of Facebook’s stated rules, include the use of fake accounts, spamming, misleading page name changes and suspected coordinated inauthentic behavior.

When Avaaz previously reported the Spanish far right networks, Facebook subsequently told us it had removed “a number” of pages violating its “authenticity policies”, including one page for name change violations, but claimed “we aren’t removing accounts or Pages for coordinated inauthentic behavior”.

So again, it’s worth emphasizing that Facebook gets to define what is and isn’t acceptable on its platform — including creating terms that seek to normalize its own inherently dysfunctional ‘rules’ and their ‘enforcement’.

Such as by creating terms like “coordinated inauthentic behavior”, which sets a threshold of Facebook’s own choosing for what it will and won’t judge to be political disinformation. It’s inherently self-serving.

Given that Facebook only acted on a small proportion of what Avaaz found and reported overall, we might posit that the company is setting a very high bar for acting against suspicious activity. And that plenty of election fiddling is free flowing under its feeble radar. (When we previously asked Facebook whether it was disputing Avaaz’s finding of coordinated inauthentic behaviour vis-a-vis the far right disinformation networks it reported in Spain the company did not respond to the question.)

Much of the publicity around Facebook’s self-styled “election security” efforts has also focused on how it’s enforcing new disclosure rules around political ads. But again political disinformation masquerading as organic content continues being spread across its platform — where it’s being shown to be racking up millions of interactions with people’s brains and eyeballs.

Plus, as we reported yesterday, research conducted by the Oxford Internet Institute into pre-EU election content sharing on Facebook has found that sources of disinformation-spreading ‘junk news’ generate far greater engagement on its platform than professional journalism.

So while Facebook’s platform is also clearly full of real people sharing actual news and views, the fake BS that Avaaz’s findings imply is flooding the platform gets spread around more on a per-unit basis. And it’s democracy that suffers — because vote manipulators are able to pass off manipulative propaganda and hate speech as bona fide news and views, as a consequence of Facebook publishing the fake stuff alongside genuine opinions and professional journalism.

It does not have algorithms that can perfectly distinguish one from the other, and has suggested it never will.

The bottom line is that even if Facebook dedicates far more resource (human and AI) to rooting out ‘election interference’ the wider problem is that a commercial entity which benefits from engagement on an ad-funded platform is also the referee setting the rules.

Indeed, the whole loud Facebook publicity effort around “election security” looks like a cynical attempt to distract the rest of us from how broken its rules are. Or, in other words, a platform that accelerates propaganda is also seeking to manipulate and skew our views.

When it comes to elections, Facebook moves slow, may still break things

This week, Facebook invited a small group of journalists — which didn’t include TechCrunch — to look at the “war room” it has set up in Dublin, Ireland, to help monitor its products for election-related content that violates its policies. (“Time and space constraints” limited the numbers, a spokesperson told us when we asked why we weren’t invited.)

Facebook announced it would be setting up this Dublin hub — which will bring together data scientists, researchers, legal and community team members, and others in the organization to tackle issues like fake news, hate speech and voter suppression — back in January. The company has said it has nearly 40 teams working on elections across its family of apps, without breaking out the number of staff it has dedicated to countering political disinformation. 

We have been told that there would be “no news items” during the closed tour — which, despite that, is “under embargo” until Sunday — beyond what Facebook and its executives discussed last Friday in a press conference about its European election preparations.

The tour looks to be a direct copy-paste of the one Facebook held to show off its US election “war room” last year, which it did invite us on. (In that case it was forced to claim it had not disbanded the room soon after heavily PR’ing its existence — saying the monitoring hub would be used again for future elections.)

We understand — via a non-Facebook source — that several broadcast journalists were among the invitees to its Dublin “war room”. So expect to see a few gauzy inside views at the end of the weekend, as Facebook’s PR machine spins up a gear ahead of the vote to elect the next European Parliament later this month.

It’s clearly hoping shots of serious-looking Facebook employees crowded around banks of monitors will play well on camera and help influence public opinion that it’s delivering an even social media playing field for the EU parliament election. The European Commission is also keeping a close watch on how platforms handle political disinformation before a key vote.

But with the pan-EU elections set to start May 23, and a general election already held in Spain last month, we believe the lack of new developments to secure EU elections is very much to the company’s discredit.

The EU parliament elections are now a mere three weeks away, and there are a lot of unresolved questions and issues Facebook has yet to address. Yet we’re told the attending journalists were once again not allowed to put any questions to the fresh-faced Facebook employees staffing the “war room”.

Ahead of the looming batch of Sunday evening ‘war room tour’ news reports, which Facebook will be hoping contain its “five pillars of countering disinformation” talking points, we’ve compiled a run down of some key concerns and complications flowing from the company’s still highly centralized oversight of political campaigning on its platform — even as it seeks to gloss over how much dubious stuff keeps falling through the cracks.

Worthwhile counterpoints to another highly managed Facebook “election security” PR tour.

No overview of political ads in most EU markets

Since political disinformation created an existential nightmare for Facebook’s ad business with the revelations of Kremlin-backed propaganda targeting the 2016 US presidential election, the company has vowed to deliver transparency — via the launch of a searchable political ad archive for ads running across its products.

The Facebook Ad Library now shines a narrow beam of light into the murky world of political advertising. Before this, each Facebook user could only see the propaganda targeted specifically at them. Now, such ads stick around in its searchable repository for seven years. This is a major step up from total obscurity. (Obscurity that Facebook isn’t wholly keen to lift the lid on, we should add; its political data releases to researchers so far haven’t gone back before 2017.)

However, in its current form, in the vast majority of markets, the Ad Library makes the user do all the leg work — running searches manually to try to understand and quantify how Facebook’s platform is being used to spread political messages intended to influence voters.

Facebook does also offer an Ad Library Report — a downloadable weekly summary of ads viewed and highest spending advertisers. But it only offers this in four countries globally right now: the US, India, Israel and the UK.

It has said it intends to ship an update to the reports in mid-May. But it’s not clear whether that will make them available in every EU country. (Mid-May would also be pretty late for elections that start May 23.)

So while the UK report makes clear that the new ‘Brexit Party’ is now a leading spender ahead of the EU election, what about the other 27 members of the bloc? Don’t they deserve an overview too?

A spokesperson we talked to about this week’s closed briefing said Facebook had no updates on expanding Ad Library Reports to more countries, in Europe or otherwise.

So, as it stands, the vast majority of EU citizens are missing out on meaningful reports that could help them understand which political advertisers are trying to reach them and how much they’re spending.

Which brings us to…

Facebook’s Ad Archive API is far too limited

In another positive step Facebook has launched an API for the ad archive that developers and researchers can use to query the data. However, as we reported earlier this week, many respected researchers have voiced disappointment with what it’s offering so far — saying the rate-limited API is not nearly open or accessible enough to get a complete picture of all ads running on its platform.

Following this criticism, Facebook’s director of product, Rob Leathern, tweeted a response, saying the API would improve. “With a new undertaking, we’re committed to feedback & want to improve in a privacy-safe way,” he wrote.

The question is when will researchers have a fit-for-purpose tool to understand how political propaganda is flowing over Facebook’s platform? Apparently not in time for the EU elections, either: We asked about this on Thursday and were pointed to Leathern’s tweets as the only update.

This issue is compounded by Facebook also restricting the ability of political transparency campaigners — such as the UK group WhoTargetsMe and US investigative journalism site ProPublica — to monitor ads via browser plug-ins, as the Guardian reported in January.

The net effect is that Facebook is making life hard for civil society groups and public interest researchers to study the flow of political messaging on its platform to try to quantify democratic impacts, and offering only a highly managed level of access to ad data that falls far short of the “political ads transparency” Facebook’s PR has been loudly trumpeting since 2017.
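For context, here is roughly what that programmatic access looks like. The sketch below uses the Graph API’s `ads_archive` endpoint as Facebook documented it around the API’s launch; the version number, available fields and rate limits may have changed since, and `YOUR_ACCESS_TOKEN` is a placeholder for a token issued only after Facebook’s identity-verification process:

```python
# Rough sketch of querying Facebook's Ad Library API. Endpoint and parameter
# names follow the Graph API `ads_archive` edge as documented at launch;
# treat specifics (version, fields) as illustrative rather than current.
import requests  # pip install requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder: requires verified access

resp = requests.get(
    "https://graph.facebook.com/v3.3/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['GB']",  # country filter per query
        "search_terms": "election",
        "fields": "page_name,ad_creative_body,spend,impressions",
        "limit": 100,  # results are paginated and rate-limited
    },
)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))
```

Every query is capped and paginated, which is the crux of the researchers’ complaint: assembling “a complete picture of all ads” means crawling page by page against that rate limit.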

Ad loopholes remain ripe for exploiting

Facebook’s Ad Library includes data on political ads that were active on its platform but subsequently got pulled (made “inactive” in its parlance) because they broke its disclosure rules.

There are multiple examples of inactive ads for the Spanish far right party Vox visible in Facebook’s Ad Library that were pulled for running without the required disclaimer label.

“After the ad started running, we determined that the ad was related to politics and issues of national importance and required the label. The ad was taken down,” runs the standard explainer Facebook offers if you click on the little ‘i’ next to an observation that “this ad ran without a disclaimer”.

What is not at all clear is how quickly Facebook acted to remove rule-breaking political ads.

It is possible to click on each individual ad to get some additional details. Here Facebook provides a per-ad breakdown of impressions; the genders, ages and regional locations of the people who saw the ad; and how much was spent on it.

But all those clicks don’t scale. So it’s not possible to get an overview of how effectively Facebook is handling political ad rule breakers. Unless, well, you literally go in clicking and counting on each and every ad…
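If the API were fully fit for purpose, that overview would be a simple aggregation job. Here’s a hedged sketch of what tallying the reach of pulled ads could look like, assuming the documented ad_active_status filter and range-style impressions field behave as described (the page ID and access token are hypothetical placeholders):

```python
import requests

ACCESS_TOKEN = "YOUR_VERIFIED_ACCESS_TOKEN"  # placeholder
ADVERTISER_PAGE_ID = "123456789"             # hypothetical page ID

# Tally a conservative floor on impressions served by a page's
# now-inactive ads, i.e. how much reach rule-breaking ads achieved
# before Facebook took them down.
url = "https://graph.facebook.com/v3.2/ads_archive"
params = {
    "search_page_ids": ADVERTISER_PAGE_ID,
    "ad_active_status": "INACTIVE",
    "ad_reached_countries": "['ES']",
    "fields": "impressions,ad_delivery_start_time,ad_delivery_stop_time",
    "access_token": ACCESS_TOKEN,
}

total_floor = 0
while url:
    data = requests.get(url, params=params).json()
    for ad in data.get("data", []):
        # Impressions are a range, e.g. {"lower_bound": "1000",
        # "upper_bound": "5000"}; count the lower bound only.
        total_floor += int(ad["impressions"]["lower_bound"])
    url = data.get("paging", {}).get("next")  # follow pagination, if any
    params = {}  # the 'next' URL already embeds the query parameters

print(f"Minimum impressions served by now-inactive ads: {total_floor}")
```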

There is then also the wider question of whether a political advertiser that is found to be systematically breaking Facebook rules should be allowed to keep running ads on its platform.

Because if Facebook does allow that to happen there’s a pretty obvious (and massive) workaround for its disclosure rules: Bad faith political advertisers could simply keep submitting fresh ads after the last batch got taken down.

We were, for instance, able to find inactive Vox ads taken down for lacking a disclaimer that had nonetheless racked up thousands, and even tens of thousands, of impressions while they were active.

Facebook needs to be much clearer about how it handles systematic rule breakers.

Definition of political issue ads is still opaque

Facebook currently requires that all political advertisers in the EU go through its authorization process in the country where their ads are being delivered, if the ads relate to the European Parliamentary elections, as a step to try to prevent foreign interference.

This means it asks political advertisers to submit documents and runs technical checks to confirm their identity and location. Though it noted, on last week’s call, that it cannot guarantee this ID system cannot be circumvented. (As it was last year when UK journalists were able to successfully place ads paid for by ‘Cambridge Analytica’.)

One other big potential workaround is definitional: what counts as a political ad? And what counts as an issue ad?

Facebook says these types of ads on Facebook and Instagram in the EU “must now be clearly labeled, including a paid-for-by disclosure from the advertiser at the top of the ad” — so users can see who is paying for the ads and, if there’s a business or organization behind it, their contact details, plus some disclosure about who, if anyone, saw the ads.

But the big question is how Facebook is defining political and issue ads across Europe.

While political ads might seem fairly easy to categorize (assuming they’re attached to registered political parties and candidates), issue ads are a whole lot more subjective.

Currently Facebook defines issue ads as those relating to “any national legislative issue of public importance in any place where the ad is being run.” It says it worked with Eurobarometer, YouGov and other third parties to develop an initial list of key issues (examples for Europe include immigration, civil and social rights, political values, security and foreign policy, the economy and environmental politics) which it says it will “refine… over time.”

Again, specifics on when and how that list will be refined are not clear. Yet ads that Facebook does not deem political or issue ads will slip right under its radar: they won’t be included in the Ad Library and they won’t be searchable, but they will be able to influence Facebook users under the perfect cover of its commercial ad platform, as before.

So if any maliciously minded propaganda slips through Facebook’s net, because the company decides it’s a non-political issue, it will once again leave no auditable trace.

In recent years the company has also had a habit of announcing major takedowns of what it badges “fake accounts” ahead of major votes. But again voters have to take it on trust that Facebook is getting those judgement calls right.

Facebook continues to bar pan-EU campaigns

On the flip side of weeding out non-transparent political propaganda and/or political disinformation, Facebook is currently blocking the free flow of legal pan-EU political campaigning on its platform.

This issue first came to light several weeks ago, when it emerged that European officials had written to Nick Clegg (Facebook’s vice president of global affairs) to point out that its current rules, which require those campaigning via Facebook ads to have a registered office in the country where the ad is running, run counter to the pan-European nature of this particular election.

It means EU institutions are in the strange position of not being able to run Facebook ads for their own pan-EU election everywhere across the region. “This runs counter to the nature of EU institutions. By definition, our constituency is multinational and our target audience are in all EU countries and beyond,” the EU’s most senior civil servants pointed out in a letter to the company last month.

This issue impacts not just EU institutions and organizations advocating for particular policies and candidates across EU borders, but even NGOs wanting to run vanilla “get out the vote” campaigns Europe-wide, leading a number of them to accuse Facebook of breaching their electoral rights and freedoms.

Facebook claimed last week that the ball is effectively in the regulators’ court on this issue — saying it’s open to making the changes but has to get their agreement to do so. A spokesperson confirmed to us that there is no update to that situation, either.

Of course the company may be trying to err on the side of caution, to prevent bad actors being able to interfere with the vote across Europe. But at what cost to democratic freedoms?

What about fake news spreading on WhatsApp?

Facebook’s ‘election security’ initiatives have focused on political and/or politically charged ads running across its products. But there’s no shortage of political disinformation flowing unchecked across its platforms as user-uploaded ‘content’.

On the Facebook-owned messaging app WhatsApp, which is hugely popular in some European markets, the presence of end-to-end encryption further complicates this issue by providing a cloak for the spread of political propaganda that’s not being regulated by Facebook.

In a recent study of political messages spread via WhatsApp ahead of last month’s general election in Spain, the campaign group Avaaz dubbed the app “social media’s dark web”, claiming it had been “flooded with lies and hate”.

“Posts range from fake news about Prime Minister Pedro Sánchez signing a secret deal for Catalan independence to conspiracy theories about migrants receiving big cash payouts, propaganda against gay people and an endless flood of hateful, sexist, racist memes and outright lies,” it wrote.

Avaaz compiled this snapshot of politically charged messages and memes being shared on Spanish WhatsApp by co-opting 5,833 local members to forward election-related content that they deemed false, misleading or hateful.

It says it received a total of 2,461 submissions: just a tiny fraction of the material being shared across WhatsApp groups and chats, which makes the app the elephant in Facebook’s election ‘war room’.

What exactly is a war room anyway?

Facebook has said its Dublin Elections Operation Center — to give it its official title — is “focused on the EU elections”, while also suggesting it will plug into a network of global teams “to better coordinate in real time across regions and with our headquarters in California [and] accelerate our rapid response times to fight bad actors and bad content”.

But we’re concerned Facebook is sending out mixed — and potentially misleading — messages about how its election-focused resources are being allocated.

Our (non-Facebook) source told us the 40-odd staffers in the Dublin hub during the press tour were simultaneously looking at the Indian elections. If that’s the case, it does not sound entirely “focused” on either the EU or India’s elections. 

Facebook’s eponymous platform has 2.375 billion monthly active users globally, some 384 million of them in Europe. That’s more users than it has in the US (243M MAUs), though Europe is only its second-biggest market in revenue terms: last quarter the region pulled in $3.65BN in sales for Facebook (versus $7.3BN for the US) out of $15BN overall.
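A quick back-of-the-envelope sketch, using only the figures above, puts the per-user revenue gap in perspective:

```python
# Back-of-the-envelope per-user revenue from the quarterly figures above.
europe_revenue, europe_maus = 3.65e9, 384e6
us_revenue, us_maus = 7.3e9, 243e6

print(f"Europe: ${europe_revenue / europe_maus:.2f} per user")  # ~$9.51
print(f"US:     ${us_revenue / us_maus:.2f} per user")          # ~$30.04
```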

Beyond any moral or legal pressure on Facebook to run a more responsible platform where democratic processes are concerned, these numbers underscore the business imperative it has to sort this out in Europe.

Having a “war room” may sound like a start, but unfortunately Facebook is presenting it as an end in itself. And its foot-dragging on all of the bigger issues that need tackling, in effect, means the war will continue to drag on.