UK gov’t asleep at the wheel on Russia cyber ops threat, report warns

The UK lacks a comprehensive and cohesive high-level strategy to respond to the cyber threat posed by Russia and other hostile states using online disinformation and influence ops to target democratic institutions and values, a parliamentary committee has warned in a long-delayed report that’s finally been published today.

“The UK is clearly a target for Russia’s disinformation campaigns and political influence operations and must therefore equip itself to counter such efforts,” the committee warns, calling for legislation to tackle the multi-pronged threat posed by hostile foreign influence operations in the digital era.

The report also urges the government to do the leg work of attributing state-backed cyber attacks — recommending a tactic of ‘naming and shaming’ perpetrators, while recognizing that UK agencies have, since the WannaCry attack, been more willing to publicly attribute a cyber attack to a state actor like Russia than they were in decades past. (Last week the government did just that in relation to COVID-19 vaccine R&D efforts — calling out Russia for targeting the work with custom malware, as UK ministers sought to get out ahead of the committee’s recommendations.)

“Russia’s cyber capability, when combined with its willingness to deploy it in a malicious capacity, is a matter of grave concern, and poses an immediate and urgent threat to our national security,” the committee warns.

On the threat posed to democracy by state-backed online disinformation and influence campaigns, the committee also points a finger of blame at social media giants for “failing to play their part”.

“It is the social media companies which hold the key and yet are failing to play their part,” the committee writes, urging the government to establish “a protocol” with platform giants to ensure they “take covert hostile state use of their platforms seriously, and have clear timescales within which they commit to removing such material”.

“Government should ‘name and shame’ those which fail to act,” the committee adds, suggesting such a protocol could be “usefully expanded” to other areas where the government is seeking action from platforms giants.

Russia report

The Intelligence and Security Committee (ISC) prepared the dossier for publication last year, after conducting a lengthy inquiry into Russian state influence in the UK — including examining how money from Russian oligarchs flows into the country, and especially into London, via wealthy expats and their establishment links; as well as looking at Russia’s use of hostile cyber operations to attempt to influence UK elections.

UK prime minister Boris Johnson blocked publication ahead of last year’s general election — meaning it’s taken a full nine months for the report to make it into the public domain, despite the then committee chair urging publication ahead of polling day. The UK’s next election, meanwhile, is not likely for almost half a decade. (Related: Johnson was able to capitalize on unregulated social media ads during his own election campaign last year, so, er… )

The DCMS committee, which was one of the bodies that submitted evidence to the ISC’s inquiry, has similarly been warning for years about the threats posed to democracy by online disinformation and political targeting — as have the national data watchdog and others. Yet successive Conservative-led governments have failed to act on urgent recommendations in this area.

Last year ministers set out a proposal to regulate a broad swathe of ‘online harms’, although the focus is not specifically on political disinformation — and draft legislation still hasn’t been laid before parliament.

“The clearest requirement for immediate action is for new legislation,” the ISC committee writes of the threat posed by Russia. “The Intelligence Community must be given the tools it needs and be put in the best possible position if it is to tackle this very capable adversary, and this means a new statutory framework to tackle espionage, the illicit financial dealings of the Russian elite and the ‘enablers’ who support this activity.”

The report labels foreign disinformation operations and online influence campaigns something of a “hot potato” that no UK agency wants to handle. A key gap the report highlights is the lack of ministerial responsibility for combating the democratic threat posed by hostile foreign states leveraging connectivity to spread propaganda or deploy malware.

“Protecting our democratic discourse and processes from hostile foreign interference is a central responsibility of Government, and should be a ministerial priority,” the committee writes, flagging both the lack of central, ministerial responsibility and a reluctance by the UK’s intelligence and security agencies to involve themselves in actively defending democratic processes.

“Whilst we understand the nervousness around any suggestion that the intelligence and security Agencies might be involved in democratic processes – certainly a fear that is writ large in other countries – that cannot apply when it comes to the protection of those processes. And without seeking in any way to imply that DCMS [the Department for Digital, Culture, Media and Sport] is not capable, or that the Electoral Commission is not a staunch defender of democracy, it is a question of scale and access. DCMS is a small Whitehall policy department and the Electoral Commission is an arm’s length body; neither is in the central position required to tackle a major hostile state threat to our democracy.”

Last July the government did announce what it called its Defending Democracy programme, which — per the ISC committee report — is intended to “co-ordinate work on protecting democratic discourse and processes from interference under the leadership of the Cabinet Office, with the Chancellor of the Duchy of Lancaster and the Deputy National Security Adviser holding overall responsibility at ministerial and official level respectively”.

However the committee points out this structure is “still rather fragmented”, noting that at least ten separate teams are involved across government.

It also questions the level of priority being attached to the issue, writing that: “It seems to have been afforded a rather low priority: it was signed off by the National Security Council only in February 2019, almost three years after the EU referendum campaign and the US presidential election which brought these issues to the fore.”

“In the Committee’s view, a foreign power seeking to interfere in our democratic processes – whether it is successful or not – cannot be taken lightly; our democracy is intrinsic to our country’s success and well-being and any threat to it must be treated as a serious national security issue by those tasked with defending us,” it adds.

The lack of an overarching ministerial body invested with central responsibility to tackle online threats to democracy goes a long way to explaining the damp squib of a response around breaches of UK election law which relate to the Brexit vote — when social media platforms were used to funnel in dark money to fund digital ads aimed at influencing the outcome of what should have been a UK-only vote.

(A redacted footnote in the report touches on the £8M donation by Arron Banks to the Leave.EU campaign — “the biggest donor in British political history”; noting how the Electoral Commission, which had been investigating the source of the donation, referred the case to the National Crime Agency — “which investigated it ***” [redacting any committee commentary on what was or was not found by the NCA]; before adding: “In September 2019, the National Crime Agency announced that it had concluded the investigation, having found no evidence that any criminal offences had been committed under the Political Parties, Elections and Referendums Act 2000 or company law by any of the individuals or organisations referred to it by the Electoral Commission.”)

“The regulation of political advertising falls outside this Committee’s remit,” the ISC report adds, under a brief section on ‘Political advertising on social media’. “We agree, however, with the DCMS Select Committee’s conclusion that the regulatory framework needs urgent review if it is to be fit for purpose in the age of widespread social media.

“In particular, we note and affirm the Select Committee’s recommendation that all online political adverts should include an imprint stating who is paying for it. We would add to that a requirement for social media companies to co-operate with MI5 where it is suspected that a hostile foreign state may be covertly running a campaign.”

On Brexit itself, and the heavily polarizing question of how much influence Russia was able to exert over the UK’s vote to leave the European Union, the committee suggests this would be “difficult” or even “impossible” to assess. But it emphasizes: “it is important to establish whether a hostile state took deliberate action with the aim of influencing a UK democratic process, irrespective of whether it was successful or not.”

The report then goes on to query the lack of evidence of an attempt by the UK government or security agencies to do just that.

In one interesting — and heavily redacted — paragraph, the committee notes it sought to ascertain whether UK intelligence agencies hold “secret intelligence” that might support or supplement open source studies that have pointed to attempts by Russia to influence the Brexit vote — but was sent only a very brief response.

Here the committee writes:

In response to our request for written evidence at the outset of the Inquiry, MI5 initially provided just six lines of text. It stated that ***, before referring to academic studies. This was noteworthy in terms of the way it was couched (***) and the reference to open source studies ***. The brevity was also, to us, again, indicative of the extreme caution amongst the intelligence and security Agencies at the thought that they might have any role in relation to the UK’s democratic processes, and particularly one as contentious as the EU referendum. We repeat that this attitude is illogical; this is about the protection of the process and mechanism from hostile state interference, which should fall to our intelligence and security Agencies.

The report also records a gap in the government’s response on this issue — with the committee being told of no active attempt by government to understand whether or not UK elections have been targeted by Russia.

“The written evidence provided to us appeared to suggest that HMG had not seen or sought evidence of successful interference in UK democratic processes or any activity that has had a material impact on an election, for example influencing results,” it writes.

A later redacted paragraph indicates an assessment by the committee that the government failed to fully take into account open source material which had indicated attempts to influence Brexit (such as the studies of attempts to influence the referendum using Russian state mouthpieces RT and Sputnik; or via social media campaigns).

“Given that the Committee has previously been informed that open source material is now fully represented in the Government’s understanding of the threat picture, it was surprising to us that in this instance it was not,” the committee adds.

The committee also raises an eyebrow at the lack of any post-referendum analysis of Russian attempts to influence the vote by UK intelligence agencies — which it describes as in “stark contrast” to the US agency response following the revelations of Russian disinformation ops targeting the 2016 US presidential election.

“Whilst the issues at stake in the EU referendum campaign are less clear-cut, it is nonetheless the Committee’s view that the UK Intelligence Community should produce an analogous assessment of potential Russian interference in the EU referendum and that an unclassified summary of it be published,” it suggests.

In other recommendations related to Russia’s “offensive cyber” capabilities, the committee reiterates that there’s a need for “a common international approach” to tackling the threat.

“It is clear there is now a pressing requirement for the introduction of a doctrine, or set of protocols, to ensure that there is a common approach to Offensive Cyber. While the UN has agreed that international law, and in particular the UN Charter, applies in cyberspace, there is still a need for a greater global understanding of how this should work in practice,” it writes, noting that it made the same recommendation in its 2016-17 annual report.

“It is imperative that there are now tangible developments in this area in light of the increasing threat from Russia (and others, including China, Iran and the Democratic People’s Republic of Korea). Achieving a consensus on this common approach will be a challenging process, but as a leading proponent of the Rules Based International Order it is essential that the UK helps to promote and shape Rules of Engagement, working with our allies.”

The security-cleared committee notes that the public report is a redacted summary of a more detailed dossier it felt unable to publish, on account of classified information and the risk that Russia could use it to glean too much about the extent of UK intelligence on its activities. Hence it opted for a more truncated (and redacted) document than it would usually publish — which again raises questions over why Johnson repeatedly sought to delay publication.

Plenty of sections of the report contain a string of asterisks at a crucial point, eliding strategic specifics (e.g. this paragraph on exactly how Russia is targeting critical UK infrastructure: “Russia has also undertaken cyber pre-positioning activity on other nations’ Critical National Infrastructure (CNI). The National Cyber Security Centre (NCSC) has advised that there is *** Russian cyber intrusion into the UK’s CNI – particularly marked in the *** sectors.”)

Most recently Number 10 sought to influence the election of the ISC committee chair by seeking to parachute a preferred candidate into the seat — which could have further delayed publication of the report. However the attempt at stacking the committee was thwarted when new chair, Conservative MP Julian Lewis, sided with opposition MPs to vote for himself. After which the newly elected committee voted unanimously to release the Russia report before the summer recess of parliament, avoiding another multi-month delay.

Another major chunk of the report — which tackles the topic of Russian expatriate oligarchs and their money; how they’ve been welcomed into UK society with “open arms”, enabling their illicit finance to be recycled through “the London ‘laundromat’” and to find its way inexorably into political party coffers — may explain the government’s reluctance for the report to be made public.

The committee’s commentary here makes particularly awkward reading for a political party with major Russian donors. And a prime minister with Russian oligarch friends.

“It is widely recognised that the key to London’s appeal was the exploitation of the UK’s investor visa scheme, introduced in 1994, followed by the promotion of a light and limited touch to regulation, with London’s strong capital and housing markets offering sound investment opportunities,” the committee writes, further noting that Russian money was also invested in “extending patronage and building influence across a wide sphere of the British establishment – PR firms, charities, political interests, academia and cultural institutions were all willing beneficiaries of Russian money, contributing to a ‘reputation laundering’ process”.

“In brief, Russian influence in the UK is ‘the new normal’, and there are a lot of Russians with very close links to Putin who are well integrated into the UK business and social scene, and accepted because of their wealth,” it adds.

You can read the full report here.

Brexit means clear your cookies for democracy

Brexit looks set to further sink the already battered reputation of tracking cookies after a BuzzFeed report yesterday revealed what appears to be a plan by the UK’s minority government to use official government websites to harvest personal data on UK citizens for targeting purposes.

According to leaked government documents obtained by the news site, the prime minister has instructed government departments to share website usage data that’s collected via gov.uk websites with ministers on a cabinet committee tasked with preparing for a ‘no deal’ Brexit.

It’s not clear how linking up citizens’ use of essential government portals could further ‘no deal’ prep.

Rather the suspicion is it’s a massive, consent-less voter data grab by party political forces preparing for an inevitable general election in which the current Tory PM plans to campaign on a pro-Brexit message.

The instruction to pool gov.uk usage data as a “top priority” is also being justified internally, in instructions to civil servants, as necessary to accelerate plans for a digital revolution in public services — an odd thing to rush at a time of national, Brexit-induced crisis when there are plenty more pressing priorities (given the October 31 EU exit date looming).

A government spokesperson nonetheless told Buzzfeed the data is being collected to improve service delivery. They also claimed it’s “anonymized” data.

“Individual government departments currently collect anonymised user data when people use gov.uk. The Government Digital Service is working on a project to bring this anonymous data together to make sure people can access all the services they need as easily as possible,” the spokesperson said, further claiming: “No personal data is collected at any point during the process, and all activity is fully compliant with our legal and ethical obligations.”

However privacy experts quickly pointed out the nonsense of trying to pretend that joined-up user data given a shared identifier is in any way anonymous.
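To see the experts’ point, here’s a minimal sketch (with entirely hypothetical data and service names — nothing here reflects how gov.uk actually stores records) of why a persistent “anonymous” identifier shared across services is really a pseudonym: any party holding data from several services can join the records into a single behavioural profile.

```python
# Hypothetical per-service records, each keyed by the same shared
# "anonymous" visitor ID. Individually each dataset looks innocuous;
# joined on the common key, they form one rich profile.
tax_service = {"visitor-7f3a": {"pages": ["self-assessment", "tax-credits"]}}
benefits_service = {"visitor-7f3a": {"pages": ["universal-credit"]}}
licensing_service = {"visitor-7f3a": {"postcode_area": "SW1"}}

def join_profiles(*services):
    """Merge per-service records that share a visitor ID into one profile."""
    profiles = {}
    for service in services:
        for visitor_id, record in service.items():
            merged = profiles.setdefault(visitor_id, {"pages": []})
            merged["pages"].extend(record.get("pages", []))
            for key, value in record.items():
                if key != "pages":
                    merged[key] = value
    return profiles

profiles = join_profiles(tax_service, benefits_service, licensing_service)
# The single shared ID now links browsing activity across every service,
# plus a location hint -- one real-world identity leak anywhere in the
# chain de-anonymizes the whole profile.
```

The join itself is trivial; the privacy harm comes entirely from reusing one identifier across contexts, which is why “no personal data is collected” claims ring hollow once datasets are pooled.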


For those struggling to keep up with the blistering pace of UK political developments engendered by Brexit, this is a government led by a new (and unelected) prime minister, Boris ‘Brexit: Do or Die’ Johnson, and his special advisor, digital guru Dominic Cummings, of election law-breaking Vote Leave campaign fame.

Back in 2015 and 2016, Cummings, then the director of the official Vote Leave campaign, masterminded a plan to win the EU referendum by using social media data to profile voters — blitzing them with millions of targeted ads in the final days of the Brexit campaign.

Vote Leave was later found to have channelled money to Cambridge Analytica-linked Canadian data firm Aggregate IQ to target pro-Brexit ads via Facebook’s platform — many of which, when Facebook finally handed over the ad data, were revealed to have used blatantly xenophobic, anti-EU messaging.

Setting aside the use of xenophobic dark ads to whip up racist sentiment to sell Brexit to voters, and ongoing questions about exactly how Vote Leave acquired data on UK voters for targeting them with political ads (including ethical questions about the use of a football quiz touting a £50M prize run on social media as a mass voter data-harvesting exercise), last year the UK’s Electoral Commission found Vote Leave had breached campaign spending limits through undeclared joint working with another pro-Brexit campaign — via which almost half a million pounds was illegally channeled into Facebook ads.

The Vote Leave campaign was fined £61k by the Electoral Commission, and referred to the police. (An investigation is possibly ongoing.)

Cummings, the ‘huge brain’ behind Vote Leave’s digital strategy, did not suffer a dent in his career as a consequence of all this — on the contrary, he was appointed by Johnson as senior advisor this summer, after Johnson won the Conservative leader contest and so became the third UK PM since the 2016 vote for Brexit.

With Cummings at his side, it’s been full steam ahead for Johnson on social media ads and data grabs, as we reported last month — paving the way for a hoped for general election campaign, fuelled by ‘no holds barred’ data science. Democratic ethics? Not in this digitally disruptive administration!

The Johnson-Cummings pact ignores entirely the loud misgivings sounded by the UK’s information commissioner — which a year ago warned that political microtargeting risks undermining trust in democracy. The ICO called then for an ethical pause. Instead Johnson stuck up a proverbial finger by installing Cummings in No.10.

The UK’s Digital, Culture, Media and Sport parliamentary committee, which tried and failed to get Cummings to testify before it last year as part of a wide-ranging inquiry into online disinformation (a snub for which Cummings was later found in contempt of parliament), also urged the government to update election law as a priority last summer — saying it was essential to act to defend democracy against data-fuelled misinformation and disinformation. A call that was met with cold water.

This means the same old laws that failed to prevent ethically dubious voter data-harvesting during the EU referendum campaign, and failed to prevent social media ad platforms and online payment platforms (hi, PayPal!) from being the conduit for illegal foreign donations into UK campaigns, are now apparently incapable of responding to another voter data heist trick, this time cooked up at the heart of government on the umbrella pretext of ‘preparing for Brexit’.

The repurposing of government departments under Johnson-Cummings for pro-Brexit propaganda messaging also looks decidedly whiffy…

Asked about the legality of the gov.uk data pooling plan as reported by BuzzFeed, an ICO spokesperson told us: “People should be able to make informed choices about the way their data is used. That’s why organisations have to ensure that they process personal information fairly, legally and transparently. When that doesn’t happen, the ICO can take action.”

Can — but hasn’t yet.

It’s also not clear what action the ICO could end up taking to purge UK voter data that’s already been (or is in the process of being) sucked out of the Internet to be repurposed for party political purposes — including, judging by the Vote Leave playbook, for microtargeted ads that promote a no holds barred ‘no deal’ Brexit agenda.

One thing is clear: Any action would need to be swiftly enacted and robustly enforced if it were to have a meaningful chance of defending democracy from ethics-free data-targeting.

Sadly, the ICO has yet to show an appetite for swift and robust action where political parties are concerned.

Likely because a report it put out last fall essentially called out all UK political parties for misusing people’s data. It followed up saying it would audit the political parties starting early this year — but has yet to publish its findings.

Concerned opposition MPs are left tweeting into the regulatory abyss — decrying the ‘coup’ and forlornly pressing for action… Though if the political boot were on the other foot it might well be a different story.

Among the cookies used on gov.uk sites are Google Analytics cookies which store information on how visitors got to the site; the pages visited and length of time spent on them; and items clicked on. Which could certainly enable rich profiles to be attached to single visitor IDs.

Visitors to gov.uk properties can switch off Google Analytics measurement cookies, as well as denying gov.uk communications and marketing cookies, and cookies that store preferences — with only “strictly necessary” cookies (which remember form progress and serve notifications) lacking a user toggle.

What should concerned UK citizens do to defend democracy against the data science folks we’re told are being thrown at the Johnson-Cummings GDS data pooling project? Practice good privacy hygiene.

Clear your cookies. Indeed, switch off gov.uk cookies. Deny access wherever and whenever possible.

It’s probably also a good idea to use a fresh (incognito) browser session each time you need to visit a government website and close the session (with cookies set to clear) immediately you’re done. And use a good tracker blocker.

When the laws have so spectacularly failed to keep up with the data processors, limiting how your information is gathered online is the only way to be sure. Though as we’ve written before it’s not easy.

Privacy is personal and unfortunately, with the laws lagging, the personal is now trivially cheap and easy to weaponize for political dark arts that treat democracy as a game of PR, debasing the entire system in the process.

Brexit backer’s insurance firm and leave campaign fined £120k by data watchdog

The UK’s data protection watchdog has issued fines against a pro-Brexit campaign, Leave.EU, and an insurance company owned by the largest individual donor to the leave cause, Arron Banks’ Eldon Insurance.

The penalties have been handed down for what the Information Commissioner’s Office (ICO) dubs “serious breaches of electronic marketing laws” during the 2016 referendum on the UK’s European Union membership. 

The fines — served under the Privacy and Electronic Communications Regulations 2003, which governs electronic marketing — total £120,000 (~$157k); with Leave.EU fined a total of £60k (covering two incidents) and Eldon Insurance £60k.

The ICO’s investigation found the two entities were closely linked and it says systems for segregating the personal data of insurance customers from that of political subscribers were “ineffective”.

Leave.EU used Eldon Insurance customers’ details unlawfully to send almost 300,000 political marketing messages, according to the ICO’s probe.

Eldon Insurance was also found to have carried out two unlawful direct marketing campaigns which involved the sending of more than a million emails to Leave.EU subscribers without “sufficient consent”.

The ICO says it will now review how both entities are complying with data protection laws by carrying out audits — observing how personal data is processed, what policies and procedures are in place, and what types of training are made available for staff.

Key employees across both organisations will also be interviewed, including directors, staff and their data protection officers.

The ICO adds that it will publish its findings when it concludes the audits.

Commenting in a statement, information commissioner Elizabeth Denham, said: “It is deeply concerning that sensitive personal data gathered for political purposes was later used for insurance purposes; and vice versa. It should never have happened. We have been told both organisations have made improvements and learned from these events. But the ICO will now audit the organisations to determine how they are using customers’ personal information.”

The ICO issued a preliminary enforcement notice and three notices of intent to fine Leave.EU and Eldon Insurance, trading as Go Skippy Insurance, last November, as part of a wide-ranging investigation into data analytics for political purposes.

“After considering the companies’ representations, the ICO has issued the fines, confirming a change to one amount, with the other two remaining unchanged,” it writes today. “The regulator has also issued two assessment notices to Leave.EU and Eldon Insurance to inform both organisations that they will be audited.”

Banks and associates connected to his unofficial leave campaign remain under investigation by the UK’s National Crime Agency. Last November the NCA announced an investigation into the source of £8M in funding Banks provided to the Leave.EU campaign — after an Electoral Commission investigation found there were reasonable grounds to suspect he was “not the true source” of the money.

The UK introduced legislation back in the year 2000 to outlaw foreign donations, with donors of even a few thousand pounds needing to be both British citizens and on the UK electoral roll for the donations to be legal.

However since then the rise of social media platforms has provided an unregulated workaround for election spending rules by offering a free-for-all conduit for political ads by the backdoor.

And it’s only since major scandals over election interference, such as Kremlin propaganda targeting the 2016 US presidential election, that tech giants have started to pay attention to the problem and introduce some checks on who can run political ads.

Facebook, for example, recently announced it will set up human-staffed operations centers to monitor political news.

In a few markets it’s also launched tools that offer a degree of transparency around who is buying certain types of political ads. But such measures clearly come far too late for Brexit.

A UK parliamentary committee which spent months investigating the issue of online political disinformation — and slammed Facebook for dodging its questions — came out with a laundry list of recommendations for changes to the law in a preliminary report last year, including calling for a levy on social media firms to defend democracy from disinformation.

Although the government rejected the levy, and most of the committee’s recommendations — preferring a ‘wait and see’ approach. (It has previously committed to legislate around social media and safety, though.)

Last year the UK’s election oversight body issued a series of fines for other leave-backed Brexit referendum campaigns — after finding the official Vote Leave campaign had breached election campaign spending limits by undeclared joint working with a youth-focused Brexit campaign, BeLeave.

Almost half a million pounds in illegal overspending was channeled via a Canadian data firm, AggregateIQ, to use for targeting political advertising pushing pro-Brexit ads on Facebook’s platform.

Facebook later released some of the ads that had been used by Brexit campaigns, which included fake claims and dogwhistle racism used by leave campaigns to stir up fear among voters about foreigners coming to the UK.

The Facebook-Cambridge Analytica data misuse saga, which snowballed into a major global scandal last year, also triggered a major ICO investigation into the use of personal data for political campaigning, parts of which remain ongoing.

The watchdog issued a £500,000 fine on Facebook last year, as part of that probe — saying the company had “failed to sufficiently protect the privacy of its users before, during and after the unlawful processing” by Cambridge Analytica.

Though Facebook has filed an appeal, arguing the ICO did not find evidence that any UK users’ data was processed by CA.

Last year information commissioner Elizabeth Denham also called for an “ethical pause” around the use of microtargeting ad tools for political campaigning — saying there was “a risk of developing a system of voter surveillance by default”.

In the case of Facebook, the platform has generally preferred to continue accepting money for political ads, while it works on expanding self-styled “election security” measures.

Although it did temporarily suspend foreign-funded ads during a referendum in Ireland last year on whether to repeal or retain a constitutional ban on abortion — acting after concerns had been raised. It also fast tracked the launch of an ad transparency tool in the market ahead of the vote.

Fake news ‘threat to democracy’ report gets back-burner response from UK gov’t

The UK government has rejected a parliamentary committee’s call for a levy on social media firms to fund digital literacy lessons to combat the impact of disinformation online.

The recommendation of a levy on social media platforms was made by the Digital, Culture, Media and Sport committee three months ago, in a preliminary report following a multi-month investigation into the impact of so-called ‘fake news’ on democratic processes.

The committee has suggested the terms ‘misinformation’ and ‘disinformation’ be used instead, to better pin down exact types of problematic inauthentic content — and on that at least the government agrees. But just not on very much else. At least not yet.

Among around 50 policy suggestions in the interim report — which the committee put out early precisely in order to call for “urgent action” to ‘defend democracy’ — it urged the government to put forward proposals for an education levy on social media.

But in its response, released by the committee today, the government writes that it is “continuing to build the evidence base on a social media levy to inform our approach in this area”.

“We are aware that companies and charities are undertaking a wide range of work to tackle online harms and would want to ensure we do not negatively impact existing work,” it adds, suggesting it’s most keen not to be accused of making a tricky problem worse.

Earlier this year the government did announce plans to set up a dedicated national security unit to combat state-led disinformation campaigns, with the unit expected to monitor social media platforms to support faster debunking of online fakes — by being able to react more quickly to co-ordinated interference efforts by foreign states.

But going a step further and requiring social media platforms themselves to pay a levy to fund domestic education programs — to arm citizens with critical thinking capabilities so people can more intelligently parse content being algorithmically pushed at them — is not, apparently, forming part of government’s current thinking.

Though it is not taking the idea of some form of future social media tax off the table entirely, as it continues seeking ways to make big tech pay a fairer share of earnings into the public purse, also noting in its response: “We will be considering any levy in the context of existing work being led by HM Treasury in relation to corporate tax and the digital economy.”

As a whole, the government’s response to the DCMS committee’s laundry list of policy recommendations around the democratic risks of online disinformation can be summed up in a word: ‘cautious’ — with only three of the report’s forty-two recommendations accepted outright, as the committee tells it, and four fully rejected.

Most of the rest are being filed under ‘come back later — we’re still looking into it’.

So if you take the view that ‘fake news’ online has already had a tangible and worrying impact on democratic debate, the government’s response will come across as underwhelming and lacking in critical urgency. (Though it’s hardly alone on that front.)

The committee has reacted with disappointment — with chair Damian Collins dubbing the government response “disappointing and a missed opportunity”, and also accusing ministers of hiding behind ‘ongoing investigations’ to avoid commenting on the committee’s call that the UK’s National Crime Agency urgently carry out its own investigation into “allegations involving a number of companies”.

Earlier this month Collins also called for the Met Police to explain why they had not opened an investigation into Brexit-related campaign spending breaches.

It has also this month emerged that the force will not examine claims of Russian meddling in the referendum.

Meanwhile the political circus and business uncertainty triggered by the Brexit vote goes on.

Holding pattern

The bulk of the government’s response to the DCMS interim report entails flagging a number of existing and/or ongoing consultations and reviews — such as the ‘Protecting the Debate: Intimidation, Influence and Information’ consultation, which it launched this summer.

But by saying it’s continuing to gather evidence on a number of fronts, the government is also saying it does not feel it necessary to rush through any regulatory responses to technology-accelerated, socially divisive and politically sensitive viral nonsense — claiming also that it hasn’t seen any evidence that malicious misinformation has been able to skew genuine democratic debate on the domestic front.

It’ll be music to Facebook’s ears given the awkward scrutiny the company has faced from lawmakers at home and, indeed, elsewhere in Europe — in the wake of a major data misuse scandal with a deeply political angle.

The government also points multiple times to a forthcoming oversight body which is in the process of being established — aka the Centre for Data Ethics and Innovation — saying it expects this to grapple with a number of the issues of concern raised by the committee, such as ad transparency and targeting; and to work towards agreeing best practices in areas such as “targeting, fairness, transparency and liability around the use of algorithms and data-driven technologies”.

Identifying “potential new regulations” is another stated role for the future body. Though given it’s not yet actively grappling with any of these issues, the UK’s democratically concerned citizens are simply being told to wait.

“The government recognises that as technological advancements are made, and the use of data and AI becomes more complex, our existing governance frameworks may need to be strengthened and updated. That is why we are setting up the Centre,” the government writes, still apparently questioning whether legislative updates are needed — this in response to the committee’s call, informed by its close questioning of tech firms and data experts, for an oversight body to be able to audit “non-financial” aspects of technology companies (including security mechanisms and algorithms) to “ensure they are operating responsibly”.

“As set out in the recent consultation on the Centre, we expect it to look closely at issues around the use of algorithms, such as fairness, transparency, and targeting,” the government continues, noting that details of the body’s initial work program will be published in the fall — when it says it will also put out its response to the aforementioned consultation.

It does not specify when the ethics body will be in any kind of position to hit this shifting ground running. So again there’s zero sense the government intends to act at a pace commensurate with the fast-changing technologies in question.

Then, where the committee’s recommendations touch on the work of existing UK oversight bodies, such as the Competition and Markets Authority, the ICO data watchdog, the Electoral Commission and the National Crime Agency, the government dodges specific concerns by suggesting it’s not appropriate for it to comment “on independent bodies or ongoing investigations”.

Also notable: It continues to reject entirely the idea that Russian-backed disinformation campaigns have had any impact on domestic democratic processes at all — despite public remarks last year by prime minister Theresa May attacking Putin for weaponizing disinformation for election interference purposes.

Instead it writes:

We want to reiterate, however, that the Government has not seen evidence of successful use of disinformation by foreign actors, including Russia, to influence UK democratic processes. But we are not being complacent and the Government is actively engaging with partners to develop robust policies to tackle this issue.

Its response on this point also makes no reference to the extensive use of social media platforms to run political ads targeting the 2016 Brexit referendum.

Nor does it make any note of the historic lack of transparency of such ad platforms, which means it’s simply not possible to determine where all the ad money came from to fund digital campaigning on domestic issues — with Facebook only just launching a public repository of who is paying for political ads and badging them as such in the UK, for example.

The elephant in the room is of course that ‘lack of evidence’ is not necessarily evidence of a lack of success, especially when it’s so hard to extract data from opaque adtech platforms in the first place.

Moreover, just this week fresh concerns have been raised about how platforms like Facebook are still enabling dark ads to target political messages at citizens — without it being transparently clear who is actually behind and paying for such campaigns…

In turn triggering calls from opposition MPs for updates to UK election law…

Yet the government, busily embroiled as it still is with trying to deliver some kind of Brexit outcome, is seemingly unconcerned by all this unregulated, background ongoing political advertising.

It also directly brushes off the committee’s call for it to state how many investigations are currently being carried out into Russian interference in UK politics, saying only that it has taken steps to ensure there is a “coordinated structure across all relevant UK authorities to defend against hostile foreign interference in British politics, whether from Russia or any other State”, before reiterating: “There has, however, been no evidence to date of any successful foreign interference.”

This summer the Electoral Commission found that the official Vote Leave campaign in the UK’s in/out EU referendum had broken campaign spending rules — with social media platforms being repurposed as the unregulated playing field where election law could be diddled at such scale. That much is clear.

The DCMS committee had backed the Commission’s call for digital imprint requirements for electronic campaigns to level the playing field between digital and print ads.

However the government has failed to back even that pretty uncontroversial call, merely pointing again to a public consultation (which ends today) on proposed changes to electoral law. So it’s yet more wait and see.

The committee is also disappointed about the lack of government response to its call for the Commission to establish a code for advertising through social media during election periods; and its recommendation that “Facebook and other platforms take responsibility for the way their platforms are used” — noting also the government made “no response to Facebook’s failure to respond adequately to the Committee’s inquiry and Mark Zuckerberg’s reluctance to appear as a witness”. (A reluctance that really enraged the committee.)

In a statement on the government’s response, committee chair Damian Collins writes: “The government’s response to our interim report on disinformation and ‘fake news’ is disappointing and a missed opportunity. It uses other ongoing investigations to further delay desperately needed announcements on the ongoing issues of harmful and misleading content being spread through social media.

“We need to see a more coordinated approach across government to combat campaigns of disinformation being organised by Russian agencies seeking to disrupt and undermine our democracy. The government’s response gives us no real indication of what action is being taken on this important issue.”

Collins finds one slender crumb of comfort, though: a sign that the government might have some appetite to rein in big tech.

After the committee had called for government to “demonstrate how seriously it takes Facebook’s apparent collusion in spreading disinformation in Burma, at the earliest opportunity”, the government writes that it “has made it clear to Facebook, and other social media companies, that they must do more to remove illegal and harmful content” — noting also that its forthcoming Online Harms White Paper will include “a range of policies to tackle harmful content”.

“We welcome though the strong words from the Government in its demand for action by Facebook to tackle the hate speech that has contributed to the ethnic cleansing of the Rohingya in Burma,” notes Collins, adding: “We will be looking for the government to make progress on these and other areas in response to our final report which will be published in December.

“We will also be raising these issues with the Secretary of State for DCMS, Jeremy Wright, when he gives evidence to the Committee on Wednesday this week.”

(Wright being the new minister in charge of the UK’s digital brief, after Matt Hancock moved over to health.)

We’ve reached out to Facebook for comment on the government’s call for a more robust approach to illegal hate speech.

Last week the company announced it had hired former UK deputy prime minister, Nick Clegg, to be its new head of global policy and comms — apparently signalling a willingness to pay a bit more attention to European regulators.

Fake news inquiry calls for social media levy to defend democracy

A UK parliamentary committee which has been running a multi-month investigation into the impact of online disinformation on political campaigning — and on democracy itself — has published a preliminary report highlighting what it describes as “significant concerns” over the risks to “shared values and the integrity of our democratic institutions”.

It’s calling for “urgent action” from government and regulatory bodies to “build resilience against misinformation and disinformation into our democratic system”.

“We are faced with a crisis concerning the use of data, the manipulation of our data, and the targeting of pernicious views,” the DCMS committee warns. “In particular, we heard evidence of Russian state-sponsored attempts to influence elections in the US and the UK through social media, of the efforts of private companies to do the same, and of law-breaking by certain Leave campaign groups in the UK’s EU Referendum in their use of social media.”

The inquiry, which was conceived of and begun in the previous UK parliament — before relaunching in fall 2017, after the June General Election — has found itself slap-bang in the middle of one of the major scandals of the modern era, as revelations about the extent of disinformation and social media data misuse, and allegations of election fiddling and law bending, have piled up thick and fast, especially in recent months (albeit concerns have been rising steadily ever since the 2016 US presidential election and revelations about the cottage industry of fake news purveyors spun up to feed US voters, in addition to Kremlin troll farm activity).

Yet the Facebook-Cambridge Analytica data misuse saga (which snowballed into a major global scandal this March) is just one of the strands of the committee’s inquiry. Hence they’ve opted to publish multiple reports — the initial one recommending urgent actions for the government and regulators, which will be followed by another report covering the inquiry’s “wider terms of reference” and including a closer look at the role of advertising. (The latter report is slated to land in the fall.)

For now, the committee is suggesting “principle-based recommendations” designed to be “sufficiently adaptive to deal with fast-moving technological developments”. 

Among a very long list of recommendations are:

  • a levy on social media and tech giants to fund a “major investment” in the UK’s data watchdog so the body is able to “attract and employ more technically-skilled engineers who not only can analyse current technologies, but have the capacity to predict future technologies” — with the tech company levy operating in “a similar vein to the way in which the banking sector pays for the upkeep of the Financial Conduct Authority”. Additionally, the committee wants the government to put forward proposals for an educational levy to be raised on social media companies, “to finance a comprehensive educational framework (developed by charities and non-governmental organisations) and based online”. “Digital literacy should be the fourth pillar of education, alongside reading, writing and maths,” the committee writes. “The DCMS Department should co-ordinate with the Department for Education in highlighting proposals to include digital literacy as part of the Physical, Social, Health and Economic curriculum (PSHE). The social media educational levy should be used, in part, by the government, to finance this additional part of the curriculum.” It also wants to see a rolling unified public awareness initiative, part-funded by a tech company levy, to “set the context of social media content, explain to people what their rights over their data are… and set out ways in which people can interact with political campaigning on social media”. “The public should be made more aware of their ability to report digital campaigning that they think is misleading, or unlawful,” it adds
  • amendments to UK Electoral Law to reflect use of new technologies — with the committee backing the Electoral Commission’s suggestion that “all electronic campaigning should have easily accessible digital imprint requirements, including information on the publishing organisation and who is legally responsible for the spending, so that it is obvious at a glance who has sponsored that campaigning material, thereby bringing all online advertisements and messages into line with physically published leaflets, circulars and advertisements”. It also suggests government should “consider the feasibility of clear, persistent banners on all paid-for political adverts and videos, indicating the source and making it easy for users to identify what is in the adverts, and who the advertiser is”. And urges the government to carry out “a comprehensive review of the current rules and regulations surrounding political work during elections and referenda, including: increasing the length of the regulated period; definitions of what constitutes political campaigning; absolute transparency of online political campaigning; a category introduced for digital spending on campaigns; reducing the time for spending returns to be sent to the Electoral Commission (the current time for large political organisations is six months)”.
  • the Electoral Commission to establish a code for advertising through social media during election periods “giving consideration to whether such activity should be restricted during the regulated period, to political organisations or campaigns that have registered with the Commission”. It also urges the Commission to propose “more stringent requirements for major donors to demonstrate the source of their donations”, and backs its suggestion of a change in the rules covering political spending so that limits are put on the amount of money an individual can donate
  • a major increase in the maximum fine that can be levied by the Electoral Commission (currently just £20,000) — saying this should rather be based on a fixed percentage of turnover. It also suggests the body should have the ability to refer matters to the Crown Prosecution Service, before their investigations have been completed; and urges the government to consider giving it the power to compel organisations that it does not specifically regulate, including tech companies and individuals, to provide information relevant to their inquiries, subject to due process.
  • a public register for political advertising — “requiring all political advertising work to be listed for public display so that, even if work is not requiring regulation, it is accountable, clear, and transparent for all to see”. So it also wants the government to conduct a review of UK law to ensure that digital campaigning is defined in a way that includes online adverts that use political terminology but are not sponsored by a specific political party.
  • a ban on micro-targeted political advertising to lookalike audiences online, and a minimum limit for the number of voters sent individual political messages to be agreed at a national level. The committee also suggests the Electoral Commission and the ICO should consider the ethics of Facebook or other relevant social media companies selling lookalike political audiences to advertisers during the regulated period, saying they should consider whether users should have the right to opt out from being included in such lookalike audiences
  • a recommendation to formulate a new regulatory category for tech companies that is not necessarily either a platform or a publisher, and which “tightens tech companies’ liabilities”
  • a suggestion that the government consider (as part of an existing review of digital advertising) whether the Advertising Standards Authority could regulate digital advertising. “It is our recommendation that this process should establish clear legal liability for the tech companies to act against harmful and illegal content on their platforms,” the committee writes. “This should include both content that has been referred to them for takedown by their users, and other content that should have been easy for the tech companies to identify for themselves. In these cases, failure to act on behalf of the tech companies could leave them open to legal proceedings launched either by a public regulator, and/or by individuals or organisations who have suffered as a result of this content being freely disseminated on a social media platform.”
  • another suggestion that the government consider establishing a “digital Atlantic Charter as a new mechanism to reassure users that their digital rights are guaranteed” — with the committee also raising concerns that the UK risks a privacy loophole opening up after it leaves the EU, when US-based companies will be able to take UK citizens’ data to the US for processing without the protections afforded by the EU’s GDPR framework (as the UK will then be a third country)
  • a suggestion that a professional “global Code of Ethics” should be developed by tech companies, in collaboration with international governments, academics and other “interested parties” (including the World Summit on Information Society), in order to “set down in writing what is and what is not acceptable by users on social media, with possible liabilities for companies and for individuals working for those companies, including those technical engineers involved in creating the software for the companies”. “New products should be tested to ensure that products are fit-for-purpose and do not constitute dangers to the users, or to society,” it suggests. “The Code of Ethics should be the backbone of tech companies’ work, and should be continually referred to when developing new technologies and algorithms. If companies fail to adhere to their own Code of Ethics, the UK Government should introduce regulation to make such ethical rules compulsory.”
  • the committee also suggests the government avoids using the (charged and confusing) term ‘fake news’ — and instead puts forward an agreed definition of the words ‘misinformation’ and ‘disinformation’. It should also support research into the methods by which misinformation and disinformation are created and spread across the internet, including support for fact-checking. “We recommend that the government initiate a working group of experts to create a credible annotation of standards, so that people can see, at a glance, the level of verification of a site. This would help people to decide on the level of importance that they put on those sites,” it writes
  • a suggestion that tech companies should be subject to security and algorithmic auditing — with the committee writing: “Just as the finances of companies are audited and scrutinised, the same type of auditing and scrutinising should be carried out on the non-financial aspects of technology companies, including their security mechanisms and algorithms, to ensure they are operating responsibly. The Government should provide the appropriate body with the power to audit these companies, including algorithmic auditing, and we reiterate the point that the ICO’s powers should be substantially strengthened in these respects”. The committee also floats the idea that the Competition and Markets Authority considers conducting an audit of the operation of the advertising market on social media (given the risk of fake accounts leading to ad fraud)
  • a requirement for tech companies to make full disclosure of targeting used as part of advert transparency. The committee says tech companies must also address the issue of shell corporations and “other professional attempts to hide identity in advert purchasing”.

How the government will respond to the committee’s laundry list of recommendations for cleaning up online political advertising remains to be seen, although the issue of Kremlin-backed disinformation campaigns was at least raised publicly by the prime minister last year. Theresa May has been rather quieter, though, on revelations about EU referendum-related data misuse and election law breaches.

While the committee uses the term “tech companies” throughout its report to refer to multiple companies, Facebook specifically comes in for some excoriating criticism, with the committee accusing the company of misleading by omission and actively seeking to impede the progress of the inquiry.

It also reiterates its call — for something like the fifth time at this point — for founder Mark Zuckerberg to give evidence. Facebook has provided several witnesses to the committee, including its CTO, but Zuckerberg has declined its requests that he appear, even via video link. (And even though he did find time for a couple of hours in front of the EU parliament back in May.)

The committee writes:

We undertook fifteen exchanges of correspondence with Facebook, and two oral evidence sessions, in an attempt to elicit some of the information that they held, including information regarding users’ data, foreign interference and details of the so-called ‘dark ads’ that had reached Facebook users. Facebook consistently responded to questions by giving the minimal amount of information possible, and routinely failed to offer information relevant to the inquiry, unless it had been expressly asked for. It provided witnesses who have been unwilling or unable to give full answers to the Committee’s questions. This is the reason why the Committee has continued to press for Mark Zuckerberg to appear as a witness as, by his own admission, he is the person who decides what happens at Facebook.

Tech companies are not passive platforms on which users input content; they reward what is most engaging, because engagement is part of their business model and their growth strategy. They have profited greatly by using this model. This manipulation of the sites by tech companies must be made more transparent. Facebook has all of the information. Those outside of the company have none of it, unless Facebook chooses to release it. Facebook was reluctant to share information with the Committee, which does not bode well for future transparency. We ask, once more, for Mr Zuckerberg to come to the Committee to answer the many outstanding questions to which Facebook has not responded adequately, to date.

The committee suggests that the UK’s Defamation Act 2013 means Facebook and other social media companies have a duty to publish and to follow transparent rules — arguing that the Act has provisions which state that “if a user is defamed on social media, and the offending individual cannot be identified, the liability rests with the platform”.

“We urge the government to examine the effectiveness of these provisions, and to monitor tech companies to ensure they are complying with court orders in the UK and to provide details of the source of disputed content — including advertisements — to ensure that they are operating in accordance with the law, or any future industry Codes of Ethics or Conduct. Tech companies also have a responsibility to ensure full disclosure of the source of any political advertising they carry,” it adds.

The committee is especially damning of Facebook’s actions in Burma (as indeed many others have also been), condemning the company’s failure to prevent its platform from being used to spread hate and fuel violence against the Rohingya ethnic minority — and citing the UN’s similarly damning assessment.

“Facebook has hampered our efforts to get information about their company throughout this inquiry. It is as if it thinks that the problem will go away if it does not share information about the problem, and reacts only when it is pressed. Time and again we heard from Facebook about mistakes being made and then (sometimes) rectified, rather than designing the product ethically from the beginning of the process. Facebook has a ‘Code of Conduct’, which highlights the principles by which Facebook staff carry out their work, and states that employees are expected to “act lawfully, honestly, ethically, and in the best interests of the company while performing duties on behalf of Facebook”. Facebook has fallen well below this standard in Burma,” it argues.

The committee also directly blames Facebook’s actions for undermining the UK’s international aid efforts in the country — writing:

The United Nations has named Facebook as being responsible for inciting hatred against the Rohingya Muslim minority in Burma, through its ‘Free Basics’ service. It provides people free mobile phone access without data charges, but is also responsible for the spread of disinformation and propaganda. The CTO of Facebook, Mike Schroepfer, described the situation in Burma as “awful”, yet Facebook cannot show us that it has done anything to stop the spread of disinformation against the Rohingya minority.

Fake news inquiry calls for social media levy to defend democracy

A UK parliamentary committee which has been running a multi-month investigation into the impact of online disinformation on political campaigning — and on democracy itself — has published a preliminary report highlighting what it describes as “significant concerns” over the risks to “shared values and the integrity of our democratic institutions”.

It’s calling for “urgent action” from government and regulatory bodies to “build resilience against misinformation and disinformation into our democratic system”.

“We are faced with a crisis concerning the use of data, the manipulation of our data, and the targeting of pernicious views,” the DCMS committee warns. “In particular, we heard evidence of Russian state-sponsored attempts to influence elections in the US and the UK through social media, of the efforts of private companies to do the same, and of law-breaking by certain Leave campaign groups in the UK’s EU Referendum in their use of social media.”

The inquiry, which was conceived of and begun in the previous UK parliament — before relaunching in fall 2017, after the June General Election — has found itself slap-bang in the middle of one of the major scandals of the modern era, as revelations about the extent of disinformation and social media data misuse, and allegations of election fiddling and law bending, have piled up thick and fast, especially in recent months (albeit concerns have been rising steadily ever since the 2016 US presidential election and revelations about the cottage industry of fake news purveyors spun up to feed US voters, in addition to Kremlin troll farm activity).

Yet the Facebook-Cambridge Analytica data misuse saga (which snowballed into a major global scandal this March) is just one of the strands of the committee’s inquiry. Hence they’ve opted to publish multiple reports — the initial one recommending urgent actions for the government and regulators, which will be followed by another report covering the inquiry’s “wider terms of reference” and including a closer look at the role of advertising. (The latter report is slated to land in the fall.)

For now, the committee is suggesting “principle-based recommendations” designed to be “sufficiently adaptive to deal with fast-moving technological developments”. 

Among a very long list of recommendations are:

  • a levy on social media and tech giants to fund a “major investment” in the UK’s data watchdog so the body is able to “attract and employ more technically-skilled engineers who not only can analyse current technologies, but have the capacity to predict future technologies” — with the tech company levy operating in “a similar vein to the way in which the banking sector pays for the upkeep of the Financial Conduct Authority”. Additionally, the committee wants the government to put forward proposals for an educational levy to be raised from social media companies, “to finance a comprehensive educational framework (developed by charities and non-governmental organisations) and based online”. “Digital literacy should be the fourth pillar of education, alongside reading, writing and maths,” the committee writes. “The DCMS Department should co-ordinate with the Department for Education, in highlighting proposals to include digital literacy, as part of the Physical, Social, Health and Economic curriculum (PSHE). The social media educational levy should be used, in part, by the government, to finance this additional part of the curriculum.” It also wants to see a rolling unified public awareness initiative, part-funded by a tech company levy, to “set the context of social media content, explain to people what their rights over their data are… and set out ways in which people can interact with political campaigning on social media”. “The public should be made more aware of their ability to report digital campaigning that they think is misleading, or unlawful,” it adds.
  • amendments to UK Electoral Law to reflect use of new technologies — with the committee backing the Electoral Commission’s suggestion that “all electronic campaigning should have easily accessible digital imprint requirements, including information on the publishing organisation and who is legally responsible for the spending, so that it is obvious at a glance who has sponsored that campaigning material, thereby bringing all online advertisements and messages into line with physically published leaflets, circulars and advertisements”. It also suggests government should “consider the feasibility of clear, persistent banners on all paid-for political adverts and videos, indicating the source and making it easy for users to identify what is in the adverts, and who the advertiser is”. And urges the government to carry out “a comprehensive review of the current rules and regulations surrounding political work during elections and referenda, including: increasing the length of the regulated period; definitions of what constitutes political campaigning; absolute transparency of online political campaigning; a category introduced for digital spending on campaigns; reducing the time for spending returns to be sent to the Electoral Commission (the current time for large political organisations is six months)”.
  • the Electoral Commission to establish a code for advertising through social media during election periods “giving consideration to whether such activity should be restricted during the regulated period, to political organisations or campaigns that have registered with the Commission”. It also urges the Commission to propose “more stringent requirements for major donors to demonstrate the source of their donations”, and backs its suggestion of a change in the rules covering political spending so that limits are put on the amount of money an individual can donate
  • a major increase in the maximum fine that can be levied by the Electoral Commission (currently just £20,000) — saying this should rather be based on a fixed percentage of turnover. It also suggests the body should have the ability to refer matters to the Crown Prosecution Service, before their investigations have been completed; and urges the government to consider giving it the power to compel organisations that it does not specifically regulate, including tech companies and individuals, to provide information relevant to their inquiries, subject to due process.
  • a public register for political advertising — “requiring all political advertising work to be listed for public display so that, even if work is not requiring regulation, it is accountable, clear, and transparent for all to see”. So it also wants the government to conduct a review of UK law to ensure that digital campaigning is defined in a way that includes online adverts that use political terminology but are not sponsored by a specific political party.
  • a ban on micro-targeted political advertising to lookalikes online, and a minimum limit for the number of voters sent individual political messages to be agreed at a national level. The committee also suggests the Electoral Commission and the ICO should consider the ethics of Facebook or other relevant social media companies selling lookalike political audiences to advertisers during the regulated period, saying they should consider whether users should have the right to opt out from being included in such lookalike audiences
  • a recommendation to formulate a new regulatory category for tech companies that is not necessarily either a platform or a publisher, and which “tightens tech companies’ liabilities”
  • a suggestion that the government consider (as part of an existing review of digital advertising) whether the Advertising Standards Authority could regulate digital advertising. “It is our recommendation that this process should establish clear legal liability for the tech companies to act against harmful and illegal content on their platforms,” the committee writes. “This should include both content that has been referred to them for takedown by their users, and other content that should have been easy for the tech companies to identify for themselves. In these cases, failure to act on behalf of the tech companies could leave them open to legal proceedings launched either by a public regulator, and/or by individuals or organisations who have suffered as a result of this content being freely disseminated on a social media platform.”
  • another suggestion that the government consider establishing a “digital Atlantic Charter as a new mechanism to reassure users that their digital rights are guaranteed” — with the committee also raising concerns that the UK risks a privacy loophole opening up after it leaves the EU, when US-based companies will be able to take UK citizens’ data to the US for processing without the protections afforded by the EU’s GDPR framework (as the UK will then be a third country).
  • a suggestion that a professional “global Code of Ethics” should be developed by tech companies, in collaboration with international governments, academics and other “interested parties” (including the World Summit on Information Society), in order to “set down in writing what is and what is not acceptable by users on social media, with possible liabilities for companies and for individuals working for those companies, including those technical engineers involved in creating the software for the companies”. “New products should be tested to ensure that products are fit-for-purpose and do not constitute dangers to the users, or to society,” it suggests. “The Code of Ethics should be the backbone of tech companies’ work, and should be continually referred to when developing new technologies and algorithms. If companies fail to adhere to their own Code of Ethics, the UK Government should introduce regulation to make such ethical rules compulsory.”
  • the committee also suggests the government avoids using the (charged and confusing) term ‘fake news’ — and instead puts forward an agreed definition of the words ‘misinformation’ and ‘disinformation’. It should also support research into the methods by which misinformation and disinformation are created and spread across the internet, including support for fact-checking. “We recommend that the government initiate a working group of experts to create a credible annotation of standards, so that people can see, at a glance, the level of verification of a site. This would help people to decide on the level of importance that they put on those sites,” it writes
  • a suggestion that tech companies should be subject to security and algorithmic auditing — with the committee writing: “Just as the finances of companies are audited and scrutinised, the same type of auditing and scrutinising should be carried out on the non-financial aspects of technology companies, including their security mechanisms and algorithms, to ensure they are operating responsibly. The Government should provide the appropriate body with the power to audit these companies, including algorithmic auditing, and we reiterate the point that the ICO’s powers should be substantially strengthened in these respects”. The committee also floats the idea that the Competition and Markets Authority considers conducting an audit of the operation of the advertising market on social media (given the risk of fake accounts leading to ad fraud)
  • a requirement for tech companies to make full disclosure of targeting used as part of advert transparency. The committee says tech companies must also address the issue of shell corporations and “other professional attempts to hide identity in advert purchasing”.

How the government will respond to the committee’s laundry list of recommendations for cleaning up online political advertising remains to be seen. The issue of Kremlin-backed disinformation campaigns was at least raised publicly by the prime minister last year, though Theresa May has been rather quieter on revelations about EU referendum-related data misuse and election law breaches.

While the committee uses the term “tech companies” throughout its report to refer to multiple companies, Facebook specifically comes in for some excoriating criticism, with the committee accusing the company of misleading by omission and actively seeking to impede the progress of the inquiry.

It also reiterates its call — for something like the fifth time at this point — for founder Mark Zuckerberg to give evidence. Facebook has provided several witnesses to the committee, including its CTO, but Zuckerberg has declined its requests he appear, even via video link. (And even though he did find time for a couple of hours in front of the EU parliament back in May.)

The committee writes:

We undertook fifteen exchanges of correspondence with Facebook, and two oral evidence sessions, in an attempt to elicit some of the information that they held, including information regarding users’ data, foreign interference and details of the so-called ‘dark ads’ that had reached Facebook users. Facebook consistently responded to questions by giving the minimal amount of information possible, and routinely failed to offer information relevant to the inquiry, unless it had been expressly asked for. It provided witnesses who have been unwilling or unable to give full answers to the Committee’s questions. This is the reason why the Committee has continued to press for Mark Zuckerberg to appear as a witness as, by his own admission, he is the person who decides what happens at Facebook.

Tech companies are not passive platforms on which users input content; they reward what is most engaging, because engagement is part of their business model and their growth strategy. They have profited greatly by using this model. This manipulation of the sites by tech companies must be made more transparent. Facebook has all of the information. Those outside of the company have none of it, unless Facebook chooses to release it. Facebook was reluctant to share information with the Committee, which does not bode well for future transparency. We ask, once more, for Mr Zuckerberg to come to the Committee to answer the many outstanding questions to which Facebook has not responded adequately, to date.

The committee suggests that the UK’s Defamation Act 2013 means Facebook and other social media companies have a duty to publish and to follow transparent rules — arguing that the Act has provisions which state that “if a user is defamed on social media, and the offending individual cannot be identified, the liability rests with the platform”.

“We urge the government to examine the effectiveness of these provisions, and to monitor tech companies to ensure they are complying with court orders in the UK and to provide details of the source of disputed content — including advertisements — to ensure that they are operating in accordance with the law, or any future industry Codes of Ethics or Conduct. Tech companies also have a responsibility to ensure full disclosure of the source of any political advertising they carry,” it adds.

The committee is especially damning of Facebook’s actions in Burma (as indeed many others have also been), condemning the company’s failure to prevent its platform from being used to spread hate and fuel violence against the Rohingya ethnic minority — and citing the UN’s similarly damning assessment.

“Facebook has hampered our efforts to get information about their company throughout this inquiry. It is as if it thinks that the problem will go away if it does not share information about the problem, and reacts only when it is pressed. Time and again we heard from Facebook about mistakes being made and then (sometimes) rectified, rather than designing the product ethically from the beginning of the process. Facebook has a ‘Code of Conduct’, which highlights the principles by which Facebook staff carry out their work, and states that employees are expected to “act lawfully, honestly, ethically, and in the best interests of the company while performing duties on behalf of Facebook”. Facebook has fallen well below this standard in Burma,” it argues.

The committee also directly blames Facebook’s actions for undermining the UK’s international aid efforts in the country — writing:

The United Nations has named Facebook as being responsible for inciting hatred against the Rohingya Muslim minority in Burma, through its ‘Free Basics’ service. It provides people free mobile phone access without data charges, but is also responsible for the spread of disinformation and propaganda. The CTO of Facebook, Mike Schroepfer, described the situation in Burma as “awful”, yet Facebook cannot show us that it has done anything to stop the spread of disinformation against the Rohingya minority.

The hate speech against the Rohingya—built up on Facebook, much of which is disseminated through fake accounts—and subsequent ethnic cleansing, has potentially resulted in the success of DFID’s [the UK Department for International Development] aid programmes being greatly reduced, based on the qualifications they set for success. The activity of Facebook undermines international aid to Burma, including the UK Government’s work. Facebook is releasing a product that is dangerous to consumers and deeply unethical. We urge the Government to demonstrate how seriously it takes Facebook’s apparent collusion in spreading disinformation in Burma, at the earliest opportunity. This is a further example of Facebook failing to take responsibility for the misuse of its platform.

We reached out to Facebook for a response to the committee’s report, and in an email statement — attributed to Richard Allan, VP policy — the company told us:

The Committee has raised some important issues and we were pleased to be able to contribute to their work.

We share their goal of ensuring that political advertising is fair and transparent and agree that electoral rule changes are needed. We have already made all advertising on Facebook more transparent. We provide more information on the Pages behind any ad and you can now see all the ads any Facebook Page is running, even if they are not targeted at you. We are working on ways to authenticate and label political ads in the UK and create an archive of those ads that anyone can search. We will work closely with the UK Government and Electoral Commission as we develop these new transparency tools.

We’re also investing heavily in both people and technology to keep bad content off our services. We took down 2.5 million pieces of hate speech and disabled 583 million fake accounts globally in the first quarter of 2018 — much of it before anyone needed to report this to Facebook. By using technology like machine learning, artificial intelligence and computer vision, we can detect more bad content and take action more quickly.

The statement makes no mention of Burma. Nor indeed of the committee’s suggestion that social media firms should be taxed to pay for defending democracy and civil society against the damaging excesses of their tools.

On Thursday, rolling out its latest ads transparency features, Facebook announced that users could now see the ads a Page is running across Facebook, Instagram, Messenger and its partner network “even if those ads aren’t shown to you”.

To do so, users have to log into Facebook, visit any Page and select “Info and Ads”. “You’ll see ad creative and copy, and you can flag anything suspicious by clicking on ‘Report Ad’,” it added.

It also flagged a ‘more Page information’ feature that users can use to get more details about a Page such as recent name changes and the date it was created.

“The vast majority of ads on Facebook are run by legitimate organizations — whether it’s a small business looking for new customers, an advocacy group raising money for their cause, or a politician running for office. But we’ve seen that bad actors can misuse our products, too,” Facebook wrote, adding that the features being announced “are just the start” of its efforts “to improve” and “root out abuse”.

Brexit drama

The committee’s interim report was pushed out at the weekend ahead of the original embargo as a result of yet more Brexiteer-induced drama — after the campaign director of the UK’s official Brexit-supporting ‘Vote Leave’ campaign, Dominic Cummings, deliberately broke the embargo by publishing the report on his blog in order to spin his own response before the report had been widely covered by the media.

Last week the Electoral Commission published its own report following a multi-month investigation into Brexit campaign spending. The oversight body concluded that Vote Leave broke UK electoral law by massively overspending via a joint working arrangement with another Brexit-supporting campaign (BeLeave) — an arrangement via which almost half a million pounds’ worth of additional targeted Facebook ads were co-ordinated by Vote Leave in the final days of the campaign, when it had already reached its spending limit. (Facebook finally released to the committee some of the 2016 Brexit campaign ads that had been microtargeted at UK voters via its platform, and the committee published these ads last week. Many of Vote Leave’s (up to that point ‘dark’) adverts show the official Brexit campaign generating fake news of its own, with ads that, for example, claim Turkey is on the cusp of joining the EU and flooding the UK with millions of migrants; or spread the widely debunked claim that the UK would be able to spend £350M more per week on the NHS if it left the EU.)

In general, dog whistle racism appears to have been Vote Leave’s microtargeted ad ‘persuasion’ tactic of choice — and, thanks to Facebook’s ad platform, no one other than each ad’s chosen recipients would have been likely to see the messages.

Cummings also comes in for a major roasting in the committee’s report after his failure to appear before it to answer questions, despite multiple summons (including an unprecedented House of Commons motion ordering him to appear — which he nonetheless also ignored).

“Mr Cummings’ contemptuous behaviour is unprecedented in the history of this Committee’s inquiries and underlines concerns about the difficulties of enforcing co-operation with Parliamentary scrutiny in the modern age,” it writes, adding: “We will return to this issue in our Report in the autumn, and believe it to be an urgent matter for consideration by the Privileges Committee and by Parliament as a whole.”

On his blog, Cummings claims the committee offered him dates they knew he couldn’t do; slams its investigation as ‘fake news’; moans copiously that the committee is made up of Remain-supporting MPs; and argues that the investigation should be conducted under oath — his major defense seemingly being that key campaign whistleblowers are lying (despite ex-Cambridge Analytica employee Chris Wylie and ex-BeLeave treasurer Shahmir Sanni having provided copious amounts of documentary evidence to back up their claims; evidence which both the Electoral Commission and the UK’s data watchdog, the ICO, have found convincing enough to announce some of the largest fines they can issue). In the latter case, the ICO announced its intention to fine Facebook the maximum penalty possible, under the prior UK data protection regime, for failing to protect users’ information. (The data watchdog is continuing to investigate multiple aspects of what is a hugely complex online ad ops story, both technically and politically, and earlier this month commissioner Elizabeth Denham called for an ‘ethical pause’ on the use of online ad platforms for microtargeting voters with political messages, arguing — like the DCMS committee — that there are very real and very stark risks for democratic processes.)

There’s much, much more self-piteous whining on Cummings’ blog for anyone who wants to make themselves queasy reading it. But bear in mind the Electoral Commission’s withering criticism of the Vote Leave campaign specifically — not so much for failure to co-operate with its investigation as for intentional obstruction.

It’s official: Brexit campaign broke the law — with social media’s help

The UK’s Electoral Commission has published the results of a near nine-month-long investigation into Brexit referendum spending and has found that the official Vote Leave campaign broke the law by breaching election campaign spending limits.

Vote Leave broke the law by, among other things, channeling money to a Canadian data firm, AggregateIQ, for use in targeting political advertising on Facebook’s platform via undeclared joint working with another Brexit campaign, BeLeave, the Commission found.

AggregateIQ remains the subject of a separate joint investigation by privacy watchdogs in Canada and British Columbia.

The Electoral Commission’s investigation found evidence that BeLeave spent more than £675,000 with AggregateIQ under a common arrangement with Vote Leave. Yet the two campaigns had failed to disclose on their referendum spending returns that they had a common plan.

As the designated lead leave campaign, Vote Leave had a £7M spending limit under UK law. But via its joint spending with BeLeave the Commission determined it actually spent £7,449,079 — exceeding the legal spending limit by almost half a million pounds.

The June 2016 referendum in the UK resulted in a narrow 52:48 majority for the UK to leave the European Union. Two years on from the vote, the government has yet to agree a coherent policy strategy to move forward in negotiations with the EU, leaving businesses to suck up ongoing uncertainty and society to remain riven and divided.

Meanwhile, Facebook — whose platform played a key role in distributing referendum messaging — booked revenue of around $40.7BN in 2017 alone, reporting a full year profit of almost $16BN.

Back in May, long-time leave supporter and MEP, Nigel Farage, told CEO Mark Zuckerberg to his face in the European Parliament that without “Facebook and other forms of social media there is no way that Brexit or Trump or the Italian elections could ever possibly have happened”.

The Electoral Commission’s investigation focused on funding and spending, and mainly concerned five payments made to AggregateIQ in June 2016 — payments made for campaign services for the EU Referendum — by the three Brexit campaigns it investigated (the third being Veterans for Britain).

Veterans for Britain’s spending return included a donation of £100,000 that was reported as a cash donation received and accepted on 20 May 2016. But the Commission found this was in fact a payment by Vote Leave to AggregateIQ for services provided to Veterans for Britain in the final days of the EU Referendum campaign. The date was also incorrectly reported: It was actually paid by Vote Leave on 29 June 2016.

Despite the donation to a third Brexit campaign by the official Vote Leave campaign being for services provided by AggregateIQ, which was also simultaneously providing services to Vote Leave, the Commission did not deem it to constitute joint working, writing: “[T]he evidence we have seen does not support the concern that the services were provided to Veterans for Britain as joint working with Vote Leave.”

It was, however, found to constitute an inaccurate donation report — another offense under the UK’s Political Parties, Elections and Referendums Act 2000.

The report details multiple issues with spending returns across the three campaigns. And the Commission has issued a series of fines to the three Brexit campaigns.

It has also referred two individuals — Vote Leave’s David Alan Halsall and BeLeave’s Darren Grimes — to the UK’s Metropolitan Police Service, which has the power to instigate a criminal investigation.

Early last year the Commission decided not to fully investigate Vote Leave’s spending, but it says that by October new information had emerged — suggesting “a pattern of action by Vote Leave” — so it revisited the assessment and reopened an investigation in November.

Its report also makes it clear that Vote Leave failed to co-operate with its investigation — including by failing to produce requested information and documents; by failing to provide representatives for interview; by ignoring deadlines to respond to formal investigation notices; and by objecting to the fact of the investigation, including suggesting it would judicially review the opening of the investigation.

Judging by the Commission’s account, Vote Leave seemingly did everything it could to try to thwart and delay the investigation — which is only reporting now, two years on from the Brexit vote and with mere months of negotiating time left before the end of the formal Article 50 exit notification process.

What’s crystal clear from this report is that following money and data trails takes time and painstaking investigation. Which — given that, y’know, democracy is at stake — heavily bolsters the case for far more stringent regulations and transparency mechanisms, to prevent powerful social media platforms from quietly absorbing politically motivated money and messaging without recognizing any responsibility to disclose the transactions, let alone carry out due diligence on who or what may be funding the political spending.

The political ad transparency measures that Facebook has announced so far come far too late for Brexit — or indeed for the 2016 US presidential election, when its platform carried and amplified Kremlin-funded divisive messaging which reached the eyeballs of hundreds of millions of US voters.

Last week the UK’s information commissioner, Elizabeth Denham, criticized Facebook for transparency and control failures relating to political ads on its platform, and also announced her office’s intention to fine Facebook the maximum possible amount for breaches of UK data protection law relating to the Cambridge Analytica scandal — after it emerged that information on as many as 87 million Facebook users was extracted from its platform and passed to a controversial UK political consultancy without most people’s knowledge or consent.

She also published a series of policy recommendations around digital political campaigning — calling for an “ethical pause” on the use of personal data for political ad targeting, and warning that a troubling lack of transparency about how people’s data is being used risks undermining public trust in democracy.

“Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default,” she warned.

The Cambridge Analytica Facebook scandal is linked to the Brexit referendum via AggregateIQ — which was also a contractor for Cambridge Analytica, and also handled Facebook user information which the former company had improperly obtained, after paying a Cambridge University academic to use a quiz app to harvest people’s data and use it to create psychometric profiles for ad targeting.

The Electoral Commission says it was approached by Facebook during the Brexit campaign spending investigation with “some information about how Aggregate IQ used its services during the EU Referendum campaign”.

We’ve reached out to Facebook for comment on the report and will update this story with any response.

The Commission states that evidence from Facebook indicates that AggregateIQ used “identical target lists for Vote Leave and BeLeave ads”, although at least in one instance the BeLeave ads “were not run”.

It writes:

BeLeave’s ability to procure services from Aggregate IQ only resulted from the actions of Vote Leave, in providing those donations and arranging a separate donor for BeLeave. While BeLeave may have contributed its own design style and input, the services provided by Aggregate IQ to BeLeave used Vote Leave messaging, at the behest of BeLeave’s campaign director. It also appears to have had the benefit of Vote Leave data and/or data it obtained via online resources set up and provided to it by Vote Leave to target and distribute its campaign material. This is shown by evidence from Facebook that Aggregate IQ used identical target lists for Vote Leave and BeLeave ads, although the BeLeave ads were not run.

“We also asked for copies of the adverts Aggregate IQ placed for BeLeave, and for details of the reports he received from Aggregate IQ on their use. Mr Grimes replied to our questions,” it further notes in the report.

At the height of the referendum campaign — at a crucial moment when Vote Leave had reached its official spending limit — officials from the official leave campaign persuaded BeLeave’s only other donor, an individual called Anthony Clake, to allow it to funnel a donation from him directly to Aggregate IQ, whom Vote Leave campaign director Dominic Cummings dubbed a bunch of “social media ninjas”.

The Commission writes:

On 11 June 2016 Mr Cummings wrote to Mr Clake saying that Vote Leave had all the money it could spend, and suggesting the following: “However, there is another organisation that could spend your money. Would you be willing to spend the 100k to some social media ninjas who could usefully spend it on behalf of this organisation? I am very confident it would be well spent in the final crucial 5 days. Obviously it would be entirely legal. (sic)”

Mr Clake asked about this organisation. Mr Cummings replied as follows: “the social media ninjas are based in canada – they are extremely good. You would send your money directly to them. the organisation that would legally register the donation is a permitted participant called BeLeave, a “young people’s organisation”. happy to talk it through on the phone though in principle nothing is required from you but to wire money to a bank account if you’re happy to take my word for it. (sic)

Mr Clake then emailed Mr Grimes to offer a donation to BeLeave. He specified that this donation would be made “via the AIQ account.”

And while the Commission says it found evidence that Grimes and others from BeLeave had “significant input into the look and design of the BeLeave adverts produced by Aggregate IQ”, it also determined that Vote Leave messaging was “influential in their strategy and design” — hence its determination of a common plan between the two campaigns. Aggregate IQ was the vehicle used by Vote Leave to breach its campaign spending cap.

Providing examples of the collaboration it found between the two campaigns, the Commission quotes internal BeLeave correspondence — including an instruction from Grimes to: “Copy and paste lines from Vote Leave’s briefing room in a BeLeave voice”.

It writes:

On 15 June 2016 Mr Grimes told other BeLeave Board members and Aggregate IQ that BeLeave’s ads needed to be: “an effective way of pushing our more liberal and progressive message to an audience which is perhaps not as receptive to Vote Leave’s messaging.”

On 17 June 2016 Mr Grimes told other BeLeave Board members: “So as soon as we can go live. Advertising should be back on tomorrow and normal operating as of Sunday. I’d like to make sure we have loads of scheduled tweets and Facebook status. Post all of those blogs including Shahmirs [aka Shahmir Sanni; who became a BeLeave whistleblower], use favstar to check out and repost our best performing tweets. Copy and paste lines from Vote Leave’s briefing room in a BeLeave voice”