Digital campaigning vs democracy: UK election regulator calls for urgent law changes

A report by the UK’s Electoral Commission has called for urgent changes in the law to increase transparency about how digital tools are being used for political campaigning, warning that an atmosphere of mistrust is threatening the democratic process.

The oversight body, which also regulates campaign spending, has spent the past year examining how digital campaigning was used in the UK’s 2016 EU referendum and 2017 general election — as well as researching public opinion to get voters’ views on digital campaigning issues.

Among the changes the Commission wants to see is greater clarity around election spending to try to prevent foreign entities pouring money into domestic campaigns, and beefed up financial regulations including bigger penalties for breaking election spending rules.

It also has an ongoing investigation into whether pro-Brexit campaigns — including the official Vote Leave campaign — broke spending rules. And last week the BBC reported on a leaked draft of the report suggesting the Commission will find the campaigns broke the law.

Last month the Leave.EU Brexit campaign was also fined £70,000 after a Commission investigation found it had breached electoral law on multiple counts during the referendum.

Given the far larger sums now routinely being spent on elections — another pro-Brexit group, Vote Leave, had a £7M spending limit (though it has also been accused of exceeding that) — it’s clear the Commission needs far larger teeth if it’s to have any hope of enforcing the law.

Digital tools have lowered the barrier of entry for election fiddling, while also helping to ramp up democratic participation.

“On digital campaigning, our starting point is that elections depend on participation, which is why we welcome the positive value of online communications. New ways of reaching voters are good for everyone, and we must be careful not to undermine free speech in our search to protect voters. But we also fully recognise the worries of many, the atmosphere of mistrust which is being created, and the urgent need for action to tackle this,” writes commission chair John Holmes.

“Funding of online campaigning is already covered by the laws on election spending and donations. But the laws need to ensure more clarity about who is spending what, and where and how, and bigger sanctions for those who break the rules.

“This report is therefore a call to action for the UK’s governments and parliaments to change the rules to make it easier for voters to know who is targeting them online, and to make unacceptable behaviour harder. The public opinion research we publish alongside this report demonstrates the level of concern and confusion amongst voters and the will for new action.”

The Commission’s key recommendations are:

  • Each of the UK’s governments and legislatures should change the law so that digital material must have an imprint saying who is behind the campaign and who created it
  • Each of the UK’s governments and legislatures should amend the rules for reporting spending. They should make campaigners sub-divide their spending returns into different types of spending. These categories should give more information about the money spent on digital campaigns
  • Campaigners should be required to provide more detailed and meaningful invoices from their digital suppliers to improve transparency
  • Social media companies should work with us to improve their policies on campaign material and advertising for elections and referendums in the UK
  • UK election and referendum adverts on social media platforms should be labelled to make the source clear. Their online databases of political adverts should follow the UK’s rules for elections and referendums
  • Each of the UK’s governments and legislatures should clarify that spending on election or referendum campaigns by foreign organisations or individuals is not allowed. They would need to consider how it could be enforced and the impact on free speech
  • We will make proposals to campaigners and each of the UK’s governments about how to improve the rules and deadlines for reporting spending. We want information to be available to voters and us more quickly after a campaign, or during
  • Each of the UK’s governments and legislatures should increase the maximum fine we can sanction campaigners for breaking the rules, and strengthen our powers to obtain information outside of an investigation

The recommendations follow revelations by Chris Wylie, the Cambridge Analytica whistleblower (pictured at the top of this post) — who has detailed to journalists and regulators how Facebook users’ personal data was obtained and passed to the now defunct political consultancy for political campaigning activity without people’s knowledge or consent.

In addition to the Cambridge Analytica data misuse scandal, Facebook has also been rocked by earlier revelations of how extensively Kremlin-backed agents used its ad targeting tools to try to sow social division at scale — including targeting the 2016 US presidential election.

The Facebook founder, Mark Zuckerberg, has since been called before US and EU lawmakers to answer questions about how his platform operates and the risks it’s posing to democratic processes.

The company has announced a series of changes intended to make it more difficult for third parties to obtain user data, and to increase transparency around political advertising — adding a requirement for such ads to contain details of who has paid for them, for example, and also offering a searchable archive.

Critics, though, question whether the company is going far enough — asking, for example, how it intends to determine what is and is not a political advert.

Facebook is not offering a searchable archive for all ads on its platform, for example.

Zuckerberg has also been accused of equivocating in the face of lawmakers’ concerns, with politicians on both sides of the Atlantic calling him out for providing evasive, misleading or intentionally obfuscating responses to concerns and questions around how his platform operates.

The Electoral Commission makes a direct call for social media firms to do more to increase transparency around digital political advertising and remove messages which “do not meet the right standards”.

“If this turns out to be insufficient, the UK’s governments and parliaments should be ready to consider direct regulation,” it also warns. 

We’ve reached out to Facebook for comment and will update this post with any response.

A Cabinet Office spokeswoman told us it would send the government response to the Electoral Commission report shortly — so we’ll also update this post when we have that.

The UK’s data protection watchdog, the ICO, has an ongoing investigation into the use of social media data for political campaigning — and commissioner Elizabeth Denham recently made a call for stronger disclosure rules around political ads and a code of conduct for social media firms. The body is expected to publish the results of its long-running investigation shortly.

At the same time, a DCMS committee has been running an inquiry into the impact of fake news and disinformation online, including examining the impact on the political process. Though Zuckerberg has declined its requests for him to personally testify — sending a number of minions in his place, including CTO Mike Schroepfer, who was grilled for around five hours by irate MPs and whose answers still left them dissatisfied.

The committee will set out the results of this inquiry in another report touching on the impact of big tech on democratic processes — likely in the coming months. Committee chair Damian Collins tweeted today to say the inquiry has “also highlighted how out of date our election laws are in a world increasingly dominated by big tech media”.

On its forthcoming Brexit campaign spending report, an Electoral Commission spokesperson told us: “In accordance with its Enforcement Policy, the Electoral Commission has written to Vote Leave, Mr Darren Grimes and Veterans for Britain to advise each campaigner of the outcome of the investigation announced on 20 November 2017. The campaigners have 28 days to make representations before final decisions are taken. The Commission will announce the outcome of the investigation and publish an investigation report once this final decision has been taken.”

AT&T collaborates on NSA spying through a web of secretive buildings in the U.S.

A new report from the Intercept sheds light on the NSA’s close relationship with communications provider AT&T.

The Intercept identified eight facilities across the U.S. that function as hubs for AT&T’s efforts to collaborate with the intelligence agency. The site first identified one potential hub of this kind in 2017 in lower Manhattan.

The report reveals that eight AT&T data facilities in the U.S. are regarded as high value sites by the NSA for giving the agency direct “backbone” access to raw data that passes through them, including emails, web browsing, social media and any other form of unencrypted online activity. The NSA uses the web of eight AT&T hubs for a surveillance operation codenamed FAIRVIEW, a program previously reported by the New York Times. The program, first established in 1985, “involves tapping into international telecommunications cables, routers, and switches” and only coordinates directly with AT&T and not the other major U.S. mobile carriers.

AT&T’s deep involvement with the NSA monitoring program operated under the codename SAGUARO. Messaging, email and other web traffic accessed through the program was made searchable through XKEYSCORE, one of the NSA’s more infamous search-powered surveillance tools.

The Intercept explains how those sites give the NSA access to data beyond just AT&T subscribers:

“The data exchange between AT&T and other networks initially takes place outside AT&T’s control, sources said, at third-party data centers that are owned and operated by companies such as California’s Equinix. But the data is then routed – in whole or in part – through the eight AT&T buildings, where the NSA taps into it. By monitoring what it calls the ‘peering circuits’ at the eight sites, the spy agency can collect ‘not only AT&T’s data, they get all the data that’s interchanged between AT&T’s network and other companies,’ according to Mark Klein, a former AT&T technician who worked with the company for 22 years.”

The NSA describes these locations as “peering link router complex” sites while AT&T calls them “Service Node Routing Complexes” (SNRCs). The eight complexes are spread across the nation’s major cities with locations in Chicago, Dallas, Atlanta, Los Angeles, New York City, San Francisco, Seattle, and Washington, D.C. The Intercept report identifies these facilities:

“Among the pinpointed buildings, there is a nuclear blast-resistant, windowless facility in New York City’s Hell’s Kitchen neighborhood; in Washington, D.C., a fortress-like, concrete structure less than half a mile south of the U.S. Capitol; in Chicago, an earthquake-resistant skyscraper in the West Loop Gate area; in Atlanta, a 429-foot art deco structure in the heart of the city’s downtown district; and in Dallas, a cube-like building with narrow windows and large vents on its exterior, located in the Old East district.

… in downtown Los Angeles, a striking concrete tower near the Walt Disney Concert Hall and the Staples Center, two blocks from the most important internet exchange in the region; in Seattle, a 15-story building with blacked-out windows and reinforced concrete foundations, near the city’s waterfront; and in San Francisco’s South of Market neighborhood, a building where it was previously claimed that the NSA was monitoring internet traffic from a secure room on the sixth floor.”

While these facilities could allow for the monitoring of domestic U.S. traffic, they also process vast quantities of international traffic as it moves across the globe — a fact that likely explains why the NSA would view these AT&T nodes as such high value sites. The original documents, part of the leaked files provided by Edward Snowden, are available in the original report.

Google adds a search feature to account settings to ease use

Google has announced a refresh of the Google Accounts user interface. The changes are intended to make it easier for users to navigate settings and review data the company has associated with an account — including information relating to devices, payment methods, purchases, subscriptions, reservations, contacts and other personal info.

The update also makes security and privacy options more prominent, according to Google.

“To help you better understand and take control of your Google Account, we’ve made all your privacy options easy to review with our new intuitive, user-tested design,” it writes. “You can now more easily find your Activity controls in the Data & Personalization tab and choose what types of activity data are saved in your account to make Google work better for you.

“There, you’ll also find the recently updated Privacy Checkup that helps you review your privacy settings and explains how they shape your experience across Google services.”

Android users will get the refreshed Google Account interface first, with iOS and web coming later this year.

Last September the company also refreshed Google Dashboard — to make it easier to use and better integrate it into other privacy controls.

While in October it outed a revamped Security Checkup feature, offering an overview of account security that includes personalized recommendations. The same month it also launched a free, opt-in program aimed at users who believe their accounts to be at particularly high risk of targeted online attacks.

And in January it announced new ad settings controls, also billed as boosting transparency and control. So settings-related updates have been coming pretty thick and fast from the ad targeting tech giant.

The latest refresh comes at a time when many companies have been rethinking their approach to security and privacy as a result of a major update to the European Union’s data protection framework which applies to entities processing EU people’s data regardless of where that data is being crunched.

Google also announced a raft of changes to its privacy policy as a direct compliance response to GDPR back in May — saying it was making the policy clearer and easier to navigate, and adding more detail and explanations. It also updated user controls at that time, simplifying on/off switches for things like location data collection and web and app activity.

So that legal imperative to increase visibility and user controls at the core of digital empires looks to be generating uplift that’s helping to raise the settings bar across entire product suites. Which is good news for users.

As well as rethinking how Google Account settings are laid out, the updated “experience” adds some new functions intended to make it easier for people to find the settings they’re looking for too.

Notably a new search functionality for locating settings or specific info within an account — such as how to change a password. Which sounds like a really handy addition. There’s also a new dedicated support section offering help with common tasks, and answers from community experts.

And while it’s certainly welcome to see a search expert like Google adding a search feature to help people gain more control over their personal data, you do have to wonder what took it so long to come up with that idea.

Controls are only as useful as they are easy to use, of course. And offering impenetrable and/or bafflingly complex settings has, shamefully, been the historical playbook of the tech industry — as a socially engineered pathway to maximize data gathering via obfuscation (and obtain consent by confusion).

Again, the GDPR makes egregious personal data heists untenable over the long term — at least where the regulation has jurisdiction.

And while built-in opacity around technology system operation is something regulators are really only beginning to get to grips with — and much important work remains to be done to put vital guardrails in place, such as around the use of personal data for political ad targeting, for instance, or to ensure AI blackboxes can’t bake in bias — several major privacy scandals have knocked the sheen off big tech’s algorithmic Pandora’s boxes in recent years. And politicians are leaning into the techlash.

So, much like all these freshly redesigned settings menus, the direction of regulatory travel looks pretty clear — even if the pace of progress is never as disruptive as the technologies themselves.

Blockchain browser Brave starts opt-in testing of on-device ad targeting

Brave, an ad-blocking web browser with a blockchain-based twist, has started trials of ads that reward viewers for watching them — the next step in its ambitious push towards a consent-based, pro-privacy overhaul of online advertising.

Brave’s Basic Attention Token (BAT) is the underlying micropayments mechanism it’s using to fuel the model. The startup was founded in 2015 by former Mozilla CEO Brendan Eich, and had a hugely successful initial coin offering last year.

In a blog post announcing the opt-in trial yesterday, Brave says it’s started “voluntary testing” of the ad model before it scales up to additional user trials.

These first tests involve around 250 “pre-packaged ads” being shown to trial volunteers via a dedicated version of the Brave browser that’s both loaded with the ads and capable of tracking users’ browsing behavior.

The startup signed up Dow Jones Media Group as a partner for the trial-based ad content back in April.

People interested in joining these trials are being asked to contact its Early Access group — via community.brave.com.

Brave says the test is intended to analyze user interactions to generate test data for training its on-device machine learning algorithms. So while its ultimate goal for the BAT platform is to be able to deliver ads without eroding individual users’ privacy via this kind of invasive tracking, the test phase does involve “a detailed log” of browsing activity being sent to it.

Though Brave also specifies: “Brave will not share this information, and users can leave this test at any time by switching off this feature or using a regular version of Brave (which never logs user browsing data to any server).”

“Once we’re satisfied with the performance of the ad system, Brave ads will be shown directly in the browser in a private channel to users who consent to see them. When the Brave ad system becomes widely available, users will receive 70% of the gross ad revenue, while preserving their privacy,” it adds.

The key privacy-by-design shift Brave is working towards is moving ad targeting from a cloud-based ad exchange to the local device where users can control their own interactions with marketing content, and don’t have to give up personal data to a chain of opaque third parties (armed with hooks and data-sucking pipes) in order to do so.

Local device ad targeting will work by Brave pushing out ad catalogs (one per region and natural language) to available devices on a recurring basis.

“Downloading a catalog does not identify any user,” it writes. “As the user browses, Brave locally matches the best available ad from the catalog to display that ad at the appropriate time. Brave ads are opt-in and consent-based (disabled by default), and engineered to operate without leaking the user’s personal data from their device.”
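For illustration, here’s a minimal Python sketch of what that on-device matching step could look like — the catalog schema, tags and interest-profile logic below are our own assumptions rather than Brave’s actual implementation; the point is simply that the only network transfer is the catalog download, while the matching itself happens locally.

```python
from collections import Counter

# Hypothetical catalog entries: region, interest tags and a creative payload.
# A real Brave catalog would use a different schema.
CATALOG = [
    {"id": "ad-001", "region": "en-GB", "tags": {"travel", "flights"}, "creative": "..."},
    {"id": "ad-002", "region": "en-GB", "tags": {"finance", "crypto"}, "creative": "..."},
]

def local_interest_profile(visited_pages):
    """Build an interest profile from on-device browsing history only.
    Nothing in this function leaves the device."""
    counts = Counter()
    for page in visited_pages:
        counts.update(page["tags"])  # tags inferred locally for each page
    return counts

def match_ad(catalog, profile, region):
    """Pick the catalog ad whose tags best overlap the local profile."""
    candidates = [ad for ad in catalog if ad["region"] == region]
    return max(candidates,
               key=lambda ad: sum(profile[t] for t in ad["tags"]),
               default=None)

history = [{"url": "https://example.com/guides/lisbon", "tags": {"travel"}},
           {"url": "https://example.com/cheap-flights", "tags": {"travel", "flights"}}]
best = match_ad(CATALOG, local_interest_profile(history), "en-GB")
print(best["id"])  # ad-001
```

The design choice being sold here is that the server only ever learns that a device fetched a regional catalog — not which pages were visited or which ad was ultimately shown.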

It couches this approach as “a more efficient and direct opportunity to access user attention without the inherent liabilities and risks involved with large scale user data collection”.

Though there’s still a ways to go before Brave is in a position to prove out its claims — including several more testing phases.

Brave says it’s planning to run further studies later this month with a larger set of users that will focus on improving its user modeling — “to integrate specific usage of the browser, with the primary goal of understanding how behavior in the browser impacts when to deliver ads”.

“This will serve to strengthen existing modeling and data classification engines and to refine the system’s machine learning,” it adds.

After that it says it will start to expand user trials — “in a few months” — focusing testing on the impact of rewards in its user-centric ad system.

“Thousands of ads will be used in this phase, and users will be able to earn tokens for viewing and interacting with ads,” it says of that.

Brave’s initial goal is for users to be able to reward content producers via the utility BAT token stored in a payment wallet baked into the browser. The default distributes the tokens stored in a user’s wallet based on time spent on Brave-verified websites (though users can also make manual tips).
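As a rough sketch of that default, the split is essentially pro-rata by attention time — something like the toy Python below, with made-up site names and token amounts (Brave’s real accounting also handles minimum visit thresholds and the like, which this glosses over).

```python
def distribute_bat(wallet_balance, attention_seconds):
    """Split a wallet balance across verified sites in proportion to
    the time spent on each (illustrative only)."""
    total = sum(attention_seconds.values())
    if total == 0:
        return {}
    return {site: round(wallet_balance * secs / total, 4)
            for site, secs in attention_seconds.items()}

print(distribute_bat(10.0, {"example-news.org": 1200,
                            "example-blog.net": 600,
                            "example-video.tv": 200}))
# {'example-news.org': 6.0, 'example-blog.net': 3.0, 'example-video.tv': 1.0}
```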

Though payments using BAT may also ultimately be able to do more.

Its roadmap envisages real ad revenue and donation flow fee revenue being generated via its system this year, and also anticipates BAT integration into “other apps based on open source & specs for greater ad buying leverage and publisher onboarding”.

Keepsafe launches a privacy-focused mobile browser

Keepsafe, the company behind the private photo app of the same name, is expanding its product lineup today with the release of a mobile web browser.

Co-founder and CEO Zouhair Belkoura argued that all of Keepsafe’s products (which also include a VPN app and a private phone number generator) are united not just by a focus on privacy, but by a determination to make those features simple and easy-to-understand — in contrast to what Belkoura described as “how security is designed in techland,” with lots of jargon and complicated settings.

Plus, when it comes to your online activity, Belkoura said there are different levels of privacy. There’s the question of the government and large tech companies accessing our personal data, which he argued people care about intellectually, but “they don’t really care about it emotionally.”

Then there’s “the nosy neighbor problem,” which Belkoura suggested is something people feel more strongly about: “A billion people are using Gmail and it’s scanning all their email [for advertising], but if I were to walk up to you and say, ‘Hey, can I read your email?’ you’d be like, ‘No, that’s kind of weird, go away.’ ”

It looks like Keepsafe is trying to tackle both kinds of privacy with its browser. For one thing, you can lock the browser with a PIN (it also supports Touch ID, Face ID and Android Fingerprint).

Keepsafe browser tabs

Then once you’re actually browsing, you can either do it in normal tabs, where social, advertising and analytics trackers are blocked (you can toggle which kinds of trackers are affected), but cookies and caching are still allowed — so you stay logged in to websites, and other session data is retained. But if you want an additional layer of privacy, you can open a private tab, where everything gets forgotten as soon as you close it.
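Mechanically, that sort of selective blocking typically boils down to checking each outgoing request’s host against per-category tracker lists while leaving cookie storage untouched — roughly like the Python sketch below, where the domain lists and toggle names are invented for illustration and aren’t Keepsafe’s actual lists.

```python
from urllib.parse import urlparse

# Invented, abbreviated blocklists; real browsers ship much larger curated lists.
TRACKER_DOMAINS = {
    "social":      {"social-tracker.example"},
    "advertising": {"ads.example", "adnet.example"},
    "analytics":   {"metrics.example"},
}

def should_block(request_url, toggles):
    """Block a request if its host appears in any enabled tracker category.
    Cookies and caching are untouched, so logins survive."""
    host = urlparse(request_url).hostname or ""
    return any(host in TRACKER_DOMAINS[cat]
               for cat, enabled in toggles.items() if enabled)

toggles = {"social": True, "advertising": True, "analytics": False}
print(should_block("https://ads.example/pixel.gif", toggles))   # True
print(should_block("https://metrics.example/beacon", toggles))  # False
```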

While you can get some of these protections just by turning on private/incognito mode in a regular browser, Belkoura said there’s a clarity for consumers when an app is designed specifically for privacy, and the app is part of a broader suite of privacy-focused products. In addition, he said he’s hoping to build meaningful integrations between the different Keepsafe products.

Keepsafe Browser is available for free on iOS and Android.

When asked about monetization, Belkoura said, “I don’t think that the private browser per se is a good place to directly monetize … I’m more interested in saying this is part of the Keepsafe suite and there are other parts of the Keepsafe Suite that we’ll charge you money for.”

Verizon stops selling customer location to two data brokers after one is caught leaking it

Verizon is cutting off access to its mobile customers’ real-time locations to two third-party data brokers “to prevent misuse of that information going forward.” The company announced the decision in a letter sent to Senator Ron Wyden (D-OR), who along with others helped reveal improper usage and poor security at these location brokers. It is not, however, getting out of the location-sharing business altogether.

Verizon sold bulk access to its customers’ locations to the brokers in question, LocationSmart and Zumigo, which then turned around and resold that data to dozens of other companies. This isn’t necessarily bad — there are tons of times when location is necessary to provide a service the customer asks for, and supposedly that customer would have to okay the sharing of that data. (Disclosure: Verizon owns Oath, which owns TechCrunch. This does not affect our coverage.)

That doesn’t seem to have been the case at LocationSmart customer Securus, which was selling its data directly to law enforcement so they could find mobile customers quickly and without all that fuss about paperwork and warrants. And then it was found that LocationSmart had exposed an API that allowed anyone to request mobile locations freely and anonymously, and without collecting consent.

When these facts were revealed by security researchers and Sen. Wyden, Verizon immediately looked into it, the company reported in a letter sent to the Senator.

“We conducted a comprehensive review of our location aggregator program,” wrote Verizon CTO Karen Zacharia. “As a result of this review, we are initiating a process to terminate our existing agreements for the location aggregator program.”

“We will not enter into new location aggregation arrangements unless and until we are comfortable that we can adequately protect our customers’ location data through technological advancements and/or other practices,” she wrote later in the letter. In other words, the program is on ice until it can be secured.

Although Verizon claims to have “girded” the system with “mechanisms designed to protect against misuse of our customers’ location data,” the abuses in question clearly slipped through the cracks. Perhaps most notable is the simple fact that Verizon itself does not seem to need to be informed whether a customer has consented to having their location polled. That consent collection is the responsibility of “the aggregator or corporate customer.”

In other words, Verizon doesn’t need to ask the customer, and the company it sells the data to wholesale doesn’t need to ask the customer — the requirement devolves to the company buying access from the wholesaler. In Securus’s case, it had abstracted things one step further, allowing law enforcement full access when it said it had authority to do so, but apparently without checking, AT&T wrote in its own letter to Wyden.

And there were 75 other corporate customers. Don’t worry, someone is keeping track of them. Right?

These processes are audited, Verizon wrote, but apparently not an audit that finds things like the abuse by Securus or a poorly secured API. Perhaps how this happened is among the “number of internal questions” raised by the review.

When asked for comment, a Verizon representative offered the following statement:

When these issues were brought to our attention, we took immediate steps to stop it.  Customer privacy and security remain a top priority for our customers and our company. We stand-by that commitment to our customers.

And indeed while the program itself appears to have been run with a laxity that should be alarming to all those customers for whom Verizon claims to be so concerned, some of the company’s competitors have yet to take similar action. AT&T, T-Mobile, and Sprint were also named by LocationSmart as partners. Their own letters to Wyden stressed that their systems were similar to the others, with similar safeguards (that were similarly eluded).

Sen. Wyden called on the others to step up in a press release announcing that his pressure on Verizon had borne fruit:

Verizon deserves credit for taking quick action to protect its customers’ privacy and security. After my investigation and follow-up reports revealed that middlemen are selling Americans’ location to the highest bidder without their consent, or making it available on insecure web portals, Verizon did the responsible thing and promptly announced it was cutting these companies off. In contrast, AT&T, T-Mobile, and Sprint seem content to continuing to sell their customers’ private information to these shady middle men, Americans’ privacy be damned.

AT&T actually announced that it is ending its agreements as well, after Wyden’s call to action was published.

The FCC, meanwhile, has announced that it is looking into the issue — with the considerable handicap that Chairman Ajit Pai represented Securus back in 2012 when he was working as a lawyer. Wyden has called on him to recuse himself, but that has yet to happen.

I’ve asked Verizon for further clarification on its arrangements and plans, specifically whether it has any other location-sharing agreements in place with other companies. These aren’t, after all, the only players in the game.

UK report warns DeepMind Health could gain ‘excessive monopoly power’

DeepMind’s foray into digital health services continues to raise concerns. The latest worries are voiced by a panel of external reviewers appointed by the Google-owned AI company to report on its operations after its initial data-sharing arrangements with the UK’s National Health Service (NHS) ran into a major public controversy in 2016.

The DeepMind Health Independent Reviewers’ 2018 report flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to “exert excessive monopoly power” as a result of the data access and streaming infrastructure that’s bundled with provision of the Streams app — and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in future, want to offer their own digital assistance solutions to the Trust.

While the underlying FHIR (aka, fast healthcare interoperability resource) deployed by DeepMind for Streams uses an open API, the contract between the company and the Royal Free Trust funnels connections via DeepMind’s own servers, and prohibits connections to other FHIR servers. A commercial structure that seemingly works against the openness and interoperability DeepMind’s co-founder Mustafa Suleyman has claimed to support.

“There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position,” the reviewers write.

Though they point to DeepMind’s “stated commitment to interoperability of systems”, and “their adoption of the FHIR open API” as positive indications, writing: “This means that there is potential for many other SMEs to become involved, creating a diverse and innovative marketplace which works to the benefit of consumers, innovation and the economy.”

“We also note DeepMind Health’s intention to implement many of the features of Streams as modules which could be easily swapped, meaning that they will have to rely on being the best to stay in business,” they add. 

However stated intentions and future potentials are clearly not the same as on-the-ground reality. And, as it stands, a technically interoperable app delivery infrastructure is being encumbered by prohibitive clauses in a commercial contract — and by a lack of regulatory pushback against such behavior.
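For context on what’s technically at stake: FHIR is a plain REST standard, so in principle any authorised client the Trust chose could pull the same records with a simple HTTP call, along the lines of the hedged Python sketch below (the base URL and patient ID are placeholders, not the Royal Free’s actual endpoints). It’s the contractual routing through DeepMind’s servers, not the technology, that forecloses that.

```python
import requests

# Any standards-compliant FHIR server exposes resources over plain REST.
# The base URL below is a placeholder, not one of the Royal Free's endpoints.
FHIR_BASE = "https://fhir.example-trust.nhs.uk"

def get_observations(patient_id, loinc_code):
    """Fetch Observation resources (e.g. serum creatinine, LOINC 2160-0,
    which feeds the NHS AKI algorithm) for one patient."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": loinc_code},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # a FHIR Bundle containing the matching Observations

# e.g. bundle = get_observations("example-patient-id", "2160-0")
```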

The reviewers also raise concerns about an ongoing lack of clarity around DeepMind Health’s business model — writing: “Given the current environment, and with no clarity about DeepMind Health’s business model, people are likely to suspect that there must be an undisclosed profit motive or a hidden agenda. We do not believe this to be the case, but would urge DeepMind Health to be transparent about their business model, and their ability to stick to that without being overridden by Alphabet. For once an idea of hidden agendas is fixed in people’s mind, it is hard to shift, no matter how much a company is motivated by the public good.”

“We have had detailed conversations about DeepMind Health’s evolving thoughts in this area, and are aware that some of these questions have not yet been finalised. However, we would urge DeepMind Health to set out publicly what they are proposing,” they add.

DeepMind has suggested it wants to build healthcare AIs that are capable of charging by results. But Streams does not involve any AI. The service is also being provided to NHS Trusts for free, at least for the first five years — raising the question of how exactly the Google-owned company intends to recoup its investment.

Google of course monetizes a large suite of free-at-the-point-of-use consumer products — such as the Android mobile operating system; its cloud email service Gmail; and the YouTube video sharing platform, to name three — by harvesting people’s personal data and using that information to inform its ad targeting platforms.

Hence the reviewers’ recommendation for DeepMind to set out its thinking on business model to avoid its intentions vis-a-vis people’s medical data being viewed with suspicion.

The company’s historical modus operandi also underlines the potential monopoly risks if DeepMind is allowed to carve out a dominant platform position in digital healthcare provision — given how effectively its parent has been able to turn a free-for-OEMs mobile OS (Android) into global smartphone market OS dominance, for example.

So, while DeepMind only has a handful of contracts with NHS Trusts for the Streams app and delivery infrastructure at this stage, the reviewers’ concerns over the risk of the company gaining “excessive monopoly power” do not seem overblown.

They are also worried about DeepMind’s ongoing vagueness about how exactly it works with its parent Alphabet, and what data could ever be transferred to the ad giant — an inevitably queasy combination when stacked against DeepMind’s handling of people’s medical records.

“To what extent can DeepMind Health insulate itself against Alphabet instructing them in the future to do something which it has promised not to do today? Or, if DeepMind Health’s current management were to leave DeepMind Health, how much could a new CEO alter what has been agreed today?” they write.

“We appreciate that DeepMind Health would continue to be bound by the legal and regulatory framework, but much of our attention is on the steps that DeepMind Health have taken to take a more ethical stance than the law requires; could this all be ended? We encourage DeepMind Health to look at ways of entrenching its separation from Alphabet and DeepMind more robustly, so that it can have enduring force to the commitments it makes.”

Responding to the report’s publication on its website, DeepMind writes that it’s “developing our longer-term business model and roadmap”.

“Rather than charging for the early stages of our work, our first priority has been to prove that our technologies can help improve patient care and reduce costs. We believe that our business model should flow from the positive impact we create, and will continue to explore outcomes-based elements so that costs are at least in part related to the benefits we deliver,” it continues.

So it has nothing to say to defuse the reviewers’ concerns about making its intentions for monetizing health data plain — beyond deploying a few choice PR soundbites.

On its links with Alphabet, DeepMind also has little to say, writing only that: “We will explore further ways to ensure there is clarity about the binding legal frameworks that govern all our NHS partnerships.”

“Trusts remain in full control of the data at all times,” it adds. “We are legally and contractually bound to only using patient data under the instructions of our partners. We will continue to make our legal agreements with Trusts publicly available to allow scrutiny of this important point.”

“There is nothing in our legal agreements with our partners that prevents them from working with any other data processor, should they wish to seek the services of another provider,” it also claims in response to additional questions we put to it.

“We hope that Streams can help unlock the next wave of innovation in the NHS. The infrastructure that powers Streams is built on state-of-the-art open and interoperable standards, known as FHIR. The FHIR standard is supported in the UK by NHS Digital, NHS England and the INTEROPen group. This should allow our partner trusts to work more easily with other developers, helping them bring many more new innovations to the clinical frontlines,” it adds in additional comments to us.

“Under our contractual agreements with relevant partner trusts, we have committed to building FHIR API infrastructure within the five year terms of the agreements.”

Asked about the progress it’s made on a technical audit infrastructure for verifying access to health data, which it announced last year, it reiterated the wording on its blog, saying: “We will remain vigilant about setting the highest possible standards of information governance. At the beginning of this year, we appointed a full time Information Governance Manager to oversee our use of data in all areas of our work. We are also continuing to build our Verifiable Data Audit and other tools to clearly show how we’re using data.”

So developments on that front look as slow as we expected.

The Google-owned UK AI company began its push into digital healthcare services in 2015, quietly signing an information-sharing arrangement with a London-based NHS Trust that gave it access to around 1.6 million people’s medical records for developing an alerts app for a condition called Acute Kidney Injury.

It also inked an MoU with the Trust where the pair set out their ambition to apply AI to NHS data-sets. (They even went so far as to get ethical sign-off for an AI project — but have consistently claimed the Royal Free data was not fed to any AIs.)

However the data-sharing collaboration ran into trouble in May 2016 when the scope of patient data being shared by the Royal Free with DeepMind was revealed (via investigative journalism, rather than by disclosures from the Trust or DeepMind).

None of the ~1.6M people whose non-anonymized medical records had been passed to the Google-owned company had been informed or asked for their consent. And questions were raised about the legal basis for the data-sharing arrangement.

Last summer the UK’s privacy regulator concluded an investigation of the project — finding that the Royal Free NHS Trust had broken data protection rules during the app’s development.

Yet despite ethical questions and regulatory disquiet about the legality of the data-sharing, the Streams project steamrollered on. And the Royal Free Trust went on to implement the app for use by clinicians in its hospitals, while DeepMind has also signed several additional contracts to deploy Streams to other NHS Trusts.

More recently, the law firm Linklaters completed an audit of the Royal Free Streams project, after being commissioned by the Trust as part of its settlement with the ICO. Though this audit only examined the current functioning of Streams. (There has been no historical audit of the lawfulness of people’s medical records being shared during the build and test phase of the project.)

Linklaters did recommend that the Royal Free terminate its wider MoU with DeepMind — and the Trust has confirmed to us that it will be following the firm’s advice.

“The audit recommends we terminate the historic memorandum of understanding with DeepMind which was signed in January 2016. The MOU is no longer relevant to the partnership and we are in the process of terminating it,” a Royal Free spokesperson told us.

So DeepMind, probably the world’s most famous AI company, is in the curious position of being involved in providing digital healthcare services to UK hospitals that don’t actually involve any AI at all. (Though it does have some ongoing AI research projects with NHS Trusts too.)

In mid 2016, at the height of the Royal Free DeepMind data scandal — and in a bid to foster greater public trust — the company appointed the panel of external reviewers who have now produced their second report looking at how the division is operating.

And it’s fair to say that much has happened in the tech industry since the panel was appointed to further undermine public trust in tech platforms and algorithmic promises. Including the ICO’s finding that the initial data-sharing arrangement between the Royal Free and DeepMind broke UK privacy laws.

The eight members of the panel for the 2018 report are: Martin Bromiley OBE; Elisabeth Buggins CBE; Eileen Burbidge MBE; Richard Horton; Dr Julian Huppert; Professor Donal O’Donoghue; Matthew Taylor; and Professor Sir John Tooke.

In their latest report the external reviewers warn that the public’s view of tech giants has “shifted substantially” vs where it was even a year ago — asserting that “issues of privacy in a digital age are, if anything, of greater concern”.

At the same time politicians are also gazing rather more critically on the works and social impacts of tech giants.

Although the UK government has also been keen to position itself as a supporter of AI, providing public funds for the sector and, in its Industrial Strategy white paper, identifying AI and data as one of four so-called ‘Grand Challenges’ where it believes the UK can “lead the world for years to come” — including specifically name-checking DeepMind as one of a handful of leading-edge homegrown AI businesses for the country to be proud of.

Still, questions over how to manage and regulate public sector data and AI deployments — especially in highly sensitive areas such as healthcare — remain to be clearly addressed by the government.

Meanwhile, the encroaching ingress of digital technologies into the healthcare space — even when those technologies don’t involve any AI — is already presenting major challenges by putting pressure on existing information governance rules and structures, and raising the specter of monopolistic risk.

Asked whether it offers any guidance to NHS Trusts around digital assistance for clinicians, including specifically whether it requires multiple options be offered by different providers, the NHS’ digital services provider, NHS Digital, referred our question on to the Department of Health (DoH), saying it’s a matter of health policy.

The DoH in turn referred the question to NHS England, the executive non-departmental body which commissions contracts and sets priorities and directions for the health service in England.

And at the time of writing we’re still waiting for a response from the steering body.

Ultimately it looks like it will be up to the health service to put in place a clear and robust structure for AI and digital decision services that fosters competition by design by baking in a requirement for Trusts to support multiple independent options when procuring apps and services.

Without that important check and balance, the risk is that platform dynamics will quickly dominate and control the emergent digital health assistance space — just as big tech has dominated consumer tech.

But publicly funded healthcare decisions and data sets should not simply be handed to the single market-dominating entity that’s willing and able to burn the most resource to own the space.

Nor should government stand by and do nothing when there’s a clear risk that a vital area of digital innovation is at risk of being closed down by a tech giant muscling in and positioning itself as a gatekeeper before others have had a chance to show what their ideas are made of, and before even a market has had the chance to form. 

Purdue’s PHADE technology lets cameras ‘talk’ to you

It’s become almost second nature to accept that cameras everywhere — from streets to museums and shops — are watching you, but now they may be able to communicate with you, as well. New technology from Purdue University computer science researchers, detailed in a paper published today, makes this dystopian prospect a reality. But, they argue, it’s safer than you might think.

The system is called PHADE, and it allows for something called “private human addressing,” where camera systems and individual cell phones can communicate without transmitting any personal data, such as an IP or MAC address. Instead, the technology relies on motion patterns for the address code. That way, even if a hacker intercepts it, they won’t be able to access the person’s physical location.

Imagine you’re strolling through a museum and an unfamiliar painting catches your eye. The docents are busy with a tour group far across the gallery and you didn’t pay extra for the clunky recorder and headphones for an audio tour. While pondering the brushwork you feel your phone buzz, and suddenly a detailed description of the artwork and its painter is in the palm of your hand.

To achieve this effect, researchers use an approach similar to the kind of directional audio experience you might find at theme parks. Through processing the live video data, the technology is able to identify the individual motion patterns of pedestrians and when they are within a pertinent range — say, in front of a painting. From there they can broadcast a packet of information linked to the motion address of the pedestrian. When the user’s phone identifies that the motion address matches their own, the message is received.
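PHADE’s actual encoding is more sophisticated, but the core idea can be sketched in a few lines of Python: the camera derives an address from the motion trace it observes and broadcasts it alongside the payload, and a phone only accepts the message if its own sensor-derived trace produces a matching signature. The feature extraction and threshold below are illustrative assumptions, not the paper’s algorithm.

```python
import math

def motion_signature(trace):
    """Reduce a motion trace (list of (dx, dy) steps) to a coarse feature
    vector; a stand-in for PHADE's real motion-address encoding."""
    speeds = [math.hypot(dx, dy) for dx, dy in trace]
    turns = [math.atan2(dy, dx) for dx, dy in trace]
    n = len(trace)
    return (sum(speeds) / n, max(speeds), sum(turns) / n)

def similarity(sig_a, sig_b):
    """Negative Euclidean distance between two signatures (higher = closer)."""
    return -math.dist(sig_a, sig_b)

# Camera side: derive an address from the pedestrian it sees and broadcast.
camera_trace = [(1.0, 0.1), (1.1, 0.0), (0.9, 0.2), (1.0, 0.1)]
broadcast = {"address": motion_signature(camera_trace),
             "payload": "Artwork 17: oil on canvas, 1889"}

# Phone side: compare its own accelerometer-derived trace; accept only on match.
phone_trace = [(1.05, 0.08), (1.08, 0.02), (0.92, 0.18), (0.98, 0.12)]
THRESHOLD = -0.5  # illustrative tolerance
if similarity(broadcast["address"], motion_signature(phone_trace)) > THRESHOLD:
    print(broadcast["payload"])
```

Because the broadcast carries only the motion-derived address and the message, anyone else who overhears it learns neither a device identifier nor where its owner is standing.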

While this tech can be used to better inform the casual museum-goer, the researchers also believe it has a role in protecting pedestrians from crime in their area.

“Our system serves as a bridge to connect surveillance cameras and people,” He Wang, a co-creator of the technology and assistant professor of computer science, said in a statement. “[It can] be used by government agencies to enhance public safety [by deploying] cameras in high-crime or high-accident areas and warn[ing] specific users about potential threats, such as suspicious followers.”

While the benefits of an increasingly interconnected world are still being debated and critiqued daily, there might just be an upside to knowing a camera’s got its eye on you.

Audit of NHS Trust’s app project with DeepMind raises more questions than it answers

A third party audit of a controversial patient data-sharing arrangement between a London NHS Trust and Google DeepMind appears to have skirted over the core issues that generated the controversy in the first place.

The audit (full report here) — conducted by law firm Linklaters — of the Royal Free NHS Foundation Trust’s acute kidney injury detection app system, Streams, which was co-developed with Google-DeepMind (using an existing NHS algorithm for early detection of the condition), does not examine the problematic 2015 information-sharing agreement inked between the pair which allowed data to start flowing.

“This Report contains an assessment of the data protection and confidentiality issues associated with the data protection arrangements between the Royal Free and DeepMind. It is limited to the current use of Streams, and any further development, functional testing or clinical testing, that is either planned or in progress. It is not a historical review,” writes Linklaters, adding that: “It includes consideration as to whether the transparency, fair processing, proportionality and information sharing concerns outlined in the Undertakings are being met.”

Yet it was the original 2015 contract that triggered the controversy, after it was obtained and published by New Scientist, with the wide-ranging document raising questions over the broad scope of the data transfer and the legal bases for patients’ information to be shared — and leading to questions over whether regulatory processes intended to safeguard patients and patient data had been sidelined by the two main parties involved in the project.

In November 2016 the pair scrapped and replaced the initial five-year contract with a different one — which put in place additional information governance steps.

They also went on to roll out the Streams app for use on patients in multiple NHS hospitals — despite the UK’s data protection regulator, the ICO, having instigated an investigation into the original data-sharing arrangement.

And just over a year ago the ICO concluded that the Royal Free NHS Foundation Trust had failed to comply with Data Protection Law in its dealings with Google’s DeepMind.

The audit of the Streams project was a requirement of the ICO.

Though, notably, the regulator has not endorsed Linklaters’ report. On the contrary, it warns that it’s seeking legal advice and could take further action.

In a statement on its website, the ICO’s deputy commissioner for policy, Steve Wood, writes: “We cannot endorse a report from a third party audit but we have provided feedback to the Royal Free. We also reserve our position in relation to their position on medical confidentiality and the equitable duty of confidence. We are seeking legal advice on this issue and may require further action.”

In a section of the report listing exclusions, Linklaters confirms the audit does not consider: “The data protection and confidentiality issues associated with the processing of personal data about the clinicians at the Royal Free using the Streams App.”

So essentially the core controversy, related to the legal basis for the Royal Free to pass personally identifiable information on 1.6M patients to DeepMind when the app was being developed, and without people’s knowledge or consent, is going unaddressed here.

And Wood’s statement pointedly reiterates that the ICO’s investigation “found a number of shortcomings in the way patient records were shared for this trial”.

“[P]art of the undertaking committed Royal Free to commission a third party audit. They have now done this and shared the results with the ICO. What’s important now is that they use the findings to address the compliance issues addressed in the audit swiftly and robustly. We’ll be continuing to liaise with them in the coming months to ensure this is happening,” he adds.

“It’s important that other NHS Trusts considering using similar new technologies pay regard to the recommendations we gave to Royal Free, and ensure data protection risks are fully addressed using a Data Protection Impact Assessment before deployment.”

While the report is something of a frustration, given the glaring historical omissions, it does raise some points of interest — including suggesting that the Royal Free should probably scrap a Memorandum of Understanding it also inked with DeepMind, in which the pair set out their ambition to apply AI to NHS data.

This is recommended because the pair have apparently abandoned their AI research plans.

On this Linklaters writes: “DeepMind has informed us that they have abandoned their potential research project into the use of AI to develop better algorithms, and their processing is limited to execution of the NHS AKI algorithm… In addition, the majority of the provisions in the Memorandum of Understanding are non-binding. The limited provisions that are binding are superseded by the Services Agreement and the Information Processing Agreement discussed above, hence we think the Memorandum of Understanding has very limited relevance to Streams. We recommend that the Royal Free considers if the Memorandum of Understanding continues to be relevant to its relationship with DeepMind and, if it is not relevant, terminates that agreement.”

In another section, discussing the NHS algorithm that underpins the Streams app, the law firm also points out that DeepMind’s role in the project is little more than helping provide a glorified app wrapper (on the app design front the project also utilized UK app studio, ustwo, so DeepMind can’t claim app design credit either).

“Without intending any disrespect to DeepMind, we do not think the concepts underpinning Streams are particularly ground-breaking. It does not, by any measure, involve artificial intelligence or machine learning or other advanced technology. The benefits of the Streams App instead come from a very well-designed and user-friendly interface, backed up by solid infrastructure and data management that provides AKI alerts and contextual clinical information in a reliable, timely and secure manner,” Linklaters writes.

What DeepMind did bring to the project, and to its other NHS collaborations, is money and resources — providing its development resources free for the NHS at the point of use, and stating (when asked about its business model) that it would determine how much to charge the NHS for these app ‘innovations’ later.

Yet the commercial services the tech giant is providing to what are public sector organizations do not appear to have been put out to open tender.

Also notably excluded from the Linklaters audit: any scrutiny of the project vis-a-vis competition law or public procurement law (i.e. whether the arrangements complied with procurement rules), and any concerns relating to possible anticompetitive behavior.

The report does highlight one potentially problematic data retention issue for the current deployment of Streams, saying there is “currently no retention period for patient information on Streams” — meaning there is no process for deleting a patient’s medical history once it reaches a certain age.

“This means the information on Streams currently dates back eight years,” it notes, suggesting the Royal Free should probably set an upper age limit on the age of information contained in the system.
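The fix being suggested is mundane in data-management terms: pick a cutoff and periodically purge anything older, roughly as in the sketch below (the five-year retention window and record shape are assumptions for illustration — the audit doesn’t specify a figure).

```python
from datetime import datetime, timedelta, timezone

# Illustrative cutoff only; the audit recommends an upper age limit but names no figure.
RETENTION = timedelta(days=5 * 365)

def purge_old_records(records, now=None):
    """Return only the records still inside the retention window.
    Each record is assumed to carry a timezone-aware 'recorded_at' datetime."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["recorded_at"] <= RETENTION]

records = [
    {"patient": "A", "recorded_at": datetime(2011, 3, 1, tzinfo=timezone.utc)},
    {"patient": "B", "recorded_at": datetime(2017, 6, 1, tzinfo=timezone.utc)},
]
print(len(purge_old_records(records, now=datetime(2018, 6, 1, tzinfo=timezone.utc))))  # 1
```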

While Linklaters largely glosses over the chequered origins of the Streams project, the law firm does make a point of agreeing with the ICO that the original privacy impact assessment for the project “should have been completed in a more timely manner”.

The law firm also describes that original assessment as “relatively thin given the scale of the project”.

Giving its response to the audit, health data privacy advocacy group MedConfidential — an early critic of the DeepMind data-sharing arrangement — is roundly unimpressed, writing: “The biggest question raised by the Information Commissioner and the National Data Guardian appears to be missing — instead, the report excludes a ‘historical review of issues arising prior to the date of our appointment’.

“The report claims the ‘vital interests’ (i.e. remaining alive) of patients is justification to protect against an “event [that] might only occur in the future or not occur at all”… The only ‘vital interest’ protected here is Google’s, and its desire to hoard medical records it was told were unlawfully collected. The vital interests of a hypothetical patient are not vital interests of an actual data subject (and the GDPR tests are demonstrably unmet).

“The ICO and NDG asked the Royal Free to justify the collection of 1.6 million patient records, and this legal opinion explicitly provides no answer to that question.”

UK watchdog issues $330k fine for Yahoo’s 2014 data breach

Another fallout from the massive Yahoo data breach that dates back to 2014: The UK’s data watchdog has just issued a £250,000 (~$334k) penalty for violations of the Data Protection Act 1998.

Yahoo, which has since been acquired by Verizon and merged with AOL to form a joint entity called Oath (which is also the parent of TechCrunch), is arguably getting off pretty lightly here for a breach that impacted a whopping ~500M users.

Certainly given how large data protection fines can now scale under the European Union’s new privacy framework, GDPR, which also requires that most breaches be disclosed within 72 hours of discovery (rather than, ooooh, two years or so later in the Yahoo case… ).

The Information Commissioner’s Office (ICO) focused its investigation on the more than 515,000 affected UK accounts which the London-based Yahoo UK Services Ltd had responsibility for as a data controller.

And it found a catalogue of failures — specifically that Yahoo UK Services had: failed to take appropriate technical and organisational measures to protect the data against exfiltration by unauthorised persons; failed to take appropriate measures to ensure that its data processor — Yahoo! Inc — complied with the appropriate data protection standards; failed to ensure appropriate monitoring was in place to protect the credentials of Yahoo! employees with access to Yahoo! customer data; and that the inadequacies found had been in place for “a long period of time without being discovered or addressed”.

Commenting in a statement, the ICO deputy commissioner of operations, James Dipple-Johnstone, said: “People expect that organisations will keep their personal data safe from malicious intruders who seek to exploit it. The failings our investigation identified are not what we expect from a company that had ample opportunity to implement appropriate measures, and potentially stop UK citizens’ data being compromised.”

According to the ICO, personal data compromised in the breach included names, email addresses, telephone numbers, dates of birth, hashed passwords, and encrypted or unencrypted security questions and answers.

It considered the breach to be a “serious contravention of Principle 7 of the Data Protection Act 1998” — which states that appropriate technical and organisational measures must be taken against unauthorised or unlawful processing of personal data.

Happily for Oath, GDPR does not apply retroactively, so the breach falls under the UK’s prior domestic regime — which only allows for maximum penalties of £500k.

And given Verizon was able to knock $350M off the acquisition price of Yahoo on account of a pair of massive data breaches, well, it’s not going to be too concerned with the regulatory sting here.

Reputation-wise it’s perhaps another matter. Though, again, Yahoo had disclosed the breaches before the acquisition closed, so any damage had already been publicly attached to Yahoo.

An Oath spokesman told us the company does not comment directly on regulatory actions — but pointed to several developments since Yahoo was acquired, including the doubling in size of the global security organization; the creation in March of a cybersecurity advisory board; and the relaunch in April of an integrated bug bounty program.

Also, as we reported last year, Yahoo’s chief information security officer, Bob Lord — who was in charge at the time the breach was unearthed — lost out to AOL’s Chris Nims in the merger process, with the latter taking up the security chief’s chair of the new umbrella entity, Oath.

Security is certainly now being generally pushed up the C-suite agenda for all organizations handling EU data as a consequence of GDPR concentrating minds on much more sizable legal liabilities.

The regulation’s data protection by design requirements also mean privacy considerations need to be baked into the data processing lifecycle, ergo policies and processes must be in place, alongside strong IT governance and security measures, to ensure compliance with the law — with the idea being to shrink the ability for attackers to intrude as happened so extensively in the Yahoo breaches.

“Under the GDPR and the new Data Protection Act 2018, individuals have stronger rights and more control and choice over their personal data. If organisations, especially well-resourced, experienced ones, do not properly safeguard their customers’ personal data, they may find customers taking their business elsewhere,” added Dipple-Johnstone.

Earlier this year the ICO issued a larger fine for a 2015 hack of Carphone Warehouse which compromised data of more than 3M people, and also included historical payment card details for a subset of the affected users.