Decrypted: Contact-tracing privacy, Zoom buys Keybase, Microsoft eyes CyberX

As the world looks to reopen after weeks of lockdown, governments are turning to contact tracing to understand the spread of the deadly coronavirus.

Most nations are leaning toward privacy-focused apps that use Bluetooth signals to create an anonymous profile of where a person has been and when. Some, like Israel, are bucking the trend and are using location and cell phone data to track the spread, prompting privacy concerns.

Some of the biggest European economies — Germany, Italy, Switzerland and Ireland — are building apps that work with Apple and Google’s contact-tracing API. But the U.K., one of the worst-hit nations in Europe, is going it alone.

Unsurprisingly, critics have both security and privacy concerns, so much so that the U.K. may end up switching over to Apple and Google’s system anyway. Given that one of Israel’s contact-tracing systems was found on a passwordless server this week, and India denied a privacy issue in its contact-tracing app, there’s not much wiggle room to get these things wrong.

Turns out that even during a pandemic, people still care about their privacy.

Here’s more from the week.


THE BIG PICTURE

Zoom acquires Keybase, but questions remain

When Zoom announced it acquired online encryption key startup Keybase, for many, the reaction was closer to mild than wild. Even Keybase, a service that lets users store and manage their encryption keys, acknowledged its uncertain future. “Keybase’s future is in Zoom’s hands, and we’ll see where that takes us,” the company wrote in a blog post. Terms of the deal were not disclosed.

Zoom has faced security snafu after snafu. But after dancing around the problems, it promised to call in the cavalry and double down on fixing its encryption. So far, so good. But where does Keybase, largely a consumer product, fit into the fray? It doesn’t sound like even Zoom knows yet, per enterprise reporter Ron Miller. What’s clear is that Zoom needs encryption help, and few have the technical chops to pull that off.

Keybase’s team might — might — just help Zoom make good on its security promises.

Slack starts stripping location data from images

Slack has started to strip uploaded photos of their metadata.

It may seem like an inconsequential change to how the tech giant stores files on its servers, but it will make it far more difficult to trace photos back to their original owners.

Almost every digital file — from documents on your computer to photos taken on your phone — contains metadata. That’s data about the file itself, such as how big the file is, when it was created, and by whom. Photos and videos often include the precise coordinates of where they were taken.

But that can be a problem for higher-risk Slack users, like journalists and activists, who have to take greater security precautions to keep their sources safe. The metadata inside these files can out sources, deanonymize whistleblowers, or otherwise make it easier for unfriendly governments to target individuals. Even if a journalist removes the metadata from a photo before publishing, a copy of the photo — with its metadata — may remain on Slack’s servers. Whether a hacker breaks in or a government demands the data, it can put sources at risk.

Slack confirmed to TechCrunch that it’s now started to strip photo metadata, including locations.

“We can confirm that we recently began stripping EXIF (exchangeable image file) metadata from images uploaded to Slack, including GPS coordinates,” said a Slack spokesperson.

TechCrunch tested this by uploading a photo to Slack containing location data, then pulling a copy of the image from the server. That server copy, when checked again, no longer had location data embedded in the document. Some metadata remains, like the make and model of the device that took the photo.
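For readers curious what that stripping involves in practice, here is a minimal sketch of the general technique. Slack has not published its implementation, and the tag names and values below are illustrative:

```python
# Illustrative sketch only; not Slack's actual implementation.
# EXIF metadata is a set of tagged fields. Stripping location data means
# removing the GPS-related tags while leaving benign ones intact.

GPS_TAGS = {"GPSLatitude", "GPSLongitude", "GPSAltitude", "GPSTimeStamp"}

def strip_location(exif: dict) -> dict:
    """Return a copy of the EXIF fields with the GPS tags removed."""
    return {tag: value for tag, value in exif.items() if tag not in GPS_TAGS}

photo_exif = {
    "Make": "Apple",          # device make and model survive, per TechCrunch's test
    "Model": "iPhone 11",
    "GPSLatitude": 51.5074,   # the precise coordinates are what put sources at risk
    "GPSLongitude": -0.1278,
}

print(strip_location(photo_exif))  # {'Make': 'Apple', 'Model': 'iPhone 11'}
```

The device make and model remaining in the output mirrors what TechCrunch found on Slack’s servers: location gone, some metadata left behind.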

Slack did not say what prompted the change.

U.S. Marshals says prisoners’ personal information taken in data breach

A data breach at the U.S. Marshals Service exposed the personal information of current and former prisoners, TechCrunch has learned.

A letter, sent to those affected and obtained by TechCrunch, said the Justice Department notified the U.S. Marshals on December 30, 2019 of a data breach affecting a public-facing server storing personal information on current and former prisoners in its custody. The letter said the breach may have included their address, date of birth and Social Security number, which can be used for identity fraud.

But the notice didn’t say how many current and former prisoners are affected by the breach.

As the law enforcement arm of the federal courts, U.S. Marshals are tasked with capturing fugitives and serving federal arrest warrants. Last year, U.S. Marshals arrested over 90,000 fugitives and served over 105,000 warrants.

A spokesperson for the Justice Department did not respond to a request for comment by email or phone.

It’s the latest federal government security lapse in recent weeks.

The Defense Information Systems Agency, a Department of Defense division charged with providing technology and communications support to the U.S. government — including the president and other senior officials — said a data breach between May and July 2019 resulted in the theft of employees’ personal information.

Last month, the Small Business Administration admitted that 8,000 applicants, who applied for an emergency loan after facing financial difficulties because of the coronavirus pandemic, had their data exposed.

A passwordless server run by NSO Group sparks contact-tracing privacy concerns

As countries work to reopen after weeks of lockdown, contact-tracing apps are helping governments understand the spread of COVID-19, the disease caused by the novel coronavirus.

While most governments lean toward privacy-focused apps that use Bluetooth signals to create an anonymous profile of a person’s whereabouts, others, like Israel, use location and cell phone data to track the spread of the virus.

Israel-based private security firm NSO Group, known for making mobile hacking tools, is leading one of Israel’s contact-tracing efforts.

Security researcher Bob Diachenko discovered one of NSO’s contact-tracing systems on the internet, unprotected and without a password, for anyone to access. After he contacted the company, NSO pulled the unprotected database offline. Diachenko said he believes the database contains dummy data.

NSO told TechCrunch that the system was only for demonstrating its technology and denied it was exposed because of a security lapse. NSO is still waiting for the Israeli government’s approval to feed cell records into the system. But experts say the system should not have been open to begin with, and that centralized databases of citizens’ location data pose a security and privacy risk.

Codename ‘Fleming’

NSO began work on its contact-tracing system codenamed Fleming in March.

Fleming is designed to “pour” in confirmed coronavirus test data from the health authorities and phone location data from the cell networks to identify people who may have been exposed to a person with the virus. Anyone who came into close proximity to a person diagnosed with coronavirus would be notified.

The unprotected database was hosted on an Amazon Web Services server in Frankfurt, where the data protection regime is one of the strictest in the world.

It contained about six weeks of location data, spanning around March 10 to April 23. It also included specific dates, times and the location of a “target” — a term that NSO used in the database to describe people — who may have come into contact with a potentially infected person.

The data also included the duration of the encounter to help score the likelihood of a transmitted infection.
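NSO has not published how Fleming’s matching works, but the shape of the data (timestamped locations, encounter durations, a likelihood score) suggests logic along these lines. The names, thresholds and simplified planar distance below are all assumptions for illustration:

```python
# Hypothetical sketch of location-based contact matching of the kind the
# database hints at; NSO has not disclosed Fleming's actual algorithm.
from dataclasses import dataclass

@dataclass
class Ping:
    person: str
    t: int      # timestamp, in minutes
    x: float    # simplified planar coordinates (a real system would use lat/lon)
    y: float

def encounters(pings, infected, radius=10.0, window=5):
    """Count how often each person's pings land near an infected person's pings."""
    hits = {}
    for a in pings:
        if a.person not in infected:
            continue
        for b in pings:
            if b.person in infected:
                continue
            close_in_time = abs(a.t - b.t) <= window
            close_in_space = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 <= radius
            if close_in_time and close_in_space:
                # more matched pings stand in for a longer encounter, and so a
                # higher likelihood of transmission
                hits[b.person] = hits.get(b.person, 0) + 1
    return hits

pings = [
    Ping("patient-0", t=0, x=0.0, y=0.0),
    Ping("patient-0", t=5, x=0.0, y=0.0),
    Ping("target-A", t=2, x=3.0, y=4.0),      # 5 units away, inside both windows
    Ping("target-B", t=3, x=500.0, y=500.0),  # far away, never matched
]
print(encounters(pings, infected={"patient-0"}))  # {'target-A': 2}
```

Even this toy version makes the experts’ objection concrete: the result is only as good as the location data fed in, and cell-tower or advertising data is far coarser than the 10-unit radius pretended here.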

The login page for NSO’s Fleming is protected with a password. Its backend database was unprotected. (Image: TechCrunch)

“NSO Group has successfully developed ‘Fleming’, an innovative, unique and purely analytical system designed to respond to the coronavirus pandemic,” said Oren Ganz, a director at NSO Group. “Fleming has been designed for the benefit of government decision-makers, without compromising individual privacy. This system has been demonstrated worldwide with great transparency to media organizations, and approximately 100 individual countries,” he said.

TechCrunch was also given a demonstration of how the system works.

“This transparent demo, the same shown to individual countries and media organizations, was the one located on the open random server in question, and the very same demo observed today by TechCrunch. All other speculation about this overt, open system is not correct, and does not align with the basic fact this transparent demonstration has been seen by hundreds of people in media and government worldwide,” said Ganz.

John Scott-Railton, a senior researcher at the Citizen Lab, part of the Munk School at the University of Toronto, said that any database storing location data poses a privacy risk.

“Not securing a server would be an embarrassment for a school project,” said Scott-Railton. “For a billion-dollar company to not password protect a secretive project that hopes to handle location and health data suggest a quick and sloppy roll out.”

“NSO’s case is the precedent that proves the problem: rushed COVID-19 tracking efforts will imperil our privacy and online safety,” he said.

Israel’s two tracing systems

As global coronavirus infections began to spike in March, the Israeli government passed an emergency law giving its domestic security service Shin Bet “unprecedented access” to collect vast amounts of cell data from the phone companies to help identify possible infections.

By the end of March, Israeli defense minister Naftali Bennett said the government was working on a new contact tracing system, separate from the one used by Shin Bet.

It was later revealed that NSO was building the second contact-tracing system.

Tehilla Shwartz Altshuler, a privacy expert and a senior fellow at the Israel Democracy Institute, told TechCrunch that she too was given a demonstration of Fleming over a Zoom call in the early days of the outbreak.

Without the authority to obtain cell records, NSO told her that it used location data gathered from advertising platforms, or so-called data brokers. Israeli media also reported that NSO used advertising data for “training” the system.

Data brokers amass and sell vast troves of location data collected from the apps installed on millions of phones. The apps that track your movements and whereabouts are often also selling those locations to data brokers, which then resell the data to advertisers to serve more targeted ads.

NSO denied it used location data from a data broker for its Fleming demo.

“The Fleming demo is not based on real and genuine data,” said Ganz. “The demo is rather an illustration of public obfuscated data. It does not contain any personal identifying information of any sort.”

Since governments began to outline their plans for contact-tracing systems, experts warned that location data is not accurate and can lead to both false positives and false negatives. Currently, NSO’s system appears to rely on this data for its core functions.

“This kind of location data will not get you a reliable measure of whether two people came into close contact,” said Scott-Railton.

NSO’s connection to the Middle East

Israel is not the only government interested in Fleming. Bloomberg reported in March that a dozen nations were allegedly testing NSO’s contact-tracing technology.

A review of the unprotected database showed large amounts of location data points in Israel, but also Rwanda, Saudi Arabia and the United Arab Emirates.

Spokespeople for the Saudi, Rwandan and Emirati consulates in New York did not respond to our emails. NSO did not answer our questions about its relationship — if any — with these governments.

A map showing a sample of about 20,000 location data points across Israel (top-left); Abu Dhabi and Dubai, United Arab Emirates (top-right); Riyadh, Saudi Arabia (bottom-left) and Rwanda (bottom-right). (Image: TechCrunch)

Saudi Arabia is a known customer of NSO Group. United Nations experts have called for an investigation into allegations that the Saudi government used NSO’s Pegasus spyware to hack into the phone of Amazon chief executive Jeff Bezos. NSO has denied the claims.

NSO is also embroiled in a legal battle with Facebook-owned WhatsApp, which alleges that NSO built a hacking tool designed to be delivered over WhatsApp and used, via AWS servers based in the U.S. and Frankfurt, to hack into the cell phones of 1,400 users, including government officials, journalists and human rights activists. NSO has denied those claims, too.

Privacy concerns

Experts have expressed concerns over the use of centralized data, fearing that it could become a target for hackers.

Most countries are favoring decentralized efforts, like the joint project between Apple and Google, which uses anonymized Bluetooth signals picked up from phones in near proximity, instead of collecting cell location data into a single database. Bluetooth contact tracing has won the support of academics and security researchers over location-based contact-tracing efforts, which they say would enable large-scale surveillance.
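The decentralized model the researchers favor can be sketched in a few lines. This is a deliberate simplification, not the actual Apple-Google protocol, which derives its identifiers cryptographically from daily keys:

```python
# Simplified sketch of decentralized Bluetooth contact tracing.
import secrets

class Phone:
    def __init__(self):
        self.my_ids = []        # identifiers this phone has broadcast
        self.heard_ids = set()  # identifiers overheard from nearby phones

    def broadcast(self):
        # A rotating random identifier: no name, no location, no central record.
        rpi = secrets.token_hex(16)
        self.my_ids.append(rpi)
        return rpi

    def hear(self, rpi):
        self.heard_ids.add(rpi)  # stored only on this device

    def check_exposure(self, published_ids):
        # A diagnosed user consents to publishing their own identifiers;
        # everyone else matches locally, so no server learns who met whom.
        return bool(self.heard_ids & set(published_ids))

alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())               # the two phones pass near each other
# Alice is later diagnosed and consents to publishing her identifiers:
print(bob.check_exposure(alice.my_ids))   # True
```

The contrast with the centralized approach is the point: here the only thing that ever leaves a phone is a list of random tokens, not a timeline of places.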

Shwartz Altshuler told TechCrunch that location-based contact tracing is a “huge infringement” of privacy.

“It means that you can’t have any secrets,” she said. “You can’t have any meetings if you’re a journalist, and you can’t go to places where people want to know where you are.”

Apple and Google, favoring their own contact-tracing effort, have already banned governments that build contact-tracing apps on their joint API from using location tracking, fearing that data stored on a centralized server could be breached.

Alan Woodward, a professor at the University of Surrey, said location data makes it “possible to build social graphs and to begin identifying who met who, when and where.”

“Even if it is just trial data, it’s still sensitive if it’s real people,” he said.

Just this week, the U.S. and U.K. governments warned that nation-state hackers are targeting organizations involved in the coronavirus response.

Adtech scores a pandemic pause from UK privacy oversight

The coronavirus is proving to have an unexpected upside for the adtech industry.

The U.K.’s data protection agency has paused an investigation into the industry’s processing of internet users’ personal data, saying targeted suspension of privacy oversight is merited because of disruption to businesses as a result of the COVID-19 pandemic.

The investigation into adtech industry practices by the Information Commissioner’s Office (ICO) is linked to a 2018 complaint it received about systematic, massive-scale, high-velocity personal data trading associated with the real-time bidding component of programmatic advertising.

A series of complaints have since been filed over the issue across the EU that assert it amounts to “the most massive leakage of personal data recorded so far.”

The first of these complaints was lodged in the U.K. with the ICO, but the complainants are still waiting for any relief.

And now their wait goes on…

One of the complainants, Brave’s Dr Johnny Ryan, described the regulatory inaction over a period of some two years since he sounded the alarm to the watchdog as “astounding.”

“They’ve failed to use any of their powers. Even their powers of investigation,” Ryan told TechCrunch. “We’re not even talking about enforcement. They’ve failed to ask their questions using their strong voice. The lack of action — it’s actually really hard to remember just how little action there is — it’s quite astounding, just how vacuous this vacuum is. How much of a pause this was a pause of.

“That’s astounding,” he added. “I claim it’s the biggest data breach the U.K. has ever had — but I’ve never had anyone contradict that. It’s almost indisputable because the figures are so big. So we’ve got this enormous breach, and… it’s continuing — so it’s not some discrete thing that’s now over… The harm accumulates. So this is a problem. It’s a breach pandemic!”

We also contacted the ICO with questions about the decision to suspend the adtech investigation — including asking how U.K. citizens can be confident their data rights are being defended against abuse by powerful industry platforms.

The regulator did not engage with what we asked — instead sending this generic statement:

The ICO recently set out its regulatory approach during the COVID-19 pandemic, where we spoke about reassessing our priorities and resources.

Taking this into account we have made the decision to pause our investigation into real time bidding and the Adtech industry.

It is not our intention to put undue pressure on any industry at this time but our concerns about Adtech remain and we aim to restart our work in the coming months, when the time is right.

This is by no means the first “breather” the regulator has offered the adtech industry vis-à-vis this complaint.

In fact there have been a series of “warnings” — followed by a series of, er, mildly worded blog posts. (See here, here and here.) Enforcement? Not a sniff.

Europe’s General Data Protection Regulation (GDPR), meanwhile, will turn two later this month — meaning it’ll be two years since the updated framework was supposed to start to apply.

Many privacy experts and campaigners are questioning the quality and quantity of enforcement set alongside the flagship update to legal safeguards for citizens’ data — which actually date all the way back to 1995.

Brave’s Ryan said the ICO’s regulatory abdication does not reflect well on the success of the wider EU data protection regime — pointing out that the U.K. watchdog is the best resourced of the bloc’s (post-Brexit) 27 Member States (the U.K. remains in the EU until the end of the Brexit transition period, so is still technically a member right now).

“If the EU’s biggest regulator in this domain — which is one of the jewels in the EU’s regulatory crown — its biggest and most well resourced, in terms of cash, regulators is unable to enforce against the biggest data protection infringement that the country it regulates for has ever experienced is the GDPR just a kind of collective hallucination?” he said. “Or is that something that is limited to the U.K.?”

A bigger issue he points to is that the U.K., post-Brexit, will need to request a data protection “adequacy agreement” from the European Commission if it wishes for its businesses to be able to freely exchange data with EU businesses as they can now.

“When the U.K. requests that the European Commission consider the U.K. as a safe and adequate third country where personal data from the EU can freely flow, one of the questions to be considered is do you have a regulator that can protect this personal data? And the answer today is no,” said Ryan. “No, you do not have a regulator that is able to protect personal data of European citizens.”

“This [ICO inaction] should have a post-Brexit implication — which will affect so many sectors of the U.K. economy,” he warned.

Ryan’s employer, Brave — which makes a pro-privacy web browser — recently lodged a complaint with the European Commission against EU Member States, producing a report and accusing governments of under-resourcing their data protection agencies. It has asked the Commission to launch an infringement procedure.

“How is only 3% of the [ICO] staff mainly focused on digital issues?” Ryan added. “Clearly more than 3% of infringement is digital and more than 3% of life is — so unless the ICO is labouring under the misapprehension that we are at the beginning of this digital transition they are the wrong regulator for this decade. This is last century’s regulator. So there’s a huge management problem inside the ICO. It seems they are unwilling or unable to regulate digital issues… They need to get fit for purpose.

“They are still living in a print-based world. And we are confronting them urgently with problems that are not print based — but that affect every aspect of our lives. Including, apparently, the last election. And presumably the next one too… So this is shocking on many, many levels.”

As a consequence of Brexit, U.K. citizens should expect the ICO to be their sole data protection rights enforcer, rather than — as can be the case now — other EU regulators being involved in defending their rights, such as in the case of major tech platforms which often locate themselves under a legal jurisdiction elsewhere in the EU.

Google, for example, has said it will relocate U.K. users to a U.S. jurisdiction in response to Brexit.

UK’s NHS COVID-19 app lacks robust legal safeguards against data misuse, warns committee

A UK parliamentary committee that focuses on human rights issues has called for primary legislation to be put in place to ensure that legal protections wrap around the national coronavirus contact tracing app.

The app, called NHS COVID-19, is being fast tracked for public use — with a test ongoing this week in the Isle of Wight. It’s set to use Bluetooth Low Energy signals to log social interactions between users to try to automate some contacts tracing based on an algorithmic assessment of users’ infection risk.

The NHSX has said the app could be ready for launch within a matter of weeks but the committee says key choices related to the system architecture create huge risks for people’s rights that demand the safeguard of primary legislation.

“Assurances from Ministers about privacy are not enough. The Government has given assurances about protection of privacy so they should have no objection to those assurances being enshrined in law,” said committee chair, Harriet Harman MP, in a statement.

“The contact tracing app involves unprecedented data gathering. There must be robust legal protection for individuals about what that data will be used for, who will have access to it and how it will be safeguarded from hacking.

“Parliament was able quickly to agree to give the Government sweeping powers. It is perfectly possible for parliament to do the same for legislation to protect privacy.”

The NHSX, a digital arm of the country’s National Health Service, is in the process of testing the app — which it’s said could be launched nationally within a few weeks.

The government has opted for a system design that will centralize large amounts of social graph data when users experiencing COVID-19 symptoms (or who have had a formal diagnosis) choose to upload their proximity logs.

Earlier this week we reported on one of the committee hearings — when it took testimony from NHSX CEO Matthew Gould and the UK’s information commissioner, Elizabeth Denham, among other witnesses.

Warning now over a lack of parliamentary scrutiny — around what it describes as an unprecedented expansion of state surveillance — the committee report calls for primary legislation to ensure “necessary legal clarity and certainty as to how data gathered could be used, stored and disposed of”.

The committee also wants to see an independent body set up to carry out oversight monitoring and guard against ‘mission creep’ — a concern that’s also been raised by a number of UK privacy and security experts in an open letter late last month.

“A Digital Contact Tracing Human Rights Commissioner should be responsible for oversight and they should be able to deal with complaints from the Public and report to Parliament,” the committee suggests.

Prior to publishing its report, the committee wrote to health minister Matt Hancock, raising a full spectrum of concerns — receiving a letter in response.

In this letter, dated May 4, Hancock told it: “We do not consider that legislation is necessary in order to build and deliver the contact tracing app. It is consistent with the powers of, and duties imposed on, the Secretary of State at a time of national crisis in the interests of protecting public health.”

The committee’s view is that Hancock’s ‘letter of assurance’ is not enough, given the huge risks attached to the state tracking citizens’ social graph data.

“The current data protection framework is contained in a number of different documents and it is nearly impossible for the public to understand what it means for their data which may be collected by the digital contact tracing system. Government’s assurances around data protection and privacy standards will not carry any weight unless the Government is prepared to enshrine these assurances in legislation,” it writes in the report, calling for a bill that it says must include a number of “provisions and protections”.

Among the protections the committee is calling for are limits on who has access to data and for what purpose.

“Data held centrally may not be accessed or processed without specific statutory authorisation, for the purpose of combatting Covid-19 and provided adequate security protections are in place for any systems on which this data may be processed,” it urges.

It also wants legal protections against data reconstruction — by different pieces of data being combined “to reconstruct information about an individual”.
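A toy example of the reconstruction risk the committee describes (every name and field below is invented): two datasets that look harmless on their own can be joined on shared attributes to re-identify a person.

```python
# Hypothetical illustration of data reconstruction via linkage; all records
# here are fabricated for the example.
proximity_log = [   # pseudonymous contact-tracing records
    {"user": "token-93af", "postcode": "SW1A", "age": 44},
]
public_records = [  # a separate, seemingly innocuous dataset
    {"name": "Jo Bloggs", "postcode": "SW1A", "age": 44},
]

# Joining on the shared quasi-identifiers links the pseudonym to a real name.
reidentified = [
    (p["name"], t["user"])
    for t in proximity_log
    for p in public_records
    if (t["postcode"], t["age"]) == (p["postcode"], p["age"])
]
print(reidentified)  # [('Jo Bloggs', 'token-93af')]
```

This is why the committee wants statutory limits on combining pieces of data, not just assurances that each dataset is anonymous in isolation.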

The report takes a very strong line — warning that no app should be released without “strong protections and guarantees” on “efficacy and proportionality”.

“Without clear efficacy and benefits of the app, the level of data being collected will not be justifiable and it will therefore fall foul of data protection law and human rights protections,” says the committee.

The report also calls for regular reviews of the app — looking at efficacy; data safety; and “how privacy is being protected in the use of any such data”.

It also makes a blanket call for transparency, with the committee writing that the government and health authorities “must at all times be transparent about how the app, and data collected through it, is being used”.

A lack of transparency around the project was another of the concerns raised by the 177 academics who signed the open letter last month.

The government has committed to publishing data protection impact assessments for the app. But the ICO’s Denham still hadn’t had sight of this document as of this Monday.

Another call by the committee is for a time-limit to be attached to any data gathered by or generated via the app. “Any digital contact tracing (and data associated with it) must be permanently deleted when no longer required and in any event may not be kept beyond the duration of the public health emergency,” it writes.

We’ve reached out to the Department of Health and NHSX for comment on the human rights committee’s report.

There’s another element to this fast-moving story: yesterday the Financial Times reported that NHSX has inked a new contract with an IT supplier, which suggests it might be looking to change the app architecture — moving away from a centralized database to a decentralized system for contacts tracing. NHSX, however, has not confirmed any such switch at this point.

Some other countries have reversed course in their choice of app architecture after running into technical challenges related to Bluetooth. The need to ensure public trust in the system was also cited by Germany for switching to a decentralized model.

The human rights committee report highlights a specific app efficacy issue of relevance to the UK, which it points out is also linked to these system architecture choices, noting that: “The Republic of Ireland has elected to use a decentralised app and if a centralised app is in use in Northern Ireland, there are risks that the two systems will not be interoperable which would be most unfortunate.”

UK eyeing switch to Apple-Google API for coronavirus contacts tracing — report

The UK may be rethinking its decision to shun Apple and Google’s API for its national coronavirus contacts tracing app, according to the Financial Times, which reported yesterday that the government is paying an IT supplier to investigate whether it can integrate the tech giants’ approach after all.

As we’ve reported before, coronavirus contacts tracing apps are a new technology that aims to repurpose smartphones’ Bluetooth signals and device proximity to try to estimate individuals’ infection risk.

The UK’s forthcoming app, called NHS COVID-19, has faced controversy because it’s being designed to use a centralized app architecture. This means developers are having to come up with workarounds for platform limitations on background access to Bluetooth as the Apple-Google cross-platform API only works with decentralized systems.

The choice of a centralized app architecture has also raised concerns about the impact of such an unprecedented state data grab on citizens’ privacy and human rights, and the risk of state ‘mission creep‘.

The UK also looks increasingly isolated in its choice in Europe after the German government opted to switch to a decentralized model, joining several other European countries that have said they will opt for a p2p approach, including Estonia, Ireland and Switzerland.

In the region, France remains the other major backer of a centralized system for its forthcoming coronavirus contacts tracing app, StopCovid.

Apple and Google, meanwhile, are collaborating on a so-called “exposure notification” API for national coronavirus contacts tracing apps. The API is slated to launch this month and is designed to remove restrictions that could interfere with how contact events are logged. However it’s only available for apps that don’t hold users’ personal data on central servers and prohibits location tracking, with the pair emphasizing that their system is designed to put privacy at the core.
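The broad shape of that design, though not its actual cryptography, can be sketched as follows: each phone keeps a daily key, broadcasts short-lived identifiers derived from it, and only a diagnosed, consenting user’s daily keys are ever published, so matching happens on the device.

```python
# Simplified sketch of the "exposure notification" idea. The real Apple-Google
# specification uses HKDF and AES; plain SHA-256 here is a stand-in.
import hashlib

def derive_ids(daily_key: bytes, intervals: int = 144):
    """Derive one short-lived identifier per 10-minute interval in a day."""
    return [
        hashlib.sha256(daily_key + i.to_bytes(2, "big")).hexdigest()[:32]
        for i in range(intervals)
    ]

my_key = b"\x01" * 16                 # a phone's secret daily key
heard = set(derive_ids(my_key)[:3])   # pretend another phone overheard these

# A diagnosed user uploads only daily keys; each phone re-derives the
# identifiers locally and checks for overlap with what it overheard.
published_keys = [my_key]
exposed = any(set(derive_ids(k)) & heard for k in published_keys)
print(exposed)  # True
```

Because the published key reveals nothing about where the encounters happened, this construction is what lets Apple and Google ban location tracking outright.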

Yesterday the FT reported that NHSX, the digital transformation branch of UK’s National Health Service, has awarded a £3.8M contract to the London office of Zuhlke Engineering, a Switzerland-based IT development firm which was involved in developing the initial version of the NHS COVID-19 app.

The contract includes a requirement to “investigate the complexity, performance and feasibility of implementing native Apple and Google contact tracing APIs within the existing proximity mobile application and platform”, per the newspaper’s report.

The work is also described as a “two week timeboxed technical spike”, which the FT suggests means it’s still at a preliminary phase — though it also notes the contract includes a deadline of mid-May.

The contracted work was due to begin yesterday, per the report.

We’ve reached out to Zuhlke for comment. Its website describes the company as “a strong solutions partner” that’s focused on projects related to digital product delivery; cloud migration; scaling digital platforms; and the Internet of Things.

We also put questions arising from the FT report to NHSX.

At the time of writing the unit had not responded but yesterday a spokesperson told the newspaper: “We’ve been working with Apple and Google throughout the app’s development and it’s quite right and normal to continue to refine the app.”

The specific technical issue that appears to be causing concern relates to a workaround the developers have devised to circumvent platform limitations on Bluetooth. The workaround is intended to wake up phones when the app is not being actively used, so that the proximity handshakes can still be carried out (and contact events properly logged).

Thing is, if any of the devices fail to wake up and emit their identifiers so other nearby devices can log their presence there will be gaps in the data. Which, in plainer language, means the app might miss some close encounters between users — and therefore fail to notify some people of potential infection risk.

Recent reports have suggested the NHSX workaround has a particular problem with iPhones not being able to wake up other iPhones. And while Google’s Android OS is the more dominant platform in the UK (running on around 60% of smartphones, per Kantar), there will still be plenty of instances of two or more iPhone users passing near each other. So if their apps fail to wake up they won’t exchange data and those encounters won’t be logged.

On this, the FT quotes one person familiar with the NHS testing process who told it the app was able to work in the background in most cases, except when two iPhones were locked and left unused for around 30 minutes, and without any Android devices coming within 60m of the devices. The source also told it that bringing an Android device running the app close to the iPhone would “wake up” its Bluetooth connection.

Clearly, the government having to tell everyone in the UK to use an Android smartphone not an iPhone wouldn’t be a particularly palatable political message.

One source with information about the NHSX testing process told us the unit has this week been asking IT suppliers for facilities or input on testing environments with “50-100 Bluetooth devices of mixed origin”, to help with challenges in testing the Bluetooth exchanges — which raises questions about how extensively this core functionality has been tested up to now. (Again, we’ve put questions to the NHSX about testing and will update this report with any response.)

Work on planning and developing the NHS COVID-19 app began March 7, according to evidence given to a UK parliamentary committee by the NHSX’s CEO, Matthew Gould, last month.

Gould has also previously suggested that the app could be “technically” ready to launch in as little as two or three weeks’ time. A limited geographical trial of the app kicked off this week on the Isle of Wight. Prior to that, an alpha version of the app was tested at an RAF base, with staff carrying out simulations of people going shopping, per a BBC report last month.

Gould faced questions over the choice of centralized vs decentralized app architecture from the human rights committee earlier this week. He suggested then that the government is not “locked” to the choice — telling the committee: “We are constantly reassessing which approach is the right one — and if it becomes clear that the balance of advantage lies in a different approach then we will take that different approach. We’re not irredeemably wedded to one approach; if we need to shift then we will… It’s a very pragmatic decision about what approach is likely to get the results that we need to get.”

However it’s unclear how quickly such a major change to app architecture could be implemented, given centralized vs decentralized systems work in very different ways.

Additionally, such a big shift — more than two months into the NHSX’s project — would be more accurately characterized as a rebuild, rather than a little finessing (as suggested by the NHSX spokesperson’s remark to the FT vis-a-vis ‘refining’ the app).

In related news today, Reuters reports that Colombia has pulled its own coronavirus contacts tracing app after experiencing glitches and inaccuracies. The app had used alternative technology to power contacts logging via Bluetooth and wi-fi. A government official told the news agency it aims to rebuild the system and may now use the Apple-Google API.

Australia has also reported Bluetooth-related problems with its national coronavirus app, and is also reported to be moving toward adopting the Apple-Google API.

Singapore, the first country to launch a Bluetooth app for coronavirus contacts tracing, was also the first to run into technical hitches related to platform limits on background access — likely contributing to low download rates for the app (reportedly below 20%).

No cookie consent walls — and no, scrolling isn’t consent, says EU data protection body

You can’t make access to your website’s content dependent on a visitor agreeing that you can process their data — aka a ‘consent cookie wall’. Not if you need to be compliant with European data protection law.

That’s the unambiguous message from the European Data Protection Board (EDPB), which has published updated guidelines on the rules around online consent to process people’s data.

Under pan-EU law, consent is one of six lawful bases that data controllers can use when processing people’s personal data.

But in order for consent to be legally valid under Europe’s General Data Protection Regulation (GDPR) there are specific standards to meet: It must be clear and informed, specific and freely given.

Hence cookie walls that demand ‘consent’ as the price for getting inside the club are not only an oxymoron but run into a legal brick wall.

No consent behind a cookie wall

The regional cookie wall has been crumbling for some time, as we reported last year — when the Dutch DPA clarified its guidance to ban cookie walls.

The updated guidelines from the EDPB look intended to hammer the point home. The steering body’s role is to provide guidance to national data protection agencies to encourage a more consistent application of data protection rules.

The EDPB’s intervention should — should! — remove any inconsistencies of interpretation on the updated points by national agencies of the bloc’s 27 Member States. (Though compliance with EU data protection law tends to be a process; aka it’s a marathon not a sprint, though on the cookie wall issue the ‘runners’ have been going around the track for a considerable time now.)

As we noted in our report on the Dutch clarification last year, the Internet Advertising Bureau Europe was operating a full cookie wall — instructing visitors to ‘agree’ to its data processing terms if they wished to view the content.

The problem that we pointed out is that that wasn’t a free choice. Yet EU law requires a free choice for consent to be legally valid. So it’s interesting to note the IAB Europe has, at some point since, updated its cookie consent implementation — removing the cookie wall and offering a fairly clear (if nudged) choice to visitors to either accept or deny cookies for “aggregated statistics”…

As we said at the time the writing was on the wall for consent cookie walls.

The EDPB document includes the below example to illustrate the salient point that consent cookie walls do not “constitute valid consent, as the provision of the service relies on the data subject clicking the ‘Accept cookies’ button. It is not presented with a genuine choice.”

It’s hard to get clearer than that, really.

Scrolling never means ‘take my data’

A second area to get attention in the updated guidance, as a result of the EDPB deciding there was a need for additional clarification, is the issue of scrolling and consent.

Simply put: Scrolling on a website or digital service can not — in any way — be interpreted as consent.

Or, as the EDPB puts it, “actions such as scrolling or swiping through a webpage or similar user activity will not under any circumstances satisfy the requirement of a clear and affirmative action” [emphasis ours].

The logical reason being that such signals are ambiguous. (Additionally, the EDPB example raises the point of how a user would withdraw consent if such a signal were valid — by scrolling back up the same web page? Obviously that would be ridiculous and confusing.)

Here’s the relevant example from the document:

Again, harder to get clearer than that.

So any websites still trying to drop tracking cookies the moment a site visitor scrolls the page are risking regulatory enforcement. (Reminder: GDPR fines can scale as high as €20M or 4% of global annual turnover.)
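For a sense of scale, that higher-tier cap is whichever figure is greater, so exposure grows with revenue — a minimal illustration (function name is ours):

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Higher-tier GDPR cap: the greater of EUR 20M or 4% of global annual turnover."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# A company turning over EUR 100M faces the flat EUR 20M floor;
# at EUR 1B turnover the 4% figure dominates: EUR 40M.
print(gdpr_max_fine(100_000_000))    # 20000000
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
```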

Nonetheless, recent research suggests cookie consent theatre remains rife in the EU — albeit, not only limited to the ‘scroll and you’ve been tracked’ flavor of the practice.

Manipulative consent pop-ups and dark patterns also remain a major problem, with such tactics being actively deployed to undermine legal protections for EU citizens’ data.

Still, a lot of clarifying light has now been shone into this area by both regulators and courts, shrinking the operating space for bad faith actors.

A ruling by the European Court of Justice last year made it clear that active consent is required for tracking cookies, for example — and also demolished ‘pre-checking’ as a valid way of gathering consent, among other stipulations.

Plus there’s increasing pressure on regulators to actually enforce the rules — with GDPR’s two year anniversary fast approaching.

So where consent is concerned, the rule of thumb, if you need one, is you can’t steal consent nor conceal consent. And if you wish to shortcut consent you can only do so if your shortcut is A) clearly and accurately signposted and B) you offer a similarly easy route to opt-out again. Simples.

A security expert says India’s contact tracing app has flaws. New Delhi says they are ‘by design’

The Indian government has said that its contact tracing app Aarogya Setu “by design” fetches the location data of its 90 million users, allowing them to view the concentration of people who have tested positive for the coronavirus in their vicinity.

New Delhi issued the statement after France-based security researcher Baptiste Robert found what he argues are design flaws and privacy issues.

The government said it has always disclosed that it fetches users’ location data, a feature that critics say falls short of the privacy protections offered by similar technologies, including the joint project run by Apple and Google.

Aarogya Setu’s privacy policy says the app — in addition to collecting location data of a user at the time of registration — also “continuously collects your location data and stores securely on your mobile device, a record of all the places you have been at 15-minute intervals.” The app uploads this data to its server along with the user’s digital ID if they test positive for COVID-19, or self-declare seeing symptoms that indicate that they might be infected with the infectious disease, it says.

Collecting location data is a complicated subject, regardless of the good intention of its developers and operators. On Monday, Google and Apple banned the use of location tracking on their coronavirus tracing technology.

While some developers have argued that they need access to location data to track how outbreaks move and identify hotspots, privacy advocates have cautioned that if this data ever gets exposed, it could ostracize those who are affected.

Robert’s other concern is that Aarogya Setu, which was launched early last month, allows anyone to view the concentration of people within 500 meters to up to 10 kilometers who either suspect they have coronavirus, or are certain that they have the disease. He told TechCrunch that he was able to develop a script and view similar data for any nook and cranny of the world’s second most populous nation.

He said the government, which introduced a nationwide lockdown in late March, could have kept the radius limited to 500 meters.

In response, New Delhi said that its system is designed in a way that would prevent any script from making bulk requests. Additionally, it said, “getting data for multiple latitude and longitude this way is no different from asking several people of their location’s COVID-19 statistics.”

“All this information is already public for all locations and hence does not compromise on any personal or sensitive data,” the response said.

Some people argued today that in a crisis like this, when thousands of people are dying, these “flaws” were the least of their concerns and that the app served a much greater purpose. But Aarogya Setu, which has amassed 90 million monthly active users in less than 35 days, has also ruffled some feathers over the way it is being scaled up. New Delhi said earlier this month that all government and private sector employees need to have the app installed on their smartphones.

On Tuesday, the local authority in Noida, a city of more than 640,000 people on the outskirts of Delhi, said those who did not have the Aarogya Setu app installed on their phones would be fined or sent to prison.

NHS COVID-19: The UK’s coronavirus contacts tracing app explained

The UK has this week started testing a coronavirus contacts tracing app which NHSX, a digital arm of the country’s National Health Service, has been planning and developing since early March. The test is taking place on the Isle of Wight, a 380 km² island off the south coast of England, with a population of around 140,000.

The NHS COVID-19 app uses Bluetooth Low Energy handshakes to register proximity events (aka ‘contacts’) between smartphone users, with factors such as the duration of the ‘contact event’ and the distance between the devices feeding an NHS clinical algorithm that’s being designed to estimate infection risk and trigger notifications if a user subsequently experiences COVID-19 symptoms.
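NHSX hasn’t published its clinical algorithm, but the general shape of this kind of risk scoring — weighting each contact event by its duration and by a distance estimate derived from Bluetooth signal strength — can be sketched roughly as below. Every name, threshold and formula here is an illustrative assumption, not the NHS’s actual model:

```python
from dataclasses import dataclass

@dataclass
class ContactEvent:
    duration_minutes: float   # how long the two devices stayed in range
    rssi_dbm: float           # received Bluetooth signal strength (a crude distance proxy)

def estimated_distance_m(rssi_dbm: float, tx_power_dbm: float = -59.0) -> float:
    # Toy log-distance path-loss model. Real apps have to calibrate per device
    # model, since antennas and cases skew RSSI badly; this is purely illustrative.
    return 10 ** ((tx_power_dbm - rssi_dbm) / 20.0)

def risk_score(events: list[ContactEvent]) -> float:
    # Illustrative weighting: longer and closer contacts contribute more.
    score = 0.0
    for e in events:
        d = estimated_distance_m(e.rssi_dbm)
        if d <= 2.0:  # only count 'close' contacts in this sketch
            score += e.duration_minutes / max(d, 0.5)
    return score

# A 15-minute close contact scores; a brief, distant one is ignored.
events = [ContactEvent(15, -65), ContactEvent(2, -90)]
print(risk_score(events))
```

A threshold on that score would then decide whether an exposure notification fires — which is exactly the kind of parameter a clinical team would tune.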

The government is promoting the app as an essential component of its response to fighting the coronavirus — the health minister’s new mantra being: ‘Protect the NHS, stay home, download the app’ — and the NHSX has said it expects the app to be “technically” ready to deploy two to three weeks after this week’s trial.

However there are major questions over how effective the tool will prove to be, especially given the government’s decision to ‘go it alone’ on the design of its digital contacts tracing system — which raises some specific technical challenges linked to how modern smartphone platforms operate, as well as around international interoperability with other national apps targeting the same purpose.

In addition, the UK app allows users to self-report symptoms of COVID-19 — which could lead to many false alerts being generated. That in turn might trigger notification fatigue and/or encourage users to ignore alerts if false alarms come to outnumber genuine ones.

Keep calm and download the app?

How users will generally respond to this technology is a major unknown. Yet mainstream adoption will be needed to maximize utility; not just one-time downloads. Dealing with the coronavirus will be a marathon not a sprint — which means sustaining usage will be vital to the app functioning as intended. And that will require users to trust that the app is both useful for the claimed public health purpose, by being effective at shrinking infection risk, and also that using it will not create any kind of disadvantages for them personally or for their friends and family.

The NHSX has said it will publish the code for the app, the DPIA (data protection impact assessment) and the privacy and security models — all of which sounds great, though we’re still waiting to see those key details. Publishing all that before the app launches would clearly be a boon to user trust.

A separate consideration is whether there should be a dedicated legislation wrapper put around the app to ensure clear and firm legal bounds on its use (and to prevent abuse and data misuse).

As it stands the NHS COVID-19 app is being accelerated towards release without this — relying on existing legislative frameworks (with some potential conflicts); and with no specific oversight body to handle any complaints. That too could impact user trust.

The overarching idea behind digital contacts tracing is to leverage uptake of smartphone technology to automate some contacts tracing, with the advantage that such a tool might be able to register fleeting contacts, such as between strangers on the street or public transport, that may be more difficult for manual contacts tracing methods to identify. Though whether these sorts of fleeting contacts create a significant risk of infection with the SARS-CoV-2 virus has not yet been quantified.

All experts are crystal clear on one thing: Digital contacts tracing is only going to be — at very best — a supplement to manual contact tracing. People who do not own or carry smartphones or who do not or cannot use the app obviously won’t register in any captured data. Technical issues may also create barriers and data gaps. It’s certainly not a magic bullet — and may, in the end, turn out to be ill-suited for this use case (we’ve written a general primer on digital contacts tracing here).

One major component of the UK approach is that it’s opted to create a so-called ‘centralized’ system for coronavirus contacts tracing — which leads to a number of specific challenges.

While the NHS COVID-19 app stores contacts events on the user’s device initially, at the point when (or if) a user chooses to report themselves as having coronavirus symptoms, all their contacts events data is uploaded to a central server. This means it’s not just a user’s own identifier but a list of any identifiers they have encountered over the past 28 days — so, essentially, a graph of their recent social interactions.

This data cannot be deleted after the fact, according to the NHSX, which has also said it may be used for “research” purposes related to public health — raising further questions around privacy and trust.

Questions around the legal bases for this centralized approach also remain to be answered in detail by the government. UK and EU data protection law emphasize data minimization as a key principle; and while there’s flexibility built into these frameworks for a public health emergency there is still a requirement on the government to detail and justify key data processing decisions.

The UK’s decision to centralize contacts data has another obvious and immediate consequence: It means the NHS COVID-19 app will not be able to plug into an API that’s being jointly developed by Apple and Google to provide technical support for Bluetooth-based national contacts tracing apps — and due to be released this month.

The tech giants have elected to support decentralized app architectures for these apps — which, conversely, do not centralize social graph data. Instead, infection risk calculations are performed locally on the device.

By design, these approaches avoid providing a central authority with information on who infected whom.

In the decentralized scenario, an infected user consents to their ephemeral identifier being shared with other users so apps can do matching locally, on the end-user device — meaning exposure notifications are generated without a central authority needing to be in the loop. (It’s also worth noting there are ways for decentralized protocols to feed aggregated contact data back to a central authority for epidemiological research, though the design is intended to prevent users’ social graph being exposed. A system of ‘exposure notification’, as Apple and Google are now branding it, has no need for such data, is their key argument. The NHSX counters that by suggesting social graph data could provide useful epidemiological insights — such as around how the virus is being spread.)
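In code terms the decentralized flow is strikingly simple: devices log the rotating identifiers they hear, infected users (with consent) publish their own identifiers, and each handset checks for overlap locally. A minimal sketch — the identifier values here are stand-ins; the real Apple-Google protocol derives rotating proximity identifiers cryptographically from daily keys:

```python
# Each device keeps a local log of the ephemeral IDs it has observed via Bluetooth.
observed_ids = {"a3f1", "77bc", "9e02"}  # heard over the retention window

# A health authority periodically publishes the ephemeral IDs of users who
# reported a positive diagnosis (with their consent). No social graph leaves
# any device — the server only ever sees infected users' own broadcast IDs.
published_infected_ids = {"77bc", "d410"}

# Matching happens entirely on the handset:
matches = observed_ids & published_infected_ids
if matches:
    print(f"Exposure notification: {len(matches)} risky contact(s) logged locally")
```

The point of the design is visible in the data flow: the only upload is the infected user’s own identifiers, so the central authority never learns who met whom.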

At the point a user of the NHS COVID-19 app experiences symptoms or gets a formal coronavirus diagnosis — and chooses to inform the authorities — the app will upload their recent contacts to a central server where infection risk calculations are performed.

The system will then send exposure notifications to other devices — in instances where the software deems there may be a risk of infection. Users might, for example, be asked to self-isolate to see if they develop symptoms after coming into contact with an infected person, or told to seek a test to determine if they have COVID-19 or not.

A key detail here is that users of the NHS COVID-19 app are assigned a fixed identifier — basically a large, random number — which the government calls an “installation ID”. It claims this identifier is ‘anonymous’. However this is where political spin in service of encouraging public uptake of the app is being allowed to obscure a very different legal reality: A fixed identifier linked to a device is in fact pseudonymous data, which remains personal data under UK and EU law. Because, while the user’s identity has been ‘obscured’, there’s still a clear risk of re-identification.

Truly ‘anonymous’ data is a very high bar to achieve when you’re dealing with large data-sets. In the NHS COVID-19 app’s case there’s no reason beyond spin for the government to claim the data is “anonymous”, given the system design involves a device-linked fixed identifier that’s uploaded to a central authority alongside at least some geographical data (a partial postcode, which the app also asks users to input — so “the NHS can plan your local NHS response”, per the official explainer).
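The difference between a fixed and a rotating identifier is easy to show in code. A toy sketch of why a constant ‘installation ID’ is pseudonymous rather than anonymous — the scheme and names here are ours, purely for illustration, not the NHSX’s or Apple-Google’s actual implementations:

```python
import secrets

# Centralized model: one 'installation ID' for the life of the app install.
installation_id = secrets.token_hex(16)  # random, but constant

def centralized_beacon() -> str:
    return installation_id  # every broadcast is linkable to the same device

# Decentralized model: a fresh ephemeral ID every broadcast window (~15 min).
def decentralized_beacon() -> str:
    return secrets.token_hex(16)  # observers have nothing to join on

# An eavesdropper logging broadcasts over days can tie the fixed ID to one
# device — and, combined with a partial postcode or location data, plausibly
# to a person. Rotating IDs break that linkage.
assert centralized_beacon() == centralized_beacon()      # linkable
assert decentralized_beacon() != decentralized_beacon()  # unlinkable (w.h.p.)
```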

The NHSX has also said future versions of the app may ask users to share even more personal data, including their location. (And location data-sets are notoriously difficult to defend against re-identification.)

Nonetheless the government has maintained that individual users of the app will not be identified. But under such a system architecture this assertion sums to ‘trust us with your data’; the technology itself has not been designed to remove the need for individual users to trust a central authority, as is the case with bona fide decentralized protocols.

This is why Apple and Google are opting to support the latter approach — it cuts the internationally thorny issue of ‘government trust’ out of their equation.

However it also means governments that do want to centralize data face a technical headache to get their apps to function smoothly on the only two smartphone platforms that matter.

Technical and geopolitical headaches

The specific technical issue here relates to how these mainstream platforms manage background access to Bluetooth.

Using Bluetooth as a proxy for measuring coronavirus infection risk is, of course, a novel application of the technology. Singapore was reported to be the first country to attempt it. Its TraceTogether app, which launched in March, reportedly gained only limited (<20%) uptake — with technical issues on iOS being at least partly blamed.

The problem TraceTogether initially faced was that the software needed to be actively running, with the iPhone unlocked, for the tracing function to work. That obviously interferes with the normal multitasking of the average iPhone user — discouraging usage of the app.

It’s worth emphasizing that the UK is doing things a bit differently vs Singapore, though, in that it’s using Bluetooth handshakes rather than a Bluetooth advertising channel to power the contacts logging.

The NHS COVID-19 app has been designed to listen passively for other Bluetooth devices and then wake up in order to perform the handshake. This is intended as a workaround for these platform limits on background Bluetooth access. However it is still a workaround — and there are ongoing questions over how robustly it will perform in practice. 

An analysis by The Register suggests the app will face a fresh set of issues in that iPhones specifically will fail to wake each other up to perform the handshakes — unless there’s also an Android device in the vicinity. If correct, it could result in big gaps in the tracing data (around 40% of UK smartphones run iOS vs 60% running Android).

Battery drain may also resurface as an issue with the UK system, though the NHSX has claimed its workaround solves this. (It’s not clear whether the unit has tested what happens if an iPhone user switches on a battery-saving mode that limits background app activity, for example.)

Other Bluetooth-based contacts tracing apps that have tried to work around platform limits have also faced issues with interference from other Bluetooth devices — such as Australia’s recently launched app. So there are a number of potential issues that could trouble performance.

Being outside the Apple-Google API also certainly means the UK app is at the mercy of future platform updates, which could derail the specific workaround. Plans that don’t involve plugging into an official interface inevitably operate on shaky ground.

Finally, there’s a huge and complex issue that’s essentially being glossed over by government right now: Interoperability with other national apps.

How will the UK app work across borders? What happens when Brits start travelling again? With no obvious route for centralized vs decentralized systems to interface and play nice with each other there’s a major question mark over what happens when UK citizens want to travel to countries with decentralized systems (or indeed vice versa). Mandatory quarantines because the government picked a less interoperable app architecture? Let’s hope not.

Notably, the Republic of Ireland has opted for a decentralized approach for its national app, whereas Northern Ireland, which is part of the UK but shares a land border with the Republic, will — barring any NHSX flip — be saddled with a centralized and thus opposing choice. It’s the Brexit schism all over again in app form.

Earlier this week the NHSX was asked about this cross-border issue by a UK parliamentary committee — and admitted it creates a challenge “we’ll have to work through”, though it did not suggest how it proposes to do that.

And while that’s a very pressing backyard challenge, the same interoperability gremlins arise across the English Channel — where a number of European countries are opting for decentralized apps, including Estonia, Germany and Switzerland. While Apple and Google’s choice at the platform level means future US apps may also be encouraged down a decentralized route. (The two US tech giants are demonstrably flexing their market power to press on and influence governments’ app design choices internationally.)

So countries that fix on a ‘DIY’ approach for the digital component of their domestic pandemic response may find it leads to some unwelcome isolation for their citizens at the international level.