The FBI is mad because it keeps getting into locked iPhones without Apple’s help

The debate over encryption continues to drag on without end.

In recent months, the discourse has largely swung away from encrypted smartphones to focus instead on end-to-end encrypted messaging. But a recent press conference by the heads of the Department of Justice (DOJ) and the Federal Bureau of Investigation (FBI) showed that the debate over device encryption isn’t dead; it was merely resting. And it just won’t go away.

At the presser, Attorney General William Barr and FBI Director Chris Wray announced that after months of work, FBI technicians had succeeded in unlocking the two iPhones used by the Saudi military officer who carried out a terrorist shooting at the Pensacola Naval Air Station in Florida in December 2019. The shooter died in the attack, which was quickly claimed by Al Qaeda in the Arabian Peninsula.

Early this year — a solid month after the shooting — Barr had asked Apple to help unlock the phones (one of which was damaged by a bullet), which were older iPhone 5 and 7 models. Apple provided “gigabytes of information” to investigators, including “iCloud backups, account information and transactional data for multiple accounts,” but drew the line at assisting with the devices. The situation threatened to revive the 2016 “Apple versus FBI” showdown over another locked iPhone following the San Bernardino terror attack.

After the government went to federal court to try to dragoon Apple into doing investigators’ job for them, the dispute ended anticlimactically when the government got into the phone itself after purchasing an exploit from an outside vendor the government refused to identify. The Pensacola case culminated much the same way, except that the FBI apparently used an in-house solution instead of a third party’s exploit.

You’d think the FBI’s success at a tricky task (remember, one of the phones had been shot) would be good news for the Bureau. Yet an unmistakable note of bitterness tinged the laudatory remarks at the press conference for the technicians who made it happen. Despite the Bureau’s impressive achievement, and despite the gobs of data Apple had provided, Barr and Wray devoted much of their remarks to maligning Apple, with Wray going so far as to say the government “received effectively no help” from the company.

This diversion tactic worked: in news stories covering the press conference, headline after headline after headline highlighted the FBI’s slam against Apple instead of focusing on what the press conference was nominally about: the fact that federal law enforcement agencies can get into locked iPhones without Apple’s assistance.

That should be the headline news, because it’s important. That inconvenient truth undercuts the agencies’ longstanding claim that they’re helpless in the face of Apple’s encryption and thus the company should be legally forced to weaken its device encryption for law enforcement access. No wonder Wray and Barr are so mad that their employees keep being good at their jobs.

By reviving the old blame-Apple routine, the two officials managed to evade a number of questions that their press conference left unanswered. What exactly are the FBI’s capabilities when it comes to accessing locked, encrypted smartphones? Wray claimed the technique developed by FBI technicians is “of pretty limited application” beyond the Pensacola iPhones. How limited? What other phone-cracking techniques does the FBI have, and which handset models and which mobile OS versions do those techniques reliably work on? In what kinds of cases, for what kinds of crimes, are these tools being used?

We also don’t know what’s changed internally at the Bureau since that damning 2018 Inspector General postmortem on the San Bernardino affair. Whatever happened with the FBI’s plans, announced in the IG report, to lower the barrier within the agency to using national security tools and techniques in criminal cases? Did that change come to pass, and did it play a role in the Pensacola success? Is the FBI cracking into criminal suspects’ phones using classified techniques from the national security context that might not pass muster in a court proceeding (were their use to be acknowledged at all)?

Further, how do the FBI’s in-house capabilities complement the larger ecosystem of tools and techniques for law enforcement to access locked phones? Those include devices from third-party vendors GrayShift and Cellebrite, which, in addition to the FBI, count numerous U.S. state and local police departments and federal immigration authorities among their clients. When plugged into a locked phone, these devices can bypass the phone’s encryption to yield up its contents, and (in the case of GrayShift) can plant spyware on an iPhone to log its passcode when police trick a phone’s owner into entering it. These devices work on very recent iPhone models: Cellebrite claims it can unlock any iPhone for law enforcement, and the FBI has unlocked an iPhone 11 Pro Max using GrayShift’s GrayKey device.

In addition to Cellebrite and GrayShift, which have a well-established U.S. customer base, the ecosystem of third-party phone-hacking companies includes entities that market remote-access phone-hacking software to governments around the world. Perhaps the most notorious example is the Israel-based NSO Group, whose Pegasus software has been used by foreign governments against dissidents, journalists, lawyers and human rights activists. The company’s U.S. arm has attempted to market Pegasus domestically to American police departments under another name. Which third-party vendors are supplying phone-hacking solutions to the FBI, and at what price?

Finally, who else besides the FBI will be the beneficiary of the technique that worked on the Pensacola phones? Does the FBI share the vendor tools it purchases, or its own home-rolled ones, with other agencies (federal, state, tribal or local)? Which tools, which agencies and for what kinds of cases? Even if it doesn’t share the techniques directly, will it use them to unlock phones for other agencies, as it did for a state prosecutor soon after purchasing the exploit for the San Bernardino iPhone?

We have little idea of the answers to any of these questions, because the FBI’s capabilities are a closely held secret. What advances and breakthroughs it has achieved, and which vendors it has paid, we (who provide the taxpayer dollars to fund this work) aren’t allowed to know. And the agency refuses to answer questions about encryption’s impact on its investigations even from members of Congress, who can be privy to confidential information denied to the general public.

The only public information coming out of the FBI’s phone-hacking black box is nothingburgers like the recent press conference. At an event all about the FBI’s phone-hacking capabilities, Director Wray and AG Barr cunningly managed to deflect the press’s attention onto Apple, dodging any difficult questions, such as what the FBI’s abilities mean for Americans’ privacy, civil liberties and data security, or even basic questions like how much the Pensacola phone-cracking operation cost.

As the recent PR spectacle demonstrated, a press conference isn’t oversight. And instead of exerting its oversight power, mandating more transparency, or requiring an accounting and cost/benefit analysis of the FBI’s phone-hacking expenditures — instead of demanding a straight and conclusive answer to the eternal question of whether, in light of the agency’s continually evolving capabilities, there’s really any need to force smartphone makers to weaken their device encryption — Congress is instead coming up with dangerous legislation such as the EARN IT Act, which risks undermining encryption right when a population forced by COVID-19 to do everything online from home can least afford it.

The best-case scenario now is that the federal agency that proved its untrustworthiness by lying to the Foreign Intelligence Surveillance Court can crack into our smartphones, but maybe not all of them; that maybe it isn’t sharing its toys with state and local police departments (which are rife with domestic abusers who’d love to get access to their victims’ phones); that unlike third-party vendor devices, maybe the FBI’s tools won’t end up on eBay where criminals can buy them; and that hopefully it hasn’t paid taxpayer money to the spyware company whose best-known government customer murdered and dismembered a journalist.

The worst-case scenario would be that, between in-house and third-party tools, pretty much any law enforcement agency can now reliably crack into everybody’s phones, and yet nevertheless this turns out to be the year they finally get their legislative victory over encryption anyway. I can’t wait to see what else 2020 has in store.

First major GDPR decisions looming on Twitter and Facebook

The lead data regulator for much of big tech in Europe is moving inexorably towards issuing its first major cross-border GDPR decision — saying today it’s submitted a draft decision related to Twitter’s business to its fellow EU watchdogs for review.

“The draft decision focusses on whether Twitter International Company has complied with Articles 33(1) and 33(5) of the GDPR,” said the Irish Data Protection Commission (DPC) in a statement.

Europe’s General Data Protection Regulation came into application two years ago as an update to the European Union’s long-standing data protection framework, baking in supersized fines for compliance violations. More interestingly, regulators have the power to order that violating data processing cease, while in many EU countries third parties such as consumer rights groups can file complaints on behalf of individuals.

Since GDPR began being applied, there have been thousands of complaints filed across the bloc, targeting companies large and small — alongside a rising clamour around a lack of enforcement in major cross-border cases pertaining to big tech.

So the timing of the DPC’s announcement on reaching a draft decision in its Twitter probe is likely no accident. (GDPR’s actual anniversary of application is May 25.)

The draft decision relates to an inquiry the regulator instigated itself, in November 2018, after the social network had reported a data breach — as data controllers are required to do promptly under GDPR, risking penalties should they fail to do so.

Other interested EU watchdogs (all of them in this case) will now have one month to consider the decision — and lodge “reasoned and relevant objections” should they disagree with the DPC’s reasoning, per the GDPR’s one-stop-shop mechanism which enables EU regulators to liaise on cross-border inquiries.

In instances where there is disagreement between DPAs on a decision the regulation contains a dispute resolution mechanism (Article 65) — which loops in the European Data Protection Board (EDPB) to make a final decision on a majority basis.

On the Twitter decision, the DPC told us it’s hopeful this can be finalized in July.

Commissioner Helen Dixon has previously said the first cross-border decisions would be coming “early” in 2020. However, the complexity of working through new processes — such as the one-stop-shop — appears to have taken EU regulators longer than hoped.

The DPC is also dealing with a massive caseload at this point: more than 20 cross-border investigations related to complaints and/or inquiries are still pending decisions, with active probes into the data processing habits of a large number of tech giants, including Apple, Facebook, Google, Instagram, LinkedIn, Tinder, Verizon (TechCrunch’s parent company) and WhatsApp — all in addition to its domestic caseload (and on a budget that’s considerably less than it requested from the Irish government).

The scope of some of these major cross-border inquiries may also have bogged Ireland’s regulator down.

But — two years in — there are signs of momentum picking up, with the DPC’s deputy commissioner, Graham Doyle, pointing today to developments on four additional investigations from the cross-border pile — all of which concern Facebook-owned platforms.

The furthest along of these is a probe into the level of transparency the tech giant provides about how user data is shared between its WhatsApp and Facebook services.

“We have this week sent a preliminary draft decision to WhatsApp Ireland Limited for their submissions which will be taken in to account by the DPC before preparing a draft decision in that matter also for Article 60 purposes,” said Doyle in a statement on that. “The inquiry into WhatsApp Ireland examines its compliance with Articles 12 to 14 of the GDPR in terms of transparency including in relation to transparency around what information is shared with Facebook.”

The other three cases the DPC said it’s making progress on relate to GDPR consent complaints filed back in May 2018 by the EU privacy rights not-for-profit, noyb.

noyb argues that Facebook uses a strategy of “forced consent” to continue processing individuals’ personal data — when the standard required by EU law is for users to be given a free choice unless consent is strictly necessary for provision of the service. (And noyb argues that microtargeted ads are not core to the provision of a social networking service; contextual ads could instead be served, for example.)

Back in January 2019, Google was fined $57M by France’s data watchdog, CNIL, over a similar complaint.

Per its statement today, the DPC said it has now completed the investigation phase of this complaint-based inquiry which it said is focused on “Facebook Ireland’s obligations to establish a lawful basis for personal data processing”.

“This inquiry is now in the decision-making phase at the DPC,” it added.

In further related developments it said it’s sent draft inquiry reports to the complainants and companies concerned for the same set of complaints for (Facebook owned) Instagram and WhatsApp. 

Doyle declined to give any firm timeline for when any of these additional inquiries might yield final decisions. But a summer date would, presumably, be the very earliest timeframe possible.

The regulator’s hope looks to be that once the first cross-border decision has made it through the GDPR’s one-stop-shop mechanism — and yielded something all DPAs can sign up to — it will grease the tracks for the next tranche of decisions.

That said, not all inquiries and decisions are equal, clearly. And what exactly the DPC decides in such high profile probes will be key to whether or not there’s disagreement from other data protection agencies. Different EU DPAs can take a harder or softer line on applying the bloc’s rules, with some considerably more ‘business friendly’ than others, though the GDPR was intended to shrink such differences of application.

If there is disagreement among regulators on major cross-border cases, such as the Facebook ones, the GDPR’s one-stop-shop mechanism will require more time to work through to find consensus. So critics of the regulation are likely to still have plenty to attack.

Some of the inquiries the DPC is leading are also likely to set standards which could have major implications for many platforms and digital businesses so there will be vested interests seeking to influence outcomes on all sides. But with GDPR hitting its second birthday — and still hardly any decision-shaped lumps taken out of big tech — the regional pressure for enforcements to get flowing is massive.

Given the blistering pace of tech developments — and the market muscle of big tech being applied to steamroller individual rights — EU regulators have to be able to close the gap between investigation and enforcement or watch their flagship framework derided as a paper tiger…

Schrems II

Summer is also shaping up to be an interesting time for privacy watchers for another reason, with a landmark decision due from Europe’s top court on July 16 on the so-called ‘Schrems II’ case (named for the Austrian lawyer, privacy rights campaigner and noyb founder, Max Schrems, who lodged the original complaint) — which relates to the legality of Standard Contractual Clauses (SCC) as a mechanism for personal data transfers out of the EU.

The DPC’s statement today makes a point of flagging this looming decision, with the regulator writing: “The case concerns proceedings initiated and pursued in the Irish High Court by the DPC which raised a number of significant questions about the regulation of international data transfers under EU data protection law. The judgement from the CJEU on foot of the reference made arising from these proceedings is anticipated to bring much needed clarity to aspects of the law and to represent a milestone in the law on international transfers.”

A legal opinion issued at the end of last year by an influential advisor to the court emphasized that EU data protection authorities have an obligation to step in and suspend data transfers by SCC if they are being used to send citizens’ data to a place where their information cannot be adequately protected.

Should the court hold to that view, all EU DPAs will have an obligation to consider the legality of SCC transfers to the US “on a case-by-case basis”, per Doyle.

“It will be in every single case you’d have to go and look at the set of circumstances in every single case to make a judgement whether to instruct them to cease doing it. There won’t be just a one size fits all,” he told TechCrunch. “It’s an extremely significant ruling.”

(If you’re curious about ‘Schrems I’, read this from 2015.)

Apple’s handling of Siri snippets back in the frame after letter of complaint to EU privacy regulators

Apple is facing fresh questions from its lead data protection regulator in Europe following a public complaint by a former contractor who revealed last year that workers doing quality grading for Siri were routinely overhearing sensitive user data.

Earlier this week the former Apple contractor, Thomas le Bonniec, sent a letter to European regulators laying out his concern at the lack of enforcement on the issue — in which he wrote: “I am extremely concerned that big tech companies are basically wiretapping entire populations despite European citizens being told the EU has one of the strongest data protection laws in the world. Passing a law is not good enough: it needs to be enforced upon privacy offenders.”

The timing of the letter comes as Europe’s updated data protection framework, the GDPR, reaches its two-year anniversary — facing ongoing questions around the lack of enforcement related to a string of cross-border complaints.

Ireland’s Data Protection Commission (DPC) has been taking the brunt of criticism over whether the General Data Protection Regulation is functioning as intended — as a result of how many tech giants locate their regional headquarters on its soil (Apple included).

Responding to the latest Apple complaint from le Bonniec, the DPC’s deputy commissioner, Graham Doyle, told TechCrunch: “The DPC engaged with Apple on this issue when it first arose last summer and Apple has since made some changes. However, we have followed up again with Apple following the release of this public statement and await responses.”

At the time of writing Apple had not responded to a request for comment.

The Irish DPC is currently handling more than 20 major cross-border cases as lead data protection agency — probing the data processing activities of companies including Apple, Facebook, Google and Twitter. So le Bonniec’s letter adds to the pile of pressure on commissioner Helen Dixon to begin issuing decisions vis-a-vis cross-border GDPR complaints. (Some of which are now a full two years old.)

Last year Dixon said the first decisions for these cross-border cases would be coming “early” in 2020.

At issue is that if Europe’s recently updated flagship data protection regime isn’t seen to be functioning well two years in — and is still saddled with a bottleneck of high profile cases, rather than having a string of major decisions to its name — it will be increasingly difficult for the region’s lawmakers to sell it as a success.

At the same time the existence of a pan-EU data protection regime — and the attention paid to contravention, by both media and regulators — has had a tangible impact on certain practices.

Apple suspended human review of Siri snippets globally last August, after The Guardian had reported that contractors it employed to review audio recordings of users of its voice assistant tech — for quality grading purposes — regularly listened in to sensitive content such as medical information and even recordings of couples having sex.

Later the same month it made changes to the grading program, switching audio review to an explicitly opt-in process. It also brought the work in house — meaning only Apple employees have since been reviewing Siri users’ opt-in audio.

The tech giant also apologized, but did not appear to face any specific regulatory sanction for practices that do look to have been incompatible with Europe’s laws — owing to the lack of transparency and explicit consent around the human review program. Hence le Bonniec’s letter of complaint now.

A number of other tech giants also made changes to their own human grading programs around the same time.

Doyle also pointed out that guidance for EU regulators on voice AI tech is in the works, saying: “It should be noted that the European Data Protection Board is working on the production of guidance in the area of voice assistant technologies.”

We’ve reached out to the European Data Protection Board for comment.

Skyflow raises $7.5M to build its privacy API business

Skyflow, a Mountain View-based privacy API company, announced this morning that it has closed a $7.5 million round of capital it describes as a seed investment. Foundation Capital’s Ashu Garg led the round, with the company touting smaller checks from Jeff Immelt (former GE CEO) and Jonathan Bush (former AthenaHealth CEO).

For Skyflow, founded in 2019, the capital raise and its accompanying announcement mark an exit from quasi-stealth mode.

TechCrunch knew a little about Skyflow before it announced its seed round because one of its co-founders, Anshu Sharma, is a former Salesforce executive and former venture partner at Storm Ventures, a venture capital firm that focuses on enterprise SaaS businesses. That he left the venture world to eventually found something new caught our eye.

Sharma co-founded the company with Prakash Khot, another former Salesforce denizen.

So what is Skyflow? In a sense it’s the nexus of two trends: the growing importance of data security (privacy, in other words) and API-based companies. Skyflow’s product is an API that allows its customers — businesses, not individuals — to store sensitive user information, like Social Security numbers, securely.

Chatting with Sharma in advance of the funding, the CEO told TechCrunch that many providers of cybersecurity solutions today sell products that raise a company’s walls a little higher against certain threats. Once breached, however, the data stored inside is loose. Skyflow wants to make sure that its customers cannot lose your personal information.

Sharma likened Skyflow to other API companies that work to take complex services — Twilio’s telephony API, Stripe’s payments API, and so forth — and provide a simple endpoint for companies to hook into, giving them access to something hard with ease.

Comparing his company’s product to privacy-focused solutions like Apple Pay, the CEO said in a release that “Skyflow has taken a similar approach to all the sensitive data so companies can run their workflows, analytics and machine learning to serve the customer, but do so without exposing the data as a result of a potential theft or breach.”
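Skyflow hasn’t published the details of its interface, but the vault-behind-an-API idea the article describes is commonly implemented via tokenization: the application keeps an opaque token while the raw value lives only inside the vault. A minimal sketch of that pattern (every name here is hypothetical, not Skyflow’s actual API):

```python
import secrets

class DataVault:
    """Toy stand-in for a privacy vault: sensitive fields are swapped for
    opaque tokens, so the application's own database never holds raw values."""

    def __init__(self):
        self._store = {}  # token -> raw value; the only place raw data lives

    def tokenize(self, value: str) -> str:
        # Issue a random, non-reversible token in place of the raw value.
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str, authorized: bool = False) -> str:
        # Raw values are released only to explicitly authorized callers.
        if not authorized:
            raise PermissionError("caller not authorized to read raw data")
        return self._store[token]

vault = DataVault()
token = vault.tokenize("078-05-1120")  # e.g. a Social Security number
# A breach of the application's database exposes "tok_..." strings,
# not the underlying SSN.
assert token.startswith("tok_")
assert vault.detokenize(token, authorized=True) == "078-05-1120"
```

The point of the design is that a breach of the customer’s systems yields only tokens, which are worthless without access to the vault itself.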

It’s an interesting idea. If the technology works as promised, Skyflow could help a host of companies that either can’t afford, or simply can’t be bothered, to properly protect your data that they have collected.

If you are not still furious with Equifax, a company that decided that it was a fine idea to collect your personal information so it could grade you and then lost “hundreds of millions of customer records,” Skyflow might not excite you. But if the law is willing to let firms leak your data with little punishment, tooling to help companies be a bit less awful concerning data security is welcome.

Skyflow is not the only API-based company that has raised recently. Daily.co picked up funds recently for its video-chatting API, FalconX raised money for its crypto pricing and trading API, and CNBC reported today that another privacy-focused API company called Evervault has also taken on capital.

Skyflow’s model, however, may differ a little from how other API-built companies have priced themselves. Given that the data it will store for customers isn’t accessed as often, say, as a customer might ping Twilio’s API, Skyflow won’t charge usage rates for its product. After discussing the topic with Sharma, our impression is that Skyflow — once it formally launches its service commercially — will look something like a SaaS business.

The cloud isn’t coming, it’s here. And companies are awful at cybersecurity. Skyflow is betting its engineering-heavy team can make that better, while making money. Let’s see.

Automattic pumps $4.6M into New Vector to help grow Matrix, an open, decentralized comms ecosystem

Automattic, the open source force behind WordPress.com, WooCommerce, Longreads, Simplenote and Tumblr, has made a $4.6M strategic investment into New Vector — the creators of an open, decentralized communications standard called Matrix. They also develop a Slack rival (Riot) which runs on Matrix.

The investment by Automattic, which is at a higher valuation than the last tranche New Vector took in, extends an $8.5M Series A last year, from enterprise tech specialists Notion Capital and Dawn Capital plus European seed fund Firstminute Capital — and brings the total raised to date to $18.1M. (Which includes an earlier $5M in strategic investment from an Ethereum-based secure chat and crypto wallet app, Status).

New Vector’s decentralized tech powers instant messaging for a number of government users, including France — which forked Riot to launch a messaging app last year (Tchap) — and Germany, which just announced its armed forces will be adopting Matrix as the backbone for all internal comms; as well as for the likes of KDE, Mozilla, RedHat and Wikimedia, to name a few.

Getting Automattic on board is clearly a major strategic boost for Matrix — one that’s allowing New Vector to dream big.

“It’s very much a step forwards,” New Vector CEO and CTO and Matrix co-founder, Matthew Hodgson, tells TechCrunch. “We’re hopefully going to get the support from Automattic for really expanding the ecosystem, bringing Matrix functionality into WordPress — and all the various WordPress plugins that Automattic does. And likewise open up Matrix to all of those users too.”

A blog post announcing the strategic investment dangles the intriguing possibility of a decentralized Tumblr — or all WordPress sites automatically getting their own Matrix chatroom.

“This is huge news, not least because WordPress literally runs over 36% of the websites on today’s web – and the potential of bringing Matrix to all those users is incredible,” New Vector writes in the blog post. “Imagine if every WP site automatically came with its own Matrix room or community?  Imagine if all content in WP automatically was published into Matrix as well as the Web?… Imagine there was an excellent Matrix client available as a WordPress plugin for embedding realtime chat into your site?”

Those possibilities remain intriguing ideas for now. But as well as ploughing funding into New Vector, Automattic is opening up a job for a Matrix.org/WordPress integrations engineer — so the Matrix team has another tangible reason to be excited about future integrations.
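For a flavour of what such an integration would involve: publishing a new post into a Matrix room is a single authenticated PUT against the Matrix client-server API. The sketch below only builds the request shape (no network I/O); the homeserver, room ID and token are placeholders:

```python
import json
from urllib.parse import quote

def build_matrix_post(homeserver: str, room_id: str, txn_id: str,
                      access_token: str, text: str):
    """Build (method, url, body) for sending an m.room.message event via
    the Matrix client-server API (r0); returns the request, doesn't send it."""
    url = (f"{homeserver}/_matrix/client/r0/rooms/{quote(room_id)}"
           f"/send/m.room.message/{quote(txn_id)}"
           f"?access_token={quote(access_token)}")
    body = json.dumps({"msgtype": "m.text", "body": text})
    return "PUT", url, body

# Hypothetical values: a WP plugin might call this on each new post.
method, url, body = build_matrix_post(
    "https://matrix.example.org", "!room:example.org",
    "txn1", "ACCESS_TOKEN", "New post published: Hello world")
```

A real plugin would also need to register or log in to obtain the access token, and would likely de-duplicate sends using the transaction ID, which Matrix uses for exactly that purpose.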

“One of the best and the biggest open source guys really believes in what we’re doing and is interested in trying to open up the worlds of WordPress into the decentralized world of Matrix,” adds Hodgson. “In some ways it’s reassuring that a relatively established company like Automattic is keeping its eye on the horizon and putting their chips on the decentralized future. Whereas they could be ‘doing a Facebook’ and just sitting around and keeping everything centralized and as locked down as possible.”

“It’s a bit of a validation,” says Matrix co-founder and New Vector head of ops and products, Amandine le Pape. “The same way getting funding from VCs was validation of the fact it’s a viable business. Here it’s a validation it’s actually a mainstream open source project which can really grow.”

New Vector co-founders, Matthew Hodgson and Amandine le Pape

While the strategic investment offer from Automattic was obviously just a great opportunity to be seized by New Vector, given ideological alignment and integration potential, it also comes at a helpful time, per le Pape, given they’ve been growing their SaaS business.

“The business model that we’re looking at with New Vector to go and drive — both to fund Matrix and also to keep the lights on and grow the projects and the company — is very, very similar to what Automattic have successfully done with WordPress.com,” adds Hodgson. “So being able to compare notes directly with their board and our board to go and say to them how do you make this work between the WordPress.org and the WordPress.com split should be a really useful tool for us.”

While Matrix users can choose to host their own servers there’s obviously a high degree of complexity (and potential expense) involved in doing so. Hence New Vector’s business model is to offer a paid Matrix hosting service, called Modular, where it takes care of the complexity of hosting for a fee. (Marketing copy on the Modular website urges potential customers to: “Sign up and deploy your own secure chat service in seconds!”)

“Some of our highest profile customers like Mozilla could go and run it themselves, obviously. Mozilla know tech. But in practice it’s a lot easier and a lot cheaper overall for them to just go and get us to run it,” adds Hodgson. “The nice thing is that they have complete self sovereignty over their data. It’s their DNS. We give them access to the database. They could move off at any time… switch hosting provider or run it themselves. [Users] typically start off with us as a way to get up and running.”

Talking of moving, Hodgson says he expects Automattic to move over from Slack to Riot following this investment.

“I am very excited about what New Vector is doing with Matrix — creating a robust, secure, open protocol that can bring all flavors of instant messaging and collaboration together, in the way that the web or email has its foundation layer,” added Automattic founder, Matt Mullenweg, in a supporting statement. “I share New Vector’s passion for open source and the power of open standards. I’m excited to see how Automattic and New Vector can collaborate on our shared vision in the future.” 

Mullenweg was already a supporter of Matrix, chipping into its seed via Patreon back in 2017. At the time the team was transitioning from being incubated and wholly financed by Amdocs, a telco supplier where New Vector’s co-founders used to work (running its unified comms division), to spinning out and casting around for new sources of funding to continue development of their decentralized standard.

Some three years on — now with another multi-million dollar tranche of funding in the bank — Hodgson says New Vector is able to contemplate the prospect of profitability ahead, with ~16.8 million users and 45,000 deployments at this point (up from 11M and 40k back in October).

“I think there’s also a high chance — touch wood — that this injection gives us a path straight through to profitability if needed,” he tells us. “Given the macroeconomic uncertainty thanks to the [COVID-19] pandemic, the opportunity to say we have this amount of cash in the bank, assuming our customers follow roughly the trajectory that we’d seen so far… this would be a way to get out the other side without having to depend on any further funding.

“If things are on track we probably would do additional funding next year in order to double down on the success. But right now this at least gives us a pretty chunky safety net.”

The coronavirus crisis has been accelerating interest in Matrix “significantly”, per Hodgson, as entities that might have been contemplating a switch to decentralized comms down the line feel far greater imperative to take control of their data — now that so many users are logging on from home.

“As lockdowns began we saw sign ups increase by a factor of about 10,” he says. “It’s tapered off a little bit but it was a real scaling drama overnight. We had to launch an entirely new set of videoconferencing deployments on Jitsi’s offering, as well as scaling up the hardware for the service which we run by several times over.

“We’re also seeing retention go up, which was nice. We assumed there would be a huge spike of users desperately trying to find a home and then they wouldn’t necessarily stick around. In practice they’ve stuck around more than the existing user base which is reassuring.”

In some cases, New Vector has seen customers radically shrink planned deployment timescales — from months to a matter of days.

“We literally had one [educational] outfit in Germany reach out and say [forget] that tender in September — we want you to go live on Monday,” says Hodgson, noting that in this instance the customer skipped the entire tendering process because they felt they needed a secure system school kids could use. (Privacy concerns ruled out centralized options such as Zoom or Microsoft Teams.)

“The biggest impact from a New Vector perspective at least has been that a lot of our slower moving, bigger opportunities — particularly in the public sector with governments — have suddenly sped up massively,” he adds. “Because it was previously a nice to have premium thing — ‘wouldn’t it be good if we had our own encrypted messenger and if everybody wasn’t using Telegram or WhatsApp to run our country’ — and then suddenly, with the entire population of whichever country it might be suddenly having to work remotely it’s become an existential requirement to have high quality communication, and having that encrypted and self sovereign is a massive deal.”

In terms of competing with Slack (et al), the biggest consideration is usability and UX, according to Hodgson.

So, over the last year, New Vector has hired a dedicated in-house design team to focus on smoothing any overly geeky edges — though most of this work is yet to be pushed out to users.

“We’ve actually pivoted the entire development of Riot to be design led,” he says. “It’s no longer a whole bunch of developers, like myself, going and hacking away on it — instead the product owner and the product direction’s being laid by the design team. And it is an unrecognizable difference — in terms of focus and usability.

“Over the coming year we are expecting Riot to basically be rebuilt at least cosmetically to get rid of the complexity and the geekiness and the IRC hangovers which we have today in favor of something that can genuinely punch its weight against Slack and Discord.”

In another major recent development, New Vector switched on end-to-end encryption across the board in Riot, making it the default for all new non-public conversations (DMs and private chats).

“It’s the equivalent of email suddenly mandating PGP and managing not to break everything,” says Hodgson of that feat.

A key challenge was to “get parity” with users of the non-encrypted version of Matrix before it could be enabled everywhere — with associated problems to tackle, such as search.

“Typically we were doing search on the server and if the messages are encrypted the server obviously can’t index them — so we had to shift all of our search capabilities to run client side. We went and wrote a whole bunch of Rust that allows you to basically embed a search engine into Riot on the client, including on the desktop version, so that people can actually reach their encrypted message history there and share it between devices,” he explains.
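A client-side index of the kind Hodgson describes can be sketched in a few lines. This is purely an illustrative toy (the class, method names and tokenizer are invented for this sketch, not Riot's actual engine), but it shows the core idea: decrypt locally and index locally, so the server never sees plaintext.

```python
import re
from collections import defaultdict

class ClientSideIndex:
    """A toy inverted index kept entirely on the client, so the
    server only ever stores ciphertext it cannot index."""

    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of message ids
        self.messages = {}                # message id -> plaintext

    def add(self, msg_id, plaintext):
        # Indexing runs locally, after the client has decrypted the event.
        self.messages[msg_id] = plaintext
        for term in re.findall(r"\w+", plaintext.lower()):
            self.postings[term].add(msg_id)

    def search(self, query):
        # Return messages containing every term in the query.
        terms = re.findall(r"\w+", query.lower())
        if not terms:
            return []
        ids = set.intersection(*(self.postings[t] for t in terms))
        return [self.messages[i] for i in sorted(ids)]

index = ClientSideIndex()
index.add("$evt1", "Meet at the standup tomorrow")
index.add("$evt2", "Standup moved to Friday")
print(index.search("standup"))
```

The trade-off is that every device has to build and store its own index, rather than querying one shared server-side index.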

Another focus for the e2e work was the verification process — which is also now built in by default.

“When you now log into Riot it forces you to scan a QR code on an existing login if you’ve already logged in somewhere. A bit like you do on WhatsApp web, but rather than just using it to authenticate you it also goes and proves that you are a legitimate person on that account,” he says. “So everyone else then knows to trust that login completely — so that if there is an attack of some kind, if your admin tries to add a malicious device into your account to spy on you or if there’s a man-in-the-middle attack, or something like that, everybody can see that the untrusted device hasn’t been verified by you.

“It’s basically building out a simple web of trust of your devices and immediate contacts so that you have complete protection against ghost devices or other nastier attempts to go and compromise the account. The combination of using QR codes and also using emoji comparison rather than having to read out numbers to one another is I think almost unique now, in terms of creating really, really super robust end-to-end encryption.”
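The emoji comparison Hodgson mentions can be illustrated with a short sketch. This is not the actual Matrix SAS verification protocol (which specifies its own key agreement and a 64-entry emoji table); the derivation and table below are simplified stand-ins for the idea of turning a shared secret into something two humans can compare at a glance.

```python
import hashlib
import hmac

# A stand-in emoji table; the real Matrix spec defines 64 entries.
EMOJI = ["🐶", "🐱", "🦁", "🐴", "🦄", "🐷", "🐘", "🐰"]

def emoji_fingerprint(shared_secret: bytes, info: bytes, count: int = 7):
    # Derive deterministic bytes from the shared secret. Both devices
    # compute the same sequence, so their users can compare the emoji.
    digest = hmac.new(shared_secret, info, hashlib.sha256).digest()
    return [EMOJI[b % len(EMOJI)] for b in digest[:count]]

# Both devices hold the same key-agreement output, so they render the
# same emoji; a man-in-the-middle would produce a visible mismatch.
alice = emoji_fingerprint(b"shared-dh-secret", b"SAS|alice|bob")
bob = emoji_fingerprint(b"shared-dh-secret", b"SAS|alice|bob")
assert alice == bob
```

Comparing a handful of pictures is much less error-prone for users than reading long numeric fingerprints aloud, which is the design choice Hodgson is pointing at.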

The e2e encryption Matrix uses is based on algorithms popularized by the Signal protocol. It was audited by NCC Group in 2016 but plans for the new funding include a full stack audit — once they’ve ironed out any teething issues with the new default e2e.

“[We want to] at least pick a path, a particular set of clients and servers — because we can’t do the whole thing, obviously, because Matrix has got 60-70 different apps on it now, or different clients. And there are at least four viable server implementations but we will pick the long term supported official path and at least find a set which we can then audit and recommend to governments,” says Hodgson of the audit plans.

They’re also working with Jitsi on a project to make the latter’s WebRTC-compatible videoconferencing platform e2e encrypted too — another key piece as Jitsi’s tech is what New Vector offers for video calling via Matrix.

“We partner with Jitsi for the videoconferencing side of things and we’re working with them on their e2e encrypted videoconferencing… They [recently] got the world’s first WebRTC-based e2e encrypted conferencing going. And they plan to use Matrix as the way to exchange the keys for that — using also all of the verification process [New Vector has developed for Riot]. Because end-to-end encryption’s great, obviously in terms of securing the data — but if you don’t know who you’re talking to, in terms of verifying their identity, it’s a complete waste of time,” adds Hodgson.

So when Jitsi’s e2e encryption launches New Vector will be able to include e2e encrypted videoconferencing as part of its decentralized bundle too.

How much growth is New Vector expecting for Matrix over the next 12 months? “We’ve tripled almost all of the sizing metrics for the network in the last year, and I think we tripled the year before that so I’m hoping that we can continue on that trajectory,” he says.

Another “fun thing” New Vector has been working on, since the end of last year, is a peer-to-peer version of Matrix — having developed a “sufficiently lightweight server implementation” that allows Matrix users to run Riot in a decentralized p2p space via a web browser (or via the app on a mobile device).

“We turned on the peer-to-peer network about a month ago now and they’re at the point right now of making it persistent — previously if all of the clients on the network went away then the entire network disappeared, whereas now it has the ability to persist even if people start restarting their browsers and apps. And it’s very much a mad science project but as far as I know nobody else is remotely in that ballpark,” he says.

“The nice thing is it looks and feels identical to Matrix today. You can use all of the clients, all of the bridges that people have already written… It just happens to be that Riot is connecting to a server wedged into itself rather than talking to one sitting on the server… So it’s a total paradigm shift.”

“We weren’t sure it was going to work at all but in practice it’s working better than we could have hoped,” he adds. “Over the next year or so we’re going to expect to see more and more emphasis on peer-to-peer — possibly even by default. So that if you install Riot you don’t have to pick a server and go through this fairly clunky thing of figuring out what service provider to trust and do you want to buy one from us as New Vector or do you want to [use] a Swiss ISP. Instead you can start off bobbing around the ocean in a pure peer-to-peer land, and then if you want to persist your data somewhere then you go and find a server to pin yourself to a home on the Internet. But it would be a completely different way of thinking about things.”

Those interested in dipping a toe in p2p decentralized IM can check out this flavor of Riot in a web browser via p2p.riot.im.

Europe to Facebook: Pay taxes and respect our values — or we’ll regulate

A livestreamed “debate” yesterday between Facebook CEO Mark Zuckerberg and a European commissioner shaping digital policy for the internal market, Thierry Breton, sounded cordial enough on the surface, with Breton making several chummy references to “Mark” — and talking about “having dialogue to establish the right governance” for digital platforms — while Zuckerberg kept it respectful sounding by indirectly addressing “the commissioner”.

But the underlying message from Europe to Facebook remained steely: Comply with our rules or expect regulation to make that happen.

If Facebook chooses to invest in ‘smart’ workarounds — whether for ‘creatively’ shrinking its regional tax bill or circumventing democratic values and processes — the company should expect lawmakers to respond in kind, Breton told Zuckerberg.

“In Europe we have [clear and strong] values. They are clear. And if you understand extremely well the set of our values on which we are building our continent, year after year, you understand how you need to behave,” said the commissioner. “And I think that when you are running a systemic platform it’s extremely important to understand these values so that we will be able to anticipate — and even better — to work together with us, to build, year after year, the new governance.

“We will not do this overnight. We will have to build it year after year. But I think it’s extremely important to anticipate what could create some “bad reaction” which will force us to regulate.”

“Let’s think about taxes,” Breton added. “I have been a CEO myself and I always talk to my team, don’t try to be too smart. Pay taxes where you have to pay taxes. Don’t go to a haven. Pay taxes. Don’t be too smart with taxes. It’s an important issue for countries where you operate — so don’t be too smart.

“‘Don’t be too smart’ it may be something that we need to learn in the days to come.”

Work with us, not against us

The core message that platforms need to fit in with European rules, not vice versa, is one Breton has been sounding ever since taking up a senior post in the Commission late last year.

Although yesterday he was careful to throw his customary bone alongside it too, saying he doesn’t want to have to regulate; his preference remains for cooperation and ‘partnership’ between platforms and regulators in service of citizens — unless of course he has no other choice. So the message from Brussels to big tech remains: ‘Do what we ask or we’ll make laws you can’t ignore’.

This Commission, of which Breton is a part, took up its five-year mandate at the end of last year — and has unveiled several pieces of a major digital policy reform plan this year, including around sharing industrial data for business and research; and proposing rules for certain ‘high risk’ AI applications.

But a major rethink of platform liabilities remains in the works. Though yesterday Breton declined to give any fresh details on the forthcoming legislation, saying only that it would arrive by the end of the year.

The Digital Services Act could have serious ramifications for Facebook’s business, which explains why Zuckerberg made time to dial into a video chat with the Brussels lawmaker. Something the Facebook CEO has consistently refused the British parliament — and denied multiple international parliaments when parliamentarians joined forces to try to question him about political disinformation.

The hour-long online discussion between the tech giant CEO and a Brussels lawmaker intimately involved in shaping the future of regional platform regulation was organized by Cerre, a Brussels-based think tank which is focused on the regulation of network and digital industries.

It was moderated by Cerre’s director general, Bruno Liebhaberg, who chose and posed the questions, with a couple selected from audience submissions.

Zuckerberg brought the laundry list of talking points he deploys whenever regulation that might limit the scope and scale of his global empire is discussed — seeking, for example, to frame the only available options vis-a-vis digital rules as a choice between the US way or China’s.

That’s a framing that does not go down well in Europe, however.

The Commission has long talked up the idea of championing a third, uniquely European way for tech regulation — saying it will put guardrails on digital platforms in order to ensure they operate in service of European values and so that citizens’ rights and freedoms are not only not eroded by technology but actively supported. Hence its talk of ‘trustworthy AI’.

(That’s the Commission rhetoric at least; however its first draft for regulating AI was far lighter touch than rights advocates had hoped, with a narrow focus on so-called ‘high risk’ applications of AI — glossing over the full spectrum of rights risks which automation can engender.)

Zuckerberg’s simplistic dichotomy of ‘my way or the China highway’ seems unlikely to win him friends or influence among European lawmakers. It implies he simply hasn’t noticed — or is actively ignoring — regional ambitions to champion a digital regulation standard of its own. Neither of which will impress in Brussels.

The Facebook CEO also sought to leverage the Cambridge Analytica data misuse scandal — claiming the episode is an example of the risks should dominant platforms be required to share data with rivals, such as if regulation bakes in portability requirements to try to level the competitive playing field.

It was too much openness in the past that led to Facebook users’ data being nefariously harvested by the app developer that was working for Cambridge Analytica, was his claim.

That claim is also unlikely to go down well in Europe where Zuckerberg faced hostile questions from EU parliamentarians back in 2018, after the scandal broke — including calls for European citizens to be compensated for misuse of their Facebook data.

Facebook’s business, meanwhile, remains subject to multiple, ongoing investigations related to its handling of EU citizens’ personal data. Yet Zuckerberg’s only mention of Europe’s GDPR during the conversation was a claim of “compliance” with the pan-EU data protection framework, which he also suggested has raised the standards Facebook offers users elsewhere.

Another area where the Facebook CEO sought to muddy the water — and so lobby to narrow the scope of any future pan-EU platform regulations — was around which bits of data should be considered to belong to a particular user. And whether, therefore, the user should have the right to port them elsewhere.

“In general I’ve been very in favor of data portability and I think that having the right regulation to enforce this would be very helpful. In general I don’t think anyone is against the idea that you should be able to take your data from one service to another — I think all of the hard questions are in how you define what is your data and, especially in the context of social services, what is another person’s data?” he said.

He gave the example of friends’ birthdays — which Facebook can display to users — questioning whether a user should therefore be able to port that data into a calendar app.

“Do your friends need to now sign off and every single person agree that they’re okay with you exporting that data to your calendar? Because if that needs to happen, in practice it’s just going to be too difficult and no developer’s going to bother building that integration,” he suggested. “And it might be kind of annoying to request that from all of your friends. So where we would draw the line on what is your data and what is your friends’ is, I think, a very critical question here.

“This isn’t just an abstract thing. Our platform started off more open and on the side of data portability — and to be clear that’s exactly one of the reasons why we got into the issues around Cambridge Analytica that we got into because our platform used to work in the way where a person could more easily sign into an app and bring data that their friends had shared with them, under the idea that if their friend had shared something with you, for you to be able to see and use that, you should be able to use that in a different app.

“But obviously we’ve seen the downsides of that — which is that if you bring data that a friend has shared with you to another app and that app ends up being malicious then now a lot of people’s data can be used in a way they didn’t expect. So getting the nuance right on data portability I think is extremely important. And we have to recognize that there are direct trade-offs about openness and privacy. And if our directive is we want to lock everything down from a privacy perspective as much as possible then it won’t be as possible to have an open ecosystem as we want. And that’s going to mean making compromises on innovation and competition and academic research, and things like that.”

Regulation that helps industry “balance these two important values around openness and privacy”, as Zuckerberg put it, would thus be welcomed at 1 Hacker Way.

Breton followed this monologue by raising what he called “the stickiness” of data, and pointing out that “access to data is the number one asset for the platform economy”.

“It’s important in this platform economy but — but! — competition will come. And you will have some platforms allowing this portability probably faster than you think,” he said. “So I think it’s already important to anticipate at the end of the day what your customers are willing to have.”

“Portability will happen,” Breton added. “It’s not easy, it’s not an easy way to find an easy pass but… what we are talking about is how to frame this fourth dimension — the data space… We are still at the very beginning. It will take probably one generation. And it will take time. But let me tell you something: in terms of personal data, more and more the customers will understand and will request that the personal data belongs to them. They will ask for portability one way or the other.”

On “misinformation”, which was the first topic Zuckerberg chose to highlight — referring to it as misinformation (rather than ‘disinformation’ or indeed ‘fakes’) — he had come prepared with a couple of stats to back up a claim that Facebook has “stepped up efforts” to fight fakes related to the coronavirus crisis.

“In general we’ve been able to really step up the efforts to fight misinformation. We’ve taken down hundreds of thousands of pieces of harmful misinformation. And our independent fact-checking program has yielded more than 50M warnings being shown on pieces of content that are false related to COVID,” he said, claiming 95% of the time people are shown such labelled content “they don’t end up clicking through” — further suggesting “this is a really good collaboration”.

(Albeit, back-of-the-envelope math says 5% of 50M is still 2.5 million clicks in just that one narrow example…)

Breton came in later in the conversation with another deflator, after he was asked whether the current EU code of practice on disinformation — a self-regulatory initiative which several tech platforms have signed up for — is “sufficient” governance.

“We will never do enough,” he rejoined. “Let’s be clear. In terms of disinformation we will never do enough. This is a disease of the century. So everything we have done has to be followed.”

“It’s a huge issue,” Breton went on, saying his preference as a former CEO is for KPIs that “demonstrate we’re progressing”. “So of course we need to follow the progress and if I’m not able to report [to other EU institutions and commissioners] with strong KPIs we will have to regulate — stronger.”

He added that platforms cooperating on self regulation in this area gave him reason to be optimistic that further progress could be made — but emphasized: “This issue is extremely important for our democracy. Extremely… So we will be extremely attentive.”

The commissioner also made a point of instructing Zuckerberg that the buck stops with him — as CEO — lightly dismissing the prospect of Facebook’s newly minted ‘oversight board’ providing any camouflage at all on the decision-making front, after Zuckerberg had raised it earlier in the conversation.

“When you’re a CEO at the end of the day you are the only one to be responsible, no one else… You have an obligation to do your due diligence when you take decisions,” said Breton, after scattering a little polite praise for the oversight board as “a very good idea”.

“Understand what I’m trying to tell you — when you are the CEO of an important platform you have to deal with a lot of stakeholders. So it’s important of course that you have bodies, could be advisory bodies, could be a board of directors, it could be any kind of things, to help you to understand what these stakeholders have to tell you because at the end of the day the mission of a CEO is to be able to listen to everyone and then to take the decision. But at the end of the day it will be Mark that will be responsible.”

In another direct instruction, Breton warned the Facebook CEO against playing “a gatekeeper role”.

“Be careful to help our internal market, don’t play a role where you will be a systemic player, the gatekeeper controlling others to play with. Be careful with the democracy. Anticipate what’s going to happen. Be careful with disinformation. It could have a bad impact on what is extremely important for us — including our values,” he said, appealing to Zuckerberg “to work together, to design together the right governance tools and behavior” — and ending with a Silicon Valley-style appeal to ‘build the future together’.

The inescapable point Breton was working towards was just “because something is not prohibited it doesn’t mean that it’s authorized”. So, in other words, platforms must learn to ask regulators for permission — and should not expect any forgiveness if they fail to do this. This principle is especially important for the digital market and the information society at large, Breton concluded.

A particular low point for Zuckerberg during the conversation came earlier, when Liebhaberg had asked for his assessment of the effectiveness of content moderation measures Facebook has taken so far — and specifically in terms of how swiftly it’s removing illegal and/or harmful content. (Related: Last week France became the latest EU country to pass a law requiring platforms quickly remove illegal content such as hate speech.)

Zuckerberg made a nod to his usual “free expression vs content moderation” talking point — before segueing into a claim of progress on “increasingly proactive” content moderation via the use of artificial intelligence (“AI”) and what he referred to as “human systems”.

“Over the last few years… we’ve upgraded all of our content review systems to now… our primary goal at this point is what percent of the content that’s going to be harmful can our systems proactively identify and take down before anyone even sees that? Some of that is AI and some of that is human systems,” he said, referring to the up to 30,000 people Facebook pays to use their brain and eyes for content moderation as “human systems”.

“If a person has to see it and report it to us we’re not going to catch everything ourselves but in general if someone has to report it to us then that means that we should be doing a bit better in future. So there’s still a lot of innovation to happen here,” Zuckerberg went on, adding: “We’re getting a lot better at this. I think our systems are continually improving.”

His use of the plural of “systems” at this point suggests he was including human beings in his calculus.

Yet he made no mention of the mental health toll that the moderation work entails for the thousands of people Facebook’s business depends upon to pick up the circa 20% of hate speech he conceded its AI systems still cannot identify. (He did not offer any performance metrics for how (in)effective AI systems are at proactively identifying other types of content which human moderators are routinely exposed to so Facebook users don’t have to — such as images of rape, murder and violence.)

Just last week Facebook paid $52M to settle a lawsuit brought by 11,000 current and former content moderators who developed mental health issues including PTSD on the job.

The Verge reported that under the terms of the settlement every moderator will receive $1,000, which they can spend as they wish, though Facebook intends it in part to fund medical treatment, such as seeking a diagnosis for any mental health issues a moderator may be suffering.

How will Europe’s coronavirus contacts tracing apps work across borders?

A major question mark attached to national coronavirus contacts tracing apps is whether they will function when citizens of one country travel to another. Or will people be asked to download and use multiple apps if they’re traveling across borders?

Having to use multiple apps when travelling would further complicate an unproven technology which seeks to repurpose standard smartphone components for estimating viral exposure — a task for which our mobile devices were never intended.

In Europe, where a number of countries are working on smartphone apps that use Bluetooth radios to try to automate some contacts tracing by detecting device proximity, the interoperability challenge is particularly pressing, given the region is criss-crossed with borders. Although, in normal times, European Union citizens can all but forget they exist thanks to agreements intended to facilitate the free movement of EU people in the Schengen Area.

Currently, with many EU countries still in degrees of lockdown, there’s relatively little cross border travel going on. But the European Commission has been focusing attention on supporting the tourism sector during the coronavirus crisis — proposing a tourism & transport package this week which sets out recommendations for a gradual and phased lifting of restrictions.

Once Europeans start traveling again, the effectiveness of any national contacts tracing apps could be undermined if systems aren’t able to talk to each other. In the EU, this could mean, for example, a French citizen who travels to Germany for a business trip — where they spend time with a person who subsequently tests positive for COVID — may not be warned of the exposure risk. Or indeed, vice versa.

In the UK, which remains bound by EU rules until the end of this year (under the Brexit transition period), the issue is even more pressing — given Ireland’s decision to opt for a decentralized app architecture for its national app. Over the land border in Northern Ireland, which is part of the UK, the national app would presumably be the centralized system that’s being devised by the UK’s NHSX. And the NHSX’s CEO has admitted this technical division presents a specific challenge for the NHS COVID-19 app.

There are much broader questions over how useful (or useless) digital contacts tracing will prove to be in the fight against the coronavirus. But it’s clear that if such apps don’t interoperate smoothly in a multi-country region such as Europe there will be additional, unhelpful gaps opening up in the data.

Any lack of cross-border interoperability will, inexorably, undermine functionality — unless people give up travelling outside their own countries for good.

EU interoperability as agreed goal

EU Member States recognize this, and this week agreed to a set of interoperability guidelines for national apps — writing that: “Users should be able to rely on a single app independently of the region or Member State they are in at a certain moment.”

The full technical detail of interoperability is yet to be figured out — “to ensure the operationalisation of interoperability as soon as possible”, as they put it.

But the intent is to work together so that different apps can share a minimum of data to enable exposure notifications to keep flowing as Europeans travel around the region, as (or once) restrictions are lifted. 

“Whatever the approach taken with approved apps, all Member States and the Commission consider that interoperability between these apps and between backend systems is essential for these tools to enable the tracing of cross-border infection chains,” they write. “This is particularly important for cross-border workers and neighbouring countries. Ultimately, this effort will support the gradual lifting of border controls within the EU and the restoration of freedom of movement. These tools should be integrated with other tools contemplated in the COVID-19 contact tracing strategy of each Member State.”

European users should be able to expect interoperability. But whether smooth cross-border working will happen in practice remains a major question mark. Getting multiple different health systems and apps that might be calculating risk exposure in slightly different ways to interface and share the relevant bits of data in a secure way is itself a major operational and technical challenge.

However this is made even more of a headache given ongoing differences between countries over the core choice of app architecture for their national coronavirus contacts tracing.

This boils down to a choice between a decentralized or centralized approach — with decentralized protocols storing and processing data locally on smartphones (i.e. the matching is done on device), and centralized protocols uploading exposure data and performing the matching on a central server which is controlled by a national authority, such as a health service.
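The decentralized variant can be illustrated with a small sketch. The key derivation below is a simplified stand-in (real protocols such as DP-3T specify their own cryptographic constructions), but it captures the shape: phones record the rotating identifiers they hear over Bluetooth, the server only publishes the keys of positive-tested users, and the matching happens on the device.

```python
import hashlib

def ephemeral_ids(daily_key: bytes, slots: int = 4):
    # Derive rotating broadcast identifiers from a daily key.
    # (Simplified stand-in; real protocols use HKDF/AES constructions.)
    return [hashlib.sha256(daily_key + bytes([i])).digest()[:16]
            for i in range(slots)]

# Phone B stores the ephemeral IDs it observed nearby via Bluetooth.
observed = set(ephemeral_ids(b"alice-day-12-key")[:2])

# The health authority later publishes daily keys of positive-tested
# users; Phone B re-derives their ephemeral IDs and matches locally,
# so the server never learns who met whom.
published_keys = [b"alice-day-12-key", b"carol-day-12-key"]
at_risk = any(eid in observed
              for key in published_keys
              for eid in ephemeral_ids(key))
print("exposure detected:", at_risk)
```

A centralized system inverts this: the observed encounter data is uploaded and the matching runs on the authority's server, which is exactly the design split that makes interoperability between the two camps awkward.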

While there looks to be clear paths for interoperability between different decentralized protocols — here, for example, is a detailed discussion document written by backers of different decentralized protocols on how proximity tracing systems might interoperate across regions — interoperability between decentralized and centralized protocols, which are really polar opposite approaches, looks difficult and messy to say the least.

And that’s a big problem if we want digital contacts tracing to smoothly take place across borders.

(Additionally, some might say that if Europe can’t agree on a common way forward vis-a-vis a threat that affects all the region’s citizens it does not reflect well on the wider ‘European project’; aka the Union to which many of the region’s countries belong. But health is a Member State competence, meaning the Commission has limited powers in this area.)

In the eHealth Network ‘Interoperability guidelines’ document Member States agree that interoperability should happen regardless of which app architecture a European country has chosen.

But a section on cross-border transmission chains can’t see a way forward on how exactly to do that yet [emphasis ours] — i.e. beyond general talk of the need for “trusted and secure” mechanisms:

Solutions should allow Member States’ servers to communicate and receive relevant keys between themselves using a trusted and secure mechanism.

Roaming users should upload their relevant proximity encounter information to the home country backend. The other Member State(s) should be informed about possible infected or exposed users*.

*For roaming users, the question of to which servers the relevant proximity contacts details should be sent will be further explored during technical discussions. Interoperability questions will also be explored in relation to how a users’ app should behave after confirmed as COVID-19 positive and the possible need for a confirmation of infection free.

Conversely, the 19 academics behind the proposal for interoperability of different decentralized contacts tracing protocols do include a section at the end of the document discussing how, in theory, such systems could plug into ‘alternatives’: aka centralized systems.

But it’s thick with privacy caveats.

Privacy risks of crossing system streams

The academics warn that while interoperability between decentralized and centralized systems “is possible in principle, it introduces substantial privacy concerns” — writing that, on the one hand, decentralized systems have been designed specifically to avoid a central authority being able to recover the identity of users; and “consequently, centralized risk calculation cannot be used without severely weakening the privacy of users of the decentralized system”.

While, on the other, if decentralized risk calculation is used as the ‘bridge’ to achieve interoperability between the two philosophically opposed approaches — by having centralized systems “publish a list of all decentralized ephemeral identifiers it believes to be at risk of infection due to close proximity with positive-tested users of the centralized system” — then it would make it easier for attackers to target centralized systems with reidentification attacks of any positive-tested users. So, again, you get additional privacy risks.

“In particular, each user of the decentralized system would be able to recover the exact time and place they were exposed to the positive-tested individual by comparing their list of recorded ephemeral identifiers which they emitted with the list of ephemeral identifiers published by the server,” they write, specifying that the attack would reveal in which “15 minute” interval an app user was exposed to a COVID-positive person.

And while they concede there’s a similar risk of reidentification attacks against all forms of decentralized systems, they contend this is more limited — given that decentralized protocol design is being used to mitigate this risk “by only recording coarse timing information”, such as six-hour intervals.

So, basically, the argument is there’s a greater chance that you might only encounter one other person in a 15-minute interval (and therefore could easily guess who might have given you COVID) vs a six-hour window. Albeit, with populations likely to continue to be encouraged to stay at home as much as possible for the foreseeable future, there is still a chance a user of a decentralized system might only pass one other person over a larger time interval too.
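The timing-granularity trade-off can be illustrated with a toy calculation (purely illustrative; real protocols differ in the details): the coarser the recorded interval, the more encounters fall into the same bucket, so the harder it is to pin an exposure notification on one specific person.

```python
from collections import defaultdict

def bucket_encounters(encounters, bucket_seconds):
    """Group (person, timestamp-in-seconds) encounters into time buckets."""
    buckets = defaultdict(set)
    for person, ts in encounters:
        buckets[ts // bucket_seconds].add(person)
    return dict(buckets)

# Two encounters: one at 10:00, one at 11:40 (seconds since midnight).
encounters = [("alice", 36000), ("bob", 42000)]

fine = bucket_encounters(encounters, 15 * 60)     # 15-minute windows
coarse = bucket_encounters(encounters, 6 * 3600)  # six-hour windows

# 15-minute windows: each encounter sits alone in its bucket, so an
# exposure tied to one bucket points at exactly one candidate.
# Six-hour windows: both encounters share a bucket, widening the set of
# people who could plausibly be the positive-tested contact.
```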

As trade-offs go, the argument made by backers of decentralized systems is they’re inherently focused on the risks of reidentification — and actively working on ways to mitigate and limit those risks by system design — whereas centralized systems gloss over that risk entirely by assuming trust in a central authority to properly handle and process device-linked personal data. Which is of course a very big assumption.

While such fine-grained details may seem incredibly technical for the average user to need to digest, the core associated concern for coronavirus apps generally — and interoperability specifically — is that users need to be able to trust apps to use them.

So even if a person trusts their own government to handle their sensitive health data, they may be less inclined to trust another country’s government. Which means there could be some risk that centralized systems operating within a multi-country region such as Europe might end up polluting the ‘trust well’ for these apps more generally — depending on exactly how they’re made to interoperate with decentralized systems.

The latter are designed so users don’t have to trust an authority to oversee their personal data. The former are absolutely not. So it’s really chalk and cheese.

Ce n’est pas un problème?

At this point, momentum among EU nations has largely shifted behind decentralized protocols for coronavirus contacts tracing apps. As previously reported, there has been a major battle between different EU groups supporting opposing approaches. And — in a key shift — privacy concerns over centralized systems being associated with governmental ‘mission creep’ and/or a lack of citizen trust appear to have encouraged Germany to flip to a decentralized model.

Apple and Google’s decision to support decentralized systems for the contacts tracing API they’re jointly developing, and due to release later this month (sample code is out already), has also undoubtedly weighted the debate in favor of decentralized protocols. 

Not all EU countries are aligned at this stage, though. Most notably France remains determined to pursue a centralized system for coronavirus contacts tracing.

As noted above, the UK has also been building an app that’s designed to upload data to a central server, although it’s reportedly investigating switching to a decentralized model in order to be able to plug into the Apple and Google API — given technical challenges on iOS associated with background Bluetooth access.

Another outlier is Norway — which has already launched a centralized app (which also collects GPS data — against Commission and Member States’ own recommendations that tracing apps should not harvest location data).

High level pressure is clearly being applied, behind the scenes and in public, for EU Member States to agree on a common approach for coronavirus contacts tracing apps. The Commission has been urging this for weeks. Even as French government ministers have preferred to talk in public about the issue as a matter of technological sovereignty — arguing national governments should not have their health policy decisions dictated to them by U.S. tech giants.

“It is for States to chose their architecture and requests were made to Apple to enable both [centralized and decentralized systems],” a French government spokesperson told us late last month.

While there may well be considerable sympathy with that point of view in Europe, there’s also plenty of pragmatism on display. And, sure, some irony — given the region markets itself regionally and globally as a champion of privacy standards. (No shortage of op-eds have been penned in recent weeks on the strange sight of tech giants seemingly schooling EU governments over privacy; while veteran EU privacy advocates have laughed nervously to find themselves fighting in the same camp as data-mining giant Google.)

Commission EVP Margrethe Vestager could also be heard on BBC radio this week suggesting she wouldn’t personally use a coronavirus contacts tracing app that wasn’t built atop a decentralized app architecture. Though the Brexit-focused UK government is unlikely to have an open ear for the views of Commission officials, even piped through establishment radio news channels.

The UK may be forced to listen to technological reality though, if its workaround for iOS Bluetooth background access proves as flaky as analysis suggests. And it’s telling that NHSX is funding parallel work on an app that could plug into the Apple-Google API, per reports in the FT, which would mean abandoning the centralized architecture.

Which leaves France as the highest profile hold-out.

In recent weeks a team at Inria, the government research agency that’s been working on its centralized ROBERT coronavirus contacts tracing protocol, proposed a third way for exposure notifications — called DESIRE — which was billed as an evolution of the approach “leveraging the best of centralized and decentralized systems”.

The new idea is to add new secret, cryptographically generated keys to the protocol, called Private Encounter Tokens (PETs), which would encode encounters between users — as a way to give users more control over which identifiers they disclose to a central server, and thereby prevent the system from harvesting social graph data.

“The role of the server is merely to match PETs generated by diagnosed users with the PETs provided by requesting users. It stores minimal pseudonymous data. Finally, all data that are stored on the server are encrypted using keys that are stored on the mobile devices, protecting against data breach on the server. All these modifications improve the privacy of the scheme against malicious users and authority. However, as in the first version of ROBERT, risk scores and notifications are still managed and controlled by the server of the health authority, which provides high robustness, flexibility, and efficacy,” the Inria team wrote in the proposal. 
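The matching role the Inria team describes for the server can be sketched in a few lines. To be clear, this is a hypothetical stand-in: real PETs are derived via a Diffie-Hellman-style exchange between the two phones, not the hash used below — only the "server merely matches opaque tokens" logic is being illustrated:

```python
import hashlib

def encounter_token(id_a, id_b):
    """Both phones in an encounter derive the same opaque token.
    (Illustrative stand-in: real PETs come from a Diffie-Hellman-style
    exchange, not a hash of exchanged IDs.)"""
    first, second = sorted([id_a, id_b])
    return hashlib.sha256(f"{first}|{second}".encode()).hexdigest()

def server_match(tokens_from_diagnosed, tokens_from_requester):
    """The server merely intersects sets of opaque tokens: it holds no
    identities, and cannot rebuild a social graph from tokens alone."""
    return set(tokens_from_diagnosed) & set(tokens_from_requester)
```

The point of the design is visible in the second function: the server still does the matching (keeping the centralized system's control over risk scoring and notification), but the things it matches are meaningless outside the pair of devices that generated them.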

The DP-3T consortium — backers of an eponymous decentralized protocol that’s gained widespread backing from governments in Europe, including Germany’s — followed up with a “practical assessment” of Inria’s proposal, in which they suggest the concept makes for “a very interesting academic proposal, but not a practical solution”; given limitations in current mobile phone Bluetooth radios and, more generally, questions around scalability and feasibility. (tl;dr this sort of idea could take years to properly implement and the coronavirus crisis hardly involves the luxury of time.)

The DP-3T analysis is also heavily skeptical that DESIRE could be made to interoperate with either existing centralized or decentralized proposals — suggesting a sort of ‘worst of both worlds’ scenario on the cross-border functionality front. So, er…

One person familiar with EU Member States’ discussions about coronavirus tracing apps and interoperability, who briefed TechCrunch on condition of anonymity, also suggested the DESIRE proposal would not fly given its relative complexity (vs the pressing need to get apps launched soon if they are to be of any use in the current pandemic). This person also pointed to question marks over required bandwidth and impact on device battery life. For DESIRE to work they suggested it would need universal uptake by all Europe’s governments — and every EU nation agreeing to adopt a French proposal would hardly carry the torch for nation state sovereignty.

What France does with its tracing app remains a key unanswered question. (An earlier planned debate on the issue in its parliament was shelved.) It is a major EU economy and, where interoperability is concerned, simple geography makes it a vital piece of the Western European digital puzzle, given it has land borders with (and train links into) a large number of other countries.

We reached out to the French government with questions about how it proposes to make its national coronavirus contacts tracing app interoperable with decentralized apps that are being developed elsewhere across the EU — but at the time of writing it had not responded to our email.

This week, in a video interview with BFM Business, the president of Inria, Bruno Sportisse, was reported to have expressed hope that the app will be able to interoperate by June — while also saying that if the project is unsuccessful “we will stop it”.

“We’re working on making those protocols interoperable. So it’s not something that is going to be done in a week or two,” Sportisse also told BFM (translated from French by TechCrunch’s Romain Dillet). “First, every country has to develop its own application. That’s what every country is doing with its own set of challenges to solve. But at the same time we’re working on it, and in particular as part of an initiative coordinated by the European Commission to make those protocols interoperable or to define new ones.”

One thing looks clear: Adding more complexity further raises the bar for interoperability. And development timeframes are necessarily tight.

The pressing imperatives of a pandemic crisis also make talk of technological sovereignty sound like a bit of a bourgeois indulgence. So France’s ambition to single-handedly define a whole new protocol for every nation in Europe comes across as simultaneously tone-deaf and flat-footed — perhaps especially in light of Germany’s swift U-turn the other way.

In a pinch and a poke, European governments agreeing to coalesce around a common approach — and accepting a quick, universal API fix which is being made available at the smartphone platform level — would also offer a far clearer message to citizens. Which would likely help engender citizen trust in and adoption of national apps — that would, in turn, give the apps a greater chance of utility. A pan-EU common approach might also feed tracing apps’ utility by yielding fewer gaps in the data. The benefits could be big.

However, for now, Europe’s digital response to the coronavirus crisis looks messier than that — with ongoing wrinkles and questions over how smoothly different national apps will be able to work together as countries opt to go their own way.

Mozilla goes full incubator with ‘Fix The Internet’ startup lab and early stage investments

After testing the waters this spring with its incubator-esque MVP Lab, Mozilla is doubling down on the effort with a formal program dangling $75,000 investments in front of early stage companies. The focus on “a better society” and the company’s open-source clout should help differentiate it from the other options out there.

Spurred on by the success of a college hackathon using a whole four Apple Watches in February, Mozilla decided to try a more structured program in the spring. The first test batch of companies is underway, having started an 8-week program in April offering $2,500 per team member and $40,000 in prizes to give away at the end. Developers in a variety of domains were invited to apply, as long as they fit the themes of empowerment, privacy, decentralization, community, and so on.

It drew the interest of some 1,500 people in 520 projects, and 25 were chosen to receive the full package and stipend during the development of their MVP. The rest were invited to an “Open Lab” with access to some of Mozilla’s resources.

One example of what they were looking for is Ameelio, a startup whose members are hoping to render paid video calls in prisons obsolete with a free system, and provide free letter delivery to inmates as well.

“The mission of this incubator is to catalyze a new generation of internet products and services where the people are in control of how the internet is used to shape society,” said Bart Decrem, a Mozilla veteran (think Firefox 1.0) and one of the principals at the Builders Studio. “And where business models should be sustainable and valuable, but do not need to squeeze every last dollar (or ounce of attention) from the user.”

“We think we are tapping into the energy in the student and professional ‘builder communities’ around wanting to work on ideas that matter. That clarion call really resonates,” he said. Not only that, but students with canceled internships are showing up in droves, it seems — mostly computer science, but design and other disciplines as well. There are no restrictions on applicants, like country of origin, previous funding, or anything like that.

The new incubator will be divided into three tiers.

First is the “Startup Studio,” which involves a $75,000 investment, “a post-money SAFE for 3.5% of the company when the SAFE converts (or we will participate in an already active funding round),” Decrem clarified.

Below that, as far as pecuniary commitment goes, is the “MVP Lab,” similar to the spring program but offering a total of $16,000 per team. And below that is the Open Lab again, but with ten $10,000 prizes rather than a top 3.

There are no hard numbers on how many teams will make up the two subsidized tiers, but think 20-30 total as opposed to 50 or 100. Meanwhile, collaboration, cross-pollination, and open source code are encouraged, as you might expect in a Mozilla project. And the social good aspect is strong as well, as a sampling of the companies in the spring batch shows.

Neutral is a browser plugin that shows the carbon footprint of your Amazon purchases, adding some crucial guilt to transactions we forget are powered by footsore humans and gas-guzzling long-distance goods transport. Meething, Cabal, and Oasis are taking on video conferencing, team chat, and social feeds from a decentralized standpoint, using the miracles of modern internet architecture to accomplish with distributed systems what once took centralized servers.

This summer will see the program inaugurated, but it’s only “the beginning of a multiyear effort,” Decrem said.

Inria releases some source code of French contact-tracing app

French research institute Inria has released a small portion of the source code that is going to power France’s contact-tracing app StopCovid. It is available on several GitLab repositories under the Mozilla Public License 2.0. While the French government announced that everything would be open source, it’s going to be a bit more complicated than that.

As Inria wrote in the announcement, the project is now divided into three parts. Critical elements of the infrastructure are not going to be available on the GitLab repositories. Instead, Inria will only release documentation on the security implementations, as ANSSI and France’s data protection watchdog CNIL recommended some level of transparency on this front.

A second part is going to be released publicly but Inria is not looking for external contributions or, as developers would say, pull/merge requests. You can expect front-facing work here and things that don’t interact directly with the contact-tracing protocol.

The third part consists of the contact-tracing protocol and its implementation. This time, Inria and the community of companies and research teams working on StopCovid are looking for external contributions. The idea here is to improve the protocol itself when it comes to privacy and security.

France is moving forward with its centralized contact-tracing protocol called ROBERT. I analyzed the pros and cons of the protocol when Inria and Fraunhofer released the specifications.

It’s very different from Apple and Google’s contact-tracing API as ROBERT relies on a central server to assign a permanent ID as well as a bunch of ephemeral IDs attached to this permanent ID. Your phone collects the ephemeral IDs of other app users around you. When somebody is diagnosed COVID-positive, the server receives all the ephemeral IDs associated with people they’ve interacted with. If one or several of your ephemeral IDs get flagged, you receive a notification.
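A rough sketch of that flow, heavily simplified and with hypothetical names (the real ROBERT protocol adds encryption, time windows and risk scoring on top):

```python
import itertools
import secrets

class RobertStyleServer:
    """Toy model of a centralized scheme in the ROBERT mould
    (illustrative only, not Inria's implementation)."""

    def __init__(self):
        self._perm_ids = itertools.count(1)
        self._eph_to_perm = {}  # ephemeral ID -> permanent ID (server-side link)
        self._at_risk = set()   # permanent IDs flagged as exposed

    def register(self, n_ephemeral=3):
        """Assign a permanent ID plus a batch of ephemeral IDs tied to it."""
        perm = next(self._perm_ids)
        ephs = [secrets.token_hex(4) for _ in range(n_ephemeral)]
        for eph in ephs:
            self._eph_to_perm[eph] = perm
        return perm, ephs

    def report_positive(self, collected_ephemeral_ids):
        """A diagnosed user uploads the ephemeral IDs their phone collected;
        the server maps them back to permanent IDs and flags those users."""
        for eph in collected_ephemeral_ids:
            if eph in self._eph_to_perm:
                self._at_risk.add(self._eph_to_perm[eph])

    def is_at_risk(self, perm_id):
        """Phones poll the server to learn whether their ID was flagged."""
        return perm_id in self._at_risk
```

Note that the server-side `_eph_to_perm` table is exactly the pseudonymous link that makes trust in the central authority unavoidable: whoever runs the server can tie every ephemeral ID broadcast in the wild back to a stable identifier.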

With a pseudonymous system, you have to trust that your government’s implementation is rock-solid. For instance, if the app sends too much information when it communicates with the server, it would become possible to put names to permanent IDs.

Inria says that StopCovid could be released in early June, if everything goes well. France’s digital minister, Cédric O, said in a TV interview that the government wanted to release StopCovid on June 2.

Google’s Android ad ID targeted in strategic GDPR tracking complaint

Now here’s an interesting GDPR complaint: Is Google illegally tracking Android users in Europe via a unique, device-assigned advertising ID?

First, what is the Android advertising ID? Per Google’s description to developers building apps for its smartphone platform it’s — [emphasis added by us]

The advertising ID is a unique, user-resettable ID for advertising, provided by Google Play services. It gives users better controls and provides developers with a simple, standard system to continue to monetize their apps. It enables users to reset their identifier or opt out of personalized ads (formerly known as interest-based ads) within Google Play apps.

Not so fast, says noyb — a European not-for-profit privacy advocacy group that campaigns to get regulators to enforce existing rules around how people’s data can be used — the problem with offering a tracking ID that can only be reset is that there’s no way for an Android user to not be tracked.

Simply put, resetting a tracker is not the same thing as being able to not be tracked at all.

noyb has now filed a formal complaint against Google under Europe’s General Data Protection Regulation (GDPR), accusing it of tracking Android users via the ad ID without legally valid consent.

As we’ve said many, many, many times before, GDPR applies a particular standard if you’re relying on consent — as Google appears to be here, since Android users are asked to consent to its terms on device set up, yet must agree to a resettable but not disable-able advertising ID.

Yet, under the EU data protection framework, for consent to be legally valid it must be informed, purpose limited and freely given.

“Freely given” means there must be a genuine, unforced choice.

Thus the question arises: if an Android user can’t say no to an ad ID tracker — they can merely keep resetting it (with no user control over any previously gathered data) — where’s their free choice to not be tracked by Google?

“In essence, you buy a new Android phone, but by adding a tracking ID they ship you a tracking device,” said Stefano Rossetti, privacy lawyer at noyb.eu, in a statement on the complaint.

noyb’s contention is that Google’s ‘choice’ is “between tracking or more tracking” — which isn’t, therefore, a genuine choice to not be tracked at all.

“Google claims that users can control the processing of their data, but when put to the test Android does not allow deleting the tracking ID,” it writes. “It only allows users to generate a new tracking ID to replace the existing one. This neither deletes the data that was collected before, nor stops tracking going forward.”
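noyb’s point reduces to a toy model — hypothetical, and in no way Google’s actual implementation — in which a reset mints a fresh identifier but neither erases the profile built under the old one nor stops new data accruing under the new one:

```python
import uuid

class ResettableAdId:
    """Toy model of the complaint (illustrative, not Google's code):
    the ID can be reset, but nothing can be deleted or switched off."""

    def __init__(self):
        self.ad_id = str(uuid.uuid4())
        self.profiles = {}  # ad ID -> events tracked under that ID

    def track(self, event):
        self.profiles.setdefault(self.ad_id, []).append(event)

    def reset(self):
        """The only control on offer: mint a fresh ID. The old profile
        survives, and tracking continues under the new identifier."""
        self.ad_id = str(uuid.uuid4())

device = ResettableAdId()
device.track("opened app A")
device.reset()
device.track("opened app B")
# Result: two profiles exist, nothing was deleted, tracking never stopped.
```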

“It is grotesque,” continued Rossetti. “Google claims that if you want them to stop tracking you, you have to agree to new tracking. It is like cancelling a contract only under the condition that you sign a new one. Google’s system seems to structurally deny the exercise of users’ rights.”

We reached out to Google for comment on noyb’s complaint. At the time of writing the company had not responded but we’ll update this report if it provides any remarks.

The tech giant is under active GDPR investigation related to a number of other issues — including its location tracking of users; and its use of personal data for online advertising.

The latest formal complaint over its Android ad ID has been lodged with Austria’s data protection authority on behalf of an Austrian citizen. (GDPR contains provisions that allow for third parties to file complaints on behalf of individuals.)

noyb says the complaint is partially based on a recent report by the Norwegian Consumer Council — which analyzed how popular apps are rampantly sharing user data with the behavioral ad industry.

In terms of process, it notes that the Austrian DPA may involve other European data watchdogs in the case.

This is under a ‘one-stop-shop’ mechanism in the GDPR whereby interested watchdogs liaise on cross-border investigations, with one typically taking a lead investigator role (likely to be the Irish Data Protection Commission in any complaint against Google).

Under Europe’s GDPR, data regulators have major penalty powers — with fines that can scale as high as 4% of global annual turnover, which in Google’s case could amount to up to €5BN. They can also order that data processing be suspended or stopped. (An outcome that would likely be far more expensive to a tech giant like Google.)

However there has been a dearth of major fines since the regulation began being applied, almost two years ago (exception: France’s data watchdog hit Google with a $57M fine last year). So pressure is continuing to pile up over enforcement — especially on Ireland’s Data Protection Commission which handles many cross-border complaints but has yet to issue any decisions in a raft of cross-border cases involving a number of tech giants.