Italy’s data watchdog latest to warn over use of Google Analytics

Another strike against use of Google Analytics in Europe: The Italian data protection authority has found a local web publisher’s use of the popular analytics tool to be non-compliant with EU data protection rules owing to user data being transferred to the U.S. — a country that lacks an equivalent legal framework to protect the info from being accessed by US spooks.

The Garante found the web publisher’s use of Google Analytics resulted in the collection of many types of user data, including device IP address, browser information, OS, screen resolution, language selection, plus the date and time of the site visit, which were transferred to the U.S. without adequate supplementary measures being applied to raise the level of protection to the necessary EU legal standard.

Protections applied by Google were not sufficient to address the risk, it added, echoing the conclusion of several other EU DPAs, which have also found that use of Google Analytics violates the bloc’s data protection rules over the data export issue.

Italy’s DPA has given the publisher in question (a company called Caffeina Media Srl) 90 days to fix the compliance violation. But the decision has wider significance as it has also warned other local websites that are using Google Analytics to take note and check their own compliance, writing in a press release [translated from Italian with machine translation]:

“[T]he Authority draws the attention of all Italian managers of websites, public and private, to the illegality of transfers made to the United States through GA [Google Analytics], also in consideration of the numerous reports and questions that are being received by the Office, and invites all data controllers to verify the compliance of the methods of use of cookies and other tracking tools used on its websites, with particular attention to Google Analytics and other similar services, with the legislation on the protection of personal data.”

Earlier this month, France’s data protection regulator issued updated guidance warning over illegal use of Google Analytics — following a similar finding of fault with a local website’s use of the software in February.

The CNIL’s guidance suggests only very narrow possibilities for EU-based site owners to use Google’s analytics tool legally — either by applying additional encryption where keys are held under the exclusive control of the data exporter itself or other entities established in a territory offering an adequate level of protection; or by using a proxy server to avoid direct contact between the user’s terminal and Google’s servers.
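
For illustration, here’s a minimal sketch of that proxy pattern in Python (Flask and requests are assumptions for the sketch, and the upstream URL is a placeholder rather than a real analytics endpoint): the site exposes a first-party endpoint that receives measurement hits from the browser, keeps only a reduced, non-identifying subset of fields, and forwards the payload itself — so the user’s terminal never contacts Google’s servers directly. A production set-up would need to satisfy the CNIL’s full list of conditions, which go further than this.

```python
# Minimal illustrative sketch of the "proxy server" option described above.
# Assumptions: Flask + requests; UPSTREAM_COLLECT_URL is a placeholder, not a
# real Google Analytics endpoint.
from flask import Flask, request
import requests

app = Flask(__name__)

UPSTREAM_COLLECT_URL = "https://analytics.example.com/collect"  # hypothetical upstream
ALLOWED_FIELDS = {"page", "referrer", "screen_resolution", "language", "event"}

@app.route("/collect", methods=["POST"])
def collect():
    hit = request.get_json(silent=True) or {}
    # Keep only a whitelisted, non-identifying subset of the measurement data;
    # the client IP and any user/device identifiers are never forwarded.
    reduced = {k: v for k, v in hit.items() if k in ALLOWED_FIELDS}
    # The proxy (not the user's browser) makes the outbound request, so the
    # user's terminal has no direct contact with the upstream servers.
    requests.post(UPSTREAM_COLLECT_URL, json=reduced, timeout=2)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```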

Austria’s DPA also upheld a similar complaint over a site’s use of Google Analytics in January.

The European Parliament, meanwhile, found itself in hot water over the same core issue at the start of the year.

All these strikes against Google Analytics link back to a series of strategic complaints filed in August 2020 by European privacy campaign group noyb — which targeted 101 websites with regional operators it had identified as sending data to the US via Google Analytics and/or Facebook Connect integrations.

The complaints followed a landmark ruling by the bloc’s top court in July 2020 — which invalidated a data transfer agreement between the EU and the US, called Privacy Shield, and made it clear that DPAs have a duty to step in and suspend data flows to third countries where they suspect EU citizens’ information of being at risk. 

The so-called ‘Schrems II’ ruling is named after noyb founder and longtime European privacy campaigner Max Schrems, who filed a complaint against Facebook’s EU-US data transfers, citing surveillance practices revealed by NSA whistleblower Edward Snowden, which ended up — via legal referral — in front of the CJEU. (A prior challenge by Schrems also resulted in the previous EU-US data transfer arrangement being struck down by the court in 2015.)

In a more recent development, a replacement for Privacy Shield is on the way: In March, the EU and the US announced they had reached political agreement on this.

However the legal details of the planned data transfer framework still have to be finalized — and the proposed mechanism reviewed and adopted by EU institutions — before it can be put to any use. Which means that use of US-based cloud services remains shrouded in legal risk for EU customers. 

The bloc’s lawmakers have suggested the replacement deal may be finalized by the end of this year — but there’s no simple legal patch EU users of Google Analytics can reach for in the meanwhile. 

Additionally, the gap between US surveillance law and EU privacy law continues to grow in certain regards — and it’s by no means certain the negotiated replacement will be robust enough to survive the inevitable legal challenges.

A simple legal patch for such a fundamental clash of rights and priorities looks like a high bar — failing substantial reform of existing laws (which neither side looks moved to offer).

Hence we’ve started to see software-level responses by certain US cloud giants — to provide European customers with more controls over data flows — in a bid to find a way to route around the data transfers legal risk.

Silicon Valley rallies behind New York ban on geofence and keyword search warrants

A coalition of tech giants, including Google, Microsoft and Yahoo, has pledged support for a New York bill that would ban the use of controversial search warrants that can identify people based on their location data and internet search keywords.

In a brief statement, the coalition known as Reform Government Surveillance said it “supports the adoption of New York Assembly Bill A84A, the Reverse Location Search Prohibition Act, which would prohibit the use of reverse location and reverse keyword searches.”

The bill, if passed, would become the first state law to ban so-called geofence warrants and keyword search warrants, which rely on demanding tech companies turn over data about users who were near the scene of a crime or searched for particular keywords at a specific point in time. But the bill hasn’t moved since it was referred to a committee for discussions in January, the first major hurdle before it can be considered for a floor vote.

Reform Government Surveillance was set up in 2013 by several Silicon Valley tech companies to lobby lawmakers for reforms to U.S. surveillance laws following the leak of classified documents by NSA contractor Edward Snowden. The coalition is now at 11 members — Amazon, Apple, Dropbox, Evernote, Google, Meta, Microsoft, Snap, Twitter, Yahoo (which owns TechCrunch), and Zoom.

The move by the tech coalition to back the New York bill is not entirely altruistic. At least three of RGS’ members — Google, Microsoft and Yahoo — are frequently tapped by law enforcement for location data and search records of users because of the vast amount of data that they store on billions of users around the world. Last year Google said that about one-quarter of all its U.S. legal demands are geofence warrants, a figure that’s growing exponentially each year.

Geofence warrants are requested by law enforcement and signed by a judge to order companies like Google, Microsoft and Yahoo, which collect and store billions of location data points from their users’ phones and apps, to turn over location data on phones that were in a certain geographic area at a certain time to help identify suspects. Keyword search warrants work in a similar way, except a judge orders a search giant like Google to turn over records of users who searched for certain keywords during a particular window of time.
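
Purely as an illustration of the mechanics — not any company’s actual system — a geofence request boils down to a spatial and temporal filter over stored location records: return the device identifiers whose recorded coordinates fall within a given radius of a scene during a given time window. A rough Python sketch, with invented data structures:

```python
# Illustrative only: the data model and function names are made up to show the
# shape of a geofence query (filter location records by area and time window).
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class LocationRecord:
    device_id: str
    lat: float
    lon: float
    timestamp: datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def geofence_query(records, center_lat, center_lon, radius_m, start, end):
    """Return device IDs with at least one point inside the area during the window."""
    return {
        r.device_id
        for r in records
        if start <= r.timestamp <= end
        and haversine_m(r.lat, r.lon, center_lat, center_lon) <= radius_m
    }
```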

Critics say these kinds of warrants are unconstitutional because they can sweep in data on individuals with no connection to a crime whatsoever; in some cases, people have been accused of wrongdoing simply for being physically close to the scene of a crime.

The ACLU, which supports the passage of the bill, told TechCrunch that thousands of people have called on state lawmakers to pass the bill into law since it launched an email campaign last week.

Google touts more Workspace controls for users in Europe

Google has announced a package of additional controls for users of its productivity suite, Google Workspace (née G Suite), in Europe — which it’s rolling out from the end of this year and through next year.

It says these extra controls will enable organizations — both public and private sector — to “control, limit, and monitor transfers of data to and from the EU starting at the end of 2022”, announcing the incoming capabilities in a blog post.

The move looks intended to respond to heightened legal risk around exports of personal data — following a landmark EU legal ruling in July 2020 — which risks putting a dampener on regional use of US cloud services.

Earlier this year, a number of data protection agencies kicked off a coordinated enforcement action focused on public sector bodies’ use of cloud services — with the goal of investigating whether adequate data protection measures are applied, including when data is exported out of the bloc. And the European Data Protection Board (EDPB), which is steering the action, is due to publish a ‘state of play’ report before the end of 2022 — matching Google’s timeline for rolling out (some of) the new controls.

There have also, in recent months, been decisions by data protection agencies finding certain uses of tools like Google Analytics to be incompatible with the bloc’s privacy laws.

Google is referring to the incoming extra capabilities users in Europe will gain as “Sovereign Controls for Google Workspace” — in what also sounds like a conscious echo of a concept that EU lawmakers like to refer to as “digital sovereignty”.

EU lawmakers use that phrasing to talk about the region gaining autonomy over digital infrastructure — much of which is supplied by US tech firms. But, here, Google looks to be trying to spin an alternative version of ‘sovereignty’ by suggesting that technical measures and user configurations alone can provide enough autonomy for the EU, regardless of the tech itself still being supplied by a US giant, in the hopes that customers in the bloc keep buying its tools.

“European organizations are moving their operations and data to the cloud in increasing numbers to enable collaboration, drive business value, and transition to hybrid work. However, the cloud solutions that underpin these powerful capabilities must meet an organization’s critical requirements for security, privacy, and digital sovereignty. We often hear from European Union policymakers and business leaders that ensuring the sovereignty of their cloud data, through regionalization and additional controls over administrative access, is crucial in this evolving landscape,” it writes in the blog post.

“Today, we’re announcing Sovereign Controls for Google Workspace, which will provide digital sovereignty capabilities for organizations, both in the public and private sector, to control, limit, and monitor transfers of data to and from the EU starting at the end of 2022, with additional capabilities delivered throughout 2023. This commitment builds on our existing Client-side encryption, Data regions, and Access Controls capabilities.”

What extra capabilities has Google announced now? In the near term, there looks to be an expansion of the client-side encryption which Google announced for Workspace last summer.

“Organizations can choose to use Client-side encryption pervasively across all their users, or create rules that apply to specific users, organizational units, or shared drives,” says Google. “Client-side encryption is now generally available for Google Drive, Docs, Sheets, and Slides, with plans to extend the functionality to Gmail, Google Calendar, and Meet by the end of 2022.”

Google is also announcing an expansion of data location controls — although its timeframe for this capability enhancement is slower, slated as coming “by the end of 2023”.

“Data regions already allow our customers to control the storage location of their covered data at-rest,” it writes, adding: “We will enhance this capability by the end of 2023 through expanded coverage of data storage and processing in-region along with an in-country copy.”

There will also be more access controls — to meet what Google couches as “evolving digital sovereignty standards”.

It says these incoming access controls will enable customers to:

  • Restrict and/or approve Google support access through Access Approvals;
  • Limit customer support to EU-based support staff through Access Management;
  • Ensure round-the-clock support from Google Engineering staff, when needed, with remote-in virtual desktop infrastructure;
  • Generate “comprehensive” log reports on data access and actions through the Access Transparency function.

But, again, these extra controls are not coming until the end of 2023.

Google is not starting from scratch here — having trailed incoming “data sovereignty controls” for EU users last fall, when it also talked about offering cloud services on “Europe’s terms“.

Although it will, of course, be for the bloc’s regulators to judge whether what it offers meets the required legal standard for the data flows in question to, er, legally flow.

Google generally argues that hybrid working complicates a legal requirement to “retain control of data wherever it resides”. Its suggested answer is a “cloud-native architecture” (it specifies that Google Workspace “functions fully within a browser, without requiring caches or installed software on employee devices”), combined with a context-aware (“zero-trust”) approach to security that works by geofencing users and devices, plus controls that let admins set sharing boundaries and define rules governing user communication — an approach it says can help its customers navigate these choppy legal waters while still allowing the software’s core collaborative functions to work.
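
As a rough, purely illustrative sketch of what a context-aware (“zero-trust”) access decision looks like in practice — every name below is invented and does not reflect Google’s actual policy engine — each request is evaluated against the user’s region, the device’s management status and an admin-defined sharing boundary before any data is served:

```python
# Hypothetical sketch of a context-aware ("zero-trust") access check; all names
# are invented for illustration and are not Google's implementation.
from dataclasses import dataclass

@dataclass
class RequestContext:
    user_id: str
    user_region: str        # e.g. "EU", derived from geofencing the user
    device_managed: bool    # whether the device is enrolled in org management
    resource_region: str    # region of the sharing boundary the document sits in

def allow_access(ctx: RequestContext, allowed_regions: set) -> bool:
    """Grant access only from managed devices inside the admin-defined boundary."""
    if not ctx.device_managed:
        return False                     # unmanaged devices are denied outright
    if ctx.user_region not in allowed_regions:
        return False                     # geofence: user must be in an allowed region
    return ctx.resource_region in allowed_regions  # data stays inside the boundary

# Example: an EU-only sharing boundary
print(allow_access(RequestContext("alice", "EU", True, "EU"), {"EU"}))  # True
print(allow_access(RequestContext("bob", "US", True, "EU"), {"EU"}))    # False
```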

Use in the EU of cloud services from US-based companies has been shrouded in legal uncertainty for a number of years — most recently since July 2020 when the bloc’s top court struck down the flagship EU-US Privacy Shield data transfer agreement over a fatal clash between US surveillance law and EU privacy rights.

For the four years it stood, Privacy Shield simplified EU to US data exports with a self-certification system to authorize exports of Europeans’ personal data. But that regime ended with the July 2020 CJEU strike down.

And while the court did not outlaw data exports entirely, it did crank up the complexity of using other transfer mechanisms (such as standard contractual clauses) — making it clear that regional data protection agencies have a duty to step in and suspend data transfers if they believe Europeans’ information is flowing to a destination where it’s at risk. (The EDPB subsequently put out guidance on so-called ‘supplementary measures’ that may help raise the standard of protection, such as robust encryption.)
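
The canonical example of such a supplementary measure is encrypting data before export with a key that never leaves the exporter’s (EU-based) control, so the importer — and any authority compelling it — only ever handles ciphertext. A minimal sketch of that pattern, using the Python cryptography library (the key management and the export step are placeholders, not any vendor’s actual mechanism):

```python
# Minimal sketch of "encrypt before export, keep the key in the EU".
# Uses the 'cryptography' package; key storage and the export call are placeholders.
from cryptography.fernet import Fernet

def export_to_third_country(blob: bytes) -> None:
    """Placeholder for whatever mechanism actually ships data to the importer."""
    print(f"exporting {len(blob)} bytes of ciphertext")

# Key generated and retained exclusively by the EU-based exporter
# (in practice: an HSM or key-management service under its sole control).
eu_held_key = Fernet.generate_key()
cipher = Fernet(eu_held_key)

record = b'{"user_id": "123", "event": "page_view"}'
ciphertext = cipher.encrypt(record)

# Only ciphertext crosses the border; the third-country processor stores blobs
# it cannot read, because it never receives eu_held_key.
export_to_third_country(ciphertext)

# Back under EU control, the exporter can still recover its own data.
assert cipher.decrypt(ciphertext) == record
```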

The fact of the EU-US Privacy Shield being struck down by the CJEU made it plain that the US is a risky destination for EU data — hence US-based cloud services have been in the frame ever since.

And while the court ruling was not immediately followed by orders to cease data flows, EU agencies have been stepping up action and enforcements on the data transfers issue in recent months. The European Data Protection Supervisor gave the European Parliament a smackdown at the start of this year over a COVID-19 testing booking website (which used Google Analytics and included code for Stripe), for example.

Other subsequent decisions from data supervisors have similarly taken issue with use of certain Google tools.

The CJEU ruling followed the 2013 disclosures by NSA whistleblower Edward Snowden — who published details of US government mass surveillance programs tapping commercial digital services — revelations which also led to the prior EU-US data transfer deal, Safe Harbor, being struck down in 2015 following an earlier legal challenge.

So, while the EU and the US announced reaching a political agreement on a replacement for Privacy Shield this March, a third attempt to bridge the same legal schism will undoubtedly face a fresh court challenge. And the odds that Privacy Shield 2.0 survives the CJEU’s assessment look fairly slender, failing substantial reform of US surveillance law (which does not appear to be on the table).

All of which makes Google’s strategy — of offering its customers in the EU an expanding bundle of technical and organizational measures (such as client-side encryption, data localization and other bespoke controls like EU-based tech support) — look like a reasoned attempt to secure and future-proof critical business data flows in the eyes of EU regulators, regardless of any political deal on paper. (Although its blog post also notes that Google Cloud will be “making the protections” offered by the new EU data transfer framework available “once it is implemented” — an event still likely multiple months away.)

“We remain committed to equipping our customers in Europe and across the globe with powerful technical solutions that help them adapt to, and stay on top of, a rapidly evolving regulatory landscape. We’ve designed and built Google Workspace to operate on a secure foundation, providing capabilities to keep our users safe, their data secure, and their information private. Digital sovereignty is core to our ongoing mission in Europe and elsewhere, and a guiding principle that customers can rely on now and into the future,” Google adds.

Discussing the tech giant’s announcement, Dr Lukasz Olejnik, an independent cybersecurity researcher and consultant based in Europe, describes the latest development as “an interesting evolution of a product and service” which he assesses as almost certainly motivated by EU law and policy.

“It appears to support directly the recommendations of EDPB, which also reflect my previous analysis. Specifically the support for using specific technical and organisational setup,” he suggests. “As for the technical side, the processing is to be supported by client-side encryption, in ways that keys never leave the premises of an EU-located company. Client-side encryption capability is already offered by Workspace. Today, it could still be seen as a bit cumbersome — and it is unclear if the new controls would make anything easier. Let’s hope so. Still, it appears that what’s new is this all-in-one control.”

“The expansion of in-country data centres is an expected development but an additional one that would support the ECJ judgment,” he also tells us, adding: “What’s still lacking is an easy-to-use and usable management of access to data. Like the data in Google Docs. For example, today it is far from simple to easily list all the shared documents, to remove some sharing configuration. To expect people to do this file-after-file, for individual files, is far from usable at a scale. This should be simplified, not on an individual file basis. It seems that — maybe! — the new Access Control capability may offer help here? How it works in practice remains to be seen.”

TikTok delays opening of first European data center again

TikTok has once again delayed the timeline for opening its first data center in the European Union, in Dublin, Ireland — saying the facility is now not expected to be fully operational until next year.

The video sharing social network has been trailing plans to store the data of EU, EEA and UK users in the region since 2020.

This Ireland data center was initially slated to be up and running in early 2022. That timeline was subsequently pushed back to late 2022. Now it’s been punted into 2023.

Currently, TikTok user data is held outside the region, in either Singapore or the US.

Asked about this lengthy delay, a TikTok spokeswoman said: “We initially announced our intention to establish a data centre in August 2020. The challenges resulting from the ongoing global pandemic have significantly impacted our original timeline.”

A European “transparency and accountability center” — which was announced by TikTok in April 2021 as a hub where outside experts could get info on its platform practices in areas like content moderation, security and privacy — has been operating virtually since last year, also on account of the coronavirus pandemic, with the company saying a physical center would also be opened in Ireland in 2022.

TikTok has faced concerns over the security of user data for several years on account of its parent company, Beijing-based ByteDance, being subject to China’s Internet Security Law — which, since 2017, has given the Chinese Communist Party sweeping powers to obtain data from digital companies.

Ireland’s Data Protection Commission (DPC), which is TikTok’s lead EU privacy regulator, announced two inquiries into the company’s data processing activities in September 2021 — one focused on international data transfers, the other on its handling of children’s data. There has been no update on the progress of those investigations since. (We’ve asked about the data transfers probe and will update if we get a response from the DPC.)

The issue of exports of personal data out of the EU has been mired in legal uncertainty for years, following revelations in 2013 by the NSA whistleblower Edward Snowden of how government mass surveillance programs were extracting data from consumer services like social networks. (Facebook continues to face uncertainty over the legality of its EU-US data transfers in relation to a very long running data transfer complaint, for example, with a revised draft decision sent to it in February.)

While the Snowden revelations centered on US government bulk data intercepts, the Chinese state’s digital surveillance of the Internet is equally (and for some likely even more) problematic from a privacy point of view. Which puts TikTok, as a Chinese-owned social network, in a tricky spot on data security and data governance.

Data localization has been proposed as one way for Internet businesses to shrink these sorts of data transfer-based legal risks and — as regards the EU — to comply with regional data protection rules, which require Europeans’ personal data to enjoy the same level of legal safeguards when it’s exported outside the bloc as it has inside.

However a global social network like TikTok which does not firewall usage regionally is never going to be able to entirely silo storage of data based on the user’s region of origin. An EU-based TikTok user might comment on the video of a US-based TikTok user, for example, or indeed vice versa. Where will that data be stored?

That said, there may be a case that certain types of international data flows taking place on these platforms could justifiably claim a legal basis as so-called ‘necessary transfers’ under EU law — such as messages sent intentionally between users.

And if the bulk of TikTok’s EU users’ data is stored inside the bloc, local privacy regulators may also take a kinder view on those remaining data exports.

TikTok describes its plan to localize EU users’ data in the region as a “European data governance strategy” — emphasizing other measures it claims to be taking, such as “strictly limiting” employee access to personal data and minimizing data exports — so that appears to be its hope.

Simultaneously, the company is leaning into the concern which has followed recent data transfer enforcements by EU regulators — such as decisions finding breaches of the rules in relation to use of products like Google Analytics and Stripe — by pointing out that global products need some data to flow in order to be able to, well, function.

“Such a regional approach to data governance enables us to stay aligned with European data sovereignty goals,” argues TikTok’s head of privacy in Europe, Elaine Fox, in a blog post today. “At the same time, we are minimising data flows outside of the region in a way that allows us to maintain the global interoperability needed to ensure that our users here remain connected to our 1 billion strong community — and enjoy the benefits of a global product experience.”

Exports of personal data out of the EU are not illegal per se. The bloc’s top court left the door open for data transfers to so-called third countries in its July 2020 ruling which invalidated a major EU-US data transfer deal — saying it was still possible for data to be exported using mechanisms such as Standard Contractual Clauses (which TikTok’s Fox says the company uses) — provided the overarching condition of adequate protection for people’s information in the destination country is met.

The EU’s European Data Protection Board followed that ruling with guidance on so-called supplementary measures that data controllers may be able to apply to raise the level of protection to the required legal standard.

And while TikTok claims it is applying a mix of such measures to secure transfers, it does not go into specific detail about what it’s doing. (That, presumably, is what the DPC will be assessing in its data transfer inquiry.)

“Where data transfers outside of the region are required, we rely on approved methods for data being transferred from Europe, such as standard contractual clauses,” writes Fox. “We also employ a range of complementary technical, contractual and organisational measures so that these transfers are afforded an equivalent level of data protection to that in the UK and EEA. This means in practice that any personal data is protected through a robust set of physical and logical security controls, along with various policies and data access controls for employees.”

TikTok arguably has more cause for concern on the data transfers issue than US-based Internet services, because China is simply not going to be granted a transfer deal by the EU (despite having passed its own data protection regime; geopolitically speaking, it’s not workable) — whereas last month the US and the EU announced with the highest level of fanfare that they’d reached a political agreement over a replacement transatlantic data transfer deal. (Adoption will likely take months, however.)

That means US tech platforms like Facebook can look forward to the prospect of — at the least — another extended grace period while they keep passing data and before any fresh legal challenge to EU-US data flows could unpick the regime again.

As a Chinese-owned entity, TikTok won’t be able to rely on such a backstop.

So it’s unsurprising that elsewhere in its blog post the video sharing service seeks to play up the economic value of its regional operations, writing: “We have thousands of employees across the region, working on areas including brand and creator engagement, e-commerce, monetisation, music, privacy, product, public policy, R&D and trust and safety. We’ve announced permanent offices in two of our most important global hubs, Dublin and London. We’re further bolstering our local leadership teams in France, Italy and Spain and are scaling our business in new markets such as Belgium and the Netherlands.”

Data transfers are not TikTok’s only woes in Europe, though.

The social network is facing additional regional scrutiny on the consumer protection front too — with the European Commission initiating a formal dialogue over its ToS last year following a series of complaints.

In the UK the company is also subject to a privacy class action-style lawsuit over its processing of children’s data.

 

Andreessen Horowitz backs NFT investor group behind rare Wu-Tang Clan album purchase

The crypto collective that bought Wu-Tang Clan’s ‘Once Upon a Time in Shaolin’ album and an NFT of the original Doge image has a new member: venture capital firm Andreessen Horowitz.

The firm tells TechCrunch it has made an investment in PleasrDAO, a blockchain-aligned group of a few dozen crypto investors that have been teaming up to buy high-priced NFTs in recent months. The group’s purchases include the famous Doge image, which it bought for $4 million (1,696.9 ETH to be exact) and later fractionalized and sold to the crypto community (its current implied market cap is just above $100 million). The DAO also paid some $5.4 million for a work from Edward Snowden and $4 million for the Wu-Tang Clan album.

A DAO (or decentralized autonomous organization) is basically a group organized formally around blockchain voting mechanisms to make decisions and invest capital.

A16z Crypto’s investment in PleasrDAO is not its first publicly announced bet on the organization type. In October, the firm made an investment in Friends With Benefits (FWB) DAO, which it valued at $100 million. Andreessen Horowitz did not disclose the size of its investment in PleasrDAO, though council member Santiago Santos specified the firm’s overall stake in the DAO’s governance tokens was “less than 5 percent.”

Like many other crypto groups, PleasrDAO had a singular ambition when it was formed — to bid on a work by digital artist pplpleasr. Said work was an animated video advertisement for the decentralized exchange platform Uniswap. The group formed in March after crypto founder Leighton Cusack tweeted a link to the auction and asked, “Anyone want to create a quick DAO to bid on this???” The group eventually won the bid at 310 ETH (some $525k at the time), with all proceeds from the purchase going to charity.

“Very quickly on we thought, a lot of the smartest minds in DeFi are here — there is an opportunity to really take this in an interesting direction,” Santos tells TechCrunch. “I think, over time, the DAO grew to the point where we felt that we needed some more structure and hierarchy.”

The group has since continued making investments and bulking up its portfolio while aiming to discover new artists that it can support. Santos says he has aims for the group to become a “Medici house” of NFTs “where a lot of digitally native artists can come and get initiated and discovered.”

“DAOs are almost the purest manifestation of what Web3 and crypto are all about,” a16z GP Ali Yahya tells TechCrunch.

While a16z was early to crypto, the rise of so-called “crypto native” investment funds operating closer to founder and developer networks in the space has made it necessary for legacy firms like a16z to get bolder in backing new groups like DAOs. Andreessen Horowitz sees plenty of areas where they can help PleasrDAO including policy and regulatory concerns — an area where DAOs currently seem to be operating in a gray area as unregulated investment funds of pooled capital.

A16z’s bet on the group is, in many ways, a leveraged bet on NFTs, which the firm has already bet heavily on. This year, the firm backed NFT marketplace OpenSea, NFT gaming giant Sky Mavis and NFT music platform Royal, among others. The firm also anchored Meta4 — a $100 million venture fund dedicated to buying up pricey NFTs. These investments are being made from the firm’s massive $2.2 billion crypto-centric fund, which it debuted this summer.

“It’s easy to underestimate the potential of NFTs,” Yahya says. “It very well could be that NFTs will be enormous and that the total market cap of all NFTs down the line will actually be bigger than the market cap for fungible tokens.”

Watch Edward Snowden launch Global Encryption Day, live today

Marginalized communities, survivors of abuse, politicians, law enforcement – they all use encrypted communications to keep their information safe. But the encryption of the kinds of services you and I use every day – from messaging to confidential internal company communications – is increasingly under threat by governments globally. The reasons are sometimes understandable, such as the protection of children, but more often than not it’s about state snooping and control of populations, especially in authoritarian regimes. Democratic countries that give the green light to breaking encryption make it easier for dictatorships to argue in its favor.

That’s why a group of civil society organizations and technology companies is today launching a campaign in favor of strong encryption on what they are calling “Global Encryption Day”.

Edward Snowden, the whistleblower behind the NSA surveillance revelations and a board member of the Freedom of the Press Foundation, will launch the day with a special broadcast today at 6am PT / 9am ET / 2pm BST / 3pm CET, which you’ll be able to watch live here.

While governments worldwide often bleat about criminals exploiting encrypted messaging to hide illegal activity, proposals to weaken or undermine strong encryption are far more likely to leave users more vulnerable to cyberattacks and criminal activity — because once you create those backdoors, anyone can use them.

In a statement, Snowden said: “If you weaken encryption, people will die. This year alone, after the fall of the government of Afghanistan, we saw how crucial encryption is in keeping ordinary people safe. The Covid pandemic brought home how essential encrypted messaging apps on our smartphones are for communicating with loved ones if we’re ill and need help… It would have been impossible for me to whistleblow without encryption.”

He added: “Despite this, governments around the world are seeking to weaken encryption by calling on platforms to create ‘backdoors’ for law enforcement… Weakening encryption would be a colossal mistake that could put thousands of lives at risk.”

Jimmy Wales, founder of Wikipedia, said: “Protecting strong encryption is essential for protecting the human rights of millions of people around the world. Everyone has the right to privacy and security… Weakening encryption puts us all at risk. When we started Wikipedia, it was prohibitively expensive to use secure encryption for every page on the site, but it was always a priority of ours and we introduced it as soon as we could. There is no excuse for not using encryption now – governments and technology platforms have a duty to protect the public.”

Global Encryption Day will also feature a series of other online events including LGBT Tech hosting an Instagram Live, and a leading civil society group in Brazil hosting an all-women panel discussing the importance of encryption.

The Global Encryption Coalition (GEC) was founded by the Center for Democracy & Technology, Global Partners Digital and the Internet Society.

Evernote quietly disappeared from an anti-surveillance lobbying group’s website

In 2013, eight tech companies were accused of funneling their users’ data to the U.S. National Security Agency under the so-called PRISM program, according to highly classified government documents leaked by NSA whistleblower Edward Snowden. Six months later, the tech companies formed a coalition under the name Reform Government Surveillance, which, as the name suggests, was set up to lobby lawmakers for reforms to government surveillance laws.

The idea was simple enough: to call on lawmakers to limit surveillance to targeted threats rather than conduct a dragnet collection of Americans’ private data, provide greater oversight and allow companies to be more transparent about the kinds of secret orders for user data that they receive.

Apple, Facebook, Google, LinkedIn, Microsoft, Twitter, Yahoo and AOL (to later become Verizon Media, which owns TechCrunch — for now) were the founding members of Reform Government Surveillance, or RGS, and over the years added Amazon, Dropbox, Evernote, Snap and Zoom as members.

But then sometime in June 2019, Evernote quietly disappeared from the RGS website without warning. What’s even more strange is that nobody noticed for two years, not even Evernote.

“We hadn’t realized our logo had been removed from the Reform Government Surveillance website,” said an Evernote spokesperson, when reached for comment by TechCrunch. “We are still members.”

Evernote joined the coalition in October 2014, a year and a half after PRISM first came to public light, even though the company was never named in the leaked Snowden documents. Still, Evernote was a powerful ally to have onboard, and showed RGS that its support for reforming government surveillance laws was gaining traction outside of the companies named in the leaked NSA files. Evernote cites its membership of RGS in its most recent transparency report, saying it supports efforts to “reform practices and laws regulating government surveillance of individuals and access to their information” — which makes its disappearance from the RGS website all the more bizarre.

TechCrunch also asked the other companies in the RGS coalition if they knew why Evernote was removed and all either didn’t respond, wouldn’t comment or had no idea. A spokesperson for one of the RGS companies said they weren’t all that surprised since companies “drop in and out of trade associations.”

The website of the Reform Government Surveillance coalition, which features Amazon, Apple, Dropbox, Facebook, Google, Microsoft, Snap, Twitter, Verizon Media and Zoom, but not Evernote, which is also a member. Image Credits: TechCrunch

That may be true — companies often sign on to lobbying efforts that ultimately help their businesses — but government surveillance is one of those rare thorny issues that got some of the biggest names in Silicon Valley rallying behind the cause. After all, few tech companies have openly and actively advocated for an increase in government surveillance of their users, since it’s the users themselves who are asking for more privacy baked into the services they use.

In the end, the reason for Evernote’s removal seems remarkably benign.

“Evernote has been a longtime member — but they were less active over the last couple of years, so we removed them from the website,” said an email from Monument Advocacy, a Washington, D.C. lobbying firm that represents RGS. “Your inquiry has helped to prompt new conversations between our organizations and we’re looking forward to working together more in the future.”

Monument has been involved with RGS since near the beginning after it was hired by the RGS coalition of companies to lobby for changes to surveillance laws in Congress. Monument has spent $2.2 million in lobbying to date since it began work with RGS in 2014, according to OpenSecrets, specifically on lobbying lawmakers to push for changes to bills under congressional consideration, such as changes to the Patriot Act and the Foreign Intelligence Surveillance Act, or FISA, albeit with mixed success. RGS supported the USA Freedom Act, a bill designed to curtail some of the NSA’s collection under the Patriot Act, but was unsuccessful in its opposition to the reauthorization of Section 702 of FISA, the powers that allow the NSA to collect intelligence on foreigners living outside the United States, which was reauthorized for six years in 2018.

RGS has been largely quiet for the past year — issuing just one statement on the importance of transatlantic data flows, the most recent hot-button issue to concern tech companies, fearing that anything other than the legal status quo could see vast swaths of their users in Europe cut off from their services.

“RGS companies are committed to protecting the privacy of those who use our services, and to safeguard personal data,” said the statement, which included the logos of Amazon, Apple, Dropbox, Facebook, Google, Microsoft, Snap, Twitter, Verizon Media and Zoom, but not Evernote.

In a coalition that’s only as strong as its members, the decision to remove Evernote from the website while it’s still a member hardly sends a resounding message of collective corporate unity — which these days isn’t something Big Tech can find much of.

EU puts out final guidance on data transfers to third countries

The European Data Protection Board (EDPB) published its final recommendations yesterday, setting out guidance for making transfers of personal data to third countries that comply with EU data protection rules in light of last summer’s landmark CJEU ruling (aka Schrems II).

The long and short of these recommendations — which are fairly long, running to 48 pages — is that some data transfers to third countries will simply not be possible to (legally) carry out, despite the continued existence of legal mechanisms that can, in theory, be used to make such transfers (like Standard Contractual Clauses, a transfer tool that was recently updated by the Commission).

However it’s up to the data controller to assess the viability of each transfer, on a case by case basis, to determine whether data can legally flow in that particular case. (Which may mean, for example, a business making complex assessments about foreign government surveillance regimes and how they impinge upon its specific operations.)

Companies that routinely take EU users’ data outside the bloc for processing in third countries (like the US), which do not have data adequacy arrangements with the EU, face substantial cost and challenge in attaining compliance — in a best case scenario.

Those that can’t apply viable ‘special measures’ to ensure transferred data is safe are duty bound to suspend data flows — with the risk, should they fail to do that, of being ordered to by a data protection authority (which could also apply additional sanctions).

One alternative option could be for such a firm to store and process EU users’ data locally — within the EU. But clearly that won’t be viable for every company.

Law firms are likely to be very happy with this outcome since there will be increased demand for legal advice as companies grapple with how to structure their data flows and adapt to a post-Schrems II world.

In some EU jurisdictions (such as Germany) data protection agencies are now actively carrying out compliance checks — so orders to suspend transfers are bound to follow.

The European Data Protection Supervisor, meanwhile, is busy scrutinizing EU institutions’ own use of US cloud services giants to see whether high-level arrangements with tech giants like AWS and Microsoft pass muster or not.

Last summer the CJEU struck down the EU-US Privacy Shield — only a few years after the flagship adequacy arrangement was inked. The same core legal issues did for its predecessor, ‘Safe Harbor‘, though that had stood for some fifteen years. And since the demise of Privacy Shield the Commission has repeatedly warned there will be no quick fix replacement this time; nothing short of major reform of US surveillance law is likely to be required.

US and EU lawmakers remain in negotiations over a replacement EU-US data flows deal, but a viable outcome — one that can stand up to legal challenge, as the prior two agreements could not — may well require years of work, not months.

And that means EU-US data flows are facing legal uncertainty for the foreseeable future.

The UK, meanwhile, has just squeezed a data adequacy agreement out of the Commission — despite some loudly enunciated post-Brexit plans for regulatory divergence in the area of data protection.

If the UK follows through in ripping up key tenets of its inherited EU legal framework there’s a high chance it will also lose adequacy status in the coming years — meaning it too could face crippling barriers to EU data flows. (But for now it seems to have dodged that bullet.)

Data flows to other third countries that also lack an EU adequacy agreement — such as China and India — face the same ongoing legal uncertainty.

The backstory to the EU’s international data flows issue originates with a complaint — made in the wake of NSA whistleblower Edward Snowden’s revelations about government mass surveillance programs, so more than seven years ago — by the eponymous Max Schrems over what he argued were unsafe EU-US data flows.

His complaint specifically targeted Facebook’s business and called on the Irish Data Protection Commission (DPC) to use its enforcement powers and suspend Facebook’s EU-US data flows.

A regulatory dance of indecision followed which finally saw legal questions referred to Europe’s top court and — ultimately — the demise of the EU-US Privacy Shield. The CJEU ruling also put it beyond legal doubt that Member States’ DPAs must step in and act when they suspect data is flowing to a location where the information is at risk.

Following the Schrems II ruling, the DPC (finally) sent Facebook a preliminary order to suspend its EU-US data flows last fall. Facebook immediately challenged the order in the Irish courts — seeking to block the move. But that challenge failed. And Facebook’s EU-US data flows are now very much operating on borrowed time.

As one of the platforms subject to Section 702 of the US’ FISA law, its options for applying ‘special measures’ to supplement its EU data transfers look, well, limited to say the least.

It can’t — for example — encrypt the data in a way that ensures it has no access to it (zero access encryption) since that’s not how Facebook’s advertising empire functions. And Schrems has previously suggested Facebook will have to federate its service — and store EU users’ information inside the EU — to fix its data transfer problem.

Safe to say, the costs and complexity of compliance for certain businesses like Facebook look massive.

But there will be compliance costs and complexity for thousands of businesses in the wake of the CJEU ruling.

Commenting on the EDPB’s adoption of final recommendations, chair Andrea Jelinek said: “The impact of Schrems II cannot be underestimated: Already international data flows are subject to much closer scrutiny from the supervisory authorities who are conducting investigations at their respective levels. The goal of the EDPB Recommendations is to guide exporters in lawfully transferring personal data to third countries while guaranteeing that the data transferred is afforded a level of protection essentially equivalent to that guaranteed within the European Economic Area.

“By clarifying some doubts expressed by stakeholders, and in particular the importance of examining the practices of public authorities in third countries, we want to make it easier for data exporters to know how to assess their transfers to third countries and to identify and implement effective supplementary measures where they are needed. The EDPB will continue considering the effects of the Schrems II ruling and the comments received from stakeholders in its future guidance.”

The EDPB put out earlier guidance on Schrems II compliance last year.

It said the main modifications between that earlier advice and its final recommendations include: “The emphasis on the importance of examining the practices of third country public authorities in the exporters’ legal assessment to determine whether the legislation and/or practices of the third country impinge — in practice — on the effectiveness of the Art. 46 GDPR transfer tool; the possibility that the exporter considers in its assessment the practical experience of the importer, among other elements and with certain caveats; and the clarification that the legislation of the third country of destination allowing its authorities to access the data transferred, even without the importer’s intervention, may also impinge on the effectiveness of the transfer tool”.

Commenting on the EDPB’s recommendations in a statement, law firm Linklaters dubbed the guidance “strict” — warning over the looming impact on businesses.

“There is little evidence of a pragmatic approach to these transfers and the EDPB seems entirely content if the conclusion is that the data must remain in the EU,” said Peter Church, a Counsel at the global law firm. “For example, before transferring personal data to third country (without adequate data protection laws) businesses must consider not only its law but how its law enforcement and national security agencies operate in practice. Given these activities are typically secretive and opaque, this type of analysis is likely to cost tens of thousands of euros and take time. It appears this analysis is needed even for relatively innocuous transfers.”

“It is not clear how SMEs can be expected to comply with these requirements,” he added. “Given we now operate in a globalised society the EDPB, like King Canute, should consider the practical limitations on its power. The guidance will not turn back the tides of data washing back and forth across the world, but many businesses will really struggle to comply with these new requirements.”

 

Mass surveillance must have meaningful safeguards, says ECHR

The highest chamber of the European Court of Human Rights (ECHR) has delivered a blow to anti-surveillance campaigners in Europe by failing to find that bulk interception of digital comms is inherently incompatible with human rights law — which enshrines individual rights to privacy and freedom of expression.

However today’s Grand Chamber judgement underscores the need for such intrusive intelligence powers to be operated with what the judges describe as “end-to-end safeguards”.

Governments in Europe that fail to do so are opening such laws up to further legal challenge under the European Convention on Human Rights.

The Grand Chamber ruling also confirms that the UK’s historic surveillance regime — under the Regulation of Investigatory Powers Act 2000 (aka RIPA) — was unlawful because it lacked the necessary safeguards.

Per the court, ‘end-to-end’ safeguards means that bulk intercept powers need to involve assessments at each stage of the process of the necessity and proportionality of the measures being taken; that bulk interception should be subject to independent authorisation at the outset, when the object and scope of the operation are being defined; and that the operation should be subject to supervision and independent ‘ex post facto’ review.

The Grand Chamber judgement identified a number of deficiencies with the bulk regime operated in the UK at the time of RIPA — including that bulk interception had been authorised by the Secretary of State, rather than by a body independent of the executive; categories of search terms defining the kinds of communications that would become liable for examination had not been included in the application for a warrant; and search terms linked to an individual (e.g. specific identifiers such as an email address) had not been subject to prior internal authorisation.

The court also found that the UK’s bulk intercept regime had breached Article 10 (freedom of expression) because it had not contained sufficient protections for confidential journalistic material.

The regime used for obtaining comms data from communication service providers, meanwhile, was found to have violated Articles 8 (right to privacy and family life/comms) and 10 “as it had not been in accordance with the law”.

However, the court held that the regime by which the UK could request intelligence from foreign governments and/or intelligence agencies had had sufficient safeguards in place to protect against abuse and to ensure that UK authorities had not used such requests as a means of circumventing their duties under domestic law and the Convention.

“The Court considered that, owing to the multitude of threats States face in modern society, operating a bulk interception regime did not in and of itself violate the Convention,” it added in a press release.

The RIPA regime has since been replaced by the UK’s Investigatory Powers Act (IPA) — which put bulk intercept powers explicitly into law (albeit with claimed layers of oversight).

The IPA has also been subject to a number of human rights challenges — and in 2018 the government was ordered by the UK High Court to revise parts of the law which had been found to be incompatible with human rights law.

Today’s Grand Chamber judgement relates specifically to RIPA and to a number of legal challenges — brought against the UK’s mass surveillance regime by journalists and privacy and digital rights campaigners in the wake of the 2013 mass surveillance revelations by NSA whistleblower Edward Snowden — which the ECHR heard simultaneously.

In a similar ruling back in 2018 the lower Chamber found some aspects of the UK’s regime violated human rights law — with a majority vote then finding that its bulk interception regime had violated Article 8 because there was insufficient oversight (such as of selectors and filtering; and of search and selection of intercepted communications for examination; as well as inadequate safeguards governing the selection of related comms data). 

Human rights campaigners followed up by requesting and securing a referral to the Grand Chamber — which has now handed down its view.

It unanimously found there had been a violation of Article 8 in respect of the regime for obtaining communications data from communication service providers.

But by 12 votes to 5 it ruled there had been no violation of Article 8 in respect of the UK’s regime for requesting intercepted material from foreign governments and intelligence agencies.

In another unanimous vote the Grand Chamber found there had been a violation of Article 10, concerning both the bulk interception regime and the regime for obtaining communications data from comms service providers.

But, again, by 12 votes to 5 it ruled there had been no violation of Article 10 in respect of the regime for requesting intercepted material from foreign governments and intelligence agencies.

Responding to the judgement in a statement, the privacy rights group Big Brother Watch — which was one of the parties involved in the challenges — said the judgement “confirms definitively that the UK’s bulk interception practices were unlawful for decades”, thereby vindicating Snowden’s whistleblowing.

The organization also highlighted a dissenting opinion from Judge Pinto de Albuquerque, who wrote that:

“Admitting non-targeted bulk interception involves a fundamental change in how we view crime prevention and investigation and intelligence gathering in Europe, from targeting a suspect who can be identified to treating everyone as a potential suspect, whose data must be stored, analysed and profiled (…) a society built upon such foundations is more akin to a police state than to a democratic society. This would be the opposite of what the founding fathers wanted for Europe when they signed the Convention in 1950.”

In further remarks on the judgement, Silkie Carlo, director of Big Brother Watch, added: “Mass surveillance damages democracies under the cloak of defending them, and we welcome the Court’s acknowledgement of this. As one judge put it, we are at great risk of living in an electronic ‘Big Brother’ in Europe. We welcome the judgment that the UK’s surveillance regime was unlawful, but the missed opportunity for the Court to prescribe clearer limitations and safeguards mean that risk is current and real.”

“We will continue our work to protect privacy, from parliament to the courts, until intrusive mass surveillance practices are ended,” she added.

Privacy International — another party to the case — sought to put a more positive spin on the outcome, saying the Grand Chamber goes further than the ECHR’s 2018 ruling by “providing for new and stronger safeguards, adding a new requirement of prior independent or judicial authorisation for bulk interception”.

“Authorisation must be meaningful, rigorous and check for proper ‘end-to-end safeguards’,” it added in a statement.

Also commenting publicly, the Open Rights Group’s executive director, Jim Killock, said: “The court has shown that the UK Government’s legal framework was weak and inadequate when we took them to court with Big Brother Watch and Constanze Kurz in 2013. The court has set out clear criteria for assessing future bulk interception regimes, but we believe these will need to be developed into harder red lines in future judgments, if bulk interception is not to be abused.”

“As the court sets out, bulk interception powers are a great power, secretive in nature, and hard to keep in check. We are far from confident that today’s bulk interception is sufficiently safeguarded, while the technical capacities continue to deepen. GCHQ continues to share technology platforms and raw data with the US,” Killock went on to say, couching the judgment as “an important step on a long journey”.

 

Proton, the privacy startup behind e2e encrypted ProtonMail, confirms passing 50M users

End-to-end encrypted email provider ProtonMail has officially confirmed it’s passed 50 million users globally as it turns seven years old.

It’s a notable milestone for a services provider that intentionally does not have a data business — opting instead for a privacy pledge based on zero access architecture that means it has no way to decrypt the contents of ProtonMail users’ emails.
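
For illustration, the property being described — the provider stores only ciphertext it has no means of decrypting, because the keys live solely with users — is what asymmetric encryption provides: mail is encrypted to the recipient’s public key, and only the recipient’s private key (held on their device) can open it. A toy sketch using PyNaCl sealed boxes (ProtonMail’s actual scheme is OpenPGP-based; this is just to show why the provider cannot read what it stores):

```python
# Toy illustration of zero-access storage: the server only ever holds ciphertext.
# Uses PyNaCl sealed boxes; this is not ProtonMail's implementation.
from nacl.public import PrivateKey, SealedBox

# Generated on the recipient's device; the private key never reaches the server.
recipient_key = PrivateKey.generate()

# Anyone (including the server) can encrypt to the recipient's public key...
ciphertext = SealedBox(recipient_key.public_key).encrypt(b"meet at noon")

# ...but only the holder of the private key can decrypt. A provider that stores
# only `ciphertext` has no way to recover the plaintext.
plaintext = SealedBox(recipient_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```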

Although, to be clear, the 50M+ figure applies to total users of all its products (which includes a VPN offering), not just users of its e2e encrypted email. (It declined to break out email users vs other products when we asked.)

Commenting in a statement, Andy Yen, founder and CEO, said: “The conversation about privacy has shifted surprisingly quickly in the past seven years. Privacy has gone from being an afterthought, to the main focus of a lot of discussions about the future of the Internet. In the process, Proton has gone from a crowdfunded idea of a better Internet, to being at the forefront of the global privacy wave. Proton is an alternative to the surveillance capitalism model advanced by Silicon Valley’s tech giants, that allows us to put the needs of users and society first.”

ProtonMail, which was founded in 2014, has diversified into offering a suite of products — including the aforementioned VPN and a calendar offering (Proton Calendar). A cloud storage service, Proton Drive, is also slated for public release later this year.

For all these products, it claims to take the same ‘zero access’, hands-off approach to user data. Albeit, comparing e2e encrypted email with an encrypted VPN service is a bit of an apples-and-oranges exercise: the issue with VPN services is that they can see activity (i.e. where the packets, encrypted or otherwise, are going), and that metadata can add up to a log of your Internet activity, even with e2e encryption of the packets themselves.

Proton claims it doesn’t track or record its VPN users’ web browsing. And given its wider privacy-dependent reputation, that’s at least a more credible claim than the average VPN service can make. Nonetheless, you still have to trust Proton not to do that (or not to be forced to do so by, for example, law enforcement). It’s not the same technical ‘zero access’ guarantee it can offer for its e2e encrypted email.

Proton does also offer a free VPN, which, as we’ve said before, can be a red flag for data-logging risk, but the company specifies that users of the paid version subsidize free users. So, again, the claim is zero logging, but you still need to make a judgment call on whether to trust it.
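To make the metadata point concrete, here’s a purely hypothetical sketch (not a description of how Proton’s VPN actually works) of the kind of connection record any VPN exit node is technically in a position to keep, since it has to know the destination in order to forward traffic at all. A ‘no logs’ policy is a promise not to write such records down, not a technical impossibility.

```python
# Hypothetical illustration: what a VPN provider *could* log even though
# packet payloads are encrypted. A "no logs" policy is a commitment not
# to keep records like this; it is not enforced by the cryptography.
from datetime import datetime, timezone

def connection_metadata(client_ip: str, dest_host: str, dest_port: int) -> dict:
    # The exit node must know the destination to forward traffic, so this
    # metadata exists transiently whether or not it is ever written down.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client": client_ip,
        "destination": f"{dest_host}:{dest_port}",
    }

# A sequence of such records adds up to a browsing history,
# even though the provider never sees the page contents.
print(connection_metadata("203.0.113.7", "news.example.com", 443))
```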

From Snowden to 50M+

Over ProtonMail’s seven-year run, privacy has certainly gained cachet as a brand promise, which is why you can now see data-mining giants like Facebook making ludicrous claims about ‘pivoting’ their people-profiling surveillance empires to ‘privacy’. So, as ever, PR that’s larded with claims of ‘respect for privacy’ demands very close scrutiny.

And while it’s clearly absurd for an adtech giant like Facebook to try to cloak the fact that its business model relies on stripping away people’s privacy with claims to the contrary, in Proton’s case the privacy claim is very strong indeed — since the company was founded with the goal of being “immune to large scale spying”. Spying such as that carried out by the NSA.

ProtonMail’s founding idea was to build a system “that does not require trusting us”.
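To illustrate what ‘zero access’ means in practice, here’s a minimal, illustrative sketch of client-side public-key encryption using the PyNaCl library. It is not ProtonMail’s actual implementation (the service is built on OpenPGP), but the property is the same: mail is encrypted on the sender’s device with the recipient’s public key, so a server that only ever stores ciphertext has nothing it can decrypt.

```python
# Minimal sketch of the "zero access" / end-to-end encryption idea using
# PyNaCl (libsodium bindings). Purely illustrative -- not ProtonMail's code.
from nacl.public import PrivateKey, SealedBox

# Key pair generated and kept on the recipient's device;
# the server only ever learns the public half.
recipient_key = PrivateKey.generate()
recipient_pub = recipient_key.public_key

# Sender encrypts locally with the recipient's public key...
ciphertext = SealedBox(recipient_pub).encrypt(b"meet me at noon")

# ...the server stores and forwards `ciphertext` but holds no key to open it.
# Only the recipient, holding the private key, can decrypt:
plaintext = SealedBox(recipient_key).decrypt(ciphertext)
assert plaintext == b"meet me at noon"
```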

Usage of e2e encryption has grown enormously since 2013, when disclosures by NSA whistleblower Edward Snowden revealed the extent of data gathering by government mass surveillance programs, which were shown (il)liberally tapping into Internet cables and mainstream digital services to grab people’s data without their knowledge or consent. That growth has certainly been helped by consumer-friendly services like ProtonMail making robust encryption far more accessible. But there are worrying moves by lawmakers in a number of jurisdictions that clash with the core idea and threaten access to e2e encryption.

In the wake of the Snowden disclosures, ‘Five Eyes’ countries steadily amped up international political pressure on e2e encryption. Australia, for example, passed an anti-encryption law in 2018 that grants powers to issue ‘technical notices’ compelling companies operating on its soil to help the government hack, implant malware, undermine encryption or insert backdoors.

While, in 2016, the UK reaffirmed its surveillance regime, passing a law that gives the government powers to compel companies to remove or not implement e2e encryption. Under the Investigatory Powers Act, a statutory instrument called a Technical Capability Notice (TCN) can be served on comms service providers to compel decrypted access. (And as the ORG noted in April, there’s no way to track usage because the law gags providers, barring them from reporting anything at all about a TCN application, including that it even exists.)

More recently, UK ministers have kept up public pressure on e2e encryption — framing it as an existential threat to child protection. Simultaneously they are legislating — via an Online Safety Bill, out in draft earlier this month — to put a legally binding obligation on service providers to ‘prevent bad things from happening on the Internet’ (as the ORG neatly sums it up). And while still at the draft stage, private messaging services are in scope of that bill — putting the law on a potential collision course with messaging services that use e2e encryption.

The U.S., meanwhile, has declined to reform warrantless surveillance.

And if you think the EU is a safe space for e2e encryption, there are reasons to be concerned in continental Europe too.

EU lawmakers have recently made a push for what they describe as “lawful access” to encrypted data — without specifying exactly how that might be achieved, i.e. without breaking and/or backdooring e2e encryption and therefore undoing the digital security they also say is vital.

In a further worrying development, EU lawmakers have proposed automated scanning of encrypted communications services — aka a provision called ‘chatcontrol’ that’s ostensibly targeted at prosecuting those who share child exploitation content — which raises further questions over how such laws might intersect with ‘zero access’ services like ProtonMail.

The European Pirate Party has been sounding the alarm — and dubs the ‘chatcontrol’ proposal “the end of the privacy of digital correspondence” — warning that “securely encrypted communication is at risk”.

A plenary vote on the proposal is expected in the coming months — so where exactly the EU lands on that remains to be seen.

ProtonMail, meanwhile, is based in Switzerland, which is not a member of the EU and has one of the stronger reputations for privacy laws globally. However, the country also backed beefed-up surveillance powers in 2016, extending the digital snooping capabilities of its own intelligence agencies.

Switzerland does also adopt some EU regulations, so, again, it’s not clear whether any pan-EU automated scanning of message content could end up being applied to services based in the country.

The threats to e2e encryption are certainly growing, even as usage of such properly private services keeps scaling.

Asked whether it has concerns, ProtonMail pointed out that the EU’s current temporary chatcontrol proposal is voluntary — meaning it would be up to the company in question to decide its own policy. Although it accepts there is “some support” in the Commission for the chatcontrol proposals to be made mandatory.

“It’s not clear at this time whether these proposals could impact Proton specifically [i.e. if they were to become mandatory],” the spokesman also told us. “The extent to which a Swiss company like Proton might be impacted by such efforts would have to be assessed based on the specific legal proposal. To our knowledge, none has been made for now.”

“We completely agree that steps have to be taken to combat the spread of illegal explicit material. However, our concern is that the forced scanning of communications would be an ineffective approach and would instead have the unintended effect of undermining many of the basic freedoms that the EU was established to protect,” he added. “Any form of automated content scanning is incompatible with end-to-end encryption and by definition undermines the right to privacy.”

So while Proton is rightly celebrating that a steady commitment to zero-access infrastructure over the past seven years has helped its business grow to 50M+ users, there are reasons for all privacy-minded folk to be watchful of what the coming years of political developments might mean for the privacy and security of all our data.