GDPR’s two-year review flags lack of “vigorous” enforcement

It’s more than two years since a flagship update to the European Union’s data protection regime moved into the application phase. Yet the General Data Protection Regulation (GDPR) has been dogged by criticism of a failure of enforcement related to major cross-border complaints — lending weight to critics who claim the legislation has created a moat for dominant multinationals, at the expense of smaller entities.

Today the European Commission responded to that criticism as it gave a long-scheduled assessment of how the regulation is functioning, in its first review two years in.

While EU lawmakers’ top-line message is the clear claim that ‘GDPR is working’ — with commissioners lauding what they couched as the many positives of this “modern and horizontal piece of legislation”, which they also said has become a “global reference point” — they conceded there is a “very serious to-do list”, calling for uniformly “vigorous” enforcement of the regulation across the bloc.

So, in other words, GDPR decisions need to flow more smoothly than they have so far.

Speaking at a Commission briefing today, Věra Jourová, Commission VP for values and transparency, said: “The European Data Protection Board and the data protection authorities have to step up their work to create a truly common European culture — providing more coherent and more practical guidance, and work on vigorous but uniform enforcement.

“We have to work together, as the Board and the Member States, to address concerns — in particular those of the small and medium enterprises.”

Justice commissioner, Didier Reynders, also speaking at the briefing, added: “We have to ensure that [GDPR] is applied harmoniously — or at least with the same vigour across the European territory. There may be some nuanced differences but it has to be applied with the same vigour.

“In order for that to happen data protection authorities have to be sufficiently equipped — they have to have the relevant number of staff, the relevant budgets, and there is a clear will to move in that direction.”

Front and center for GDPR enforcement is the issue of resourcing for national data protection authorities (DPAs), who are tasked with providing oversight and issuing enforcement decisions.

Jourová noted today that EU DPAs — taken as a whole — have increased headcount by 42% and budget by 49% between 2016 and 2019.

However that’s an aggregate which conceals major differences in resourcing. A recent report by pro-privacy browser Brave found that half of all national DPAs receive just €5M or less in annual budget from their governments, for example. Brave also found that budget increases peaked around the GDPR coming into application — and that, two years in, governments are now slowing the rate of increase.

It’s also true that DPA case load isn’t uniform across the bloc, with certain Member States (notably Ireland and Luxembourg) handling many more and/or more complex complaints than others as a result of how many multinationals locate their regional HQs there.

One key issue for GDPR thus relates to how the regulation handles cross border cases.

A one-stop-shop mechanism was supposed to simplify this process — by having a single regulator (typically in the country where the business has its main establishment) take the lead on complaints that affect users in multiple Member States, while other interested DPAs do not deal directly with the data processor. Those DPAs remain involved, though — and, once there’s a draft decision, play an important role, since they can raise objections to whatever the lead regulator has decided.

However a lot of friction seems to be creeping into current processes — from technical issues related to sharing data between DPAs, and from the opportunity for additional legal delays.

In the case of big tech, GDPR’s one-stop-shop has resulted in a major backlog around enforcement, with multiple complaints being re-routed via Ireland’s Data Protection Commission (DPC) — which has yet to issue a single decision on a cross-border case, despite having more than 20 such investigations ongoing.

Last month Ireland’s DPC trailed looming decisions on Twitter and Facebook — saying it had submitted a draft decision on the Twitter case to fellow DPAs and expressing hope that the case could be finalized in July.

Its data protection commissioner, Helen Dixon, had previously suggested the first cross-border decisions would be coming in “early” 2020. In the event, we’re past halfway through the year still with no enforcement on show.

This looks especially problematic as there is a counter example elsewhere in the EU: France’s CNIL managed to issue a decision in a major GDPR case against Google all the way back in January 2019. Last week the country’s top court for administrative law cemented the regulator’s findings — dismissing Google’s appeal. Its $57M fine against Google remains the largest yet levied against big tech under GDPR.

Asked directly whether the Commission believes Ireland’s DPC is sufficiently resourced — with the questioner noting it has multiple ongoing investigations into Facebook, in particular, with still no decisions taken on the company — Jourová emphasized DPAs are “fully independent”, before adding: “The Commission has no tools to push them to speed up but the cases you mention, especially the cases that relate to big tech, are always complex and they require thorough investigation — and it simply requires more time.”

However CNIL’s example shows effective enforcement against major tech platforms is possible — at least, where there’s a will to take on corporate power. Though France’s relative agility may also have something to do with not having to deal simultaneously with such a massive load of complex cross-border cases.

At the same time, critics point to Ireland’s cosy political relationship with the corporate giants it attracts via low tax rates — which in turn raises plenty of questions when set against the oversized role its DPA has in overseeing most of big tech. The stench of forum shopping is unmistakable.

Criticism of national regulators extends beyond Ireland, though. In the UK, privacy experts have slammed the ICO’s repeated failure to enforce the law against the adtech industry — despite its own assessments finding systemic flouting of the law. The country remains an EU Member State until the end of the year — and the ICO is the best-resourced DPA in the bloc, in terms of budget and headcount (and likely tech expertise too). Which hardly reflects well on the functional state of the regulation.

Despite all this, the Commission continues to present GDPR as a major geopolitical success, claiming — as it did again today — that it’s ahead of the digital regulatory curve globally at a time when lawmakers almost everywhere are considering putting harder limits on Internet players.

But there’s only so long it can sell a success on paper. Without consistently “vigorous” enforcement, the whole framework crumbles — so the EU’s executive has serious skin in the game when it comes to GDPR actually doing what it says on the tin.

Pressure is coming from commercial quarters too — not only privacy and consumer rights groups.

Earlier this year, Brave lodged a complaint with the Commission against 27 EU Member States — accusing them of under-resourcing their national data protection watchdogs. It called on the EU executive to launch an infringement procedure against national governments, and refer them to the bloc’s top court if necessary. So startups are banging the drum for enforcement too.

If decision wheels don’t turn on their own, courts may eventually be needed to force Europe’s DPAs to get a move on — albeit, the Commission is still hoping it won’t have to come to that.

“We saw a considerable increase of capacities both in Ireland and Luxembourg,” said Jourová, discussing the DPA resourcing issue. “We saw a sufficient increase in at least half of other Member States DPAs so we have to let them do very responsible and good work — and of course wait for the results.”

Reynders suggested that while there has been an increase in resource for DPAs the Commission may need to conduct a “deeper” analysis — to see if more resource is needed in some Member States, “due to the size of the companies at work in the jurisdiction of such a national authority”.

“We have huge differences between the Member States about the need to react to the requests from the companies. And of course we need to reinforce the cooperation and the co-ordination on cross border issues. We need to be sure that it’s possible for all the national authorities to work together. And in the network of national authorities it’s the case — and with the Board [EDPB] it’s possible to organize that. So we’ll continue to work on it,” he said.

“So it’s not only a question to have the same kind of approach in all the Member States. It’s to be fit to all the demands coming in your jurisdiction and it’s true that in some jurisdictions we have more multinationals and more members of high tech companies than in others.”

“The best answer will be a decision from the Irish data protection authority about important cases,” he added.

We’ve reached out to the Irish DPC and the EDPB for comment on the Commission’s GDPR assessment.

Asked whether the Commission has a list of Member States that it might instigate infringement proceedings against related to the terms of GDPR — which, for example, require governments to provide adequate resourcing to their national DPA in order that they can properly oversee the regulation — Reynders said it doesn’t currently have such a list.

“We have a list of countries where we try to see if it’s possible to reinforce the possibilities for the national authorities to have enough resources — human resources, financial resources, to organize better cross border activities — if at the end we see there’s a real problem about the enforcement of the GDPR in one Member State we will propose to go maybe to the court with an infringement proceeding — but we don’t have, for the moment, a list of countries to organize such a kind of process,” he said.

The commissioners were a lot more comfortable talking up the positives of GDPR, with Jourová noting, with a sphinx-like smile, how three years ago there was “literal panic” and an army of lobbyists warning of a “doomsday” for business and innovation should the legislation pass. “I have good news today — no doomsday was here,” she said.

“Our approach to the GDPR was the right one,” she went on. “It created the more harmonized rules across the Single Market and more and more companies are using GDPR concepts, such as privacy by design and by default, as a competitive differentiation.

“I can say that the philosophy of one continent, one law is very advantageous for European small and medium enterprises who want to operate on the European Single Market.

“In general GDPR has become a truly European trade mark,” she added. “It puts people and their rights at the center. It does not leave everything to the market like in the US. And it does not see data as a means for state supervision, as in China. Our truly European approach to data is the first answer to difficult questions we face as a society.”

She also noted that the regulation served as inspiration for the current Commission’s tech-focused policy priorities — including a planned “human centric approach to AI”.

“It makes us pause before facial recognition technology, for instance, will be fully developed or implemented. And I dare to say that it makes Europe fit for the digital age. On the international side the GDPR has become a reference point — with a truly global convergence movement. In this context we are happy to support trade and safe digital data flows and work against digital protectionism.”

Another success the commissioners credited to the GDPR framework is the region’s relatively swift digital response to the coronavirus — with the regulation helping DPAs to more quickly assess the privacy implications of COVID-19 contacts tracing apps and tools.

Reynders lauded “a certain degree of flexibility in the GDPR” which he said had been able to come into play during the crisis, feeding into discussions around tracing apps — on “how to ensure protection of personal data in the context of such tracing apps linked to public and individual health”.

On its to-do list, other areas of work the Commission cited today included ensuring DPAs provide more such support related to the application of the regulation by coming out with guidelines related to other new technologies. “In various new areas we will have to be able to provide guidance quickly, just as we did on the tracing apps recently,” noted Reynders.

Further increasing public awareness of GDPR and the rights it affords is another Commission focus — though it said more than two-thirds of EU citizens above the age of 16 have at least heard of the GDPR.

But it wants citizens to be able to make what Reynders called “best use” of their rights, perhaps via new applications.

“So the GDPR provides support to innovation in this respect,” he said. “And there’s a lot of work that still needs to be done in order to strengthen innovation.”

“We also have to convince those who may still be reticent about the GDPR. Certain companies, for instance, who have complained about how difficult it is to implement it. I think we need to explain to them what the requirements of the GDPR [are] and how they can implement these,” he added.

Antitrust case against Facebook’s ‘super profiling’ back on track after German federal court ruling

A landmark regulatory intervention that seeks to apply structural antitrust remedies to cut big (ad)tech’s rights-hostile surveillance business models down to size has been revived after Germany’s federal court overturned an earlier ruling that had suspended enforcement of a ban on Facebook combining user data.

The upshot is the tech giant could be forced to stop combining the personal data of users of its various social services with other personal data it harvests on Internet users via its various social plug-ins and tracking pixels. Which in turn would amount to a structural separation of Facebook’s social empire.

That said, there’s still some mileage left in the legal process — which will likely delay any enforcement for months more at least. But the federal court has put the train back on the tracks.

As we’ve reported previously, the intervention by Germany’s antitrust regulator is seen as highly innovative as it joins the dots of EU privacy rights and competition law. So this case is being closely watched by regulators around the world.

Quick recap: Last year Germany’s Federal Cartel Office ordered Facebook to stop combining data on users across its different services after determining that its zero opt-out T&Cs combined with Facebook’s dominant market position in the social network space to make its pervasive people-profiling an “exploitative abuse”.

The order originated with an investigation by the Bundeskartellamt (FCO) into Facebook’s data-gathering practices, which kicked off in March 2016. Almost exactly three years later the regulator concluded it had identified abuse — and issued the order banning Facebook from combining data on users across its own suite of social platforms without first obtaining their consent.

Instead of agreeing to offer users a choice over how they’re tracked, Facebook appealed — and, last August, the Higher Regional Court in Dusseldorf granted it a suspension, delaying application of the order — and seemingly derailing the chance for an innovative regulatory intervention against a rights hostile ‘track and target’ adtech business model. 

All was not lost though, as the FCO appealed the suspension — leading to today’s fresh legal twist.

In today’s decision, Germany’s Federal Court of Justice provisionally confirms the FCO’s allegation of an abuse of a dominant position by Facebook — opening the door to the regulator being able to enforce the ban.

So it’s game (back) on for the antitrust case against platform giants whose dominance stems from mass surveillance of Internet users.

In a statement, FCO president Andreas Mundt welcomed the decision.

“I am pleased about the decision by the Federal Court of Justice,” he said. “Data are an essential factor for economic strength and a decisive criterion in assessing online market power. The court’s decision provides important information on how we should deal with the issue of data and competition in the future. Whenever data are collected and used in an unlawful way, it must be possible to intervene under antitrust law to avoid an abuse of market power.”

We’ve also reached out to Facebook for comment.

The Dusseldorf Higher Regional Court has still not issued a ruling on Facebook’s original appeal against the FCO order — though it granted the company’s request for a suspension, saying it had doubts about its legality.

But the Federal Court of Justice has overturned that earlier decision. And not just overturned it — it’s blasted it with the legal equivalent of a blowtorch.

In a press release today (in German, which we’ve translated using Google Translate) the court writes [emphasis ours]: “There are no serious doubts about Facebook’s dominant position in the German market for social networks or that Facebook is abusing this dominant position with the terms of use prohibited by the Cartel Office.”

The court takes issue with Facebook’s terms and conditions — finding them “abusive” because it says users are not offered a choice over the extent of the company’s tracking and targeting of them; with the court pointing out there’s no option for users to have Facebook’s content “personalization” based only on the data they reveal on Facebook.com. Instead Facebook forces users to accept what it calls “a more intensive personalization of the user experience”, which the court further notes is “associated with a potentially unlimited access to characteristics of their ‘off-Facebook’ Internet use by Facebook”.

Which doesn’t sound, y’know, proportionate.

In additional remarks, the court writes that Facebook’s super profiling of Internet users has negative impacts on people’s personal autonomy — infringing their rights not only under EU data protection law but also, it asserts, amounting to an antitrust abuse as a consequence of how Facebook exploits its dominant position in the market for social networks.

Or, as it put it in the press release: “The lack of choice for Facebook users not only affects their personal autonomy and the protection of their right to informational self-determination, which is also protected by the GDPR [General Data Protection Regulation]. Against the background of the high hurdles to change that exist for the users of the network (“lock-in effects”), it also represents an exploitation of the users that is relevant under antitrust law, because competition is no longer effective due to Facebook’s dominant position.”

The court also points to findings by the FCO that significant numbers of Facebook users want to be able to hand over less personal information to use its service — noting that if thriving competition existed in the social network market there may well be a more privacy-friendly offer from Facebook. Instead, you get none.

In another interesting observation, the court said Facebook’s access to “a significantly larger database” — i.e. via its super profiling of users — reinforces what are already “pronounced” network effects keeping a lid on competition in the social media market. So a double negative.

Additionally, it suggests Facebook’s super profiling helps the company amass larger ad revenues — which it notes “also depend on the scope and quality of the data available”. “Finally, due to the negative effects on competition for advertising contracts, an impairment of the market for online advertising cannot be ruled out,” it adds in another shot across Zuckerberg’s bow.

Commenting on the decision, Rupprecht Podszun, who holds the chair for civil law and German and European competition law at Heinrich Heine University, called it “a spectacular success” for the FCO and an “important step forward” in regulating Internet giants whose empires are based on this type of rights-hostile profiling.

“The decision is a spectacular success for the competition watchdog, and an important signal for competition on the internet. The proceedings against Facebook are regarded worldwide as a pioneering case: The FCO is attempting to tame the tech giants and to stop the build-up of economic power through integration of data to ‘super profiles’. This is something new in terms of antitrust law. Exploitation of users through data aggregation, as the FCO has accused Facebook of doing, has so far been uncharted territory,” he told TechCrunch. 

“The Federal Supreme Court said it has ‘no serious doubts’ that Facebook is market-dominant and abuses its market power. The court is even stricter than the competition authority: It does not require a protection of privacy laws (as in the General Data Protection Regulation), but it says that freedom of choice and autonomy of users is key in such cases. This is an important step forward –– making the users’ self-determination a benchmark for competition on the internet.”

“The FCO can now demand from Facebook to submit a plan within four months how to stop the merging of data into so-called ‘super profiles’,” Podszun added. “Facebook merges data from the group’s own services such as Facebook, WhatsApp and Instagram with other data collected on the net via so-called Facebook Business Tools. This was the Bundeskartellamt’s central point of attack.”

The professor remains critical of the pace of regulatory progress — dubbing it “almost a bad joke” for this latest twist of the legal process to be couched as an ‘interim proceeding’.

He also cautions against expecting any swift break-up of Facebook’s data mining and mingling to follow, noting there are other legal avenues for the lawyered-up company to pursue — meaning it could be months or even years more before there’s any enforcement of the FCO order.

“This is particularly problematic because economic power in digital markets consolidates extremely quickly,” Podszun also said, calling for reform of competition law so it can effectively respond to digital gatekeepers.

“The proceedings therefore show that there must be changes in the way dominance of gatekeeper companies on the Internet is dealt with. The competition authorities must be able to act more effectively and more quickly in such cases. This is where the German and European legislators are called upon to act. Plans are being developed on the national and the European level. The reasoning in the Facebook case will be a boost to competition commissioner Margrethe Vestager in her fight against these companies, too.”

The sedate pace of regional competition law vs the blistering speed of digital business has long led to calls for competition law reform — a topic that’s now front of mind for EU lawmakers.

After Facebook’s first successful appeal of the FCO order, Podszun suggested a number of changes were needed to update EU competition law for the platform era. One of which — to evolve traditional market definitions to allow for interventions in digital markets to prevent tipping — is being actively consulted on by the Commission, which is now considering whether regulators need a new tool for exactly that.

It is also looking at applying ex ante regulation to so-called ‘gatekeepers’ — aka platforms which have gained significant market power — as another step to try to ensure ‘fair functioning’ of digital markets.

Speaking during an Atlantic Council discussion today, Commission EVP Margrethe Vestager — who both leads digital policy for the bloc and heads up its antitrust division — signalled, in her usual roundabout way, that digitally driven competition reform is indeed coming.

“We have an intense debate about competition enforcement in the digital era [and] we need to change for the times we’re in because the market dynamics are different, they are faster, you have marginal prices approaching zero, you have network effects, you have the data-driven economy. So of course we need to change with the times that we’re in,” she said. “But we will not negotiate and we will not compromise on this being built on the rule of law and the responsibility for the courts in order to make sure that we have equal treatment between businesses.”

Asked whether she’d like the power to rescind merger approvals — with the moderator citing the case of Facebook reversing a prior commitment to EU regulators who approved its acquisition of WhatsApp (that it would never combine user data between its eponymous service and the messaging app, before going on to do just that) — she responded that that is “a very specific situation”, before noting that EU regulators performed a competition analysis at the time — looking at whether, if Facebook did merge data between the services, that would be a “competition problem” or not.

“Back then they found that no that would not be the case. So in that respect… on substance this would not be a case for unscrambling ‘the eggs’. And it is indeed very difficult to unscramble the eggs,” she said.

Vestager also conceded that the third component of EU antitrust decisions — “how to make competition come back?” — remains a “work in progress”.

“Of course we haven’t seen the effect of the Android preference menu because very few Android phones have been shipped due to the COVID crisis,” she said on that. “But it remains to be seen if, when Google is preinstalled after the untying and other services can be chosen, will that work? Will that sort of open the market for others — search and browsers — than the Google choices?”

Twitter says some business users had their private data exposed

Flip the “days since the last Twitter security incident” back to zero.

Twitter said Tuesday that it has emailed its business customers, such as those who advertise on the site, to warn that their information may have been compromised in a security lapse.

The social network giant said that business users’ billing information was inadvertently stored in the browser’s cache, and it was “possible” that others, such as those who share computers, could have accessed it.

That data includes business users’ email addresses, phone numbers, and the last four digits of the credit card number associated with the account.
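The underlying bug class is well understood: sensitive pages served without cache-busting headers can end up persisted on shared machines. As a rough illustration (not Twitter's actual code; the endpoint and framework here are hypothetical), this is the kind of header a server would normally set on a billing page so the browser never stores it:

```python
# Minimal Flask sketch of marking a sensitive response as uncacheable.
# The /billing endpoint and its payload are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/billing")
def billing():
    resp = jsonify({"email": "user@example.com", "card_last4": "1234"})
    # Instruct the browser and any intermediary caches never to store this response.
    resp.headers["Cache-Control"] = "no-store, no-cache, must-revalidate"
    resp.headers["Pragma"] = "no-cache"  # legacy HTTP/1.0 caches
    resp.headers["Expires"] = "0"
    return resp

if __name__ == "__main__":
    app.run()
```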

Twitter told users that it first became aware of the problem on May 20, a month after Twitter disclosed a similar bug that improperly stored Twitter user data, such as direct messages, in Firefox’s browser cache.

BBC News was first to report the news.

Twitter spokesperson Laura Pacas confirmed the incident to TechCrunch, but declined to disclose the number of people affected.

“We became aware of an incident where if you viewed your billing information on ads.twitter.com or analytics.twitter.com the billing information may have been stored in the browser’s cache,” the spokesperson said. “As soon as we discovered this was happening, we resolved the issue and communicated to potentially impacted clients to make sure they were aware and informed on how to protect themselves moving forward.”

It’s the latest in a string of security incidents at the company in recent years.

Last year alone, Twitter closed a bug that allowed a researcher to discover phone numbers associated with millions of Twitter accounts; admitted it gave account location data to one of its partners, even if the user had opted-out of having their data shared; and inadvertently gave its ad partners more data than it should have. Twitter last year also said it used phone numbers provided by users for two-factor authentication for serving targeted ads.

In 2018, Twitter admitted it stored user passwords in plaintext, and warned its millions of users to reset their passwords.

Apple’s Safari will soon tell you all the ad trackers watching you

Apple is turning the tables on invasive ad trackers.

The tech giant announced Monday a new privacy feature in its underdog browser, Safari, which will shine a spotlight on all of the ad trackers embedded on each article or website you visit.

Safari’s new anti-tracking feature sits in the top part of the browser next to the address bar, and blocks intrusive trackers as you browse the web. Users can also open the anti-tracker and view a privacy report, which details all of the trackers on the page. 

Rival browsers, like Firefox and Brave, already have anti-tracking features built in.

It’s the latest feature that tries to turn the tables on the targeted ad and tracking industry. As targeted advertising became more invasive over the years, Apple has responded by baking features into its software, like its intelligent tracking prevention technology and allowing Safari users to install content blockers that prevent ads and trackers from loading.

The new Safari features will land in macOS Big Sur, expected out later this year.

Apple’s iOS 14 will give users option to decline ad tracking

A new version of iOS wouldn’t be the same without a bunch of security and privacy updates. Apple on Monday announced a ton of new features it’ll bake into iOS 14, expected out later this year with the release of new iPhones and iPads.

Apple said it will allow users to share their approximate location with apps, instead of their precise location. Apps will be able to take a rough location without identifying precisely where the user is. It’s another option users have when they give over their location. Last year, Apple allowed users to give over their location just once, so that apps can’t track a person as they go about their day.

iPhones with iOS 14 will also get a camera recording indicator in the status bar. It’s a similar feature to the camera light that comes with Macs and MacBooks. The recording indicator will sit in the top bar of your iPhone’s display when your front or rear camera is in use.

But the biggest changes are for app developers themselves, Apple said. In iOS 14, users will be asked if they want to be tracked by the app. That’s a major change that will likely have a ripple effect: by allowing users to reject tracking, it’ll reduce the amount of data that’s collected, preserving user privacy.

Apple also said it will require app developers to self-report the kinds of permissions that their apps ask for. This will improve transparency, allowing the user to know what kind of data they may have to give over in order to use the app. Android users have been able to see app permissions on the Google Play store for years.

The move is Apple’s latest assault against the ad industry as part of the tech giant’s privacy-conscious mantra.

The ad industry has frequently been the target of Apple’s barbs, amid a string of controversies that have embroiled both advertisers and data-hungry tech giants, like Facebook and Google, which make the bulk of their profits from targeted advertising. As far back as 2015, Apple CEO Tim Cook said its Silicon Valley rivals are “gobbling up everything they can learn about you and trying to monetize it.” Apple, which makes its money selling hardware, “elected not to do that,” said Cook.

As targeted advertising became more invasive, Apple countered by baking new privacy features into its software, like its intelligent tracking prevention technology and allowing Safari users to install content blockers that prevent ads and trackers from loading.

Just last year Apple told developers to stop using third-party trackers in apps for children or face rejection from the App Store.

Telegram pledges to make anti-censorship tools for Iran and China

The encrypted instant messenger Telegram said on Monday it’s ramping up efforts to develop anti-censorship technologies serving users in countries where it is banned or partially blocked, including China and Iran.

“Over the course of the last two years, we had to regularly upgrade our ‘unblocking’ technology to stay ahead of the censors… We don’t want this technology to get rusty and obsolete. That is why we have decided to direct our anti-censorship resources into other places where Telegram is still banned by governments — places like Iran and China,” co-founder and chief executive Pavel Durov, who lived in Russia for years before going into self-imposed exile, posted on his personal Telegram channel on Monday.

The pledge notably came on the heels of the Russian government’s decision to lift its ban on Telegram last week. The app has generated impressive growth in Russia even after it was officially banned in the country in 2018 over its refusal to hand over encryption keys to the authorities, who would then have had access to users’ content. The restriction prompted the company to launch the “Digital Resistance” initiative, which provided anti-blocking tools to users.

As a result, Telegram was accessible again across most of Russia within weeks and the ban has since remained patchy. The app’s monthly active users have doubled since 2018, reaching 400 million in May, with 30 million coming from Russia.

Despite its popularity, the app is trapped in limbo as it copes with disgruntled investors who put up big bucks for the company’s ambitious blockchain platform, Telegram Open Network, which was abruptly terminated in May.

It’s unclear why Russia suddenly decided to change tack on Telegram. In a statement, Roskomnadzor, the telecommunications authority that initially ordered the ban, said the decision arrived after it had assessed the “readiness expressed by the founder of Telegram to counter terrorism and extremism.”

This inevitably raised questions about what kind of concessions Telegram may have made to the Russian state. Durov stressed that his company uses advanced mechanisms to detect and prevent terrorist acts without compromising user privacy, the very ethos of Telegram. Time will tell how the app can accommodate two challenging tasks that are widely seen as mutually exclusive.

The government may also have a motive to unblock Telegram, which is particularly popular among Russian youngsters, as a constitutional vote that could extend Putin’s rule is scheduled for next month.

In response to TechCrunch’s request for comment, Durov brought attention to the company’s counter-terrorism efforts and privacy policy. “There are no sudden changes / secret deals,” he said in a tweet.

Many users in countries where Telegram is inaccessible, like China, run the app with virtual private networks (VPNs) or other forms of proxy. The app has turned into a refuge for Chinese users to share and discuss information censored by the authorities.

For instance, following Beijing’s crackdown on bitcoin in 2017, traders flocked to Telegram and other encrypted messengers that were out of reach of the Chinese government. Earlier this year, many Chinese citizens seeking clarity around the coronavirus situation got around the Great Firewall to join Telegram channels maintained by volunteers sharing hourly updates on the virus. One of the largest Chinese channels focused on COVID-19 has amassed more than 85,000 followers.

French court slaps down Google’s appeal against $57M GDPR fine

France’s top court for administrative law has dismissed Google’s appeal against a $57M fine issued by the data watchdog last year for not making it clear enough to Android users how it processes their personal information.

The State Council issued the decision today, affirming the data watchdog CNIL’s earlier finding that Google did not provide “sufficiently clear” information to Android users — which in turn meant it had not legally obtained their consent to use their data for targeted ads.

“Google’s request has been rejected,” a spokesperson for the Conseil D’Etat confirmed to TechCrunch via email.

“The Council of State confirms the CNIL’s assessment that information relating to targeting advertising is not presented in a sufficiently clear and distinct manner for the consent of the user to be validly collected,” the court also writes in a press release [translated with Google Translate] on its website.

It found the size of the fine to be proportionate — given the severity and ongoing nature of the violations.

Importantly, the court also affirmed the jurisdiction of France’s national watchdog to regulate Google — at least on the date when this penalty was issued (January 2019).

The CNIL’s multimillion dollar fine against Google remains the largest to date against a tech giant under Europe’s flagship General Data Protection Regulation (GDPR) — lending the case a certain symbolic value, for those concerned about whether the regulation is functioning as intended vs platform power.

While the size of the fine is still relative peanuts vs Google’s parent entity Alphabet’s global revenue, changes the tech giant may have to make to how it harvests user data could be far more impactful to its ad-targeting bottom line. 

Under European law, for consent to be a valid legal basis for processing personal data it must be informed, specific and freely given. Or, to put it another way, consent cannot be strained.

In this case French judges concluded Google had not provided clear enough information for consent to be lawfully obtained — including objecting to a pre-ticked checkbox which the court affirmed does not meet the requirements of the GDPR.

So, tl;dr, the CNIL’s decision has been entirely vindicated.

Reached for comment on the court’s dismissal of its appeal, a Google spokeswoman sent us this statement:

People expect to understand and control how their data is used, and we’ve invested in industry-leading tools that help them do both. This case was not about whether consent is needed for personalised advertising, but about how exactly it should be obtained. In light of this decision, we will now review what changes we need to make.

GDPR came into force in 2018, updating long-standing European data protection rules and opening up the possibility of supersized fines of up to 4% of global annual turnover.

However actions against big tech have largely stalled, with scores of complaints being funnelled through Ireland’s Data Protection Commission — on account of a one-stop-shop mechanism in the regulation — causing a major backlog of cases. The Irish DPC has yet to issue decisions on any cross border complaints, though it has said its first ones are imminent — on complaints involving Twitter and Facebook.

Ireland’s data watchdog is also continuing to investigate a number of complaints against Google, following a change the company announced to the legal jurisdiction where it processes European users’ data — moving them under Google Ireland Limited, based in Dublin, which it said applied from January 22, 2019. The Irish DPC’s ongoing investigations include a long-running complaint related to how Google handles location data and another major probe of its adtech, to name two.

On the GDPR one-stop shop mechanism — and, indirectly, the wider problematic issue of ‘forum shopping’ and European data protection regulation — the French State Council writes: “Google believed that the Irish data protection authority was solely competent to control its activities in the European Union, the control of data processing being the responsibility of the authority of the country where the main establishment of the data controller is located, according to a ‘one-stop-shop’ principle instituted by the GDPR. The Council of State notes however that at the date of the sanction, the Irish subsidiary of Google had no power of control over the other European subsidiaries nor any decision-making power over the data processing, the company Google LLC located in the United States with this power alone.”

In its own statement responding to the court’s decision, the CNIL notes the court’s view that GDPR’s one-stop-shop mechanism was not applicable in this case — writing: “It did so by applying the new European framework as interpreted by all the European authorities in the guidelines of the European Data Protection Committee.”

Privacy NGO noyb — one of the privacy campaign groups which lodged the original ‘forced consent’ complaint against Google, all the way back in May 2018 — welcomed the court’s decision on all fronts, including the jurisdiction point.

Commenting in a statement, noyb’s honorary chairman, Max Schrems, said: “It is very important that companies like Google cannot simply declare themselves to be ‘Irish’ to escape the oversight by the privacy regulators.”

A key question is whether CNIL — or another (non-Irish) EU DPA — will be found to be competent to sanction Google in future, following its shift to naming its Google Ireland subsidiary as the regional data processor. (Other tech giants use the same or a similar playbook, seeking out the EU’s more ‘business-friendly’ regulators.)

On the wider ruling, Schrems also said: “This decision requires substantial improvements by Google. Their privacy policy now really needs to make it crystal clear what they do with users’ data. Users must also get an option to agree to only some parts of what Google does with their data and refuse other things.”

French digital rights group, La Quadrature du Net — which had filed a related complaint against Google, feeding the CNIL’s investigation — also declared victory today, noting it’s the first sanction in a number of GDPR complaints it has lodged against tech giants on behalf of 12,000 citizens.

“The rest of the complaints against Google, Facebook, Apple and Microsoft are still under investigation in Ireland. In any case, this is what this authority promises us,” it added in another tweet.

Oracle’s BlueKai tracks you across the web. That data spilled online

Have you ever wondered why online ads appear for things that you were just thinking about?

There’s no big conspiracy. Ad tech can be creepily accurate.

Tech giant Oracle is one of a few companies in Silicon Valley that has near-perfected the art of tracking people across the internet. The company has spent a decade and billions of dollars buying startups to build its very own panopticon of users’ web browsing data.

One of those startups, BlueKai, which Oracle bought for a little over $400 million in 2014, is barely known outside marketing circles, but it amassed one of the largest banks of web tracking data outside of the federal government.

BlueKai uses website cookies and other tracking tech to follow you around the web. By knowing which websites you visit and which emails you open, marketers can use this vast amount of tracking data to infer as much about you as possible — your income, education, political views, and interests to name a few — in order to target you with ads that should match your apparent tastes. If you click, the advertisers make money.

But for a time, that web tracking data was spilling out onto the open internet because a server was left unsecured and without a password, exposing billions of records for anyone to find.

Security researcher Anurag Sen found the database and reported his finding to Oracle through an intermediary — Roi Carthy, chief executive at cybersecurity firm Hudson Rock and former TechCrunch reporter.

TechCrunch reviewed the data shared by Sen and found names, home addresses, email addresses and other identifiable data in the database. The data also revealed sensitive details of users’ web browsing activity — from purchases to newsletter unsubscribes.

“There’s really no telling how revealing some of this data can be,” Bennett Cyphers, a staff technologist at the Electronic Frontier Foundation, told TechCrunch.

“Oracle is aware of the report made by Roi Carthy of Hudson Rock related to certain BlueKai records potentially exposed on the Internet,” said Oracle spokesperson Deborah Hellinger. “While the initial information provided by the researcher did not contain enough information to identify an affected system, Oracle’s investigation has subsequently determined that two companies did not properly configure their services. Oracle has taken additional measures to avoid a reoccurrence of this issue.”

Oracle did not name the companies or say what those additional measures were, and declined to answer our questions or comment further.

But the sheer size of the exposed database makes this one of the largest security lapses this year.

The more it knows

BlueKai relies on vacuuming up a never-ending supply of data from a variety of sources to understand trends and deliver ads precisely targeted to a person’s interests.

Marketers can either tap into Oracle’s enormous bank of data, which it pulls in from credit agencies, analytics firms, and other sources of consumer data including billions of daily location data points, in order to target their ads. Or marketers can upload their own data obtained directly from consumers, such as the information you hand over when you register an account on a website or when you sign up for a company’s newsletter.

But BlueKai also uses more covert tactics like allowing websites to embed invisible pixel-sized images to collect information about you as soon as you open the page — hardware, operating system, browser and any information about the network connection.

This data — known as a web browser’s “user agent” — may not seem sensitive, but when fused together it can create a unique “fingerprint” of a person’s device, which can be used to track that person as they browse the internet.
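To make the fingerprinting idea concrete, here is a simplified sketch of how a handful of individually bland attributes can be fused into a stable identifier (the attributes and hashing step are illustrative; BlueKai's actual pipeline is not public):

```python
# Toy browser-fingerprinting sketch: hash a few device/browser attributes
# into a single stable identifier that can follow a user across sites.
import hashlib

def fingerprint(user_agent, screen_size, timezone, languages):
    raw = "|".join([user_agent, screen_size, timezone, languages])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

device_id = fingerprint(
    "Mozilla/5.0 (iPhone; CPU iPhone OS 13_3 like Mac OS X) AppleWebKit/605.1.15",
    "375x812",
    "Europe/Berlin",
    "de-DE,en-US",
)
print(device_id)  # same inputs -> same ID, on any site embedding the tracker
```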

BlueKai can also tie your mobile web browsing habits to your desktop activity, allowing it to follow you across the internet no matter which device you use.

Say a marketer wants to run a campaign trying to sell a new car model. In BlueKai’s case, it already has a category of “car enthusiasts” — and many other, more specific categories — that the marketer can use to target with ads. Anyone who’s visited a car maker’s website or a blog that includes a BlueKai tracking pixel might be categorized as a “car enthusiast.” Over time that person will be siloed into more and more categories, under a profile that learns as much about them as possible in order to target them with those ads.


The technology is far from perfect. Harvard Business Review found earlier this year that the information collected by data brokers, such as Oracle, can vary wildly in quality.

But some of these platforms have proven alarmingly accurate.

In 2012, Target mailed maternity coupons to a high school student after an in-house analytics system figured out she was pregnant — before she had even told her parents — because of the data it collected from her web browsing.

Some might argue that’s precisely what these systems are designed to do.

Jonathan Mayer, a computer science professor at Princeton University, told TechCrunch that BlueKai is one of the leading systems for linking data.

“If you have the browser send an email address and a tracking cookie at the same time, that’s what you need to build that link,” he said.
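A toy sketch of that linking step, with entirely hypothetical data and field names, might look like this: once a single request carries both a cookie ID and an email address, everything previously filed under the cookie can be tied to a real inbox.

```python
# Hypothetical illustration of cookie-to-email identity linking.
profiles = {}

def observe(cookie_id, email, url):
    profile = profiles.setdefault(cookie_id, {"emails": set(), "urls": []})
    profile["urls"].append(url)
    if email:  # e.g. seen via a newsletter click or account sign-up
        profile["emails"].add(email)

observe("ck_9f2a", None, "https://example-carblog.test/reviews")
observe("ck_9f2a", "jane@example.com", "https://example-store.test/checkout")
print(profiles["ck_9f2a"])  # browsing history now attached to an email address
```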

The end goal: the more BlueKai collects, the more it can infer about you, making it easier to target you with ads that might entice you to that magic money-making click.

But marketers can’t just log in to BlueKai and download reams of personal information from its servers, one marketing professional told TechCrunch. The data is sanitized and masked so that marketers never see names, addresses or any other personal data.

As Mayer explained: BlueKai collects personal data; it doesn’t share it with marketers.

‘No telling how revealing’

Behind the scenes, BlueKai continuously ingests and matches as much raw personal data as it can against each person’s profile, constantly enriching that profile data to make sure it’s up to date and relevant.

But it was that raw data that was spilling out of the exposed database.

TechCrunch found records containing details of private purchases. One record detailed how a German man, whose name we’re withholding, used a prepaid debit card to place a €10 bet on an esports betting site on April 19. The record also contained the man’s address, phone number and email address.

Another record revealed how one of the largest investment holding companies in Turkey used BlueKai to track users on its website. The record detailed how one person, who lives in Istanbul, ordered $899 worth of furniture online from a homeware store. We know because the record contained all of these details, including the buyer’s name, email address and the direct web address for the buyer’s order, no login needed.

We also reviewed a record detailing how one person unsubscribed from an email newsletter run by a consumer electronics company, sent to his iCloud address. The record showed that the person may have been interested in a specific model of car dash-cam. We can even tell based on his user agent that his iPhone was out of date and needed a software update.


The data went back for months, according to Sen, who discovered the database. Some logs dated back to August 2019, he said.

“Fine-grained records of people’s web-browsing habits can reveal hobbies, political affiliation, income bracket, health conditions, sexual preferences, and — as evident here — gambling habits,” said the EFF’s Cyphers. “As we live more of our lives online, this kind of data accounts for a larger and larger portion of how we spend our time.”

Oracle declined to say if it informed those whose data was exposed about the security lapse. The company also declined to say if it had warned U.S. or international regulators of the incident.

Under California state law, companies like Oracle are required to publicly disclose data security incidents, but Oracle has not to date declared the lapse. When reached, a spokesperson for California’s attorney general’s office declined to say if Oracle had informed the office of the incident.

Under Europe’s General Data Protection Regulation, companies can face fines of up to 4% of their global annual turnover for flouting data protection and disclosure rules.

Trackers, trackers everywhere

BlueKai is everywhere — even when you can’t see it.

One estimate says BlueKai tracks over 1% of all web traffic — an unfathomable amount of daily data collection — and its trackers are found on some of the world’s biggest websites: Amazon, ESPN, Forbes, Glassdoor, Healthline, Levi’s, MSN.com, Rotten Tomatoes, and The New York Times. Even this very article has a BlueKai tracker because our parent company, Verizon Media, is a BlueKai partner.

But BlueKai is not alone. Nearly every website you visit contains some form of invisible tracking code that watches you as you traverse the internet.

As invasive as it is that invisible trackers are feeding your web browsing data to a gigantic database in the cloud, it’s that very same data that has kept the internet largely free for so long.

To stay free, websites use advertising to generate revenue. The more targeted the advertising, the better the revenue is supposed to be.

While the majority of web users are not naive enough to think that internet tracking does not exist, few outside marketing circles understand how much data is collected and what is done with it.

Take Equifax’s data breach in 2017, which brought scathing criticism from lawmakers after it emerged the company had collected millions of consumers’ data without their explicit consent. Equifax, like BlueKai, relies on consumers skipping over the lengthy privacy policies that govern how websites track them.

In any case, consumers have little choice but to accept the terms. Be tracked or leave the site. That’s the trade-off with a free internet.

But there are dangers with collecting web-tracking data on millions of people.

“Whenever databases like this exist, there’s always a risk the data will end up in the wrong hands and in a position to hurt someone,” said Cyphers.

Cyphers said the data, if in the hands of someone malicious, could contribute to identity theft, phishing or stalking.

“It also makes a valuable target for law enforcement and government agencies who want to piggyback on the data gathering that Oracle already does,” he said.

Even when the data stays where it’s intended, Cyphers said these vast databases enable “manipulative advertising for things like political issues or exploitative services, and it allows marketers to tailor their messages to specific vulnerable populations.”

“Everyone has different things they want to keep private, and different people they want to keep them private from,” said Cyphers. “When companies collect raw web browsing or purchase data, thousands of little details about real people’s lives get scooped up along the way.”

“Each one of those little details has the potential to put somebody at risk,” he said.



UK gives up on centralized coronavirus contacts tracing app — will switch to model backed by Apple and Google

The UK has given up building a centralized coronavirus contacts tracing app and will instead switch to a decentralized app architecture, the BBC has reported. This means its future app will be capable of plugging into the joint ‘exposure notification’ API which has been developed in recent weeks by Apple and Google.

The UK’s decision to abandon a bespoke app architecture comes more than a month after ministers had been reported to be eyeing such a switch. They went on to award a contract to an IT supplier to develop a decentralized tracing app in parallel as a backup, but continued to test the centralized app, called NHS COVID-19.

A number of European countries have now successfully launched contacts tracing apps with a decentralized app architecture that’s able to plug into the ‘Gapple’ API — including Denmark, Germany, Italy, Latvia and Switzerland. Several more such apps remain in testing, while EU Member States have just agreed on a technical framework to enable cross-border interoperability of apps based on the same architecture.

Germany — which launched its ‘Corona-Warn-App’ this week — announced the software had been downloaded 6.5M times in the first 24 hours.

The UK’s NHS COVID-19 app, meanwhile, has faced a plethora of technical barriers and privacy challenges as a direct consequence of the government’s decision to opt for a proprietary system which uploads proximity data to a central server, rather than processing exposure notifications locally on device.

Apple and Google’s API, which is being used by all Europe’s decentralized apps, does not support centralized app architectures — meaning the UK app faced challenges related to accessing Bluetooth in the background.

The centralized choice also raised big questions around cross-border interoperability, as we’ve explained before. So the UK’s move to abandon the approach and adopt a decentralized model is hardly surprising — although the time it’s taken the government to arrive at the obvious conclusion does raise some major questions over its competence at handling technology projects.

Perhaps unsurprisingly, ministers are now heavily de-emphasizing the importance of having an app in the fight against the coronavirus at all. The Department of Health and Social Care’s Lord Bethell told the Science and Technology Committee yesterday the app will not now be ready until the winter. “We’re seeking to get something going for the winter, but it isn’t a priority for us,” he said.

Yet the centralized version of the NHS COVID-19 app has been in testing in a limited geographical pilot on the Isle of Wight since early May — and up until the middle of last month health secretary Matt Hancock had said it would be rolled out nationally in mid-May.

Of course that timeframe came and went without launch. And now the launch is being booted right into the back end of the year. Compare and contrast that with government messaging at its daily coronavirus briefings back in May — when Hancock made “download the app” one of the key slogans.

Michael Veale, a lecturer in digital rights and regulation at UCL — who has been involved in the development of the DP3T decentralized contacts tracing standard, which influenced Apple and Google’s choice of API — welcomed the UK’s decision to ditch a centralized app architecture but questioned why the government has wasted so much time.

“This is a welcome, if a heavily and unnecessarily delayed, move by NHSX,” Veale told TechCrunch. “The Google-Apple system in a way is home-grown: Originating with research at a large consortium of universities led by Switzerland and including UCL in the UK. NHSX has no end of options and no reasonable excuse to not get the app out quickly now. Germany and Switzerland both have high quality open source code that can be easily adapted. The NHS England app will now be compatible with Northern Ireland, the Republic of Ireland, and also the many destinations for holidaymakers in and out of the UK.”

NHSX relayed our request for comment on the switch to a decentralized system and the new timeframe for an app launch to the Department of Health and Social Care — but the department had not responded to us at the time of publication.

Earlier this week the BBC reported that a former Apple executive, Simon Thompson, was taking charge of the delayed app project — while the two lead managers, NHSX’s Matthew Gould and Geraint Lewis, were reported to be stepping back.

Government briefings to the press today have included suggestions that app testers on the Isle of Wight told it they were not comfortable receiving COVID-19 notifications via text message — and that the human touch of a phone call is preferred.

However none of the European countries that have already deployed contacts tracing apps has promoted the software as a one-stop panacea for tackling COVID-19. Rather tracing apps are intended to supplement manual contacts tracing methods — the latter involving the use of trained humans making phone calls to people who have been diagnosed with COVID-19 to ask who they might have been in contact with over the infectious period.

Even with major resource put into manual contacts tracing, apps — which use Bluetooth signals to estimate proximity between smartphone users in order to calculate virus exposure risk — could still play an important role by, for example, being able to trace strangers sitting near an infected person on public transport.
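For a sense of why the decentralized ('Gapple'/DP3T-style) approach avoids a central database of contacts, here is a heavily simplified sketch of the matching idea: phones broadcast rotating random identifiers over Bluetooth, and each device later checks, locally, whether anything it heard matches identifiers published for diagnosed users. This is an illustration of the concept only, not the real protocol.

```python
# Simplified sketch of decentralized exposure matching (not the actual
# Apple/Google Exposure Notification protocol).
import secrets

def new_rolling_id():
    # Real systems derive these from daily keys and rotate them every few minutes;
    # random bytes are enough to illustrate the matching step.
    return secrets.token_bytes(16)

# Identifiers my phone heard nearby over the last two weeks.
observed_ids = {new_rolling_id() for _ in range(1000)}

# Identifiers published for users who later reported a positive diagnosis
# (seeded with a few overlaps so the demo finds matches).
infected_ids = set(list(observed_ids)[:3]) | {new_rolling_id() for _ in range(500)}

# Matching happens on-device: no central server learns who met whom.
matches = observed_ids & infected_ids
print(f"Possible exposures: {len(matches)}")
```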

Zoom U-turns on no e2e encryption for free users

In a major security U-turn, videoconferencing platform Zoom has said it will, after all, offer end-to-end encryption to all users — including those who do not pay to use its service.

The caveat is that free users must provide certain “additional” pieces of information for verification purposes (such as a phone number that can be verified via a text message) before being allowed to use e2e encryption — which Zoom says is a necessary check so it can “prevent and fight abuse” on its platform. However it’s a major step up from the prior offer of ‘no e2e unless you pay us’.

“We are grateful to those who have provided their input on our E2EE design, both technical and philosophical,” Zoom writes in a blog update today. “We encourage everyone to continue to share their views throughout this complex, ongoing process.”

The company faced a storm of criticism earlier this month after Bloomberg reported comments by CEO Eric Yuan, who said it did not intend to provide e2e encryption for non-paying users because it wanted to be able to work with law enforcement.

Security and privacy experts waded in to blast the stance. One notable critic of the position was cryptography expert Matthew Green — whose name you’ll find listed on Zoom’s e2e encryption design white paper.

“Once the precedent is set that E2E encryption is too ‘dangerous’ to hand to the masses, the genie is out of the bottle. And once corporate America accepts that private communications are too politically risky to deploy, it’s going to be hard to put it back,” Green warned in a nuanced Twitter thread.

Since the e2e encryption storm, Zoom has faced another scandal — this time related to privacy and censorship, after it admitted shutting down a number of Chinese activists’ accounts at the request of the Chinese government. So the company may have stumbled upon another good reason for reversing its stance — given it’s a lot more difficult to censor content you can’t see.

Explaining the shift in its blog post, Zoom says only that it follows a period of engagement “with civil liberties organizations, our CISO council, child safety advocates, encryption experts, government representatives, our own users, and others”.

“We have also explored new technologies to enable us to offer E2EE to all tiers of users,” it adds.

Its blog briefly discusses how non-paying users will be able to gain access to e2e encryption, with Zoom writing: “Free/Basic users seeking access to E2EE will participate in a one-time process that will prompt the user for additional pieces of information, such as verifying a phone number via a text message.”

“Many leading companies perform similar steps on account creation to reduce the mass creation of abusive accounts. We are confident that by implementing risk-based authentication, in combination with our current mix of tools — including our Report a User function — we can continue to prevent and fight abuse,” it adds.
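Zoom hasn't published the mechanics of that one-time check, but the general pattern is familiar: generate a short random code, deliver it out of band (for example by SMS), and compare what the user types back. A generic sketch, not Zoom's implementation:

```python
# Generic one-time phone verification sketch (illustrative only).
import secrets

def issue_code():
    return f"{secrets.randbelow(1_000_000):06d}"  # six-digit code

def verify(expected, submitted):
    # Constant-time comparison avoids leaking how many digits matched.
    return secrets.compare_digest(expected, submitted)

code = issue_code()
# send_sms(phone_number, code)  # delivery via an SMS provider, not shown
print(verify(code, code))       # True
print(verify(code, "000000"))   # almost certainly False
```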

Certain countries require an ID check to purchase a SIM card so Zoom’s verification provision may make it impossible for some users to access e2e encryption without leaving an identity trail which state agencies could unpick.

Per Zoom’s blog post, a beta of the e2e encryption implementation will kick off in July. The platform’s default encryption remains AES 256 GCM in the meantime.
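For reference, AES-256-GCM is an authenticated cipher that both encrypts and integrity-protects each message. A minimal sketch of what it looks like using the widely used Python 'cryptography' package (nothing here reflects Zoom's actual implementation or key management):

```python
# AES-256-GCM round trip with the 'cryptography' package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # 96-bit nonce; must never repeat for the same key

ciphertext = aesgcm.encrypt(nonce, b"meeting media frame", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"meeting media frame"
```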

The forthcoming e2e encryption will not be switched on by default — but rather offered as an option. Zoom says this is because it limits some meeting functionality (“such as the ability to include traditional PSTN phone lines or SIP/H.323 hardware conference room systems”).

“Hosts will toggle E2EE on or off on a per-meeting basis,” it further notes, adding that account administrators will also have the ability to enable and disable E2EE at the account and group level.

Today the company also released a v2 update of its e2e encryption design — posting the spec to Github.