Germany’s antitrust watchdog questions the future of behavioral advertising

Germany’s antitrust watchdog made some interesting comments about the programmatic advertising market yesterday — comments which question the appropriateness and sustainability of the (still dominant) tracking-and-profiling ad targeting business model.

In a statement accompanying publication of a sectoral report (the full report is here in German), the Federal Cartel Office’s (FCO) president, Andreas Mundt, wrote:

We should seriously ask ourselves whether we want to have virtually ‘transparent’ internet users only because we are supposed to buy certain products or services. What appears to be particularly problematic from a competition point of view is that only a very small number of companies have access to large amounts of a variety of current, first-hand user data. This imbalance must always be taken into account in the case of potential interventions.

The FCO’s review of the programmatic (non-search) ad sector found insufficient transparency for market players other than Alphabet, the dominant force — which it observes is “present at almost all levels of the value chain of non-search online advertising and has an extraordinarily strong market position with regard to practically all relevant services”.

This is a more typical observation for an antitrust regulator to make (and something other competition watchdogs have previously called out in their own reviews of the online ad market, such as the UK’s CMA in its 2020 market study). Google’s ad business also remains under antitrust probe in a number of European markets.

But the FCO’s wider questioning of the programmatic industry’s surveillance of web users implies the German regulator is uneasy about the idea of imposing what might be viewed as a ‘classic’ competition remedy to address the imbalance it’s identified around data access — say by boosting less dominant players’ visibility of web users’ info, such as by requiring Alphabet to share first-party data with rivals so they’re not at such a disadvantage vs its high-dimension view (a step which would of course mean even more surveillance and even less privacy for web users).

The FCO’s review of programmatic advertising is also notable in calling out insufficient transparency for web users whose information is subject to ad surveillance:

The situation is also intransparent from the users’ perspective. Their data form the most important basis for programmatic advertising. However, it is hardly possible for users to assess what happens to their data, who receives them and how they are used. Several legal policy proposals have been made for restricting data collection and the use of data for advertising purposes. The Bundeskartellamt [FCO] has looked into this issue from a competition law perspective.

A line in the executive summary of the report goes on to posit that “from a competition point of view consideration can thus be given to the question as to whether, overall, it would seem advisable to move away from such a system of data-driven advertising” — on account of what the FCO finds to be systemic complexity, opacity and privacy hostility in the programmatic ad market (which it also points out hinges upon “highly detailed personal profiles [being] created, which include highly sensitive information, solely for the purpose of facilitating advertising”).

This nuanced view of a situation where competition and privacy might — through an overly simplistic lens — be perceived as being in tension (i.e. if greater privacy for users results in greater market power for the handful of giants which have amassed tonnes of first party data) is not so surprising when you consider the FCO’s pioneering case against Facebook’s ‘superprofiling’, where the regulator has taken the view that the social network’s exploitative abuse of privacy is an antitrust abuse too. (That case is subject to a referral to the EU’s top court where a ruling remains pending.)

The FCO has also been investigating Google’s terms and conditions for processing user data since May 2021 — when it announced it would look into whether the adtech giant gives users sufficient choice over data processing or makes use of its services conditional on users agreeing to its processing of their information.

Earlier this year — in January — it issued a preliminary statement of objections against Google after finding it does not offer users sufficient choice. It also said Google enjoys a strategic advantage over other ad businesses due to “established access to relevant data gathered from a large number of different services” — signalling it intends to require Google to offer users more choice over its processing.

Final enforcement of that case is still pending but the FCO’s review of the programmatic ad industry seems likely to bolster its preliminary take that Google dominates off the back of an unfair data advantage — and may lead it on to a conclusion that fixing that component might be the least harmful way to go about rebalancing a competitively skewed ad market that’s simultaneously and systematically exploiting consumer privacy.

What the FCO’s perspective on the programmatic ad industry’s problems might mean for other market interventions remains to be seen. But it writes of the insights gleaned from the sectoral inquiry: “There will be a particular focus on the large digital companies which play a key role in the online advertising sector.”

It also specifies that its review of the programmatic ad sector will inform “current and future proceedings” — which is relevant to the aforementioned Google/Alphabet data processing probe; and also to an open investigation of Apple’s app privacy framework (the latter has been accused by the ad industry of being anti-competitive yet clearly aligns with user privacy since it empowers iOS users to deny tracking requests from third party apps which Apple mandates must ask users if they want to be tracked).

Imposing specific measures on Apple that demand it apply the same up-front request standard to its own tracking of iOS users might be one way to smooth the competition concerns raised over its App Tracking Transparency feature without having to row back on the privacy protections it delivers for users.

In Germany, both Google/Alphabet and Apple have been subject to a special abuse control regime since the FCO confirmed (in January 2022; and April 2023 respectively) they meet the requirement of having paramount significance across digital markets. This designation allows the competition regulator to intervene proactively on their businesses when it suspects anti-competitive behavior, rather than having to first investigate and establish a breach before being able to intervene.

Returning to the tangled issue of tracking, it’s also notable that the German antitrust regulator is zooming out for a big picture view: Its remarks highlight wider digital policy proposals that are bringing in new limits on use of data for tracking ads, such as the EU’s incoming Digital Markets Act and Digital Services Act — which suggests it’s holding out hope for regionally rebooted (joint) digital enforcement to achieve structural reform of the surveillance ad industry.

So far, no single regulator — of any stripe — has been able to unpick the tangled issue of tracking and its systemic toxicity. But maybe, just maybe, joint working that chips away at the industrial data complex from multiple angles will finally turn the tanker in a way that works for a competition agenda and web users too.

Germany’s antitrust watchdog questions the future of behavioral advertising by Natasha Lomas originally published on TechCrunch

Amazon settles with FTC for $25M after ‘flouting’ kids’ privacy and deletion requests

Amazon will pay the FTC a $25 million penalty, as well as “overhaul its deletion practices and implement stringent privacy safeguards”, to settle charges that it violated the Children’s Online Privacy Protection Act by holding onto kids’ data to spruce up its AI.

Amazon’s voice interface Alexa has been in use in homes across the globe for years, and any parent who has one knows that kids love to play with it, make it tell jokes, even use it for its intended purpose, whatever that is. In fact it was so obviously useful to kids who can’t write or have disabilities that the FTC relaxed COPPA rules to accommodate reasonable usage: Certain service-specific analysis of kids’ data, like transcription, was allowed as long as the data was not retained any longer than reasonably necessary.

It seems that Amazon may have taken a rather expansive view on the “reasonably necessary” timescale, keeping kids’ speech data more or less forever. As the FTC puts it:

Amazon retained children’s recordings indefinitely — unless a parent requested that this information be deleted, according to the complaint. And even when a parent sought to delete that information, the FTC said, Amazon failed to delete transcripts of what kids said from all its databases.

Geolocation data was also not deleted, a problem the company “repeatedly failed to fix.”

This has been going on for years — the FTC alleges that Amazon knew about it as early as 2018 but didn’t take action until September of the next year, after the agency gave them a helpful nudge.

That kind of timing usually indicates that a company would have continued with this practice forever. Apparently, due to “faulty fixes and process fiascos,” some of those practices did continue until 2022!

You may well ask, what is the point of having a bunch of recordings of kids talking to Alexa? Well, if you plan on having your voice interface talk to kids a lot, it sure helps to have a secret database of audio interactions that you can train your machine learning models on. That’s how the FTC said Amazon justified its retention of this data.

FTC Commissioners Bedoya and Slaughter, as well as Chair Khan, wrote a statement accompanying the settlement proposal and complaint to particularly call out this one point:

The Commission alleges that Amazon kept kids’ data indefinitely to further refine its voice recognition algorithm. Amazon is not alone in apparently seeking to amass data to refine its machine learning models; right now, with the advent of large language models, the tech industry as a whole is sprinting to do the same.

Today’s settlement sends a message to all those companies: Machine learning is no excuse to break the law. Claims from businesses that data must be indefinitely retained to improve algorithms do not override legal bans on indefinite retention of data. The data you use to improve your algorithms must be lawfully collected and lawfully retained. Companies would do well to heed this lesson.

So today we have the $25 million fine, which is of course less than negligible for a company of Amazon’s size. It’s complying with the other provisions of the proposed order that will likely give the company a headache. The FTC says the order would:

  • Prohibit Amazon from using geolocation, voice information and children’s voice information subject to consumers’ deletion requests for the creation or improvement of any data product.
  • Require the company to delete inactive Alexa accounts of children.
  • Require Amazon to notify users about the FTC-DOJ action against the company.
  • Require Amazon to notify users of its retention and deletion practices and controls.
  • Prohibit Amazon from misrepresenting its privacy policies related to geolocation, voice and children’s voice information.
  • Mandate the creation and implementation of a privacy program related to the company’s use of geolocation information.

This settlement and action are entirely independent of the FTC’s other one announced today, with Amazon subsidiary Ring. There is a certain common thread of “failing to implement basic privacy and security protections,” though.

In a statement, Amazon said that “While we disagree with the FTC’s claims and deny violating the law, this settlement puts the matter behind us.” They also promise to “remove child profiles that have been inactive for more than 18 months,” which seems incredibly long to retain that data. I’ve followed up with questions about that duration and whether the data will be used for ML training and will update if I hear back.

Amazon settles with FTC for $25M after ‘flouting’ kids’ privacy and deletion requests by Devin Coldewey originally published on TechCrunch

Amazon’s Ring to pay $5.8M after staff and contractors caught snooping on customer videos, FTC says

Ring, the Amazon-owned maker of video surveillance devices, will pay $5.8 million over claims brought by the Federal Trade Commission that Ring employees and contractors had broad and unrestricted access to customers’ videos for years.

The settlement was filed in the U.S. District Court for the District of Columbia on Wednesday. The FTC confirmed the settlement a short time later. News of the settlement was first reported by Reuters.

The FTC said that Ring employees and contractors were able to view, download, and transfer customers’ sensitive video data for their own purposes as a result of “dangerously overbroad access and lax attitude toward privacy and security.”

According to the FTC’s complaint, Ring gave “every employee — as well as hundreds of Ukraine-based third-party contractors — full access to every customer video, regardless of whether the employee or contractor actually needed that access to perform his or her job function.” The FTC also said that Ring staff and contractors “could also readily download any customer’s videos and then view, share, or disclose those videos at will.”

The FTC alleged that on at least two occasions, Ring employees improperly accessed the private Ring videos of women. In one of the cases, the FTC said, the employee’s spying went on for months, undetected by Ring.

According to a draft notice of the notification Ring plans to send affected customers, the individuals are no longer employed by Ring.

The government’s complaint also said that Ring failed to respond to multiple reports of credential stuffing — where hackers take user credentials stolen in one data breach and try them against accounts on other sites in the hope the same credentials were reused. The FTC said Ring allowed the use of easily guessable passwords — as simple as “password” and “12345678” — which made brute-forcing accounts easier, and that Ring failed to act sooner to prevent account hacks.
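
For illustration, here’s a minimal sketch of the two basic defenses the complaint says were missing: a deny-list check against trivially guessable passwords, and a failure-rate throttle of the kind that blunts credential stuffing. This is a generic pattern, not Ring’s actual code; the names and thresholds are hypothetical.

```python
import time
from collections import defaultdict

# Tiny deny-list; real deployments check candidate passwords against large
# corpora of breached credentials (the FTC called out "password" and
# "12345678" specifically).
COMMON_PASSWORDS = {"password", "12345678", "qwerty", "123456"}

MAX_FAILURES = 5       # hypothetical lockout threshold per account
WINDOW_SECONDS = 300   # hypothetical sliding window for counting failures

_failures = defaultdict(list)  # account -> timestamps of failed logins


def password_acceptable(password: str) -> bool:
    """Reject passwords that are short or on the deny-list."""
    return len(password) >= 12 and password.lower() not in COMMON_PASSWORDS


def should_lock_account(account: str) -> bool:
    """Record a failed login and report whether the account should be
    temporarily locked. Bursts of failures driven by reused breach data
    are the signature of credential stuffing."""
    now = time.time()
    recent = [t for t in _failures[account] if now - t < WINDOW_SECONDS]
    recent.append(now)
    _failures[account] = recent
    return len(recent) > MAX_FAILURES
```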

The FTC claims more than 55,000 U.S. customers had their accounts compromised between January 2019 and March 2020 as a result. In more than a dozen cases, hackers maintained access to hacked accounts for more than a month.

Ring subsequently made two-factor authentication mandatory for users in February 2020. Ring introduced end-to-end encryption in 2021, allowing users to encrypt their doorbell videos so that no one other than themselves — including Ring — can view them.

Along with paying $5.8 million to settle the FTC’s allegations, Ring also agreed to establish and maintain a data security program with regular assessments for the next 20 years, as well as to disclose what access its employees and contractors have to customer data.

Ring spokesperson Emma Daniels said in an emailed statement to TechCrunch that Ring disagreed with the FTC’s allegations and denied violating the law.

Amazon’s Ring to pay $5.8M after staff and contractors caught snooping on customer videos, FTC says by Zack Whittaker originally published on TechCrunch

WhatsApp is working on introducing usernames to the app

WhatsApp has long relied on phone numbers as the sole identifier for accounts: users need a phone number to create an account, and anyone in an individual or a group chat can see your phone number. This might be changing, as WhatsApp is working on introducing usernames.

The latest beta version of WhatsApp’s app suggests that the company could introduce this feature in the future, according to a report from WABetaInfo. The report noted that the username section will be visible on the Profile page in Settings.

Image Credits: WABetaInfo

Separately, TechCrunch was able to confirm WABetaInfo’s report. We looked at the code of the latest Android app and found references to the username field.

Image Credits: TechCrunch
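
For readers curious how such checks are typically done: an Android APK is just a ZIP archive, so its bundled resources and bytecode can be scanned for a string of interest. A minimal sketch of that generic approach (not necessarily the method used here; the file name in the usage comment is hypothetical):

```python
import zipfile

def find_string_references(apk_path: str, needle: bytes) -> list[str]:
    """Return the files inside an APK whose raw bytes contain `needle`.

    An APK is a ZIP archive, and both compiled resources and DEX bytecode
    live inside it, so a simple byte scan surfaces candidate references
    to a feature string.
    """
    with zipfile.ZipFile(apk_path) as apk:
        return [name for name in apk.namelist() if needle in apk.read(name)]

# Hypothetical usage: surface files that mention a "username" feature.
# print(find_string_references("app-release.apk", b"username"))
```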

It’s not clear what process the Meta-owned app might introduce for users to pick a username, given that 2 billion people currently use the messaging app. The company didn’t comment on the story and didn’t share any details about the feature.

Telegram, a competing messaging app, already gives users the ability to hide their phone number and show a username instead. Last year, Telegram also launched auctions for premium usernames based on the TON blockchain. It will be interesting to see how WhatsApp approaches claims for premium usernames and how it plans to protect them.

At the moment, WhatsApp users in groups and communities can see each other’s phone numbers. When this feature rolls out, the app will most likely let people hide their phone numbers from people who are not in their contact book.

Earlier this month, WhatsApp rolled out a new privacy feature that allows users to hide and lock individual conversations. These conversations can only be unlocked with a device’s biometric authentication or password.

WhatsApp is working on introducing usernames to the app by Ivan Mehta originally published on TechCrunch

TikTok’s lead privacy regulator in Europe takes heat from MEPs

MEPs in the European Parliament took the opportunity of a rare in-person appearance by Ireland’s data protection commissioner, Helen Dixon, to criticize the bloc’s lead privacy regulator for most of Big Tech over how long it’s taking to investigate the video-sharing social media platform TikTok.

This concern is the latest expression of wider worries about enforcement of the General Data Protection Regulation (GDPR) not keeping pace with usage of major digital platforms.

The Irish Data Protection Commission (DPC) opened two inquiries into aspects of TikTok’s business back in September 2021: One focused on its handling of children’s data; and another looking at data transfers to China, where the platform’s parent company is based. Neither has yet concluded, although the kids’ data inquiry looks relatively advanced along the GDPR enforcement rail at this stage — with Ireland having submitted it to other EU regulators for review in September last year.

Per Dixon, a final decision on the TikTok kids’ data case should arrive later this year.

The UK’s data protection watchdog — which now operates outside the EU — has taken some enforcement action in this area already, putting out a provisional finding last fall that TikTok misused children’s data. The ICO went on to issue its final decision on the investigation last month, when it levied a fine of around $15.7M. (Albeit, it’s worth noting the ICO shrank the size of the fine imposed and narrowed the scope of the final decision, dropping a provisional finding that TikTok had unlawfully used special category data and blaming resource limitations for downgrading the scope of its investigation.)

In remarks to the European Parliament’s civil liberties committee (LIBE) today, which had invited Ireland’s data protection commissioner to talk about TikTok specifically, Dixon signalled an expectation that a decision on the TikTok children’s data probe would be coming this year, making a reference to the company as she told MEPs: “2023 is going to be an even bigger year for GDPR enforcement on foot of DPC large scale investigations.”

Other large scale cases she suggested will result in decisions being handed down this year include a very long-running probe of (TechCrunch’s parent company) Yahoo (née Oath), which was opened by the DPC back in August 2019 — and which she noted is also currently at the Article 60 stage.

She added that there are “many further large scale inquiries travelling closely behind” without offering any detail on which cases she was referring to.

Plenty of Big Tech investigations remain undecided by Ireland — not least major probes into Google’s adtech (opened May 2019) and location tracking (February 2020), to name two. (The former has led to the DPC being sued for inaction.) Neither case merited a name-check by Dixon today, so presumably — and luckily for Google — they aren’t on the slate for completion this year.

Ireland holds an outsized enforcement role for the GDPR on Big Tech owing to how many multinational tech firms choose to locate their regional headquarters in the country (which also offers a corporate tax rate that undercuts those applied by many other EU Member States). Hence parliamentarians were keen to hear from Dixon and to get her to respond to concerns that enforcement of the regulation isn’t holding platform giants to account in any kind of effective timeframe.

One thing was clear from today’s performance: Ireland’s data protection commissioner did not come to appease her critics. Instead Dixon devoted a large chunk of the time allocated to her for opening remarks to mounting a robust defence of the DPC’s “busy GDPR enforcement”, as she couched it — rejecting attacks on its enforcement record by claiming, contrary to years of critical analysis (by rights groups such as noyb, BEUC and the Irish Council for Civil Liberties), that its legal analysis and infringement findings are “generally accepted in all cases” by fellow regulators who review its draft decisions.

“Differences between the DPC and its fellow supervisory authorities [are] largely confined to marginal issues around the fringes,” she also argued — taking another swipe at what she couched as a “narrative promulgated by some commentators that in many of the cross border cases in which high value fines were levied the DPC was forced to take tougher enforcement action by its fellow supervisory authorities across the EU” that she claimed is “inaccurate”.

Back on the day’s topic of TikTok, she gave MEPs a status update on the data transfers decision — revealing that “a preliminary draft of the draft decision” is now with the company to make its “final submissions”. The GDPR’s procedural track means Ireland must submit its draft decision to other concerned data protection authorities for review (and the chance to raise objections). So there could still be considerable mileage before a final decision lands in this inquiry.

Dixon did not indicate how long it would take the TikTok data transfers inquiry to progress to the next step (aka Article 60), which fires up a cooperation mechanism baked into the GDPR that can itself add many more months to investigation timelines. But it’s worth noting the DPC is trailing a little behind its own recent expectation for the draft decision timeline — back in November, it told TechCrunch it expected to send a draft decision to Article 60 in the first quarter of 2023.

Exports of European users’ data to so-called third countries (outside the bloc), which lack a high level data adequacy agreement with the EU, have been under increased scrutiny since a landmark ruling by the Court of Justice back in July 2020. At that time, as well as striking down a flagship EU-US data transfer deal, EU judges made it clear data protection authorities must scrutinize use of another mechanism, called Standard Contractual Clauses, for transfers to third countries on a case-by-case basis — meaning no such data export could be assumed as safe.

And, just yesterday, a major GDPR data transfer decision did finally emerge out of Ireland — possibly offering a taster of the sort of enforcement that could be coming down the pipe for TikTok’s data transfers in the EU — with Facebook being found to have infringed requirements that Europeans’ information be protected to the same standard as under EU law when it’s taken to the US.

Facebook’s parent company, Meta, was ordered to suspend unlawful data flows within six months and also issued with a record penalty of €1.2 billion for systematic breaches of the rulebook. Meta, meanwhile, has said it will appeal the decision and seek a stay on the implementation of the suspension order.

It’s anyone’s guess when such a decision might land for TikTok’s data transfers to China — a location where digital surveillance concerns are certainly no less alive than they are for the US — but MEP Moritz Körner, of the Free Democratic Party, was one of several LIBE committee MEPs taking issue with the length of time it’s taking for the GDPR to be enforced against another data-mining, data transferring adtech giant.

“It’s good to hear today that you are in the final stage of your [TikTok] investigation but more than four years have gone by!” he emphasized in questions to the Irish commissioner. “And this is an app which millions of our citizens are using — including children and young people… So my question would be does data protection in Europe move quickly enough and what has happened over the past four years?”

Pirate party MEP, Patrick Breyer, had even more pointed remarks for Dixon. He kicked off by calling out her refusal to meet the committee last year — when she had reportedly objected to being asked to appear at a session alongside privacy campaigner, Max Schrems, who had a live legal action open against the DPC related to its enforcement procedures of Meta’s data transfers — which he suggested would have been the appropriate forum for her defence of the DPC’s enforcement record, not a hearing on TikTok specifically. He then went on to hit out at the narrow scoping of the DPC’s investigations into TikTok’s operations — raising broader questions than the regulator is apparently inquiring into — such as over the legality of TikTok’s tracking and profiling of users.

“Hearing that what you are investigating in relation to TikTok is only children’s data and data transfers to China — this addresses only a fraction of what is being criticised and debated about the service and this app,” he argued. “For one thing using TikTok comes with pervasive first party and third party tracking of our every action or every click based on forced consent, which is not necessary for using the service and for providing it. This pervasive tracking has been found to be both a risk to our privacy but also to national security in the case of certain officials. And do you consider this consent freely given and valid?”

“Secondly, the app reportedly uses excessive permissions and device information collection, including hourly checking of our location, device mapping, external storage access, access to our contacts, third party apps data collection, none of which is necessary for the app to function. Will you act to protect us from these violations of our privacy?” Breyer continued. “If you remain as inactive as this, as you have been for years, you know this will continue to call into question your competence for [overseeing] the social media companies in Ireland and it will result in more outright bans [by governments on services like TikTok] which is not in the interest of industry either. So I call on you to expand your investigations and to speed them up and cover all these issues of pervasive tracking and excessive surveillance.”

Another MEP, Karolin Braunsberger-Reinhold of the Christian Democratic Union, also touched on the issue of TikTok bans — such as one imposed by the Indian government, back in 2020 — but with apparently less concern about the prospect of a regional ban on the platform than Breyer, since she wanted to know what Dixon was considering “beyond fines”. “Data protection is very important in the European Union so why are we allowing TikTok to send data back to China when we have no information on how that data is being dealt with once it goes back there?” she wondered.

MEPs on the LIBE committee also queried Dixon about what had happened with a TikTok task force set up at the start of 2020, by the European Data Protection Board (EDPB), following earlier concerns raised about privacy and security issues linked to its data collection practices.

Such task forces are typically focused on harmonizing the application of the GDPR in cases where a data processor does not have a main establishment in an EU Member State. But TikTok went on — by December 2020 — to be granted main establishment status in Ireland, which meant data protection investigations would now be funnelled via Ireland as its lead authority for the GDPR. This revised oversight structure most likely led to a disbanding of the EDPB TikTok task force, since the GDPR contains an established mechanism for cooperation, although Dixon did not provide an obvious response to MEPs on this point.

The clear message from the LIBE committee to Ireland today, in its capacity as TikTok’s lead privacy regulator in the EU, boiled down to a simple question: Where is the enforcement?

For her part, Dixon sought to dodge the latest flurry of critical barbs — rejecting accusations (and insinuations) of inaction by arguing that the length of time the DPC is taking to work through the TikTok inquiries is necessary given how much material it’s examining.

She also sought to characterize cross-border GDPR enforcement as “shared” decision-making, as a result of the structure imposed through the regulation’s one-stop-shop mechanism looping concerned authorities into reviewing a lead authority’s draft decisions — also referring to this process as “decision making by committee”. Her point there being that group decision-making inevitably takes longer.

“I do want to assure you we’re working as quickly as we can,” she told MEPs at one point during the session. “We have well over 200 expert staff at the Irish Data Protection Commission. We’re recruiting more. We’re conscious of turning these decisions around… We transmitted that draft decision last October to our concerned authorities. It will be almost a year later now before we have the final decision. That is the form of decision making by committee that the GDPR lays down and it does take time.”

In the case of the TikTok data transfers probe, Dixon leant on the requirement handed down by the CJEU that regulators examine legality on a case by case basis as justifying what she implied was a careful, fact-sifting approach.

“The Court of Justice has obliged us to look at the specific circumstances and the factual backdrop of any specific set of transfers before we can conclude and so while to some people the answers all seem obvious that’s not the process in which we must engage. We must step, case by case, through on the specifics. And that’s what we have done now and submitted a preliminary draft of our decision to TikTok for submissions,” she argued.

“As I said in my opening statement, we’re far from inactive,” she also asserted, before mounting another fierce defence of the DPC’s record — claiming: “We are by any measure the most active enforcer of data protection law in the EU. Two thirds of all enforcement delivered across the EU/EEA and UK last year was delivered by the Irish Data Protection Commission and that’s verifiable facts.”

Responding to another question from the committee, regarding what sanctions the DPC is looking at if it finds TikTok has infringed the GDPR, Dixon emphasized it has “a whole range of corrective measures up to bans on data processing that we can apply”, not just fines.

“In any investigation we’re open minded in relation to what the applicable and effective measures will be when we conclude an investigation with infringement — so, I can assure you, where we have considered in the [TikTok] case that we’ve already concluded — the children’s data that’s now with our fellow authorities — we have looked across the range of measures available to us in relation to that investigation,” she told MEPs.

The issue of fines that the DPC may (or may not) choose to impose for GDPR breaches is particularly topical — given it’s emerged as a key detail in the aforementioned Meta data transfers enforcement. 

In the Meta transfers case, Dixon and the DPC had not wanted to levy any financial penalty on the tech giant for a multi-year breach affecting hundreds of millions of Europeans. However it was forced to include a fine in the final decision in order to implement a binding decision by the EDPB — which had ordered it to impose a fine of between 20% and 100% of the maximum possible under the GDPR (which is 4% of annual revenue). In the event Ireland opted for the lower bar — setting the penalty at around 1% of Meta’s annual revenue. (Since 20% of the 4% ceiling works out to 0.8% of turnover, a penalty of around 1% sits near the bottom of the ordered range.)

In her remarks to MEPs today Dixon defended the DPC’s decision not to propose fining Meta for its illegal transfers — however she offered no substantial argument for why it took such a position.

“As I’m sure you’ll be aware, the DPC respectfully disagreed with the proposal to apply a fine. In our view, a meaningful change, if it was to be delivered, in this area required the suspension of transfers. No administrative fine could guarantee the kind of change required,” she told MEPs, offering a straw man argument in defence of wanting to let Meta go without any financial sanction, which seems to imply there’s an either/or equation for GDPR enforcement — i.e. corrective measures or punishment — when, very clearly, the regulation allows for both (and, indeed, intends that enforcement is dissuasive against future law breaking). Hence the EDPB’s binding decision requiring Ireland to impose a substantial fine on Meta for such a systematic and lengthy infringement of the GDPR.

Instead of elaborating on the rationale for choosing not to fine Meta, Dixon switched gears into a swipe of her own — directed at the EDPB — by observing that “all” the Board’s binding decisions in cases in which the DPC had acted as lead supervisory authority are subject to annulment proceedings before the Court of Justice of the European Union, before adding (somewhat acidly): “As such the CJEU, rather than the EDPB, will have the final say on the correct interpretation and application of the law.”

Social democrat MEP, Birgit Sippel, picked Dixon up on what she implied was a repeated lack of clarity emanating from the DPC on fines — flagging a lack of “clear answers” from the Irish commissioner in her remarks to MEPs today on why it had failed to propose any penalty for Meta’s data transfers.

There was no comeback from Dixon to that point.

In her questioning, Sippel also wondered whether TikTok was cooperating with the DPC’s investigations — or whether the DPC had adequate access to information from it in order to conduct proper oversight. On this Dixon said the company is cooperating with the two investigations, while noting TikTok has “from time to time” asked for extensions to submission deadlines, which she implied were typically granted as she considered them merited on account of the volume of material involved — but which provides another small glimpse to put flesh on the bones of GDPR enforcement timeline creep.

Asked for a response to views expressed by MEPs during the LIBE committee hearing, a TikTok spokesperson told us: “We welcome the Data Protection Commissioner’s acknowledgement that TikTok has been cooperative and responsive with the regulator. As a company we are readily available to meet with lawmakers and regulators to address any concerns.”

In a press release about Dixon’s appearance in front of the committee today, the DPC wrote:

The Data Protection Commission (“the DPC”) was today delighted to be invited to make its first address before the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (“the LIBE Committee”). The address coincided with the five-year anniversary of the application of the General Data Protection Regulation (“the GDPR”) and covered a wide-range of topics, including the extensive enforcement work of the DPC over the last five years and the progress of some of the large-scale investigations it currently has on-hand; in particular those relating to TikTok.

Today’s address by Commissioner for Data Protection, Helen Dixon, built on the ongoing positive engagement between the DPC and the LIBE Committee, following the visit of a LIBE delegation to the DPC’s offices last September. Welcoming the chance to highlight the successful enforcement work of the DPC to date, Commissioner Dixon reflected on the constructive and useful nature of engagement with the LIBE Committee “as we each, from our respective remits, pursue the drive for fair and effective enforcement of data protection law and protection of fundamental rights.”

Commissioner Dixon was also pleased to answer questions from the MEPs in attendance and provide additional clarity as to the nature and scale of the DPC’s work.

TikTok’s lead privacy regulator in Europe takes heat from MEPs by Natasha Lomas originally published on TechCrunch

Proton launches family subscription plan for privacy app suite starting at $20 per month

Privacy-centric software maker Proton has launched a new family plan starting at $19.99 (€19.99) per month, giving up to six family members access to its entire application suite.

The move comes as the Swiss company looks to bolster its paying subscriber base and lock more people into its growing ecosystem of products.

Founded in 2014, Proton initially arrived on the scene with an encrypted Gmail alternative called Proton Mail, but the company has added a slew of additional encryption-focused apps to the mix, including a VPN, calendar, cloud storage, and — as of last month — a password manager.

Proton’s pricing ethos so far has been centered around the freemium model, with a free tier enabling access to all its products but with some limitations in place (e.g. reduced cloud storage). The company also offers a premium plan for individuals that starts at $7.99 a month, which removes these restrictions.

A family affair

Now, Proton is adopting a tried-and-tested approach in the technology realm to get more people using its products. Part of this involves appealing to those already using them, the idea being that if an already privacy-conscious family member is paying $7.99 a month then they’d be happy to part with an extra $12 to get five more people on. But of course, the family plan may also be enough to convince someone already considering subscribing to go the whole nine yards and get everyone in their house on board in one fell swoop.

“A family plan has been among our most sought-after services,” Proton product lead David Dudok de Wit said in a statement. “As a parent, I am eager to teach my children the proper ways to approach email, cloud storage, and internet security from the beginning. I know I am not alone in this.”

Proton family plan Image Credits: Proton

Under the family plan, users get all the usual features that come with Proton’s products including the promise of end-to-end encryption for emails, files, passwords, and events. The company has also multiplied the amount of storage that’s available under the individual plan by six, meaning that family members get to share 3TB of storage across all products, with a bonus 20GB thrown in for good measure each year.

While each $19.99 monthly subscription covers six family members, it also commits them to 24 months — meaning they’re not actually paying monthly; that’s just what it works out to per month if they pay the roughly $480 up front for two years’ coverage (see the quick calculation below). Alternatively, they can pay the equivalent of $23.99 per month for a 12-month commitment, or, if they really just want to pay month-to-month, they can select the $29.99 plan that can be cancelled at any time.
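
To make the math concrete, here’s a quick calculation of what each billing tier actually costs up front, using the prices above:

```python
# Proton family plan pricing: the advertised monthly rate depends on
# how long a period you commit to (and pay for) up front.
plans = {
    "24-month commitment": (19.99, 24),   # advertised rate, months billed
    "12-month commitment": (23.99, 12),
    "month-to-month": (29.99, 1),
}

for name, (monthly_rate, months_billed) in plans.items():
    total = monthly_rate * months_billed
    print(f"{name}: ${monthly_rate}/mo -> ${total:,.2f} billed up front")

# 24-month commitment: $19.99/mo -> $479.76 billed up front (the ~$480 above)
# 12-month commitment: $23.99/mo -> $287.88 billed up front
# month-to-month: $29.99/mo -> $29.99 billed up front
```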

The plan includes access to all Proton products including Proton Mail, Proton VPN, Proton Drive, and Proton Calendar, though access to the Proton Pass password manager will be limited for now while it remains a beta product.

Proton launches family subscription plan for privacy app suite starting at $20 per month by Paul Sawers originally published on TechCrunch

Discord is testing parental controls that allow for monitoring of friends and servers

New usernames aren’t the only change coming to the popular chat app Discord, now used by 150 million people every month. The company is also testing a suite of parental controls that would allow for increased oversight of Discord’s youngest users, TechCrunch has learned and Discord confirmed. In a live test running in Discord’s iOS app in the U.S., the company introduced a new “Family Center” feature, where parents will be able to configure tools that allow them to see the names and avatars of their teen’s recently added friends, the servers the teen has joined or participated in, and the names and avatars of users they’ve directly messaged or engaged with in group chats.

However, Discord clarifies in an informational screen that parents will not be able to view the content of their teen’s messages or calls, in order to respect their privacy.

This approach, which walks a fine line between the need for parental oversight and a minor’s right to privacy, is similar to how Snapchat implemented parental controls in its app last year. Like Discord’s system, Snapchat only allows parents insights into who their teen is talking to and friending, not what they’ve typed or the media they’ve shared.

Users who are part of the Discord test will see the new Family Center hub linked under the app’s User Settings section, below the Privacy & Safety and Profiles sections. From here, parents are able to read an overview of the Family Center features and click a button to “Get Started” when they’re ready to set things up.

Image Credits: Discord screenshot via Watchful.ai

Discord explains on this screen that it “built Family Center to provide you with more context on how your teen uses Discord so you can work together on building positive online behaviors.” It then details the various parental controls, which will allow them to see who their teen is chatting with and friending, and which servers they join and participate in.

Similar to TikTok, parents can scan a QR code provided by the teen to put the account under their supervision.

Image Credits: Discord screenshot via Watchful.ai
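
Discord hasn’t detailed how the pairing works under the hood, but QR-based account-linking flows like this one typically encode a short-lived, single-use token that the other device redeems. A hedged sketch of that general pattern (all names and the TTL here are hypothetical, not Discord’s API):

```python
import secrets
import time

PAIRING_TTL_SECONDS = 600   # hypothetical: pairing codes expire quickly
_pending_tokens = {}        # token -> (teen_account_id, issued_at)


def issue_pairing_token(teen_account_id: str) -> str:
    """Mint the single-use token the teen's app would encode as a QR code."""
    token = secrets.token_urlsafe(16)
    _pending_tokens[token] = (teen_account_id, time.time())
    return token


def redeem_pairing_token(token: str, parent_account_id: str):
    """Redeem a scanned token on the parent's device.

    Tokens are single-use (popped on redemption) and expire after a short
    TTL; returns the supervision link, or None if the token is invalid.
    """
    entry = _pending_tokens.pop(token, None)
    if entry is None:
        return None
    teen_account_id, issued_at = entry
    if time.time() - issued_at > PAIRING_TTL_SECONDS:
        return None
    return {"parent": parent_account_id, "supervises": teen_account_id}
```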

The screenshots were discovered by app intelligence firm Watchful.ai. In addition, a handful of users had posted their own screenshots on Twitter when they encountered the new experience earlier this year, or had simply remarked on the feature when coming across it in the app.

We reached out to Discord for comment on the tests, showing them some screenshots from the test. The company confirmed the development but didn’t offer a firm commitment as to when or if the parental controls feature would actually roll out.

“We’re always working to improve our platform and keep users safe, particularly our younger users,” a Discord spokesperson said. “We’ll let you know if and when something comes of this work,” they added.

The company declined to answer our questions about the reach of the tests, or whether it planned to offer the tools outside the U.S., among other things.

Image Credits: Discord screenshot via Watchful.ai

Though Discord today is regularly used by a younger, Gen Z crowd, thanks to its roots in being a home for gamers, it’s often left out of the larger conversation around the harms to teens caused by social media use. Meanwhile, as execs from Facebook, Twitter, Instagram, Snap, YouTube, and TikTok have had to testify before Congress on this topic, Discord has been able to sit on the sidelines.

Hoping to get ahead of expected regulations, most major social media companies have since rolled out parental control features for their apps, if they didn’t already offer such tools. YouTube and Instagram announced plans for parental controls in 2021, and Instagram finally launched them in 2022 with other Meta apps to follow. Snapchat also rolled out parental controls in 2022. And TikTok, which already had parental controls before the Congressional inquiries began, has been beefing them up in recent months.

But with the lack of regulation at the federal level, several U.S. states have begun passing their own laws around social media use, including new restrictions on social media apps in states like Utah and Montana, as well as broader legislation to protect minors, like California’s Age-Appropriate Design Code Act, which goes into effect next year.

Discord, so far, has flown under the radar, despite warnings from child safety experts, law enforcement, and the media about the dangers the app poses to minors, amid reports that groomers and sexual predators have been using the service to target children. The non-profit National Center on Sexual Exploitation even added Discord to its “Dirty Dozen List” over its failures to “adequately address the sexually exploitative content and activity on its platform,” it says.

The organization specifically calls out Discord’s lack of meaningful age verification technology, insufficient moderation, and inadequate safety settings.

Today, Discord offers its users access to an online safety center that guides users and parents on how to manage a safe Discord account or server, but it doesn’t go so far as to actually provide parents with tools to monitor their child’s use of the service or block them from joining servers or communicating with unknown persons. The new parental controls won’t address the latter two concerns, but they are at least an acknowledgment that some sort of parental controls are needed.

This is a shift from Discord’s earlier position on the matter, as the company told The Wall Street Journal in early 2021 its philosophy was to put users first, not their parents, and said it wasn’t planning on adding such a feature.

Discord is testing parental controls that allow for monitoring of friends and servers by Sarah Perez originally published on TechCrunch

Amazon’s palm-scanning payment tech will now be able to verify ages, too

Amazon One, the retailer’s palm-scanning payment technology, is now gaining new functionality with the addition of age verification services. The company announced today that customers using Amazon One devices will be able to buy adult beverages — like beers at a sports event — just by hovering their palm over the Amazon One device. The first venue to support this feature will be Coors Field, home of the Colorado Rockies MLB team. The technology will roll out to additional venues in the months ahead, Amazon says.

First introduced in 2020, Amazon’s biometric payment technology works by creating a unique palm print for each customer, which Amazon associates with a credit card the customer inserts in the sign-up kiosk upon initial setup, or with a card the customer has configured online in advance. If the customer has an Amazon account, that is also associated with their Amazon One profile information. These palm print images are encrypted and stored in a secure area in the AWS cloud, built for Amazon One, with restricted employee access.

To use the system, customers hold their hand over a reader on the device that identifies multiple aspects of their palm, like lines, ridges, and vein patterns, to make the identification.

By combining customer biometrics with payment card information and Amazon accounts, Amazon has created a tool that could be used to serve highly personalized ads, offers, and recommendations over time, if the company chooses.

In a FAQ, however, Amazon claims it “does not use or sell customer information for advertising, marketing, or any other reasons.”

Amazon has argued that palm reading is a more private form of biometrics because you can’t determine someone’s identity just by looking at their palm images. However, the company isn’t just storing palm images — it’s creating a customer database that matches palm images with other information.

After initially becoming available at Amazon’s own retail locations, like Amazon Go stores and Whole Foods, the Amazon One system has since expanded to various sports stadiums, entertainment venues, convenience stores, and travel retailers like Hudson and CREWS at several U.S. airports, in addition to Panera Bread through a partnership announced in March.

To now make the system capable of identifying someone’s age, customers can opt to upload their ID to the Amazon One website. To do so, customers will go to one.amazon.com, upload photos of both the front and back of their government-issued ID, like a driver’s license, then snap a selfie to verify it’s them. Amazon says it doesn’t store these IDs after verification, which is performed by an unnamed identity verification provider certified to ISO 27001, an international standard for information security management.

After their age is confirmed, Amazon One customers will be able to purchase their adult beverages without having to pull out their ID from their pocket — they can be ID’d and pay for the drinks with a scan of their palm. When verified, the bartender will see a “21+” message appear on the screen along with the customer’s selfie, which they can then use to match to the customer placing the order. The customer can then scan their palm again to pay for the purchase.
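
Amazon hasn’t published implementation details, but the flow described above maps onto a simple data model: after verification, a profile carries only an over-21 flag and the enrollment selfie, and that’s all the point-of-sale lookup surfaces. A hypothetical sketch, assuming nothing beyond what’s described here (the type and field names are invented):

```python
from dataclasses import dataclass

@dataclass
class VerifiedProfile:
    """Hypothetical record of a customer after ID verification.

    Per Amazon's description, the uploaded ID document is not retained;
    only the verification outcome (an over-21 flag) and the selfie are
    kept, alongside a pointer to the encrypted palm template.
    """
    palm_template_id: str  # opaque reference to the encrypted palm data
    over_21: bool
    selfie_url: str


def point_of_sale_view(profile: VerifiedProfile) -> dict:
    """What the bartender's screen shows after a palm scan: a badge and
    the selfie for a visual match; no name, birth date, or ID document."""
    return {
        "badge": "21+" if profile.over_21 else "ID required",
        "selfie": profile.selfie_url,
    }
```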

Retailers may appreciate this technology because it could make lines move quicker, upping their potential sales and revenue. But it’s worth remembering that Amazon has not entered this market without secondary goals in mind, beyond simply speeding up checkouts.

The technology has been the subject of privacy concerns since its debut, which already led one early adopter to abandon their plans to use the readers, after receiving pressure from consumer privacy and advocacy groups. Denver Arts and Venues had been planning to leverage Amazon One for ticketless entry at Red Rocks Amphitheater — a big win for Amazon — but it cut ties with the retailer after the publication of an open letter that suggested Amazon could share palmprint data with government agencies and that it could be stolen from the cloud by hackers.

A group of U.S. senators also pressed Amazon for more information about its plans with customer biometrics shortly after the technology’s launch. Plus, Amazon is facing a class action lawsuit over failure to provide proper notice under a NYC biometric surveillance law, related to the use of its Amazon One readers at Amazon Go stores.

Despite these concerns, Amazon is pressing forward with its Amazon One expansion efforts, having recently landed the Panera deal, for instance.

“We are excited to team up with Amazon One to launch their age verification feature at Coors Field,” said Alison Birdwell, president and CEO of Aramark Sports + Entertainment, a provider of food and beverage and retail locations in various North American sports venues, including Coors Field. “Consumer preferences are ever evolving and demand for faster service models continues to grow. Amazon One’s latest capability directly responds to those demands by delivering a new level of convenience to the age verification process, shortening the time it takes to make an alcohol purchase, and improving the overall guest experience at Coors Field,” she said, in a statement.

Amazon One with age verification is available now at Coors Field, Amazon says.

Amazon’s palm-scanning payment tech will now be able to verify ages, too by Sarah Perez originally published on TechCrunch

Meta ordered to suspend Facebook EU data flows as it’s hit with €1.2BN privacy fine

It’s finally happened: Meta, the company formerly known as Facebook, has been hit with a formal suspension order requiring it to stop exporting European Union user data to the US for processing.

The European Data Protection Board (EDPB) confirmed today that Meta has been fined €1.2 billion (close to $1.3BN) — which looks to be a record sum for a penalty under the bloc’s General Data Protection Regulation (GDPR). (The prior record goes to Amazon, which was stung for $887M for misusing customers’ data for ad targeting back in 2021.)

Meta’s sanction is for breaching conditions set out in the pan-EU regulation governing transfers of personal data to so-called third countries (in this case the US) without ensuring adequate protections for people’s information.

European judges have previously found US surveillance programs to conflict with EU privacy rights.

In a press release announcing today’s decision the EDPB’s chair, Andrea Jelinek, said:

The EDPB found that Meta IE’s [Ireland’s] infringement is very serious since it concerns transfers that are systematic, repetitive and continuous. Facebook has millions of users in Europe, so the volume of personal data transferred is massive. The unprecedented fine is a strong signal to organisations that serious infringements have far-reaching consequences.

At the time of writing the Irish Data Protection Commission (DPC), the body responsible for implementing the EDPB’s binding decision, had not provided comment. (But its final decision can be found here.)

Meta quickly put out a blog post with its response to the suspension order in which it confirmed it will appeal. It also sought to blame the issue on a conflict between EU and US law, rather than its own privacy practices, with Nick Clegg, president of global affairs, and Jennifer Newstead, chief legal officer, writing:

We are appealing these decisions and will immediately seek a stay with the courts who can pause the implementation deadlines, given the harm that these orders would cause, including to the millions of people who use Facebook every day.

Back in April the adtech giant warned investors that around 10% of its global ad revenue would be at risk were an EU data flows suspension to actually be implemented.

Asked ahead of the decision what preparations it’s made for a possible suspension, Meta spokesman Matthew Pollard declined to provide “extra guidance”. Instead he pointed back to an earlier statement in which the company claimed the case relates to a “historic conflict of EU and US law” which it suggested is in the process of being resolved by EU and US lawmakers who are working on a new transatlantic data transfer arrangement. However the rebooted transatlantic data framework Pollard referred to has yet to be adopted.

It’s also worth noting that while today’s fine and suspension order is limited to Facebook, Meta is far from the only company affected by the ongoing legal uncertainty attached to EU-US data transfers.

The decision by the Irish DPC flows from a complaint made against Facebook’s Irish subsidiary almost a decade ago, by privacy campaigner Max Schrems — who has been a vocal critic of Meta’s lead data protection regulator in the EU, accusing the Irish privacy regulator of taking an intentionally long and winding path in order to frustrate effective enforcement of the bloc’s rulebook.

Schrems argues that the only sure-fire way to fix the EU-US data flows doom loop is for the US to grasp the nettle and reform its surveillance practices.

Responding to today’s order in a statement (via his privacy rights not-for-profit, noyb), Schrems said: “We are happy to see this decision after ten years of litigation. The fine could have been much higher, given that the maximum fine is more than 4 billion and Meta has knowingly broken the law to make a profit for ten years. Unless US surveillance laws get fixed, Meta will have to fundamentally restructure its systems.”

The DPC, which oversees multiple tech giants whose regional headquarters are sited in Ireland, routinely rejects criticism that its actions create a bottleneck for enforcement of the GDPR, arguing its processes reflect what’s necessary to perform due diligence on complex cross-border cases. It also often seeks to deflect blame for delays in reaching decisions onto other supervisory authorities that raise objections to its draft decisions.

However it’s notable that objections to DPC draft decisions against Big Tech have led to stronger enforcement being imposed via a cooperation mechanism baked into the GDPR — such as in earlier decisions against Meta and Twitter. This suggests the Irish regulator is routinely under-enforcing the GDPR on the most powerful digital platforms, and doing so in a way that creates additional problems for the efficient functioning of the regulation, since it strings out the enforcement process. (In the Facebook data flows case, for example, objections were raised to the DPC’s draft decision last August — so it’s taken some nine months to get from that draft to a final decision and suspension order now.) And, well, if you string enforcement out for long enough you may allow enough time for the goalposts to be moved politically so that enforcement never actually needs to happen. Which makes a mockery of citizens’ rights.

As noted above, with today’s decision, the DPC is actually implementing a binding decision taken by the EDPB last month in order to settle ongoing disagreement over Ireland’s draft decision — so much of the substance of what’s being ordered on Meta today comes, not from Dublin, but from the bloc’s supervisor body for privacy regulators.

This apparently includes the existence of a financial penalty at all — since the Board notes it instructed the DPC to amend its draft to include a penalty, writing:

Given the seriousness of the infringement, the EDPB found that the starting point for calculation of the fine should be between 20% and 100% of the applicable legal maximum. The EDPB also instructed the IE DPA to order Meta IE to bring processing operations into compliance with Chapter V GDPR, by ceasing the unlawful processing, including storage, in the U.S. of personal data of European users transferred in violation of the GDPR, within 6 months after notification of the IE SA’s final decision.

The applicable legal maximum penalty that Meta can be sanctioned with under the GDPR is 4% of its global annual turnover. And since its full-year turnover last year was $116.61BN, the maximum it could have been fined here would have been over $4BN. So the Irish regulator has opted to fine Meta considerably less than it could have (but still a lot more than it wanted to).
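For a rough sense of the numbers at play, here's a back-of-the-envelope sketch — a hypothetical Python illustration plugging the turnover figure above into the 4% cap and the EDPB's 20%-100% starting range, not anything from the decision itself (note it mixes dollar-denominated turnover with the euro-denominated fine, with no exchange-rate conversion applied):

```python
# Back-of-the-envelope check of the GDPR fine ceiling, using the figures cited above.
# Caveat: Meta reports turnover in USD while the fine was levied in EUR, so treat
# the comparison as indicative only.

annual_turnover = 116.61e9  # Meta's full-year turnover last year, in USD
cap_rate = 0.04             # GDPR ceiling: 4% of global annual turnover

legal_maximum = annual_turnover * cap_rate  # ~$4.66BN
edpb_floor = 0.2 * legal_maximum            # EDPB's starting point: 20% of the maximum

print(f"Legal maximum:       ${legal_maximum / 1e9:.2f}BN")
print(f"EDPB starting range: ${edpb_floor / 1e9:.2f}BN to ${legal_maximum / 1e9:.2f}BN")
# The €1.2BN fine actually issued sits toward the bottom of that range.
```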

In further public remarks today, Schrems once again hit out at the DPC’s approach — accusing the regulator of essentially working to thwart enforcement of the GDPR. “It took us ten years of litigation against the Irish DPC to get to this result. We had to bring three procedures against the DPC and risked millions of procedural costs. The Irish regulator has done everything to avoid this decision but was consistently overturned by the European Courts and institutions. It is kind of absurd that the record fine will go to Ireland — the EU Member State that did everything to ensure that this fine is not issued,” he said.

So what happens next for Facebook in Europe?

Nothing immediately. The decision provides a transition period of around six months before Meta must suspend data flows — so the service will continue to work in the meantime.

Meta has also said it will appeal and appears to be seeking a stay on implementation while it takes its arguments back to court.

Schrems has previously suggested the company will — ultimately — need to federate Facebook’s infrastructure in order to be able to offer a service to European users which does not require exporting their data to the US for processing. But, in the near term, Meta looks likely to be able to avoid having to suspend EU-US data flows since the transition period in today’s decision should buy it enough time for the aforementioned transatlantic data transfer deal to be adopted. 

Earlier reports have suggested the European Commission could adopt the new EU-US data deal in July, although it has declined to provide a date for this since it says multiple stakeholders are involved in the process.

Such a timeline would mean Meta gets a new escape hatch to avoid having to suspend Facebook's service in the EU; and it can keep relying on this high level mechanism for as long as it stands.

If that’s how the next section of this torturous complaint saga plays out it will mean that a case against Facebook’s illegal data transfers which dates back almost ten years at this point will, once again, be left twisting in the wind — raising questions about whether it’s really possible for Europeans to exercise legal rights set out in the GDPR? (And, indeed, whether deep-pocketed tech giants, whose ranks are packed with well-paid lawyers and lobbyists, can be regulated at all?)

At the same time, legal challenges to the new transatlantic data transfer deal are expected and Schrems gives the EU-US pact a tiny chance of surviving legal review.

So Meta and other US giants whose business models hinge on exporting data for processing over the pond could find themselves back in this doom loop soon enough.

“Meta plans to rely on the new deal for transfers going forward but this is likely not a permanent fix,” Schrems suggested. “In my view, the new deal has maybe a ten percent chance of not being killed by the CJEU. Unless US surveillance laws get fixed, Meta will likely have to keep EU data in the EU.”


How did we get here?

How indeed.

Schrems was acting in the wake of concerns kicked up back in 2013 after NSA whistleblower Edward Snowden spilled the beans on how US government surveillance programs (such as PRISM) were hoovering up user data from social media websites, among myriad revelations about the extent of mass surveillance practices in what came to be known as the Snowden disclosures.

That’s relevant because European law enshrines protections for personal data which Schrems suspected were being put at risk by US laws prioritizing national security and handing intelligence agencies sweeping powers to snoop on Internet users’ information.

His original complaints actually targeted a number of tech giants over alleged compliance with US intelligence agencies' PRISM data collection program. But in July 2013 two of the complaints, against Apple and Facebook, were flicked away by Ireland's data protection authority, which accepted their registration with an EU-US data adequacy scheme that was in place at the time (Safe Harbor) and argued this resolved any surveillance-based concerns.

Schrems appealed the regulator's decision to the Irish High Court, which made a referral to the Court of Justice of the EU (CJEU) — and that led, in October 2015, to the bloc's top court striking down Safe Harbor: The judges ruled the data transfer deal was unsafe, finding it did not provide the required essential equivalence with the EU's data protection regime for data exports to the US. That ruling came to be known as Schrems I. (Hang in there for Schrems II.)

A couple of months after the CJEU dropped its bombshell, Schrems refiled his complaint against Facebook in Ireland — asking the data protection authority to suspend Facebook’s EU-US data flows in light of what he dubbed the “very clear” judgement on the risk posed by US government surveillance programs.

At the same time, the toppling of Safe Harbor had led to a scramble by EU and US lawmakers to negotiate a replacement data transfer deal, since it wasn’t just Facebook that was implicated — thousands of businesses were affected by the legal uncertainty clouding data exports. And in a remarkably short time the two sides agreed and adopted (by July 2016) the EU-US Privacy Shield, as the replacement adequacy deal was (somewhat unfortunately) christened.

However, as befits a rush job, Privacy Shield was dogged from the get-go by concerns it was essentially just a sticking plaster atop a legal schism. In customary no-nonsense fashion, Schrems offered a more visceral description — branding it “lipstick on a pig”. And, well, to cut a long story short, the CJEU agreed — smashing the Shield to smithereens, in July 2020, in another landmark strike over the core clash between US surveillance law and EU privacy rights.

Thing is, Schrems had not actually challenged Privacy Shield directly. Rather, he'd updated his complaint in Ireland against Facebook's data exports to target the use of another, longer-standing data transfer mechanism, known as Standard Contractual Clauses (SCCs) — asking the Irish DPA to suspend Facebook's use of SCCs.

The Irish watchdog again declined to do so. Instead it opted for the equivalent of saying 'hold my beer': choosing to go to court to challenge the (general) legality of SCCs, as it said it was now concerned that the entire mechanism was unsafe.

The DPA’s legal challenge to SCCs essentially parked Schrems’ complaint against Facebook’s data flows while action switched to assessment of the whole data transfer mechanism. But, once again, this legal twist ended up blowing the doors off, as the Irish High Court went on to query whether Privacy Shield itself was bona fide in a new referral to the CJEU (April 2018). And, well, you should know what comes next: A couple of years on the answer from the bloc’s top judges was that this second claim of adequacy was deficient and so the mechanism was now also defunct. RIP Privacy Shield. (A sequential result known as Schrems II.)

Ah, but Facebook was using SCCs, not Privacy Shield, to authorize these data transfers, I hear you cry! Thing is, while the CJEU did not invalidate SCCs, the judges made it clear that where they are being used to export data to a so-called “third country” (such as the US) EU data protection authorities have a duty to pay attention to what's going on and, crucially, step in when they suspect people's data is not adequately protected in the risky location… So the clear message from the CJEU was that enforcement must happen. Add to that the fact that the court had invalidated Privacy Shield over safety concerns flowing from US surveillance practices, and it was clear the country to which Facebook routinely takes data had been marked as unsafe.

This is a particular problem for Facebook since the US adtech giant's business model hinges on access to user data (it tracks and profiles web users in order to target them with behavioral ads), meaning it was not in a position to apply extra safeguards, such as end-to-end encryption, that might otherwise raise the level of protection on Europeans' data exported to the US.

The upshot of all this was that the issue was now impossible for Ireland to ignore — with US data adequacy vaporised and the alternative mechanism Facebook was relying on under CJEU-ordered scrutiny — and so, in short order (September 2020), news leaked to the press that the Irish DPA had sent Facebook's parent, Meta, a preliminary order to suspend data flows.

This then kicked off a flurry of fresh legal challenges as Meta obtained a stay on the order and sought to challenge it in court. But these expected legal twists were complicated by yet another odd decision by the Irish regulator — which, at this time, elected to open a second (new) procedure while pausing the original one (i.e. Schrems’ long-standing complaint).

Schrems cried foul, suspecting fresh delaying tactics, and went on to obtain a judicial review of the DPA’s procedures too — which led, in January 2021, to the Irish DPA agreeing to swiftly finalize his complaint.

In May of the same year the Irish courts also booted out Meta's legal challenge to the DPC's order — lifting the stay on the regulator's ability to proceed with the decision-making process. So Ireland now had, er, no excuses not to get on with the job of deciding on Schrems' complaint. This put the saga back onto the standard GDPR enforcement rails, with the DPC working through its investigation over the best part of a year to reach a revised preliminary decision (February 2022), which it then passed to fellow EU DPAs for review.

Objections to its draft decision were duly raised by August 2022. EU authorities subsequently failed to reach agreement among themselves — meaning it was left to the European Data Protection Board (EDPB) to take a binding decision (April 2023).

That then gave the Irish regulator a hard deadline of one month to produce a final decision — implementing the EDPB’s binding decision. Which means the meat of what’s been decided today can’t be credited to Dublin.

EU-US Data Privacy Framework as Meta escape hatch

That’s not all either. As noted above, there’s another salient detail that looks set to influence what happens in the near term with Meta’s data flows (and potentially lead to a Schrems III in the coming years): Over the past few years EU and US lawmakers have been holding talks aimed at trying to find a way to revive US adequacy following the CJEU’s torpedoing of Privacy Shield by, they claim, tackling the concerns raised by the judges.

At the time of writing, work to put this replacement data transfer deal in place is still ongoing — with adoption of the arrangement slated as possible during the summer — but the path to arrive at the new deal has already proven far more challenging than last time.

Political agreement on the aforementioned EU-U.S. Data Privacy Framework (DPF) was announced in March 2022; followed, in October, by US president Joe Biden signing an executive order on it; and, in December, by the Commission announcing a draft decision on the framework. But, as noted above, the EU's adoption process has not yet completed so there's no over-arching high level framework in place for Meta to latch onto quite yet.

If/when the DPF does get adopted by the EU it’s a safe bet Meta will sign up and seek to use it as a new rubberstamp for its EU-US data flows. So this is one near-term route for Facebook to avoid having to act on the suspension order regardless of what happens with its legal appeal. (And, indeed, the company’s blog post today highlights its expectations for smooth running under the incoming framework, with Meta writing: “We are pleased that the DPC also confirmed in its decision that there will be no suspension of the transfers or other action required of Meta, such as a requirement to delete EU data subjects’ data once the underlying conflict of law has been resolved. This will mean that if the DPF comes into effect before the implementation deadlines expire, our services can continue as they do today without any disruption or impact on users.”)

But the legality of the DPF is almost certain to be challenged (if not by Schrems himself, there are plenty of digital rights groups who might want to wade in). And if that happens it's certainly possible the CJEU will, once again, find a lack of necessary safeguards — given we have not seen substantial reforms of US surveillance law since the judges last checked in, while various concerns have been raised by data protection experts about the reworked proposal.

The Commission claims the two sides have worked hard to address the CJEU's concerns — pointing, for example, to the inclusion of new language they suggest will limit US surveillance agencies' activity to what is "necessary and proportionate", along with a promise of enhanced oversight and, for individual redress, a so-called "Data Protection Review Court".

However, on the flip side, data protection experts query whether US spooks will really be working to the same definition of necessity and proportionality as EU law upholds, not least as some bulk collection remains possible under the framework. They also argue that redress for individuals still looks difficult, since decisions by the body that's being framed as a court will be secret (nor, they suggest, is it as strictly independent from political influence as an actual legal court).

And, as we've reported, Schrems himself remains sceptical. "We don't think that the current framework is going to work," he told journalists in a recent briefing ahead of the fifth anniversary of the GDPR being applied. "We think that's going to go back to the Court of Justice and will be another element that just generates a lot of tension between the different layers [of enforcement]." He also suggested that a comparison between the executive order Biden signed for the new arrangement and the earlier presidential policy directive, from president Obama, that was reviewed by the Court of Justice when it considered the legality of Privacy Shield doesn't show a lot of change, suggesting they're "pretty much identical".

“There are some new elements in the new technical order, also some improvements. But most of the stuff that is floated in press releases and public debate, that is new is actually not new. But has been there before,” he also argued. “So we oftentimes don’t really understand how that should change much but we’ll go back to the courts the next year or two, and we’ll then probably get to Court of Justice and we’ll have a third decision that will either tell us that everything is not cool and wonderful and we can move on or that we just are going to be stuck in that for longer.”

So, if you listen to the high level mood music, the framework contains substantial revisions that fix the legal schism. But we'll only really know if that's true if/when the CJEU gets to weigh in again in a few years' time.

That means it's certainly possible that EU-US adequacy could come unstuck again in the not too distant future. And that would fire up Facebook's data transfer problem once again — thanks to the intrusive reality of US surveillance practices and the sweeping licence afforded to matters of national security over the pond, which trample all over foreign (European) concepts of privacy and data protection.

The EU adequacy requirement of essential equivalence with the bloc's data protection regime represents a hard stop where a fudge won't be able to stick forever. (And, well, the prospect of Donald Trump being elected US president again, in 2024, adds extra precariousness to DPF survival calculations.) But, well, that's a story for the months and years ahead.

Ireland’s GDPR enforcement “bottleneck”

Returning to Schrems' near decade-long battle for a decision on his complaint: as a case study in delayed data protection enforcement this one is hard to beat. Indeed, it may represent a record for how long an individual has waited (at least if you ignore all the complaints where no action was taken by the regulator at all).

But it’s important to emphasize that the Irish DPC’s record on GDPR enforcement is under more general attack than the slings and arrows it’s received as a result of this particularly tortuous data flows saga. (Which even Schrems sounds like he’d quite like to see the back of at this point.)

Analysis of five years of the GDPR, put out earlier this month by the Irish Council for Civil Liberties (ICCL), dubs the enforcement situation a "crisis" — warning: "Europe's failure to enforce the GDPR exposes everyone to acute hazard in the digital age."

And the ICCL points the finger of blame squarely at Ireland’s DPC. 

"Ireland continues to be the bottleneck of enforcement: It delivers few draft decisions on major cross-border cases, and when it does eventually do so other European enforcers routinely vote by majority to force it to take tougher enforcement action," the report argues — before pointing out that: "Uniquely, 75% of Ireland's GDPR investigation decisions in major EU cases were overruled by majority vote of its European counterparts at the EDPB, who demand tougher enforcement action."

The ICCL also highlights that nearly all (87%) of cross-border GDPR complaints to Ireland repeatedly involve the same handful of Big Tech companies: Google, Meta (Facebook, Instagram, WhatsApp), Apple, TikTok, and Microsoft. But it says many complaints against these tech giants never even get a full investigation — thereby depriving complainants of the ability to exercise their rights.

The analysis points out that the Irish DPC chooses “amicable resolution” to conclude the vast majority (83%) of cross-border complaints it receives (citing the oversight body’s own statistics) — further noting: “Using amicable resolution for repeat offenders, or for matters likely to impact many people, contravenes European Data Protection Board guidelines.”

The DPC was contacted for a response to the analysis but declined comment.

The ICCL has called for the Commission to step in and tackle the GDPR enforcement crisis, warning: "The Commission's forthcoming proposal to improve how DPAs cooperate may help but much more is required to fix GDPR enforcement. The ultimate responsibility for this crisis rests with the European Commissioner for Justice, Didier Reynders. We urge him to take serious action."

Today's final decision on Facebook's data flows flopping out of Ireland, after almost a decade of tortuous procedural dilly-dallying — which, let's not forget, has claimed the scalps of not one but two high level EU-US data deals thus far — won't do anything to quell criticism of Ireland as a GDPR enforcement bottleneck (regardless of helpful press leaks last week, ahead of today's decision, seeking to frame a positive narrative for the regulator with talk of a "record" fine but no mention of the EDPB's role in mandating the enforcement).

Indeed, the lasting legacy of the Facebook data flows saga, and other painstakingly extracted DPC under-enforcements against Big Tech's systematic privacy abuses, is already writ large in the centralized Big Tech oversight role the Commission has taken on for itself under the Digital Services Act and Digital Markets Act — a development that recognizes the importance of regulating platform power for securing the future of the European project.


All that said, Ireland’s data protection authority obviously can’t carry the can for all the myriad enforcement issues attached to the GDPR.

The reality is that a patchwork of problems frustrates effective enforcement across the bloc, as you might expect with a decentralized oversight structure that spans linguistic and cultural differences across 27 Member States, plus varying opinions on how best to approach oversight of big (and very personal) concepts like privacy, which may mean very different things to different people.

Schrems' privacy rights not-for-profit, noyb, has been collating information on this patchwork of GDPR enforcement issues — which include the under-resourcing of smaller agencies and a general lack of in-house expertise to deal with digital issues; transparency problems and information black holes for complainants; cooperation issues and legal barriers frustrating cross-border complaints; and all sorts of 'creative' interpretations of complaint "handling" (meaning nothing being done about a complaint still remains a common outcome) — to name just a few of the issues it's encountered.

“The reality is we have to tell people, in many cases, you have a right to complain, but the chances are that this is not going to help you and not going to fix your problem. And that is fundamentally an issue if we say we have a fundamental right to privacy, and there are all these authorities and we pump millions of Euros into them. And the answer we have to give to people is to say you can give it a try but very likely it’s not going to help you — and that is my biggest worry after five years of the GDPR that unfortunately that’s still the answer we have to give people,” says Schrems.

However Ireland does play an outsized role in GDPR enforcement on Big Tech — which in turn has an outsized impact on web users' rights — meaning the decisions it drafts and shapes (or, indeed, elects not to take) affect hundreds of millions of European consumers. So the level of scrutiny on Dublin is well merited.

 


The government can’t seize your data — but it can buy it

When the Biden administration proposed new protections earlier this month to prevent law enforcement from demanding reproductive healthcare data from companies, it took a critical first step in protecting our personal data. But there remains a different, serious gap in data privacy that Congress needs to address.

While the Constitution prevents the government from compelling companies to turn over your sensitive data without due process, there are no laws or regulations stopping them from just buying it.

And the U.S. government has been buying private data for years.

Despite Supreme Court rulings that the government can't acquire sensitive personal data like your location without a warrant, sketchy data brokers can and do sell this information directly to the government. They're exploiting a loophole in our constitutional protections against surveillance: Because this data is sold on the open market, the government doesn't need to compel anyone to provide it. It can simply purchase it without any oversight or legal ramifications.

Congress needs to ban the government from buying up sensitive geolocation data entirely — not just prevent its seizure. So long as agencies and law enforcement can legally purchase this sensitive information from data brokers, constitutional limits on the government's ability to seize this data mean next to nothing.

The FBI, the military, and several other government and regulatory agencies are frequent customers of these data brokers both domestically and abroad. The data they purchase gives them alarming amounts of power to gather sensitive information from large groups of people.

The biggest victims of government surveillance are often marginalized communities. For example, the military purchased sensitive geolocation data gathered from a Muslim prayer app and a Muslim dating app in order to track users. The broker that sold the user geolocation data, Locate X, boasts of being “widely used” by the military, intelligence agencies and several levels of law enforcement. Immigration and Customs Enforcement (ICE) has also drawn criticism for purchasing user data in order to track migrants and skirt sanctuary laws.

Even the National Guard is taking advantage of the surveillance loophole, purchasing data to target and recruit high schoolers. And the FBI buys geolocation data to track millions of phones — all without needing a warrant.

We’ve already seen the potentially damaging consequences of private data and tracking being used by employers and landlords to the detriment of workers or tenants. Putting that information in the hands of government bodies whose decisions hold the weight of law presents an even more serious issue.

Government agencies are often resistant to disclosing how they use privately purchased data for law enforcement operations, but such purchases allow them to skirt surveillance restrictions put in place to protect our civil rights. Police departments have been circumventing facial recognition bans by going to third-party vendors for facial recognition search results. In 2018, ICE claimed to have made an arrest and deportation following a "routine traffic stop," but it had also coincidentally purchased specific cellular phone tower data that could have helped make the arrest.

To make matters worse, the data being sold to these powerful institutions may not even be accurate. In addition to being vulnerable to hacking, information from data brokers often relies on discriminatory algorithms and biases, and can still mistake your age, ethnicity and religion when creating profiles. When the government takes law enforcement action based on inaccurate data, the rights of all Americans are put at risk.

Biden’s proposed protection for reproductive health data is a step in the right direction, but it’s not nearly enough on its own. Congress must ban the government purchase of sensitive location data and address this huge loophole in data privacy — because so long as your information is up for sale, the government will keep buying it.
