Clearview AI told to stop processing UK data as ICO warns of possible fine

Controversial facial recognition company Clearview AI is facing a potential fine in the UK.

It has also been handed a provisional notice to stop further processing of UK citizens’ data and to delete any data it already holds as a result of what the Information Commissioner’s Office (ICO) described as “alleged serious breaches” of national data protection law.

The ICO has been looking into the tech company — which sells AI-powered identity matching to law enforcement and other paying customers via a facial recognition platform that it trained covertly on photos harvested from Internet sources (like social media platforms) — in a joint investigation with the Australian Information Commissioner (OAIC).
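
For readers unfamiliar with how such identity-matching systems generally work, here is a minimal, hypothetical sketch of embedding-based face matching (nearest-neighbor search over a database of face embeddings). Clearview has not published its algorithm; all names and values below are illustrative assumptions.

```python
import numpy as np

def best_match(probe_embedding, db_embeddings, names):
    """Generic embedding-based identity matching: compare a probe face
    embedding against a database of embeddings (e.g. derived from scraped
    photos) and return the closest identity by cosine similarity.
    Purely illustrative; not Clearview's actual system."""
    db = np.asarray(db_embeddings, dtype=float)
    probe = np.asarray(probe_embedding, dtype=float)
    # Cosine similarity between the probe and every database entry.
    sims = db @ probe / (np.linalg.norm(db, axis=1) * np.linalg.norm(probe))
    idx = int(np.argmax(sims))
    return names[idx], float(sims[idx])

# Hypothetical 3-dimensional embeddings, for demonstration only:
name, score = best_match([0.9, 0.1, 0.2],
                         [[0.8, 0.2, 0.1], [0.1, 0.9, 0.4]],
                         ["person_a", "person_b"])
```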

Earlier this month the OAIC issued an order for Clearview to delete data after finding it had broken national laws down under — making the ICO the laggard of the two regulators.

But today it issued the notification of a provisional intention to fine Clearview over £17 million (~$22.6M) — citing a range of suspected breaches.

Among the raft of violations the ICO suspects — following what it describes as “preliminary enquiries with Clearview AI” — are:

• failing to process people’s information fairly or in a way they expect, in line with requirements to have a valid legal basis for processing personal data and to provide adequate information to those whose data is processed;

• failing to have a process in place to prevent data being retained indefinitely;

• failing to meet the higher standards required for processing biometric data — which is considered special category data under the European standard (the GDPR) that’s transposed into UK law;

• applying problematic processes when people object to its processing of their information — such as asking for more personal data (“including photographs”) in response to such objections.

Clearview was contacted for comment on the ICO’s provisional findings.

A spokesperson sent this statement (below), attributed to its London-based attorney, Kelly Hagedorn (a partner at Jenner & Block London LLP) — who describes the ICO’s provisional finding as “factually and legally incorrect”; says Clearview is considering an appeal and “further action”; and claims the company does not do business in the UK (nor currently have any UK customers).

Here’s Clearview’s statement in full:

“The UK ICO Commissioner’s assertions are factually and legally incorrect. The company is considering an appeal and further action. Clearview AI provides publicly available information from the internet to law enforcement agencies. To be clear, Clearview AI does not do business in the UK, and does not have any UK customers at this time.”
Whether the ICO’s preliminary sanction will go the distance and turn into an actual fine and data processing cessation order against Clearview remains to be seen.

For one thing, the ICO’s notification is timed a few weeks ahead of the departure of sitting commissioner, Elizabeth Denham, who is set to be replaced by New Zealand’s privacy commissioner John Edwards in January.

So a new broom will be in charge of deciding whether the provisional findings hold up in the face of Clearview’s objections (and potential legal action).

In its statement today, the ICO is careful to note that Clearview will have the opportunity to make representations — which it says it will consider before any final decision is reached, and which it furthermore suggests may not happen until mid-2022.

Under Denham, it’s also notable that the ICO has substantially shrunk a number of the provisional penalties it handed out in relation to other breach investigations (such as those issued to British Airways and Marriott).

The ICO also settled with Facebook over the Cambridge Analytica scandal after the tech giant appealed its provisional sanction.

And while Facebook agreed to pay the ICO’s £500k fine in full in that case it did so without admitting any liability and also got the ICO to agree to sign a non-disclosure agreement over the arrangement (which has limited what the commissioner can say in public about its correspondence with Facebook). So, in all, that ended up looking like a sweet deal for Facebook — agreed to by a regulator apparently concerned at being challenged in the courts over its decision-making processes.

There is fresh complexity on the horizon around enforcement of the UK’s data protection regime too, now — in that the government is in the process of consulting on changes to national law that could see ministers reduce the protections around people’s data — such as by removing or altering requirements around transparency, fairness and what constitutes a valid legal basis for processing people’s data — as part of a claimed ‘simplification’ of the current laws.

So the ICO’s caveat on its provisional “view to fine” Clearview — which it specifies may be “subject to change or no further formal action” — looks like more than just a nod to its own recent history of final enforcements falling short of earlier convictions.

Why is it acting at all now if there’s a risk of ministers moving the goalposts? Denham may have an eye on amplifying her legacy as she departs for pastures new. Or she may hope to try to bind the hands of her successor — and limit the reformist zeal of DCMS to downgrade UK data protection — or, indeed, a little of all of the above.

In a statement, the outgoing commissioner said: “I have significant concerns that personal data was processed in a way that nobody in the UK will have expected. It is therefore only right that the ICO alerts people to the scale of this potential breach and the proposed action we’re taking. UK data protection legislation does not stop the effective use of technology to fight crime but to enjoy public trust and confidence in their products technology providers must ensure people’s legal protections are respected and complied with.”

“Clearview AI Inc’s services are no longer being offered in the UK. However, the evidence we’ve gathered and analysed suggests Clearview AI Inc were and may be continuing to process significant volumes of UK people’s information without their knowledge. We therefore want to assure the UK public that we are considering these alleged breaches and taking them very seriously,” she added.

On the investigation’s findings themselves, the regulator’s press release on its provisional view and potential fine offers only tentative conclusions, with the ICO writing: “The images in Clearview AI Inc’s database are likely to include the data of a substantial number of people from the UK and may have been gathered without people’s knowledge from publicly available information online, including social media platforms.”

It adds that it “understands that the service provided by Clearview AI Inc was used on a free trial basis by a number of UK law enforcement agencies”, further specifying that this trial “was discontinued and Clearview AI Inc’s services are no longer being offered in the UK” — without offering any details on when the tech was being used or when usage stopped.

Clearview has faced regulatory pushback elsewhere around the world too.

Earlier this year Canada’s privacy watchdog concluded its own investigation of the AI firm — finding multiple breaches of national law and also ordering it to cease processing citizens’ data.

Clearview rejected the findings — but also said it no longer offered the service to Canadian law enforcement.

Update: The company has now sent an additional statement on the ICO’s provisional findings, attributed to CEO Hoan Ton-That, in which he expresses “deep” disappointment at what he claims is a misinterpretation of the technology — and goes on to imply that Clearview AI might have been useful for UK law enforcement investigations into child sexual abuse (an area where the UK government is currently spending taxpayer money to try to encourage the development of novel detection technologies).

Here’s Ton-That’s statement [emphasis his]:

“I grew up in Australia and have long viewed the UK as an important, majestic place—one about which I have the deepest respect.  I am deeply disappointed that the UK Information Commissioner  has misinterpreted my technology and intentions. I created the consequential facial recognition technology known the world over.  My company and I have acted in the best interests of the UK and their people by assisting law enforcement in solving heinous crimes against children, seniors, and other victims of unscrupulous acts. It breaks my heart that Clearview AI has been unable to assist when receiving urgent requests from UK law enforcement agencies seeking to use this technology to investigate cases of severe sexual abuse of children in the UK. We collect only public data from the open internet and comply with all standards of privacy and law.  I am disheartened by the misinterpretation of Clearview AI’s technology to society.  I would welcome the opportunity to engage in conversation with leaders and lawmakers so the true value of this technology which has proven so essential to law enforcement can continue to make communities safe.”

Italy fines Apple and Google for ‘aggressive’ data practices

Apple and Google have been fined €10 million apiece by Italy’s competition and market authority (AGCM), which found they did not provide their users with clear enough information on commercial uses of their data — in violation of the country’s consumer code.

The regulator also accuses the pair of deploying “aggressive” practices to push users to accept the commercial processing.

Apple and Google were both contacted for a response to the AGCM’s sanction. Both said they will appeal.

Google is accused of omitting relevant information at the account creation phase and as consumers are using its services — information the regulator says should be provided in order for people to decide whether or not to consent to its use of their data for commercial ends.

The AGCM has also accused Apple of failing to immediately provide users with clear information on how it uses their information commercially when they create an Apple ID or access its digital stores, such as the App Store.

Apple’s is the rather more surprising sanction — given the company’s carefully cultivated image as a champion of consumer privacy (not to mention the premium its devices and services tend to command vs cheaper, ad-supported alternatives, such as stuff made by Google).

The Italian regulator lumps both companies’ practices together in a press release announcing the sanctions — accusing each one of being especially aggressive in pushing self-serving commercial terms on their respective users, especially at the account creation phase.

For Google, the AGCM notes that it pre-sets user acceptance of commercial processing — and also notes that the adtech giant fails to provide a clear way for users to revoke consent for these data transfers later, or otherwise change their choice after the account step has been completed.

It also takes the view that Apple’s approach denies users the ability to properly exercise choice over its commercial use of their data, with the regulator arguing the iPhone maker’s data acquisition practices and architecture essentially “condition” the consumer to accept its commercial terms.

It’s an awkward accusation for a company that splashes major marketing cash on suggesting its devices and software are superior to alternatives (such as tech made by Google) exactly because it claims to put user privacy at the core of what it does.

In a statement, Apple rejected the AGCM’s finding — writing:

“We believe the Authority’s view is wrong and will be appealing the decision. Apple has a long-standing commitment to the privacy of our users and we work incredibly hard to design products and features that protect customer data. We provide industry-leading transparency and control to all users so they can choose what information to share or not, and how it is used.”

A Google spokeswoman also disagreed with the findings, sending this statement:

“We have transparent and fair practices in order to provide our users with helpful tools and clear information about their usage. We give people simple controls to manage their information and limit the use of personal data, and we work hard to be fully compliant with the consumer protection rules. We disagree with the Authority’s decision and we will appeal.”

The full text of the AGCM’s decisions can be found here: for Apple and for Google.

The Italian regulator has had a busy few days slapping big tech: Earlier this week it issued a $230M fine (total) for Apple and Amazon over alleged collusion around the sale of Apple kit on Amazon’s Italian marketplace.

It has also been stepping up investigations of tech giants over a period of years — earlier this year it fined Facebook over similar issues with its commercial use of people’s data, while this summer it hit Google with a $123M fine related to Android Auto. It also has an open probe into Google’s display advertising business.

Other fines from the AGCM in recent years include one for Apple related to misleading iPhone users about the device’s water resistance, and another for Apple and Samsung for slowing devices.

Google agrees with UK’s CMA to deeper oversight of Privacy Sandbox

As part of an ongoing antitrust investigation into Google’s Privacy Sandbox by the UK’s competition regulator, the adtech giant has agreed to an expanded set of commitments related to oversight of its planned migration away from tracking cookies, the regulator announced today.

Google has also put out its own blog post on the revisions — which it says are intended to “underline our commitment to ensuring that the changes we make in Chrome will apply in the same way to Google’s ad tech products as to any third party, and that the Privacy Sandbox APIs will be designed, developed and implemented with regulatory oversight and input from the CMA [Competition and Markets Authority] and the ICO [Information Commissioner’s Office]”.

Google announced its intention to deprecate support for the third party tracking cookies that are used for targeting ads at individuals in its Chrome browser all the way back in 2019 — and has been working on a stack of what it claims are less intrusive alternative ad-targeting technologies (aka, the “Privacy Sandbox”) since then.

The basic idea is to shift away from ads being targeted at individuals (which is horrible for Internet users’ privacy) to targeting methods that put Internet users in interest-based buckets and serve ads to so-called “cohorts” of users (aka FLoCs), which may be less individually intrusive. However, it’s important to note that Google’s proposed alternative still has plenty of critics (the EFF, for example, has suggested it could even amplify problems like discrimination and predatory ad targeting).
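
To make the bucketing idea concrete, here is a toy, hypothetical sketch of reducing a browsing history to a shared cohort ID via a SimHash-style majority vote. FLoC used a SimHash-based approach, though Google's real implementation differed in many details; everything below is an illustrative assumption.

```python
import hashlib

def cohort_id(visited_domains, bits=8):
    """Toy SimHash-style bucketing: reduce a browsing history to a short
    cohort ID, so ads can target the bucket rather than the individual.
    Illustrative only; not Google's actual FLoC algorithm."""
    weights = [0] * bits
    for domain in visited_domains:
        digest = hashlib.sha256(domain.encode()).digest()
        for i in range(bits):
            bit = (digest[i // 8] >> (i % 8)) & 1
            weights[i] += 1 if bit else -1
    # Each output bit is a majority vote across the history, so users
    # with similar histories tend to land in the same cohort.
    return sum(1 << i for i, w in enumerate(weights) if w > 0)

# Users with overlapping browsing histories can end up sharing a cohort:
print(cohort_id(["news.example", "football.example", "recipes.example"]))
print(cohort_id(["news.example", "football.example", "gardening.example"]))
```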

And many privacy advocates would argue that pure-play contextual targeting poses the least risk to Internet users’ rights while still offering advertisers the ability to reach relevant audiences, and publishers the means to monetize their content.

Google’s Sandbox plan has attracted the loudest blow-back from advertisers and publishers, who will be directly affected by the changes — some of whom have raised concerns that the shift away from tracking cookies will simply increase Google’s market power; hence the Competition and Markets Authority (CMA) opening an antitrust investigation into the plan in January.

As part of that probe, the CMA had already secured one set of commitments from Google around how it would go about the switch, including that it would agree to halt any move to deprecate cookies if the regulator was not satisfied the transition could take place in a way that respects both competition and privacy; and agreements on self-preferencing, among others.

A market consultation on the early set of commitments drew responses from more than 40 third parties — including, TechCrunch understands, input from international regulators (some of whom are also investigating Google’s Sandbox, such as the European Commission, which opened its own probe of Google’s adtech in June).

Following that, the first set of proposed commitments has been expanded and beefed up with additional requirements (see below for a summary; and here for fuller detail from the CMA’s “Notice of intent to accept the modified commitments”).

The CMA will now consult on the expanded set — with a deadline of 5pm on December 17, 2021, to take fresh feedback.

It will then make a call on whether the beefed up bundle bakes in enough checks-and-balances to ensure that Google carries out the move away from tracking cookies with the least impact on competition and the least harm to user privacy (although it will be the UK’s ICO that’s ultimately responsible for oversight of the latter piece).

If the CMA is happy with responses to the revised commitments, it would then close the investigation and move to a new phase of active oversight, as set out in the detail of what it’s proposing to agree with Google.

A potential timeline for this to happen is early 2022 — but nothing is confirmed as yet.

Commenting in a statement, CMA CEO Andrea Coscelli said:

“We have always been clear that Google’s efforts to protect user’s privacy cannot come at the cost of reduced competition.

That’s why we have worked with the Information Commissioner’s Office, the CMA’s international counterparts and parties across this sector throughout this process to secure an outcome that works for everyone.

We welcome Google’s co-operation and are grateful to all the interested parties who engaged with us during the consultation.

If accepted, the commitments we have obtained from Google become legally binding, promoting competition in digital markets, helping to protect the ability of online publishers to raise money through advertising and safeguarding users’ privacy.”

More market reassurance

In general, the expanded commitments look intended to offer a greater level of reassurance to the market that Google will not be able to exploit loopholes in regulatory oversight of the Sandbox to undo the intended effect of addressing competition risks and privacy concerns.

Notably, Google has agreed to appoint a CMA-approved monitoring trustee — as one of the additional measures it’s suggesting to improve the provisions around reporting and compliance.

It will also dial up reporting requirements, agreeing to ensure that the CMA’s role and the regulator’s ongoing process — which the CMA now suggests should continue for a period of six years — are mentioned in its “key public announcements”; and to regular (quarterly) reporting to the CMA on how it is taking account of third party views as it continues building out the tech bundle.

Transparency around testing is also being beefed up.

On that, there have been instances in recent months where Google staffers have not been exactly forthcoming in articulating to the market the details of feedback related to the Origin Trial of its FLoC technology, for example. So it’s notable that another highlighted change requires Google to instruct its staff not to make claims to customers which contradict the commitments.

Another concern reflected in the revisions is market participants’ worry that Google could remove functionality or information before the full Privacy Sandbox changes are implemented — hence it has offered to delay enforcement of its Privacy Budget proposal and made commitments around the introduction of measures to reduce access to IP addresses.

We understand that concerns from market participants also covered Google removing other functionality — such as the user agent string — and that strengthened commitments are intended to address those wider worries too.
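
For context, the “user agent string” is the browser identifier sent with every web request, and Google's related User-Agent reduction effort freezes most of its fine-grained detail. A rough before/after illustration follows; the exact strings are assumptions and vary by browser build and platform.

```python
# Illustrative only: exact values vary by browser version and platform.
full_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
           "(KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36")

# After reduction, the fine-grained build detail that helps fingerprint
# individual users is frozen to fixed values:
reduced_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
              "(KHTML, like Gecko) Chrome/96.0.0.0 Safari/537.36")
```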

Self-preferencing requirements have also been dialled up. And the revised commitments include clarifications on the internal limits on the data that Google can use — and monitoring those elements will be a key focus for the trustee.

The period of active oversight by the CMA has also been extended vs the earlier plan — to six years from the date of any decision to accept Google’s modified commitments (up from around five).

This means that if the CMA agrees to the commitments next year they could be in place until 2028 — by which point the UK expects to have reformed the competition rules wrapping tech giants.

In its own blog post, Google condenses the revised commitments thus:

  1. Monitoring and reporting. We have offered to appoint an independent Monitoring Trustee who will have the access and technical expertise needed to ensure compliance.
  2. Testing and consultation. We have offered the CMA more extensive testing commitments, along with a more transparent process to take market feedback on the Privacy Sandbox proposals.
  3. Further clarity on our use of data. We are underscoring our commitment not to use Google first-party personal data to track users for targeting and measurement of ads shown on non-Google websites. Our commitments would also restrict the use of Chrome browsing history and Analytics data to do this on Google or non-Google websites.

As with the earlier set of pledges, it has agreed to apply the additional commitments globally — assuming the package gets accepted by the UK regulator.

So the UK regulator continues playing a key role in shaping how key web infrastructure evolves.

Google’s blog post also makes reference to an opinion published yesterday by the UK’s information commissioner — which urged the adtech industry to move away from its current tracking and profiling methods of ad targeting.

“We also support the objectives set out yesterday in the ICO’s Opinion on Data protection and privacy expectations for online advertising proposals, including the importance of supporting and developing privacy-safe advertising tools that protect people’s privacy and prevent covert tracking,” Google noted.

This summer Google announced a delay to its earlier timeline for the deprecation of tracking cookies — saying support wouldn’t start being phased out in Chrome until the second half of 2023.

There is no suggestion from the tech giant at this point of any additional delay to that timeline — assuming it gets the regulatory greenlight to go ahead.

UK privacy watchdog warns adtech the end of tracking is nigh

It’s been well over two years since the UK’s data protection watchdog warned the behavioural advertising industry it’s wildly out of control.

The ICO hasn’t done anything to stop the systematic unlawfulness of the tracking and targeting industry abusing Internet users’ personal data to try to manipulate their attention — not in terms of actually enforcing the law against offenders and stopping what digital rights campaigners have described as the biggest data breach in history.

Indeed, it’s being sued over inaction against real-time-bidding’s misuse of personal data by complainants who filed a petition on the issue all the way back in September 2018.

But today the UK’s (outgoing) information commissioner, Elizabeth Denham, published an opinion — in which she warns the industry that its old unlawful tricks simply won’t do in the future.

New methods of advertising must be compliant with a set of what she describes as “clear data protection standards” in order to safeguard people’s privacy online, she writes.

Among the data protection and privacy “expectations” Denham suggests she wants to see from the next wave of online ad technologies are:

• engineer data protection requirements by default into the design of the initiative;

• offer users the choice of receiving adverts without tracking, profiling or targeting based on personal data;

• be transparent about how and why personal data is processed across the ecosystem and who is responsible for that processing;

• articulate the specific purposes for processing personal data and demonstrate how this is fair, lawful and transparent;

• address existing privacy risks and mitigate any new privacy risks that their proposal introduces.

Denham says the goal of the opinion is to provide “further regulatory clarity” as new ad technologies are developed, further specifying that she welcomes efforts that propose to:

• move away from the current methods of online tracking and profiling practices;

• improve transparency for individuals and organisations;

• reduce existing frictions in the online experience;

• provide individuals with meaningful control and choice over the processing of device information and personal data;

• ensure valid consent is obtained where required;

• ensure there is demonstrable accountability across the supply chain.

The timing of the opinion is interesting — given an impending decision by Belgium’s data protection agency on a flagship ad industry consent gathering tool. (And current UK data protection rules share the same foundation as the rest of the EU, as the country transposed the General Data Protection Regulation into national law prior to Brexit.)

Earlier this month IAB Europe warned that it expects to be found in breach of the EU’s General Data Protection Regulation, and that its so-called ‘transparency and consent’ framework (TCF) hasn’t managed to achieve either of the things claimed on the tin.

But this is also just the latest ‘reform’ missive from the ICO to rule-breaking adtech.

And Denham is merely restating requirements that are derived from standards that already exist in UK law — and wouldn’t need reiterating had her office actually enforced the law against adtech breache(r)s. But this is the regulatory dance she has preferred.

This latest ICO salvo looks more like an attempt by the outgoing commissioner to claim credit for wider industry shifts as she prepares to leave office — such as Google’s slow-mo shift toward phasing out support for third party cookies (aka its ‘Privacy Sandbox’ proposal, which is actually a response to evolving web standards such as competing browsers baking in privacy protections; rising consumer concern about online tracking and data breaches; and a big rise in attention on digital matters from lawmakers) — than it is about actually moving the needle on unlawful tracking.

If Denham wanted to do that she could have taken actual enforcement action long ago.

Instead the ICO has opted for — at best — a partial commentary on embedded adtech’s systematic compliance problem. And, essentially, to stand by as the breach continues; and wait/hope for future compliance.


Change may be coming regardless of regulatory inaction, however.

And, notably, Google’s ‘Privacy Sandbox’ proposal (which claims ‘privacy safe’ ad targeting of cohorts of users, rather than microtargeting of individual web users) gets a significant call-out in the ICO’s remarks — with Denham’s office writing in a press release: “Currently, one of the most significant proposals in the online advertising space is the Google Privacy Sandbox, which aims to replace the use of third party cookies with alternative technologies that still enable targeted digital advertising.”

“The ICO has been working with the Competition and Markets Authority (CMA) to review how Google’s plans will safeguard people’s personal data while, at the same time, supporting the CMA’s mission of ensuring competition in digital markets,” the ICO goes on, giving a nod to ongoing regulatory oversight, led by the UK’s competition watchdog, which has the power to prevent Google’s Privacy Sandbox ever being implemented — and therefore to stop Google phasing out support for tracking cookies in Chrome — if the CMA decides the tech giant can’t do it in a way that meets competition and privacy criteria.

So this reference is also a nod to a dilution of the ICO’s own regulatory influence in a core adtech-related arena — one that’s of market-reforming scale and import.

The backstory here is that the UK government has been working on a competition reform that will bring in bespoke rules for platform giants considered to have ‘strategic market status’ (and therefore the power to damage digital competition); with a dedicated Digital Markets Unit already established and up and running within the CMA to lead the work (but which is still pending being empowered by incoming UK legislation).

So the question of what happens to ‘old school’ regulatory silos (and narrowly-focused regulatory specialisms) is a key one for our data-driven digital era.

Increased cooperation between regulators like the ICO and the CMA may give way to oversight that’s even more converged or even merged — to ensure powerful digital technologies don’t fall between regulatory cracks — and therefore that the ball isn’t so spectacularly dropped on vital issues like ad tracking in the future.

Intersectional digital oversight FTW?

As for the ICO itself, there is a further sizeable caveat in that Denham is not only on the way out (ergo her “opinion” naturally has a short shelf life) but the UK government is busy consulting on ‘reforms’ to the UK’s data protection rules.

Said reforms could see a major downgrading of domestic privacy and data protections; and even legitimize abusive ad tracking — if ministers, who seem more interested in vacuous soundbites (about removing barriers to “innovation”), end up ditching legal requirements to ask Internet users for consent to do stuff like track and profile them in the first place, per some of the proposals.

So the UK’s next information commissioner, John Edwards, may have a very different set of ‘data rules’ to apply.

And — if that’s the case — Denham will, in her roundabout way, have helped make sliding standards happen.


Europe offers tepid set of political ads transparency rules

It’s been almost a year since the EU’s executive announced it would propose rules for political ads transparency in response to concern about online microtargeting and big data techniques making mincemeat of democratic integrity and accountability.

Today it’s come out with its proposal. But frankly it doesn’t look like the wait was worth it.

The Commission’s PR claims the proposal will introduce “strict conditions for targeting and amplifying” political advertising using digital tools — including what it describes as a ban on targeting and amplification that use or infer “sensitive personal data, such as ethnic origin, religious beliefs or sexual orientation”.

However the claimed ‘ban’ does not apply if “explicit consent” is obtained from the person whose sensitive data is to be exploited to better target them with propaganda — and online ‘consents’ to ad targeting are already a total trashfire of non-compliance in the region.

So it’s not clear why the Commission believes politically vested interests hell-bent on influencing elections are going to play by a privacy rule-book that almost no online advertisers operating in the region currently do, even the ones that are only trying to get people to buy useless plastic trinkets or ‘detox’ teas.

In a Q&A offering further detail on the proposal, the Commission lists a set of requirements that it says anyone making use of political targeting and amplification will need to comply with, which includes having an internal policy on the use of such techniques; maintaining records of the targeting and use of personal data; and recording the source of said personal data — so at best it seems to be hoping to burden propagandists with the need to create and maintain a plausible paper trail.

Because it is also allowing a further carve-out for political targeting — writing: “Targeting could also be allowed in the context of legitimate activities of foundations, associations or not-for-profit bodies with a political, philosophical, religious or trade union aim, when it targets their own members.”

This is incredibly vague. A “foundation” or an “association” with a political “aim” sounds like something any campaign group or vested interest could set up — i.e. to carry on the “legitimate” activity of (behaviorally?) targeting propaganda at voters.

In short, the scope for loopholes for political microtargeting — including via the dissemination of disinformation — looks massive.

On scope, the Commission says it wants the incoming rules to apply to “ads by, for or on behalf of a political actor” as well as so-called issue-based ads — aka politically charged issues that can be a potent proxy to sway voters — which it notes are “liable to influence the outcome of an election or referendum, a legislative or regulatory process or voting behaviour”.

But how exactly the regulation will define ads that fall in and out of scope remains to be seen.

Perhaps the most substantial measure of a very thin proposal is around transparency — where the Commission has proposed “transparency labels” for paid political ads.

It says these must be “clearly labelled” and provide “a set of key information” — including the name of the sponsor “prominently displayed and an easily retrievable transparency notice”; along with the amount spent on the political advertisement; the sources of the funds used; and a link between the advertisement and the relevant elections or referenda.
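
Purely as an illustration of the kind of record such a label would need to carry (the proposal does not prescribe a machine-readable format; all field names and values here are hypothetical):

```python
# Hypothetical sketch of the "key information" a transparency notice
# would need to surface, per the Commission's proposal.
transparency_notice = {
    "sponsor": "Example Political Foundation",     # prominently displayed
    "amount_spent_eur": 25_000,                    # spend on the ad
    "funding_sources": ["Example Donors Assoc."],  # where the money came from
    "linked_vote": "2024 example referendum",      # election/referendum link
    "notice_url": "https://ads.example/notice/1",  # easily retrievable notice
}
```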

However, again, the Commission appears to be hoping that a few transparency requirements will enforce a sea change on an infamously opaque and fraud-filled industry — one that has been fuelled by rampant misuse and unlawful exploitation of people’s data — rather than cutting off the head of the hydra by actually curbing targeting, such as by limiting political targeting to broad-brush contextual buckets.

Hence it writes: “All political advertising services, from adtech that intermediate the placement of ads, to consultancies and advertising agencies producing the advertising campaigns, will have to retain the information they have access to through the provision of their service about the ad, the sponsor and the dissemination of the ad. They will have to transfer this information to the publisher of the political ad — this can be the website or app where the ad is seen by an individual, a newspaper, a TV broadcaster, a radio station, etc. The publisher will need to make the information available to the individual who sees the ad.”

“Transparency of political advertising will help people understand when they see a paid political advertisement,” the Commission further suggests, adding: “With the proposed rules, every political advertisement – whether on Twitter, Facebook or any other online platform – will have to be clearly marked as political advertisement as well as include the identity of the sponsor and a transparency notice with the wider context of the political advertisement and its aims, or a clear indication of where it can be easily retrieved.”

It’s a nice theory, but for one thing plenty of election interference originates from outside the region where the election itself is taking place.

On that, the Commission says it will require organisations that provide political advertising services in the EU but do not have a physical presence there to designate a legal representative in a Member State where the services are offered, suggesting: “This will ensure more transparency and accountability of services providers acting from outside the Union.”

How exactly it will require (and enforce) that stipulation isn’t clear.

Another problem is that all these transparency obligations will only apply to “political advertising services”.

Propaganda that gets uploaded to online platforms like Facebook by a mere “user” — aka an entity that does not self-identify as a political advertising service — will apparently escape the need for any transparency accountability at all.

Even if they’re — y’know — working out of a Russian trollfarm that’s actively trying to destabilize the European Union… Just so long as they claim to be ‘Hans, 32, Berliner, loves cats, hates the CSU’.

Now if platforms like Facebook were perfectly great at identifying, reporting and purging inauthentic activity, fake accounts and shady influence ops in their own backyards, it might not be such a problem to leave the door open for “a user” to post unaccountable political propaganda. But a whole clutch of whistleblowers have pointed out, in excruciating detail, that Facebook at least is very much not that.

So that looks like another massive loophole — one which underlines why the only genuine way to fix the problem of online disinformation and election interference is to put an end to behavioral targeting, period, rather than just fiddling around the edges. Not least because fiddling with tepid measures that offer only a flawed, partial transparency risks lulling people into a false sense of security — as well as further normalizing exploitative manipulation (just so long as you have a ‘policy’ in place).

Once online ads and content can be targeted at individuals based on tracking their digital activity and harvesting their personal data for profiling, it’s open season for opaque influence ops and malicious interests to work around whatever political ads transparency rules you try to layer on top of the cheap, highly scalable tools offered by advertising giants like Facebook to keep spreading their propaganda — at the expense of your free and fair elections.

Really what this regulation proposes is to create a large admin burden for advertisers who intend to run genuinely public/above board political campaigns — leaving the underbelly of paid mud slingers, hate spreaders and disinformation peddlers to exploit its plentiful loopholes to run mass manipulation campaigns right through it.

So it will be interesting to see whether the European Parliament takes steps to school the Commission by adding some choice amendments to its draft — as MEPs have been taking a stronger line against microtargeting in recent months.

On penalties, for now, under the Commission proposal, ‘official’ advertising services could be fined for breaking things like the transparency and record-keeping requirements but how much will be determined locally, by Member States — at a level the Commission says should be “effective, proportionate and dissuasive”.

What might that mean? Well under the proposal, national Data Protection Authorities (DPAs) will be responsible for monitoring the use of personal data in political targeting and for imposing fines — so, ultimately, for determining the level of fines that domestic rule-breaking political operators might face.

Which does not exactly inspire a whole lot of confidence. DPAs are, after all, resourced by the same set of political entities — or whichever flavor happens to be in government.

The UK’s ICO carried out an extensive audit of political parties’ data processing activities following the 2018 Cambridge Analytica Facebook data misuse scandal — and in 2020 it reported finding a laundry list of failures across the political spectrum.

So what did the EU’s (at the time) best resourced DPA do about all these flagrant breaches by UK political parties?

The ICO’s enforcement action at that point consisted of — checks notes — issuing a series of recommendations.

There was also a warning that it might take further action in the future. And this summer the ICO did issue one fine: Slapping the Conservative Party with a £10,000 penalty for spamming voters. Which doesn’t really sound very dissuasive tbh.

Earlier this month another of these UK political data offenders, the Labour Party, was forced to fess up to what it dubbed a “data incident” — involving an unnamed third party data processor. It remains to be seen what sanction it may face for failing to protect supporters’ information in that (post-ICO-audit) instance.

Adtech generally has also faced very little enforcement from EU DPAs — despite scores of complaints against its privacy-eviscerating targeting methods — and despite the ICO saying back in 2019 that its methods are rampantly unlawful under existing data protection law.

Vested interests in Europe have been incredibly successful at stymieing regulatory enforcement against invasive ad targeting.

And, apparently, also derailing progress by defanging incoming EU rules — so they won’t do anything much to stop the big-data ‘sausage-factory’ of (in this case) political microtargeting from keeping on slicing ‘n’ dicing up the eyeballs of the citizenry.

European Parliament’s IMCO backs limits on tech giants’ ability to run tracking ads

In what looks like bad news for adtech giants like Facebook and Google, MEPs in the European Parliament have voted for tougher restrictions on how Internet users’ data can be combined for ad targeting purposes — backing a series of amendments to draft legislation that’s set to apply to the most powerful platforms on the web.

The Internal Market and Consumer Protection Committee (IMCO) today voted overwhelmingly to support beefed up consent requirements on the use of personal data for ad targeting within the Digital Markets Act (DMA); and for a complete prohibition on the biggest platforms being able to process the personal data of minors for commercial purposes — such as marketing, profiling or behaviorally targeted ads — to be added to the draft legislation.

The original Commission proposal for the DMA was notably weak in the area of surveillance business models — with the EU’s executive targeting the package of measures at other types of digital market abuse, such as self-preferencing and unfair T&Cs for platform developers, which its central competition authority was more familiar with.

“The text says that a gatekeeper shall, ‘for its own commercial purposes, and the placement of third-party advertising in its own services, refrain from combining personal data for the purpose of delivering targeted or micro-targeted advertising’, except if there is a ‘clear, explicit, renewed, informed consent’, in line with the General Data Protection Regulation,” IMCO writes in a press release. “In particular, personal data of minors shall not be processed for commercial purposes, such as direct marketing, profiling and behaviourally targeted advertising.”

It’s fair to say that adtech giants are masters of manipulating user consent at scale — through the use of techniques like A/B testing and dark pattern design — so beefed up consent requirements (for adults) aren’t likely to offer as much of a barrier against ad-targeting abuse as the committee seems to think they might.

Although if Facebook was finally forced to offer an actual opt-out of tracking ads that would still be a major win (as it doesn’t currently give users any choice over being surveilled and profiled for ads).

However the stipulation that children should be totally protected from commercial stuff like profiling and behavioral ads is potentially a lot more problematic for the likes of Facebook and Google — given the general lack of robust age assurance across the entire Internet.

It suggests that if this partial prohibition makes it into EU law, adtech platforms may end up deciding it’s less legally risky to turn off tracking-based ads altogether (in favor of using alternatives that don’t require processing users’ personal data, such as contextual targeting) vs trying to correctly age verify their entire user base in order to firewall only minors’ eyeballs from behavioral ads.

At the very least, such a ban could present big (ad)tech with a compliance headache — and more work for their armies of in-house lawyers — though MEPs have not proposed to torpedo their entire surveillance business model at this juncture.

In recent months a number of parliamentarians have been pushing for just that: An outright ban on tracking-based advertising period to be included, as an amendment, to another pan-EU digital regulation that’s yet to be voted on by the committee (aka the Digital Services Act; DSA).

However IMCO does not look likely to go so far in amending either legislative package — despite a call this week by the European Data Protection Board for the bloc to move towards a total ban on behavioral ads, given the risks posed to citizens’ fundamental rights.

Digital Markets Act

The European Parliament is in the process of finalizing its negotiating mandate on one of the aforementioned digital reforms — aka, the DMA — which is set to apply to Internet platforms that have amassed market power by occupying a so-called ‘gatekeeping’ role as online intermediaries, typically giving them a high degree of market leverage over consumers and other digital businesses.

Critics argue this can lead to abusive behaviors that negatively impact consumers (in areas like privacy) — while also chilling fair competition and impeding genuine innovation (including in business models).

For this subset of powerful platforms, the DMA — which was presented as a legislative proposal at the end of last year — will apply a list of pre-emptive ‘dos and don’ts’ in an attempt to rebalance digital markets that have become dominated by a handful of (largely) US-based giants.

EU lawmakers argue the regulation is necessary to respond to evidence that digital markets are prone to tipping and unfair practices as a result of asymmetrical dynamics such as network effects, big data and ‘winner takes all’ investor strategies.

Under the EU’s co-legislative process, once the Commission proposes legislation the European Parliament (consisting of directly elected MEPs) and the Council (the body that represents Member States’ governments) must adopt their own negotiating mandates — and then attempt to reach consensus — meaning there’s always scope for changes to the original draft, as well as a long period where lobbying pressure can be brought to bear to try to influence the final shape of the law.

The IMCO committee vote this morning will be followed by a plenary vote in the European Parliament next month to confirm MEPs’ negotiating mandate — before the baton passes to the Council next year. There trilogue negotiations, between the Parliament, Commission and Member States’ governments, are slated to start under the French presidency in the first semester of 2022. Which means more jockeying, horse-trading and opportunities for corporate lobbying lie ahead. And (likely) many months before any vote to approve a final DMA text.

Still, MEPs’ push to strengthen the tech giant-targeting package is notable nonetheless.

A second flagship digital update, the DSA, which will apply more broadly to digital services — dealing with issues like illegal content and algorithmic recommendations — is still being debated by MEPs and committee votes like IMCO’s remain outstanding.

So the DMA has passed through parliamentary debate relatively quickly (vs the DSA), suggesting there’s political consensus (and appetite) to rein in tech giants.

In its press release summarizing the DMA amendments, rapporteur Andreas Schwab (a German MEP of the EPP political grouping) made this point, loud and clear, writing: “The EU stands for competition on the merits, but we do not want bigger companies getting bigger and bigger without getting any better and at the expense of consumers and the European economy. Today, it is clear that competition rules alone cannot address all the problems we are facing with tech giants and their ability to set the rules by engaging in unfair business practices. The Digital Markets Act will rule out these practices, sending a strong signal to all consumers and businesses in the Single Market: rules are set by the co-legislators, not private companies!”

In other interesting tweaks, the committee has voted to expand the scope of the DMA — to cover not just online intermediation services, social networks, search engines, operating systems, online advertising services, cloud computing, and video-sharing services (i.e. where those platforms meet the relevant criteria to be designated “gatekeepers”) — but also add in web browsers (hi Google Chrome!), virtual assistants (Ok Google; hey Siri!) and connected TV (hi, Android TV) too.

On gatekeeper criteria, MEPs backed an increase in the quantitative thresholds for a company to fall under scope — to €8 billion in annual turnover in the European Economic Area; and a market capitalisation of €80 billion.

The sorts of tech giants who would qualify — based on that turnover and market cap alone (NB: other criteria would also apply) — include the usual suspects of Apple, Amazon, Meta (Facebook), Google, Microsoft etc but also — potentially — the European booking platform, Booking.com.

Although the raised threshold may keep another European gatekeeper, music streaming giant Spotify, out of scope.

MEPs also supported additional criteria for a platform to qualify as a gatekeeper and fall under the scope of the DMA: namely, providing a “core platform service” in at least three EU countries, and having at least 45M monthly end users and 10,000+ business users. The committee also stressed that these thresholds do not prevent the Commission from designating other companies as gatekeepers “when they meet certain conditions”. (A sketch of how the quantitative side of the test fits together follows below.)
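
Taken together, the quantitative side of the test could be sketched like this — an illustrative simplification only: designation also rests on qualitative criteria and Commission discretion, and treating turnover and market cap as alternative (rather than cumulative) tests is an assumption carried over from the original Commission proposal.

```python
def meets_quantitative_thresholds(
    eea_turnover_eur: float,
    market_cap_eur: float,
    monthly_end_users: int,
    business_users: int,
    eu_countries_with_core_service: int,
) -> bool:
    """Toy check of the gatekeeper thresholds backed by IMCO.
    Illustrative only: the Commission could also designate companies
    below these numbers "when they meet certain conditions"."""
    big_enough = eea_turnover_eur >= 8e9 or market_cap_eur >= 80e9
    return (
        big_enough
        and monthly_end_users >= 45_000_000
        and business_users >= 10_000
        and eu_countries_with_core_service >= 3
    )
```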

In other changes, the committee backed adding new provisions around the interoperability of services, such as for number-independent interpersonal communication services and social network services.

And — making an intervention on so-called ‘killer acquisitions’ — MEPs voted for the Commission to have powers to impose “structural or behavioural remedies” where gatekeepers have engaged in systematic non-compliance.

“The approved text foresees in particular the possibility for the Commission to restrict gatekeepers from making acquisitions in areas relevant to the DMA in order to remedy or prevent further damage to the internal market. Gatekeepers would also be obliged to inform the Commission of any intended concentration,” they note on that.

The committee backed a centralized enforcement role for the Commission — while adding some clarifications around the role of national competition authorities.

Failures of enforcement have been a major bone of contention around the EU’s flagship data protection regime, the GDPR, which allows for enforcement to be devolved to Member States but also for forum shopping and gaming of the system — as a couple of EU countries have outsized concentrations of tech giants on their soil and have been criticized as bottlenecks to effective GDPR enforcement.

(Only today, for example, Ireland’s Data Protection Commission has been hit with a criminal complaint accusing it of procedural blackmail in an attempt to gag complainants in a way that benefits tech giants like Facebook… )

On sanctions for gatekeepers which break the DMA rules, MEPs want the Commission to impose fines of “not less than 4% and not exceeding 20%” of total worldwide turnover in the preceding financial year — which, based on adtech giants Facebook’s and Google’s full-year 2020 revenues, would allow for theoretical sanctions in the $3.4BN-$17.2BN and $7.2BN-$36.3BN ranges, respectively.
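
As a quick arithmetic check on those ranges (using the approximate FY2020 revenues implied by the figures above, roughly $86BN for Facebook and $181.5BN for Google; both are back-calculated assumptions, not official filings):

```python
def dma_fine_band(worldwide_turnover_usd: float) -> tuple[float, float]:
    """MEPs' proposed band: not less than 4% and not exceeding 20% of
    total worldwide turnover in the preceding financial year."""
    return 0.04 * worldwide_turnover_usd, 0.20 * worldwide_turnover_usd

print(dma_fine_band(86e9))     # Facebook: (3.44e9, 17.2e9) ~ $3.4BN-$17.2BN
print(dma_fine_band(181.5e9))  # Google:   (7.26e9, 36.3e9) ~ $7.2BN-$36.3BN
```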

Which would be a significant step up on the sorts of regulatory sanctions tech giants have faced to date in the EU.

Facebook has yet to face any fines under GDPR, for example — over three years since it came into application, despite facing numerous complaints. (Although Facebook-owned WhatsApp was recently fined $267M for transparency failures.)

While Google received an early $57M GDPR fine from France, before it moved users to fall under Ireland’s legal jurisdiction — where its adtech has been under formal investigation since 2019 (without any decisions/sanctions as yet).

Mountain View has also faced a number of penalties elsewhere in Europe, though — with France again leading the charge and slapping Google with a $120M fine for dropping tracking cookies without consent (under the EU ePrivacy Directive) last year.

France’s competition watchdog has also gone after Google — issuing a $268M penalty this summer for adtech abuses, and a $592M sanction (also this summer) related to requirements to negotiate licensing fees with news publishers over content reuse.

It’s interesting to imagine such stings as a mere amuse-bouche compared to the sanctions EU lawmakers want to be able to hand out under the DMA.

European Parliament’s IMCO backs limits on tech giants’ ability to run tracking ads

In what looks like bad news for adtech giants like Facebook and Google, MEPs in the European Parliament have voted for tougher restrictions on how Internet users’ data can be combined for ad targeting purposes — backing a series of amendments to draft legislation that’s set to apply to the most powerful platforms on the web.

The Internal Market and Consumer Protection Committee (IMCO) today voted overwhelmingly to support beefed up consent requirements on the use of personal data for ad targeting within the Digital Markets Act (DMA); and for a complete prohibition on the biggest platforms being able to process the personal data of minors for commercial purposes — such as marketing, profiling or behaviorally targeted ads — to be added to the draft legislation.

The original Commission proposal for the DMA was notably weak in the area of surveillance business models — with the EU’s executive targeting the package of measures at other types of digital market abuse, such as self-preferencing and unfair T&Cs for platform developers, which its central competition authority was more familiar with.

“The text says that a gatekeeper shall, ‘for its own commercial purposes, and the placement of third-party advertising in its own services, refrain from combining personal data for the purpose of delivering targeted or micro-targeted advertising’, except if there is a ‘clear, explicit, renewed, informed consent’, in line with the General Data Protection Regulation,” IMCO writes in a press release. “In particular, personal data of minors shall not be processed for commercial purposes, such as direct marketing, profiling and behaviourally targeted advertising.”

It’s fair to say that adtech giants are masters of manipulating user consent at scale — through the use of techniques like A/B testing and dark pattern design — so beefed up consent requirements (for adults) aren’t likely to offer as much of a barrier against ad-targeting abuse as the committee seems to think they might.

Although if Facebook was finally forced to offer an actual opt-out of tracking ads that would still be a major win (as it doesn’t currently give users any choice over being surveilled and profiled for ads).

However the stipulation that children should be totally protected from commercial stuff like profiling and behavioral ads is potentially a lot more problematic for the likes of Facebook and Google — given the general lack of robust age assurance across the entire Internet.

It suggests that if this partial prohibition makes it into EU law, adtech platforms may end up deciding it’s less legally risky to turn off tracking-based ads altogether (in favor of using alternatives that don’t require processing users’ personal data, such as contextual targeting) vs trying to correctly age verify their entire user base in order to firewall only minors’ eyeballs from behavioral ads.

At the very least, such a ban could present big (ad)tech with a compliance headache — and more work for their armies of in-house lawyers — though MEPs have not proposed to torpedo their entire surveillance business model at this juncture.

In recent months a number of parliamentarians have been pushing for just that: An outright ban on tracking-based advertising period to be included, as an amendment, to another pan-EU digital regulation that’s yet to be voted on by the committee (aka the Digital Services Act; DSA).

However IMCO does not look likely to go so far in amending either legislative package — despite a call this week by the European Data Protection Board for the bloc to move towards a total ban on behavioral ads given the risks posed to citizens fundamental rights.

Digital Markets Act

The European Parliament is in the process of finalizing its negotiating mandate on one of the aforementioned digital reforms — aka, the DMA — which is set to apply to Internet platforms that have amassed market power by occupying a so-called ‘gatekeeping’ role as online intermediaries, typically giving them a high degree of market leverage over consumers and other digital businesses.

Critics argue this can lead to abusive behaviors that negatively impact consumers (in areas like privacy) — while also chilling fair competition and impeding genuine innovation (including in business models).

For this subset of powerful platforms, the DMA — which was presented as a legislative proposal at the end of last year — will apply a list of pre-emptive ‘dos and don’ts’ in an attempt to rebalance digital markets that have become dominated by a handful of (largely) US-based giants.

EU lawmakers argue the regulation is necessary to respond to evidence that digital markets are prone to tipping and unfair practices as a result of asymmetrical dynamics such as network effects, big data and ‘winner takes all’ investor strategies.

Under the EU’s co-legislative process, once the Commission proposes legislation the European Parliament (consisting of directly elected MEPs) and the Council (the body that represents Member States’ governments) must adopt their own negotiating mandates — and then attempt to reach consensus — meaning there’s always scope for changes to the original draft, as well as a long period where lobbying pressure can be brought to bear to try to influence the final shape of the law.

The IMCO committee vote this morning will be followed by a plenary vote in the European Parliament next month to confirm MEPs’ negotiating mandate — before the baton passes to the Council next year. There, trilogue negotiations between the Parliament, Commission and Member States’ governments are slated to start under the French presidency in the first half of 2022. Which means more jockeying, horse-trading and opportunities for corporate lobbying lie ahead. And (likely) many months before any vote to approve a final DMA text.

Still, MEPs’ push to strengthen the tech giant-targeting package is notable nonetheless.

A second flagship digital update, the DSA, which will apply more broadly to digital services — dealing with issues like illegal content and algorithmic recommendations — is still being debated by MEPs and committee votes like IMCO’s remain outstanding.

So the DMA has passed through parliamentary debate relatively quickly (vs the DSA), suggesting there’s political consensus (and appetite) to rein in tech giants.

In its press release summarizing the DMA amendments, rapporteur Andreas Schwab (of the EPP group, Germany) made this point loud and clear, writing: “The EU stands for competition on the merits, but we do not want bigger companies getting bigger and bigger without getting any better and at the expense of consumers and the European economy. Today, it is clear that competition rules alone cannot address all the problems we are facing with tech giants and their ability to set the rules by engaging in unfair business practices. The Digital Markets Act will rule out these practices, sending a strong signal to all consumers and businesses in the Single Market: rules are set by the co-legislators, not private companies!”

In other interesting tweaks, the committee has voted to expand the scope of the DMA — to cover not just online intermediation services, social networks, search engines, operating systems, online advertising services, cloud computing, and video-sharing services (i.e. where those platforms meet the relevant criteria to be designated “gatekeepers”) — but also add in web browsers (hi Google Chrome!), virtual assistants (Ok Google; hey Siri!) and connected TV (hi, Android TV) too.

On gatekeeper criteria, MEPs backed an increase in the quantitative thresholds for a company to fall under scope — to €8 billion in annual turnover in the European Economic Area; and a market capitalisation of €80 billion.

The sorts of tech giants that would qualify — based on that turnover and market cap alone (NB: other criteria would also apply) — include the usual suspects of Apple, Amazon, Meta (Facebook), Google and Microsoft, but also, potentially, the European booking platform Booking.com.

Although the raised threshold may keep another European gatekeeper, music streaming giant Spotify, out of scope.

MEPs also backed additional criteria for a platform to qualify as a gatekeeper and fall under the scope of the DMA: namely, providing a “core platform service” in at least three EU countries, and having at least 45M monthly end users and 10,000+ business users. The committee further noted that these thresholds do not prevent the Commission from designating other companies as gatekeepers “when they meet certain conditions”.
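To make that screen concrete, here’s a minimal sketch of the quantitative test in Python (purely illustrative, with invented field names; the real designation process also involves qualitative criteria and Commission discretion that aren’t modeled here):

```python
# Illustrative sketch of the quantitative gatekeeper screen MEPs backed.
# Field names are invented; the DMA's legal test also includes qualitative
# criteria and duration requirements not modeled here.
from dataclasses import dataclass

@dataclass
class Platform:
    eea_turnover_eur: float      # annual turnover in the EEA
    market_cap_eur: float        # market capitalisation
    monthly_end_users: int       # monthly end users in the EU
    business_users: int          # yearly active business users
    eu_countries_served: int     # EU countries with a "core platform service"

def clears_quantitative_thresholds(p: Platform) -> bool:
    # Turnover and market cap are treated as alternatives here, per the
    # DMA's general structure (an assumption; the final text settles this).
    financial = p.eea_turnover_eur >= 8e9 or p.market_cap_eur >= 80e9
    reach = p.monthly_end_users >= 45_000_000 and p.business_users >= 10_000
    footprint = p.eu_countries_served >= 3
    return financial and reach and footprint

# A hypothetical platform clearing all three tests:
print(clears_quantitative_thresholds(
    Platform(9e9, 90e9, 50_000_000, 20_000, 27)))  # True
```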

In other changes, the committee backed adding new provisions around the interoperability of services, such as for number-independent interpersonal communication services and social network services.

And — making an intervention on so-called ‘killer acquisitions’ — MEPs voted for the Commission to have powers to impose “structural or behavioural remedies” where gatekeepers have engaged in systematic non-compliance.

“The approved text foresees in particular the possibility for the Commission to restrict gatekeepers from making acquisitions in areas relevant to the DMA in order to remedy or prevent further damage to the internal market. Gatekeepers would also be obliged to inform the Commission of any intended concentration,” they note on that.

The committee backed a centralized enforcement role for the Commission — while adding some clarifications around the role of national competition authorities.

Failures of enforcement have been a major bone of contention around the EU’s flagship data protection regime, the GDPR, which devolves enforcement to Member States but, in doing so, allows for forum shopping and gaming of the system — as a couple of EU countries host outsized concentrations of tech giants on their soil and have been criticized as bottlenecks to effective GDPR enforcement.

(Only today, for example, Ireland’s Data Protection Commission has been hit with a criminal complaint accusing it of procedural blackmail in an attempt to gag complainants in a way that benefits tech giants like Facebook… )

On sanctions for gatekeepers which break the DMA rules, MEPs want the Commission to impose fines of “not less than 4% and not exceeding 20%” of total worldwide turnover in the preceding financial year — which, based on adtech giants Facebook’s and Google’s full-year 2020 revenues, would allow for theoretical sanctions in the $3.4BN-$17.2BN and $7.2BN-$36.3BN ranges, respectively.
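As a quick sanity check on those ranges, here’s the back-of-the-envelope arithmetic (assuming rounded full-year 2020 revenues of roughly $86BN for Facebook and $182.5BN for Google parent Alphabet, so the output differs from the figures above by a rounding hair):

```python
# Back-of-the-envelope check of MEPs' proposed fine corridor: 4%-20% of
# total worldwide turnover in the preceding financial year.
def dma_fine_range(turnover_bn: float) -> tuple[float, float]:
    return 0.04 * turnover_bn, 0.20 * turnover_bn

# Rounded full-year 2020 revenues, in billions of dollars (assumed figures).
for name, revenue_bn in [("Facebook", 86.0), ("Google/Alphabet", 182.5)]:
    lo, hi = dma_fine_range(revenue_bn)
    print(f"{name}: ${lo:.1f}BN to ${hi:.1f}BN")
# Prints:
# Facebook: $3.4BN to $17.2BN
# Google/Alphabet: $7.3BN to $36.5BN
```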

Which would be a significant step up on the sorts of regulatory sanctions tech giants have faced to date in the EU.

Facebook has yet to face any fines under GDPR, for example — over three years since it came into application, despite facing numerous complaints. (Although Facebook-owned WhatsApp was recently fined $267M for transparency failures.)

While Google received an early $57M GDPR fine from France, before it moved users to fall under Ireland’s legal jurisdiction — where its adtech has been under formal investigation since 2019 (without any decisions/sanctions as yet).

Mountain View has also faced a number of penalties elsewhere in Europe, though — with France again leading the charge and slapping Google with a $120M fine for dropping tracking cookies without consent (under the EU ePrivacy Directive) last year.

France’s competition watchdog has also gone after Google — issuing a $268M penalty this summer for adtech abuses and a $592M sanction (also this summer) related to requirements to negotiate licensing fees with news publishers over content reuse.

It’s interesting to imagine such stings as a mere amuse-bouche compared to the sanctions EU lawmakers want to be able to hand out under the DMA.

Facebook’s lead EU privacy supervisor hit with corruption complaint

Facebook’s problems with European privacy law could be about to get a whole lot worse. But ahead of what may soon be a major (and long overdue) regulatory showdown over the legality of its surveillance-based business model, Ireland’s Data Protection Commission (DPC) is facing a Facebook-shaped problem of its own: It’s now the subject of a criminal complaint alleging corruption and even bribery in the service of covering its own backside (we paraphrase) and shrinking public understanding of the regulatory problems facing Facebook’s business.

European privacy campaign group noyb has filed the criminal complaint against the Irish DPC, which is Facebook’s lead regulator in the EU for data protection.

noyb is making the complaint under Austrian law — reporting the Irish regulator to the Austrian Office for the Prosecution of Corruption (aka WKStA) after the DPC sought to use what noyb terms “procedural blackmail” to try to gag it and prevent it from publishing documents related to General Data Protection Regulation (GDPR) complaints against Facebook.

The not-for-profit alleges that the Irish regulator sought to pressure it to sign an “illegal” non-disclosure agreement (NDA) in relation to a public procedure — its complaint argues there is no legal basis for such a requirement — accusing the DPC of seeking to coerce it into silence, as Facebook would surely wish, by threatening not to comply with its regulatory duty to hear the complainant unless noyb signed the NDA. Which is quite the (alleged) quid-pro-quo.

“The DPC acknowledges that it has a legal duty to hear us but it now engaged in a form of ‘procedural coercion’,” said noyb chair, Max Schrems, in a statement. “The right to be heard was made conditional on us signing an agreement, to the benefit of the DPC and Facebook. It is nothing but an authority demanding to give up the freedom of speech in exchange for procedural rights.”

The regulator has also demanded noyb remove documents it has previously made public — related to the DPC’s draft decision of a GDPR complaint against Facebook — again without clarifying what legal basis it has to make such a demand.

As noyb points out, it is based in Austria, not Ireland — so is subject to Austrian law, not Irish law. But, regardless, even under Irish law it argues there’s no legal duty for parties to keep documents confidential — pointing out that Section 26 of the Irish Data Protection Act, which was cited by the DPC in this matter, only applies to DPC staff (“relevant person”), not to parties.

“Generally we have very good and professional relationships with authorities. We have not taken this step lightly, but the conduct of the DPC has finally crossed all red lines. The basically deny us all our rights to a fair procedure unless we agree to shut up,” added Schrems.

He went on to warn that “Austrian corruption laws are far reaching” — and to further emphasize: “When an official requests the slightest benefit to conduct a legal duty, the corruption provisions may be triggered. Legally there is no difference between demanding an unlawful agreement or a bottle of wine.”

All of which looks exceptionally awkward for the Irish regulator. Which already, let’s not forget — at the literal start of this year, and only after noyb brought a legal procedure — agreed to “swiftly” finalize another fractious complaint made by Schrems; that one relates to Facebook’s EU-US data transfers and dates all the way back to 2013.

(But of course there’s still no sign of a DPC resolution of that Facebook complaint either… So, uhhh, ‘Siri: Show me regulatory capture’… )

Last month noyb published a draft decision by the DPC in relation to another (slightly less vintage) complaint against Facebook — which suggested the tech giant’s lead EU data regulator intended not to challenge Facebook’s attempt to use an opaque legal switch to bypass EU rules (by claiming that users are actually in a contract with it to receive targeted ads, ergo GDPR consent requirements do not apply).

The DPC had furthermore suggested a wrist-slap penalty of $36M — for Facebook’s failure to meet transparency requirements over the aforementioned ‘ad contract’.

That decision remains to be finalized because — under the GDPR’s one-stop-shop mechanism for deciding cross-border complaints — other EU DPAs have a right to object to a lead supervisor’s preliminary decision and can force a different outcome. Which is what noyb is suggesting may be about to happen vis-a-vis this particular Facebook complaint saga.

Winding back slightly, despite the EU’s GDPR being well over three years old (in technical application terms), the DPC has yet to make a single final finding against Facebook proper.

So far it’s only managed one decision against Facebook-owned WhatsApp — which resulted in an inflated financial penalty for transparency failures by the messaging platform after other EU DPAs intervened to object to a (similarly) low-ball draft sanction Ireland had initially suggested. In the end WhatsApp was hit with a fine of $267M — also for breaching GDPR transparency obligations. A notable increase on the DPC’s offer of a fine of up to $56M.

The tech giant is appealing that penalty — but has also said it will be tweaking its privacy policy in Europe in the meanwhile. So it’s a (hard won) win for European privacy advocates — for now.

The WhatsApp GDPR complaint is just the tip, of course. The DPC has been sitting, hen-like, on a raft of data protection complaints against Facebook and other Facebook-owned platforms — including several filed by noyb on the very day the regulation came into technical application, all the way back in May 2018.

These ‘forced consent’ complaints by noyb strike at the heart of the headlock Facebook applies to users by not offering them an opt-out from tracking-based advertising. Instead the ‘deal’ Facebook (now known as Meta) offers is a take-it-or-leave-it ‘choice’ — either accept ads or delete your account — despite the GDPR setting a robust standard for what can legally constitute consent: it must be specific, informed and freely given.

Arm twisting is not allowed. Yet Facebook has been twisting Europeans’ arms before and since the GDPR, all the same.

So the ‘forced consent’ complaints — if they do ever actually get enforced — have the potential to purge the tech giant’s surveillance-based business model once and for all. As, perhaps, does the vintage EU-US data transfers issue. (Certainly it would crank up Facebook’s operational costs if it had to federate its service so that Europeans’ data was stored and processed within the EU to fix the risk of US government mass surveillance.)

However, per the draft DPC decision on the forced consent issue, published (by noyb) last month, the Irish regulator appeared to be preparing to (at best) sidestep the crux question of the legality of Facebook’s data mining, writing in a summary: “There is no obligation on Facebook to seek to rely solely on consent for the purposes of legitimising personal data processing where it is offering a contract to a user which some users might assess as one that primarily concerns the processing of personal data. Nor has Facebook purported to rely on consent under the GDPR.”

noyb has previously accused the DPC of holding secret meetings with Facebook around the time it came up with the claimed consent bypass, just as the GDPR was about to come into application — implying the regulator was seeking to support Facebook in finding a workaround for EU law.

The not-for-profit also warned last month that if Facebook’s relabelling “trick” (i.e. switching a claim of ‘consent’ to a claim of ‘contract’) were to be accepted by EU regulators it would undermine the whole of the GDPR — making the much lauded data protection regime trivially easy for data-mining giants to bypass.

Likewise, noyb argues, had it signed the DPC’s demanded NDA it would have “greatly benefited Facebook”.

It would also have helped the DPC by keeping a lid on the awkward detail of lengthy and labyrinthine proceedings — at a time when the regulator is facing rising heat over its inaction against big tech, including from lawmakers on home soil. (Some of whom are now pushing for reform of the DPC — including the suggestion that more commissioners should be recruited to remove sole decision-making power from the current incumbent, Helen Dixon.)

“The DPC is continuously under fire by other DPAs, in public inquiries and the media. If an NDA would hinder noyb’s freedom of speech, the DPC’s reputational damage could be limited,” noyb suggests in a press release, before going on to note that had it been granted a benefit by signing an NDA (“in direct exchange for the DPC to conduct its legal duties”) its own staff could have potentially committed a crime under the Austrian Criminal Act.

The not-for-profit instead opted to dial up publicity — and threaten a little disinfecting sunlight — by filing a criminal complaint with the Austrian Office for the Prosecution of Corruption.

It’s essentially telling the DPC to put up a legal defence of its procedural gagging attempts — or, well, shut up.

Here’s Schrems again: “We very much hope that Facebook or the DPC will file legal proceedings against us, to finally clarify that freedom of speech prevails over the scare tactics of a multinational and its taxpayer-funded minion. Unfortunately we must expect that they know themselves that they have no legal basis to take any action, which is why they reverted to procedural blackmail in the first place.”

Nor is noyb alone in receiving correspondence from the DPC that’s seeking to apply swingeing confidentiality clauses to complainants. TechCrunch has reviewed correspondence sent to the regulator earlier this fall by another complainant who writes to query its legal basis for a request to gag disclosure of correspondence and draft reports.

Despite repeated requests for clarification, the DPC appears to have entirely failed — over the course of more than a month — to reply to the request for its legal basis for making such a request.

This suggests noyb’s experience of scare tactics without legal substance is not unique and backs up its claim that the DPC has questions to answer about how it conducts its office.

We’ll be reaching out to the DPC for comment on the allegations it’s facing.

But what about Facebook? noyb’s press release goes on to predict a “tremendous commercial problem” looming for the data-mining giant — as it says DPC correspondence “shows that other European DPAs have submitted ‘relevant and reasoned objections’ and oppose the DPC’s view” [i.e. in the consent bypass complaint against Facebook].

“If the other DPAs have a majority and ultimately overturn the DPC’s draft decision, Facebook could face a legal disaster, as most commercial use of personal data in the EU since 2018 would be retroactively declared illegal,” noyb suggests, adding: “Given that the other DPAs passed Guidelines in 2019 that are very unfavourable to Facebook’s position, such a scenario is highly likely.”

The not-for-profit has more awkward revelations for the DPC and Facebook in the pipe, too.

It says it’s preparing fresh document releases in the coming weeks — related to correspondence from the DPC and/or Facebook — as a “protest” against attempts to gag it and to silence democratic debate about public procedures.

“On each Sunday in advent, noyb will publish another document, together with a video explaining the documents and an analysis why the use of these documents is fully compliant with all applicable laws,” it notes, adding that what it’s billing as the “advent reading” will be published on noyb.eu — “so tune in!”.

So it looks like the next batch of ‘Facebook Papers’ that Meta would really rather you didn’t see will be dropping soon…


Gift Guide: The smart home starter kit

A year ago I accidentally turned my house into a smart home. What started out as an easy (and lazy, let’s be honest) way to switch off the radio in the kitchen without getting up from the couch quickly became an obsession to remotely control and automate as much of my house as possible.

What makes a smart home? In my house the lights, outlets and window blinds can be controlled from my phone, at home or anywhere in the world, but you can extend it to other things, like air conditioners, sprinklers and garage door openers. Or thermostats, speakers, security cameras and just about anything electrical. And by adding smart home tech that can detect temperature, humidity or motion, you can automate your house to turn the lights on at sunset, run the sprinklers when the weather is dry, turn on the air conditioning when it’s warm or alert you if a door opens while you’re not at home.

The novelty of switching your living room lights on and off from your phone might quickly wear off, but it can be reassuring to know that you can get a sense of what’s going on at home even when you’re not there — or automatically adjust the climate and lighting when you are.

You’re probably thinking: is this guy serious? Why would I want even more of my home connected to the internet? The Internet of Things (IoT) doesn’t have the best security reputation, historically, but modern smart home devices can be certified to the far higher standards set by Big Tech giants like Apple, Amazon and Google. That said, no technology is ever perfectly secure, though efforts to create a common secure smart home standard are paying off with Matter, a protocol endorsed by some of the biggest tech companies and smart home device makers.

It helps to join a smart home ecosystem that you’re comfortable with. I use a Mac and an iPhone, so Apple’s HomeKit makes the most sense for me. Apple does not collect a ton of data like other smart home ecosystems and is probably a better fit for the privacy minded. For this guide we’ll focus on HomeKit but much will broadly apply if you use another ecosystem. For Android users, Google Home would make more sense, or Amazon Alexa if you’re so inclined. Many modern devices are compatible with other smart home platforms anyway, including newer standards like Thread. But for best results, pick an ecosystem and make sure the add-ons you’re buying are compatible.

If you’re after convenience or routine — or like in my case you just want to tinker — there’s a lot you can do with what you already have but a lot more you can do without breaking the bank.

This article contains links to affiliate partners where available. When you buy through these links, TechCrunch may earn an affiliate commission.  Looking for more ideas? Find our other gift guides here.

First, you need (or may already have) a HomeKit hub

A HomePod mini or Apple TV will work as a hub. Image Credits: Brian Heater/TechCrunch

HomeKit devices rely on a hub to communicate with the wider world. It’s through this hub that your other smart home devices connect to the internet, letting you access them from your phone when you’re out and about. Good news if you have an Apple TV or a HomePod (or HomePod mini), since these will serve as your HomeKit hub out of the box and generally don’t require any configuration. If you have more than one of these devices in your home, they can all serve as failover hubs if one becomes unavailable.

Some tech, like Philips Hue or Samsung SmartThings, will require their own separate hub (sometimes called a bridge) before they will appear in your smart home. Hubs often connect directly into the router, so keep available ports and wireless range in mind.

Control your regular devices with smart plugs

Smart plugs can be used to control regular devices with physical power switches. Image Credits: TechCrunch

Smart plugs are a great way of connecting conventional electronics and appliances to your smart home. A smart plug fits between your regular appliance plug and the wall outlet and can be told to switch on and off at your command. It’s worth noting that smart plugs only work with devices and appliances that have a physical power switch that stays in place; they won’t work with devices that have an auto-shutoff switch, like a kettle. (You probably shouldn’t rely on a smart plug for anything mission critical, like medical equipment or big household appliances.)

WeMo Wi-Fi Smart Plugs by Belkin have a small form factor compared to other, bulkier smart plugs and work reliably. There’s also a physical button on the side, in case you want or need to toggle it by hand. Eve Energy plugs are a little more expensive and also come with power management features in the Eve app but no physical button for backup.

Smart bulbs will light your home the way you want it

You may be better off finding smart bulbs that fit your existing fixtures, rather than the most recommended brand. Image Credits: TechCrunch

Just like regular bulbs, smart bulbs come in all shapes, sizes and colors, so you’re likely going to need to find a brand that suits your lights and fixtures. You may also need to mix and match brands as you expand your smart home. Most bulbs are dimmable and some offer granular temperature controls to get the warmth of the room right. Nanoleaf offers bulbs that connect to HomeKit over Wi-Fi. Philips Hue is a popular favorite but requires a separate hub to talk to your Home app. LIFX also has a broad range of bulbs and an accompanying app offering a few more features. Furniture giant IKEA has a diverse range of smart bulbs. A common criticism of smart bulbs is that they often aren’t as bright as regular filament or LED bulbs, so keep that in mind if your home is particularly prone to low light.

Set the mood with smart strip lighting

Strip lighting can be themed, colorful and animated. Image Credits: Zack Whittaker/TechCrunch

Adding color light strips to your smart home can really brighten up a space. Smart light strips are like regular strip lighting for under cabinets and shelves, but they can be controlled from your phone. These adhesive-backed strips contain dozens or hundreds of LEDs that let you customize their color and brightness, and often also their patterns and animations to bring reactive light to your rooms. You’ll need to keep in mind that most will require a power source — more often than not a wall outlet, if not USB (many modern television sets should have a port spare). We like the Nanoleaf Essentials Lightstrip Starter Kit to start with; LIFX’s Lightstrip also does a great job — there’s one in my living room.

You can retrofit some of your older tech

Image Credits: SOMA

One of the best things about smart homes is how much you can do with your existing fittings. There are a handful of window blinds and shades made by well-known brands like IKEA (with a separate hub) or baked-in like Lutron’s Serena that are natively compatible with HomeKit. Or if you have existing blinds, there are options to let you retrofit your existing window blinds with a HomeKit-enabled controller.

My house has vertical blinds with a tilt handle you have to rotate to close the blinds, so we rely on Soma, an Estonia-based smart home device maker, which makes a number of blind controllers that fit into existing chains or tilt mechanisms. You hook them up to your blinds, peel the backing from the adhesive on the device, stick the box to the wall and you’re done — no skills required. These controllers run on battery, or can be powered through the stick-on solar panel or plugged into an outlet. They connect to HomeKit through a separate Raspberry Pi-like hub, the Soma Connect. While my experience with Soma has been flawless, others at TechCrunch report having troubles with it — consider our reviews here “mixed.”

Blinds are just one area that can be retrofitted to work with HomeKit. Other tech exists to connect garage doors, ceiling fans and even radiators to your smart home, too.

Get automating your home with sensors

Sensors allow you to set up more automations for your smart home. Image Credits: Eve Home

Now that you have your lights and blinds hooked up to your Home app, you can start to add sensors to the mix. Sensors let you automate your home by detecting light, movement, temperature, or when a window or door opens — then turning on fans when it gets warm, or switching on the lights at sunset only when someone is home. Some might want to use these for security. They can also be useful for power saving, by shutting off lights when no motion is detected and adjusting the temperature when the weather changes.

Onvis Motion is a great, cheap, entry-level motion detector that also packs in a thermometer and hygrometer, so you can instantly get the temperature and humidity of the room it’s in. Weather stations are more expensive but pack in a lot more features. Eve Door & Window sensors are small, stick-on, battery-powered sensors that can trigger an alert — or any other connected HomeKit device — when a door or window is opened. And if you’re after a privacy-friendly camera, Logitech’s Circle View works exclusively with HomeKit’s end-to-end encrypted video and doesn’t rely on a third-party cloud, in case that’s a deal breaker for the privacy-minded.

Your Home app is your smart home dashboard

The Home app, which lets you control your HomeKit smart home from anywhere in the world. Image Credits: TechCrunch

One of the benefits to picking one ecosystem and sticking with it: For the most part, the most important features of each device will be aggregated into one app. With HomeKit, that’s the Home app.

Your Home app consolidates your HomeKit-enabled smart home tech in one place and lets you connect your sensors to automate the rest of your home. With door and window sensors you can turn on all of your lights as soon as you open the door, and you can set your strip lighting to light up a hallway when a sensor detects motion between sunset and sunrise.

The Home app also packs in some automation features, like switching on fans or opening blinds at certain times of the day when you know it’s going to be bright outside or at a time you know you’ll want privacy. Most devices also come with a corresponding app, often with a lot more settings, features and the ability to update the device’s firmware.
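If you’re curious what those rules look like under the hood, they boil down to simple condition checks. Here’s a minimal sketch of the hallway automation in Python (purely illustrative logic, not the HomeKit API; real automations are built in the Home app, no code required):

```python
# Illustrative only: the logic behind a "light the hallway when motion is
# detected between sunset and sunrise" automation. HomeKit evaluates the
# equivalent rule for you; no code is involved in a real setup.
from datetime import datetime, time

def should_light_hallway(motion: bool, now: datetime,
                         sunset: time, sunrise: time) -> bool:
    t = now.time()
    after_dark = t >= sunset or t <= sunrise  # the window wraps past midnight
    return motion and after_dark

# Motion at 11pm, with a 5:30pm sunset and 6:45am sunrise: lights on.
print(should_light_hallway(True, datetime(2021, 11, 28, 23, 0),
                           time(17, 30), time(6, 45)))  # True
```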

And that is your starter HomeKit smart home!

Some things to think about:

  • Wi-Fi network range is critical to a functioning smart home. HomeKit works on the 2.4 GHz wireless connection because it has a longer range than 5 GHz. There’s a good chance your router allows you to use both, but make sure you’re on the right network when setting up your HomeKit devices. Network coverage might be fine in a one-bedroom apartment, but larger homes may require Wi-Fi range extenders or a mesh network.
  • If you have a router that’s set up for self-organizing networking, which lets you move seamlessly between the different Wi-Fi bands, you might want to switch that setting off since it can interfere with your smart home. Some routers may have a dedicated Wi-Fi network for IoT devices built in.
  • Sometimes things will break and it’s not always clear why. Sometimes it’s just a moment of poor connectivity; in the more frustrating cases you might need to reset your devices. Each device comes with its own scannable QR code for getting set up. Keep these codes safe, since you’ll need them if you ever have to reset a device — they’re especially helpful when you can’t easily access or scan the code on the device itself.
  • You can customize your Home screen and room backgrounds. HomePaper is a simple app that does exactly that — it’s designed to gradually fade the background so it’s not intrusive and doesn’t make the device panels difficult to read. Home wallpapers don’t sync across your Apple devices, so you have to manually add wallpapers to each Home app.

Read more:

TechCrunch Gift Guide 2021

Facebook to delay full E2EE rollout until ‘sometime in 2023’

The company formerly known as Facebook is delaying a rollout of end-to-end encryption across all its services until “sometime in 2023”, according to Meta’s global head of safety, Antigone Davis, who penned an op-ed in the British newspaper the Telegraph this weekend.

While Facebook-owned WhatsApp has had E2EE everywhere since 2016, most of the tech giant’s services do not ensure only the user holds keys for decrypting messaging data. Meaning those services can be subpoenaed or hit with a warrant to provide messaging data to public authorities.

But back in 2019 — in the wake of global attention to the Cambridge Analytica data misuse scandal — founder Mark Zuckerberg announced the company would work towards universally implementing end-to-end encryption across all its services as part of a claimed ‘pivot to privacy’.

Zuckerberg did not give a firm timeline for completing the rollout but, earlier this year, Facebook suggested it would complete the rollout during 2022.

Now the tech giant is saying it won’t get this done until “sometime” the following year. Which sounds distinctly like a can being kicked down the road.

Davis said the delay is the result of the social media giant wanting to take its time to ensure it can implement the technology safely — in the sense of retaining the ability to pass information to law enforcement to assist in child safety investigations.

“As we do so, there’s an ongoing debate about how tech companies can continue to combat abuse and support the vital work of law enforcement if we can’t access your messages. We believe people shouldn’t have to choose between privacy and safety, which is why we are building strong safety measures into our plans and engaging with privacy and safety experts, civil society and governments to make sure we get this right,” she writes, saying it will use “proactive detection technology” to ID suspicious patterns of activity, along with enhanced controls for users and the ability for users to report problems.

Western governments, including the UK’s, have been leaning hard on Facebook to delay or abandon altogether its plan to blanket its services in the strongest level of encryption — ever since it made the public announcement of its intention to ‘e2ee all the things’ over two years ago.

The UK has been an especially vocal critic of Facebook on this front, with Home Secretary Priti Patel very publicly (and repeatedly) warning Facebook that its plan to expand e2ee would hamper efforts to combat online child abuse — casting the tech giant as an irresponsible villain in the fight against the production and distribution of child sexual abuse material (CSAM).

So Meta’s op-ed appearing in the favored newspaper of the British government looks no accident.

“As we roll out end-to-end encryption we will use a combination of non-encrypted data across our apps, account information and reports from users to keep them safe in a privacy-protected way while assisting public safety efforts,” Davis also writes in the Telegraph, adding: “This kind of work already enables us to make vital reports to child safety authorities from WhatsApp.”

She goes on to suggest that Meta/Facebook has reviewed a number of historic cases — and concluded that it “would still have been able to provide critical information to the authorities, even if those services had been end-to-end encrypted” — adding: “While no systems are perfect, this shows that we can continue to stop criminals and support law enforcement.”

How exactly might Facebook be able to pass data on users even if all comms on its services were end-to-end encrypted?

Users are not privy to the exact details of how Facebook/Meta joins the dots of their activity across its social empire — but while Facebook’s application of e2ee on WhatsApp covers messaging/comms content, for example, it does not extend to metadata (which can provide plenty of intel on its own).

The tech giant also routinely links accounts and account activity across its social media empire — passing data like a WhatsApp user’s mobile phone number to its eponymous service, following a controversial privacy U-turn back in 2016. This links a user’s (public) social media activity on Facebook (if they have or have had an account there) with the more bounded form of socializing that typifies activity on WhatsApp (i.e. one-to-one comms, or group chats in a private e2ee channel).

Facebook can thus leverage its vast scale (and historical profiling of users) to flesh out a WhatsApp user’s social graph and interests — based on things like who they are speaking to; who they’re connected to; what they’ve liked and done across all its services (most of which aren’t yet e2ee) — despite WhatsApp messaging/comms content itself being end-to-end encrypted.

(Or as Davis’ op-ed puts it: the “combination of non-encrypted data across our apps, account information and reports from users” is the kind of work that “already enables us to make vital reports to child safety authorities from WhatsApp”.)
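To make the metadata point concrete, here’s a minimal sketch (invented field names, with the Fernet cipher standing in for a real e2ee scheme; WhatsApp actually uses the Signal protocol) of why a server can learn plenty without ever reading message content:

```python
# Minimal sketch: e2ee hides message content, not the envelope the server
# needs to route it. Fernet is a stand-in for a real e2ee scheme here;
# requires the third-party 'cryptography' package.
from cryptography.fernet import Fernet

endpoint_key = Fernet.generate_key()   # held only by sender and recipient
cipher = Fernet(endpoint_key)

envelope = {
    "ciphertext": cipher.encrypt(b"see you at six"),  # opaque to the server
    "sender": "+44700000001",        # readable by the server (for routing)
    "recipient": "+44700000002",
    "timestamp": "2021-11-28T18:02:11Z",
}

# Whoever runs (or subpoenas) the server learns who talked to whom, and
# when, without decrypting anything.
print(envelope["sender"], "->", envelope["recipient"],
      "at", envelope["timestamp"])
```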

Earlier this fall, Facebook was stung with a major fine in the European Union related to WhatsApp transparency obligations — with DPAs finding it had failed to properly inform users what it was doing with their data, including in relation to how it passes information between WhatsApp and Facebook.

Facebook is appealing against the GDPR sanction but today it announced a tweak to the wording of the privacy policy shown to WhatsApp users in Europe in response to the regulatory enforcement — although it claimed it has not made any changes to how it processes user data.

Returning to e2ee specifically, last month Facebook whistleblower Frances Haugen raised concerns over the tech giant’s application of the technology — arguing that since it’s a proprietary (i.e. rather than open source) implementation users must take Facebook/Meta’s security claims on trust, as independent third parties are unable to verify the code does what it claims.

She also suggested there is no way for outsiders to know how Facebook interprets e2ee — adding that for this reason she’s concerned about its plan to expand the use of e2ee — “because we have no idea what they’re going to do”, as she put it.

“We don’t know what it means, we don’t know if people’s privacy is actually protected,” Haugen told lawmakers in the UK parliament, further warning: “It’s super nuanced and it’s also a different context. On the open source end-to-end encryption product that I like to use there is no directory where you can find 14 year olds, there is no directory where you can go and find the Uighur community in Bangkok. On Facebook it is trivially easy to access vulnerable populations and there are national state actors that are doing this.”

Haugen was careful to speak up in support of e2ee — saying she’s a supporter of open source implementations of the security technology, i.e. where external experts can robustly interrogate code and claims.

But in the case of Facebook, where its e2ee implementation is not open to anyone to verify, she suggested regulatory oversight is needed to avoid the risk of the tech giant making misleading claims about how much privacy (and therefore safety from potentially harmful surveillance, such as by an authoritarian state) users actually have.


Davis’ op-ed — which is headlined “we’ll protect privacy and prevent harm” — sounds intended to soothe UK policymakers that they can ‘have their cake and eat it’; concluding with a promise that Meta will “continue engaging with outside experts and developing effective solutions to combat abuse”.

“We’re taking our time to get this right and we don’t plan to finish the global rollout of end-to-end encryption by default across all our messaging services until sometime in 2023,” Davis adds, finishing with another detail-light soundbite that it is “determined to protect people’s private communications and keep people safe online”.

While the UK government will surely be delighted with the line-toeing quality of Facebook’s latest public missives on a very thorny topic, its announcement that it’s delaying e2ee in order to “get this right” — following sustained pressure from ministers like Patel — is only likely to increase concerns about what “right” means in such a privacy sensitive context.

Certainly the wider community of digital rights advocates and security experts will be closely watching what Meta does here.  

The UK government recently splashed almost half a million in taxpayers’ money on five projects to develop scanning/filtering technologies that could be applied to e2ee services — to detect, report or block the creation of child sexual abuse material (CSAM) — after ministers said they wanted to encourage innovation around “tech safety” through the development of “alternative solutions” (i.e. ones that would not require platforms to abandon e2ee, but would instead embed some form of scanning/filtering technology into the encrypted systems to detect/combat CSAM).

So the UK’s preferred approach appears to be to use the political cudgel of concern for child safety — which it’s also legislating for in the Online Safety Bill — to push platforms to implement spyware that allows for encrypted content to be scanned on users’ devices regardless of any claim of e2ee.

Whether such baked-in scanner systems essentially amount to a backdoor in the security of robust encryption (despite ministers’ claims otherwise) will surely be the topic of close scrutiny and debate in the months/years ahead.

Here it’s instructive to look at Apple’s recent proposal to add a CSAM detection system to its mobile OS — where the technology was slated to scan content on a user’s device prior to it being uploaded to its iCloud storage service.
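Conceptually, client-side scanning of this kind reduces to fingerprinting content on the device and checking it against a list of fingerprints of known illegal material before upload. Here’s a heavily simplified sketch; an exact SHA-256 digest stands in for Apple’s perceptual NeuralHash, and none of the blinding or threshold mechanisms in Apple’s actual design are modeled:

```python
# Heavily simplified sketch of on-device matching before cloud upload.
# Real proposals use perceptual hashes (robust to resizing/re-encoding)
# plus cryptographic blinding; a plain SHA-256 is a stand-in here.
import hashlib

# Hypothetical list of digests of known illegal images, shipped to the
# device as opaque bytes: the device cannot tell what is on the list.
KNOWN_BAD_DIGESTS = {
    hashlib.sha256(b"placeholder for known-bad image bytes").digest(),
}

def flag_before_upload(image_bytes: bytes) -> bool:
    """Return True if this image's digest matches the known-bad list."""
    return hashlib.sha256(image_bytes).digest() in KNOWN_BAD_DIGESTS
```

Even this toy version shows why the list itself becomes the locus of trust: the device can’t inspect what it’s matching against — a detail central to the debate that followed.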

Apple initially took a bullish stance on the proactive move — claiming it had developed “the technology that can balance strong child safety and user privacy”.

However after a storm of concern from privacy and security experts — as well as those warning that such systems, once established, would inexorably face ‘feature creep’ (whether from commercial interests to scan for copyrighted content; or from hostile states to target political dissidents living under authoritarian regimes) — Apple backtracked, saying after less than a month that it would delay implementing the system.

It’s not clear when/whether Apple might revive the on-device scanner.

While the iPhone maker has built a reputation (and very lucrative business) as a privacy-centric company, Facebook’s ad empire is the opposite beast: Synonymous with surveillance for profit. So expecting the social media behemoth — whose founder (and all-powerful potentate) has presided over a string of scandals attached to systematically privacy-hostile decisions — to hold the line in the face of sustained political pressure to bake spyware into its products would be for Facebook to deny its own DNA.

Its recent corporate rebranding to Meta looks a whole lot more superficial than that.