Google agrees to deeper oversight of its Privacy Sandbox by UK’s CMA

As part of an ongoing antitrust investigation into Google’s Privacy Sandbox by the UK’s competition regulator, the adtech giant has agreed to an expanded set of commitments related to oversight of its planned migration away from tracking cookies, the regulator announced today.

Google has also put out its own blog post on the revisions — which it says are intended to “underline our commitment to ensuring that the changes we make in Chrome will apply in the same way to Google’s ad tech products as to any third party, and that the Privacy Sandbox APIs will be designed, developed and implemented with regulatory oversight and input from the CMA [Competition and Markets Authority] and the ICO [Information Commissioner’s Office]”.

Google announced its intention to deprecate support for the third party tracking cookies that are used for targeting ads at individuals in its Chrome browser all the way back in 2019 — and has been working on a stack of what it claims are less intrusive alternative ad-targeting technologies (aka, the “Privacy Sandbox”) since then.

The basic idea is to shift away from ads being targeted at individuals (which is horrible for Internet users’ privacy) to targeting methods that put Internet users in interest-based buckets and serve ads to so-called “cohorts” of users (aka, FLoCs), which may be less individually intrusive. However, it’s important to note that Google’s proposed alternative still has plenty of critics (the EFF, for example, has suggested it could even amplify problems like discrimination and predatory ad targeting).

And many privacy advocates would argue that pure-play contextual targeting poses the least risk to Internet users’ rights while still offering advertisers the ability to reach relevant audiences and publishers to monetize their content.
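To make the cohort idea concrete, here’s a minimal toy sketch of bucketed targeting. To be clear, this is not Google’s actual FLoC algorithm (the FLoC origin trial used SimHash over browsing history so that similar histories cluster together); the cohort count and hashing approach below are illustrative assumptions only.

```python
# Toy illustration of cohort-based targeting, NOT Google's FLoC algorithm.
# The idea: the browser maps local browsing activity to one of a small,
# fixed number of shared "interest buckets", and only that coarse bucket
# ID is ever exposed to advertisers, never a per-user identifier.
import hashlib

NUM_COHORTS = 4096  # assumption: a small, fixed number of interest buckets


def assign_cohort(visited_domains: list[str]) -> int:
    """Map a user's recent browsing (computed locally) to a shared cohort ID."""
    fingerprint = "|".join(sorted(set(visited_domains))).encode()
    digest = hashlib.sha256(fingerprint).digest()
    return int.from_bytes(digest[:4], "big") % NUM_COHORTS


# An ad exchange would only ever see the cohort number, shared by many users.
# (Real FLoC used SimHash so that *similar*, not just identical, histories
# land in the same cohort.)
print(assign_cohort(["news.example", "football.example", "recipes.example"]))
```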

Google’s Sandbox plan has attracted the loudest blow-back from advertisers and publishers, who will be directly affected by the changes. Some have raised concerns that the shift away from tracking cookies will simply increase Google’s market power — hence the Competition and Markets Authority (CMA) opening an antitrust investigation into the plan in January.

As part of that probe, the CMA had already secured one set of commitments from Google around how it would go about the switch, including an agreement to halt any move to deprecate cookies if the regulator was not satisfied the transition could take place in a way that respects both competition and privacy, and commitments around self-preferencing, among others.

A market consultation on the early set of commitments drew responses from more than 40 third parties — including, TechCrunch understands, input from international regulators (some of whom are also investigating Google’s Sandbox, such as the European Commission, which opened its own probe of Google’s adtech in June).

Following that, the first set of proposed commitments has been expanded and beefed up with additional requirements (see below for a summary; and here for fuller detail from the CMA’s “Notice of intent to accept the modified commitments”).

The CMA will now consult on the expanded set — with a deadline of 5pm on December 17, 2021, to take fresh feedback.

It will then make a call on whether the beefed up bundle bakes in enough checks-and-balances to ensure that Google carries out the move away from tracking cookies with the least impact on competition and the least harm to user privacy (although it will be the UK’s ICO that’s ultimately responsible for oversight of the latter piece).

If the CMA is happy with responses to the revised commitments, it would then close the investigation and move to a new phase of active oversight, as set out in the detail of what it’s proposing to agree with Google.

A potential timeline for this to happen is early 2022 — but nothing is confirmed as yet.

Commenting in a statement, CMA CEO Andrea Coscelli said:

“We have always been clear that Google’s efforts to protect users’ privacy cannot come at the cost of reduced competition.

That’s why we have worked with the Information Commissioner’s Office, the CMA’s international counterparts and parties across this sector throughout this process to secure an outcome that works for everyone.

We welcome Google’s co-operation and are grateful to all the interested parties who engaged with us during the consultation.

If accepted, the commitments we have obtained from Google become legally binding, promoting competition in digital markets, helping to protect the ability of online publishers to raise money through advertising and safeguarding users’ privacy.”

More market reassurance

In general, the expanded commitments look intended to offer a greater level of reassurance to the market that Google will not be able to exploit loopholes in regulatory oversight of the Sandbox to undo the intended effect of addressing competition risks and privacy concerns.

Notably, Google has agreed to appoint a CMA-approved monitoring trustee — as one of the additional measures it’s suggesting to improve the provisions around reporting and compliance.

It will also dial up reporting requirements, agreeing to ensure that the CMA’s role and the regulator’s ongoing process — which the CMA now suggests should continue for a period of six years — are mentioned in its “key public announcements”; and to regular (quarterly) reporting to the CMA on how it is taking account of third party views as it continues building out the tech bundle.

Transparency around testing is also being beefed up.

On that, there have been instances in recent months where Google staffers have not been exactly forthcoming in articulating to the market the details of feedback related to the Origin Trial of its FLoC technology, for example. So it’s notable that another highlighted change requires Google to instruct its staff not to make claims to customers which contradict the commitments.

Another concern reflected in the revisions is market participants’ worry that Google could remove functionality or information before the full Privacy Sandbox changes are implemented — hence it has offered to delay enforcement of its Privacy Budget proposal and made commitments around the introduction of measures to reduce access to IP addresses.

We understand that concerns from market participants also covered Google removing other functionality — such as the user agent string — and that strengthened commitments are intended to address those wider worries too.

Self-preferencing requirements have also been dialled up. And the revised commitments include clarifications on the internal limits on the data that Google can use — and monitoring those elements will be a key focus for the trustee.

The period of active oversight by the CMA has also been extended vs the earlier plan — to six years from the date of any decision to accept Google’s modified commitments (up from around five).

This means that if the CMA agrees to the commitments next year they could be in place until 2028. And by then the UK expects to have reformed its competition rules for tech giants, with bespoke requirements for platforms deemed to have ‘strategic market status’ overseen by a dedicated Digital Markets Unit within the CMA.

In its own blog post, Google condenses the revised commitments thus:

  1. Monitoring and reporting. We have offered to appoint an independent Monitoring Trustee who will have the access and technical expertise needed to ensure compliance.
  2. Testing and consultation. We have offered the CMA more extensive testing commitments, along with a more transparent process to take market feedback on the Privacy Sandbox proposals.
  3. Further clarity on our use of data. We are underscoring our commitment not to use Google first-party personal data to track users for targeting and measurement of ads shown on non-Google websites. Our commitments would also restrict the use of Chrome browsing history and Analytics data to do this on Google or non-Google websites.

As with the earlier set of pledges, it has agreed to apply the additional commitments globally — assuming the package gets accepted by the UK regulator.

So the UK regulator continues to play a central role in shaping how key web infrastructure evolves.

Google’s blog post also makes reference to an opinion published yesterday by the UK’s information commissioner — which urged the adtech industry to move away from current tracking and profiling methods of ad targeting.

“We also support the objectives set out yesterday in the ICO’s Opinion on Data protection and privacy expectations for online advertising proposals, including the importance of supporting and developing privacy-safe advertising tools that protect people’s privacy and prevent covert tracking,” Google noted.

This summer Google announced a delay to its earlier timeline for the deprecation of tracking cookies — saying support wouldn’t start being phased out in Chrome until the second half of 2023.

There is no suggestion from the tech giant at this point of any additional delay to that timeline — assuming it gets the regulatory greenlight to go ahead.

UK privacy watchdog warns adtech the end of tracking is nigh

It’s been well over two years since the UK’s data protection watchdog warned the behavioural advertising industry it’s wildly out of control.

The ICO hasn’t done anything to stop the systematic unlawfulness of the tracking and targeting industry abusing Internet users’ personal data to try to manipulate their attention — not in terms of actually enforcing the law against offenders and stopping what digital rights campaigners have described as the biggest data breach in history.

Indeed, it’s being sued over inaction against real-time-bidding’s misuse of personal data by complainants who filed a petition on the issue all the way back in September 2018.

But today the UK’s (outgoing) information commissioner, Elizabeth Denham, published an opinion — in which she warns the industry that its old unlawful tricks simply won’t do in the future.

New methods of advertising must be compliant with a set of what she describes as “clear data protection standards” in order to safeguard people’s privacy online, she writes.

Among the data protection and privacy “expectations” Denham suggests she wants to see from the next wave of online ad technologies are:

• engineer data protection requirements by default into the design of the initiative;

• offer users the choice of receiving adverts without tracking, profiling or targeting based on personal data;

• be transparent about how and why personal data is processed across the ecosystem and who is responsible for that processing;

• articulate the specific purposes for processing personal data and demonstrate how this is fair, lawful and transparent;

• address existing privacy risks and mitigate any new privacy risks that their proposal introduces.

Denham says the goal of the opinion is to provide “further regulatory clarity” as new ad technologies are developed, further specifying that she welcomes efforts that propose to:

• move away from the current methods of online tracking and profiling practices;

• improve transparency for individuals and organisations;

• reduce existing frictions in the online experience;

• provide individuals with meaningful control and choice over the processing of device information and personal data;

• ensure valid consent is obtained where required;

• ensure there is demonstrable accountability across the supply chain.

The timing of the opinion is interesting — given an impending decision by Belgium’s data protection agency on a flagship ad industry consent gathering tool. (And current UK data protection rules share the same foundation as the rest of the EU, as the country transposed the General Data Protection Regulation into national law prior to Brexit.)

Earlier this month IAB Europe warned that it expects to be found in breach of the EU’s General Data Protection Regulation, and that its so-called ‘transparency and consent’ framework (TCF) hasn’t managed to achieve either of the things claimed on the tin.

But this is also just the latest ‘reform’ missive from the ICO to rule-breaking adtech.

And Denham is merely restating requirements that are derived from standards that already exist in UK law — and wouldn’t need reiterating had her office actually enforced the law against adtech breache(r)s. But this is the regulatory dance she has preferred.

This latest ICO salvo looks more like an attempt by the outgoing commissioner to claim credit for wider industry shifts as she prepares to leave office — such as Google’s slow-mo shift toward phasing out support for third party cookies (aka, its ‘Privacy Sandbox’ proposal, which is actually a response to evolving web standards such as competing browsers baking in privacy protections; rising consumer concern about online tracking and data breaches; and a big rise in attention on digital matters from lawmakers) — than it is about actually moving the needle on unlawful tracking.

If Denham wanted to do that she could have taken actual enforcement action long ago.

Instead the ICO has opted for — at best — a partial commentary on embedded adtech’s systematic compliance problem. And, essentially, to stand by as the breach continues; and wait/hope for future compliance.

Change may be coming regardless of regulatory inaction, however.

And, notably, Google’s ‘Privacy Sandbox’ proposal (which claims ‘privacy safe’ ad targeting of cohorts of users, rather than microtargeting of individual web users) gets a significant call-out in the ICO’s remarks — with Denham’s office writing in a press release: “Currently, one of the most significant proposals in the online advertising space is the Google Privacy Sandbox, which aims to replace the use of third party cookies with alternative technologies that still enable targeted digital advertising.”

“The ICO has been working with the Competition and Markets Authority (CMA) to review how Google’s plans will safeguard people’s personal data while, at the same time, supporting the CMA’s mission of ensuring competition in digital markets,” the ICO goes on, giving a nod to ongoing regulatory oversight, led by the UK’s competition watchdog, which has the power to prevent Google’s Privacy Sandbox ever being implemented — and therefore to stop Google phasing out support for tracking cookies in Chrome — if the CMA decides the tech giant can’t do it in a way that meets competition and privacy criteria.

So this reference is also a nod to a dilution of the ICO’s own regulatory influence in a core adtech-related arena — one that’s of market-reforming scale and import.

The backstory here is that the UK government has been working on a competition reform that will bring in bespoke rules for platform giants considered to have ‘strategic market status’ (and therefore the power to damage digital competition); with a dedicated Digital Markets Unit already established and up and running within the CMA to lead the work (but which is still pending being empowered by incoming UK legislation).

So the question of what happens to ‘old school’ regulatory silos (and narrowly-focused regulatory specialisms) is a key one for our data-driven digital era.

Increased cooperation between regulators like the ICO and the CMA may pave the way for oversight that’s more converged, or even merged — to ensure powerful digital technologies don’t fall between regulatory cracks — and therefore that the ball isn’t so spectacularly dropped on vital issues like ad tracking in the future.

Intersectional digital oversight FTW?

As for the ICO itself, there is a further sizeable caveat in that Denham is not only on the way out (ergo her “opinion” naturally has a short shelf life) but the UK government is busy consulting on ‘reforms’ to the UK’s data protection rules.

Said reforms could see a major downgrading of domestic privacy and data protections; and even legitimize abusive ad tracking — if ministers, who seem more interested in vacuous soundbites (about removing barriers to “innovation”), end up ditching legal requirements to ask Internet users for consent to do stuff like track and profile them in the first place, per some of the proposals.

So the UK’s next information commissioner, John Edwards, may have a very different set of ‘data rules’ to apply.

And — if that’s the case — Denham will, in her roundabout way, have helped make sliding standards happen.

Europe offers tepid set of political ads transparency rules

It’s been almost a year since the EU’s executive announced it would propose rules for political ads transparency in response to concern about online microtargeting and big data techniques making mincemeat of democratic integrity and accountability.

Today it’s come out with its proposal. But frankly it doesn’t look like the wait was worth it.

The Commission’s PR claims the proposal will introduce “strict conditions for targeting and amplifying” political advertising using digital tools — including what it describes as a ban on targeting and amplification that use or infer “sensitive personal data, such as ethnic origin, religious beliefs or sexual orientation”.

However the claimed ‘ban’ does not apply if “explicit consent” is obtained from the person whose sensitive data is to be exploited to better target them with propaganda — and online ‘consents’ to ad targeting are already a total trashfire of non-compliance in the region.

So it’s not clear why the Commission believes politically vested interests hell-bent on influencing elections are going to play by a privacy rule-book that almost no online advertisers operating in the region currently do, even the ones that are only trying to get people to buy useless plastic trinkets or ‘detox’ teas.

In a Q&A offering further detail on the proposal, the Commission lists a set of requirements that it says anyone making use of political targeting and amplification will need to comply with, which includes having an internal policy on the use of such techniques; maintaining records of the targeting and use of personal data; and recording the source of said personal data — so at best it seems to be hoping to burden propagandists with the need to create and maintain a plausible paper trail.

Because it is also allowing a further carve-out to allow for political targeting — writing: “Targeting could also be allowed in the context of legitimate activities of foundations, associations or not-for-profit bodies with a political, philosophical, religious or trade union aim, when it targets their own members.”

This is incredibly vague. A “foundation” or an “association” with a political “aim” sounds like something any campaign group or vested interest could set up — i.e. to carry on the “legitimate” activity of (behaviorally?) targeting propaganda at voters.

In short, the scope for loopholes for political microtargeting — including via the dissemination of disinformation — looks massive.

On scope, the Commission says it wants the incoming rules to apply to “ads by, for or on behalf of a political actor” as well as “so called” issue-based ads — aka politically charged issues that can be a potent proxy to sway voters — which it notes are “liable to influence the outcome of an election or referendum, a legislative or regulatory process or voting behaviour”.

But how exactly the regulation will define ads that fall in and out of scope remains to be seen.

Perhaps the most substantial measure of a very thin proposal is around transparency — where the Commission has proposed “transparency labels” for paid political ads.

It says these must be “clearly labelled” and provide “a set of key information” — including the name of the sponsor “prominently displayed and an easily retrievable transparency notice”; along with the amount spent on the political advertisement; the sources of the funds used; and a link between the advertisement and the relevant elections or referenda.

However, again, the Commission appears to be hoping that a few transparency requirements will enforce a sea change on an infamously opaque, fraud-filled industry that has been fuelled by rampant misuse and unlawful exploitation of people’s data, rather than cutting off the head of the hydra by actually curbing targeting, such as by limiting political targeting to broad-brush contextual buckets.

Hence it writes: “All political advertising services, from adtech that intermediate the placement of ads, to consultancies and advertising agencies producing the advertising campaigns, will have to retain the information they have access to through the provision of their service about the ad, the sponsor and the dissemination of the ad. They will have to transfer this information to the publisher of the political ad — this can be the website or app where the ad is seen by an individual, a newspaper, a TV broadcaster, a radio station, etc. The publisher will need to make the information available to the individual who sees the ad.”

“Transparency of political advertising will help people understand when they see a paid political advertisement,” the Commission further suggests, adding: “With the proposed rules, every political advertisement – whether on Twitter, Facebook or any other online platform – will have to be clearly marked as political advertisement as well as include the identity of the sponsor and a transparency notice with the wider context of the political advertisement and its aims, or a clear indication of where it can be easily retrieved.”

It’s a nice theory but, for one thing, plenty of election interference originates from outside the region where the election itself is taking place.

On that the Commission says it will require organisations that provide political advertising services in the EU but do not have a physical presence there to designate a legal representative in one of the Member States where the services are offered, suggesting: “This will ensure more transparency and accountability of services providers acting from outside the Union.”

How exactly it will require (and enforce) that stipulation isn’t clear.

Another problem is that all these transparency obligations will only apply to “political advertising services”.

Propaganda that gets uploaded to online platforms like Facebook by a mere “user” — aka an entity that does not self-identify as a political advertising service — will apparently escape the need for any transparency accountability at all.

Even if they’re — y’know — working out of a Russian trollfarm that’s actively trying to destabilize the European Union… Just so long as they claim to be ‘Hans, 32, Berliner, loves cats, hates the CSU’.

Now if platforms like Facebook were perfectly great at identifying, reporting and purging inauthentic activity, fake accounts and shady influence ops in their own backyards it might not be such a problem to leave the door open for “a user” to post unaccountable political propaganda. But a whole clutch of whistleblowers have pointed out, in excruciating detail, that Facebook at least is very much not that.

So that looks like another massive loophole — one which underlines why the only genuine way to fix the problem of online disinformation and election interference is to put an end to behavioral targeting, period, rather than just fiddling around the edges. Not least because by fiddling with some tepid measures that will offer only a flawed, partial transparency you risk lulling people into a false sense of security — as well as further normalizing exploitative manipulation (just so long as you have a ‘policy’ in place).

Once online ads and content can be targeted at individuals based on tracking their digital activity and harvesting their personal data for profiling, it’s open season for opaque InfluenceOps and malicious interests to work around whatever political ads transparency rules you try to layer on top of the cheap, highly scalable tools offered by advertising giants like Facebook to keep spreading their propaganda — at the expense of your free and fair elections.

Really what this regulation proposes is to create a large admin burden for advertisers who intend to run genuinely public/above board political campaigns — leaving the underbelly of paid mud slingers, hate spreaders and disinformation peddlers to exploit its plentiful loopholes to run mass manipulation campaigns right through it.

So it will be interesting to see whether the European Parliament takes steps to school the Commission by adding some choice amendments to its draft — as MEPs have been taking a stronger line against microtargeting in recent months.

On penalties, for now, under the Commission proposal, ‘official’ advertising services could be fined for breaking things like the transparency and record-keeping requirements but how much will be determined locally, by Member States — at a level the Commission says should be “effective, proportionate and dissuasive”.

What might that mean? Well under the proposal, national Data Protection Authorities (DPAs) will be responsible for monitoring the use of personal data in political targeting and for imposing fines — so, ultimately, for determining the level of fines that domestic rule-breaking political operators might face.

Which does not exactly inspire a whole lot of confidence. DPAs are, after all, resourced by the same set of political entities — or whichever flavor happens to be in government.

The UK’s ICO carried out an extensive audit of political parties’ data processing activities following the 2018 Cambridge Analytica Facebook data misuse scandal — and in 2020 it reported finding a laundry list of failures across the political spectrum.

So what did the EU’s (at the time) best resourced DPA do about all these flagrant breaches by UK political parties?

The ICO’s enforcement action at that point consisted of — checks notes — issuing a series of recommendations.

There was also a warning that it might take further action in the future. And this summer the ICO did issue one fine: Slapping the Conservative Party with a £10,000 penalty for spamming voters. Which doesn’t really sound very dissuasive tbh.

Earlier this month another of these UK political data offenders, the Labour Party, was forced to fess up to what it dubbed a “data incident” — involving an unnamed third party data processor. It remains to be seen what sanction it may face for failing to protect supporters’ information in that (post-ICO-audit) instance.

Adtech generally has also faced very little enforcement from EU DPAs — despite scores of complaints against its privacy-eviscerating targeting methods — and despite the ICO saying back in 2019 that its methods are rampantly unlawful under existing data protection law.

Vested interests in Europe have been incredibly successful at stymieing regulatory enforcement against invasive ad targeting.

And, apparently, also derailing progress by defanging incoming EU rules — so they won’t do anything much to stop the big-data ‘sausage-factory’ of (in this case) political microtargeting from keeping on slicing ‘n’ dicing up the eyeballs of the citizenry.

European Parliament’s IMCO backs limits on tech giants’ ability to run tracking ads

In what looks like bad news for adtech giants like Facebook and Google, MEPs in the European Parliament have voted for tougher restrictions on how Internet users’ data can be combined for ad targeting purposes — backing a series of amendments to draft legislation that’s set to apply to the most powerful platforms on the web.

The Internal Market and Consumer Protection Committee (IMCO) today voted overwhelmingly to support beefed up consent requirements on the use of personal data for ad targeting within the Digital Markets Act (DMA); and for a complete prohibition on the biggest platforms being able to process the personal data of minors for commercial purposes — such as marketing, profiling or behaviorally targeted ads — to be added to the draft legislation.

The original Commission proposal for the DMA was notably weak in the area of surveillance business models — with the EU’s executive targeting the package of measures at other types of digital market abuse, such as self-preferencing and unfair T&Cs for platform developers, which its central competition authority was more familiar with.

“The text says that a gatekeeper shall, ‘for its own commercial purposes, and the placement of third-party advertising in its own services, refrain from combining personal data for the purpose of delivering targeted or micro-targeted advertising’, except if there is a ‘clear, explicit, renewed, informed consent’, in line with the General Data Protection Regulation,” IMCO writes in a press release. “In particular, personal data of minors shall not be processed for commercial purposes, such as direct marketing, profiling and behaviourally targeted advertising.”

It’s fair to say that adtech giants are masters of manipulating user consent at scale — through the use of techniques like A/B testing and dark pattern design — so beefed up consent requirements (for adults) aren’t likely to offer as much of a barrier against ad-targeting abuse as the committee seems to think they might.

Although if Facebook was finally forced to offer an actual opt-out of tracking ads that would still be a major win (as it doesn’t currently give users any choice over being surveilled and profiled for ads).

However the stipulation that children should be totally protected from commercial stuff like profiling and behavioral ads is potentially a lot more problematic for the likes of Facebook and Google — given the general lack of robust age assurance across the entire Internet.

It suggests that if this partial prohibition makes it into EU law, adtech platforms may end up deciding it’s less legally risky to turn off tracking-based ads altogether (in favor of using alternatives that don’t require processing users’ personal data, such as contextual targeting) vs trying to correctly age verify their entire user base in order to firewall only minors’ eyeballs from behavioral ads.

At the very least, such a ban could present big (ad)tech with a compliance headache — and more work for their armies of in-house lawyers — though MEPs have not proposed to torpedo their entire surveillance business model at this juncture.

In recent months a number of parliamentarians have been pushing for just that: An outright ban on tracking-based advertising, period, to be included, as an amendment, in another pan-EU digital regulation that’s yet to be voted on by the committee (aka the Digital Services Act; DSA).

However IMCO does not look likely to go so far in amending either legislative package — despite a call this week by the European Data Protection Board for the bloc to move towards a total ban on behavioral ads, given the risks posed to citizens’ fundamental rights.

Digital Markets Act

The European Parliament is in the process of finalizing its negotiating mandate on one of the aforementioned digital reforms — aka, the DMA — which is set to apply to Internet platforms that have amassed market power by occupying a so-called ‘gatekeeping’ role as online intermediaries, typically giving them a high degree of market leverage over consumers and other digital businesses.

Critics argue this can lead to abusive behaviors that negatively impact consumers (in areas like privacy) — while also chilling fair competition and impeding genuine innovation (including in business models).

For this subset of powerful platforms, the DMA — which was presented as a legislative proposal at the end of last year — will apply a list of pre-emptive ‘dos and don’ts’ in an attempt to rebalance digital markets that have become dominated by a handful of (largely) US-based giants.

EU lawmakers argue the regulation is necessary to respond to evidence that digital markets are prone to tipping and unfair practices as a result of asymmetrical dynamics such as network effects, big data and ‘winner takes all’ investor strategies.

Under the EU’s co-legislative process, once the Commission proposes legislation the European Parliament (consisting of directly elected MEPs) and the Council (the body that represents Member States’ governments) must adopt their own negotiating mandates — and then attempt to reach consensus — meaning there’s always scope for changes to the original draft, as well as a long period where lobbying pressure can be brought to bear to try to influence the final shape of the law.

The IMCO committee vote this morning will be followed by a plenary vote in the European Parliament next month to confirm MEPs’ negotiating mandate — before the baton passes to the Council next year. There trilogue negotiations, between the Parliament, Commission and Member States’ governments, are slated to start under the French presidency in the first semester of 2022. Which means more jockeying, horse-trading and opportunities for corporate lobbying lie ahead. And (likely) many months before any vote to approve a final DMA text.

Still, MEPs’ push to strengthen the tech giant-targeting package is notable nonetheless.

A second flagship digital update, the DSA, which will apply more broadly to digital services — dealing with issues like illegal content and algorithmic recommendations — is still being debated by MEPs and committee votes like IMCO’s remain outstanding.

So the DMA has passed through parliamentary debate relatively quickly (vs the DSA), suggesting there’s political consensus (and appetite) to rein in tech giants.

In its press release summarizing the DMA amendments, rapporteur Andreas Schwab (of the EPP political grouping; Germany) made this point, loud and clear, writing: “The EU stands for competition on the merits, but we do not want bigger companies getting bigger and bigger without getting any better and at the expense of consumers and the European economy. Today, it is clear that competition rules alone cannot address all the problems we are facing with tech giants and their ability to set the rules by engaging in unfair business practices. The Digital Markets Act will rule out these practices, sending a strong signal to all consumers and businesses in the Single Market: rules are set by the co-legislators, not private companies!”

In other interesting tweaks, the committee has voted to expand the scope of the DMA — to cover not just online intermediation services, social networks, search engines, operating systems, online advertising services, cloud computing, and video-sharing services (i.e. where those platforms meet the relevant criteria to be designated “gatekeepers”) — but also add in web browsers (hi Google Chrome!), virtual assistants (Ok Google; hey Siri!) and connected TV (hi, Android TV) too.

On gatekeeper criteria, MEPs backed an increase in the quantitative thresholds for a company to fall under scope — to €8 billion in annual turnover in the European Economic Area; and a market capitalisation of €80 billion.

The sorts of tech giants who would qualify — based on that turnover and market cap alone (NB: other criteria would also apply) — include the usual suspects of Apple, Amazon, Meta (Facebook), Google, Microsoft etc but also — potentially — the European booking platform, Booking.com.

Although the raised threshold may keep another European gatekeeper, music streaming giant Spotify, out of scope.

MEPs also supported additional criteria for a platform to qualify as a gatekeeper and fall under the scope of the DMA: namely, providing a “core platform service” in at least three EU countries, and having at least 45M monthly end users and 10,000+ business users. The committee also stressed that these thresholds would not prevent the Commission from designating other companies as gatekeepers “when they meet certain conditions”.

In other changes, the committee backed adding new provisions around the interoperability of services, such as for number-independent interpersonal communication services and social network services.

And — making an intervention on so-called ‘killer acquisitions’ — MEPs voted for the Commission to have powers to impose “structural or behavioural remedies” where gatekeepers have engaged in systematic non-compliance.

“The approved text foresees in particular the possibility for the Commission to restrict gatekeepers from making acquisitions in areas relevant to the DMA in order to remedy or prevent further damage to the internal market. Gatekeepers would also be obliged to inform the Commission of any intended concentration,” they note on that.

The committee backed a centralized enforcement role for the Commission — while adding some clarifications around the role of national competition authorities.

Failures of enforcement have been a major bone of contention around the EU’s flagship data protection regime, the GDPR, which allows for enforcement to be devolved to Member States but also for forum shopping and gaming of the system — as a couple of EU countries have outsized concentrations of tech giants on their soil and have been criticized as bottlenecks to effective GDPR enforcement.

(Only today, for example, Ireland’s Data Protection Commission has been hit with a criminal complaint accusing it of procedural blackmail in an attempt to gag complainants in a way that benefits tech giants like Facebook… )

On sanctions for gatekeepers which break the DMA rules, MEPs want the Commission to be able to impose fines of “not less than 4% and not exceeding 20%” of total worldwide turnover in the preceding financial year — which, applied to adtech giants Facebook’s and Google’s full year 2020 revenues, would allow for theoretical sanctions in the $3.4BN-$17.2BN and $7.2BN-$36.3BN ranges, respectively.
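As a rough sanity check on those ranges, here’s a minimal sketch of the arithmetic. The FY2020 revenue figures used below are assumptions inferred from the ranges quoted above, not official company filings, so the output matches only up to rounding.

```python
# Back-of-envelope check of the proposed DMA fine bracket (4% to 20% of
# total worldwide turnover in the preceding financial year).
# The revenue figures are assumptions inferred from the article's own
# quoted ranges, not official filings.

def dma_fine_range(worldwide_turnover_bn: float) -> tuple[float, float]:
    """Return the (minimum, maximum) theoretical fine in $BN."""
    return 0.04 * worldwide_turnover_bn, 0.20 * worldwide_turnover_bn

for company, revenue_bn in [("Facebook", 86.0), ("Google", 181.5)]:
    low, high = dma_fine_range(revenue_bn)
    print(f"{company}: ${low:.1f}BN to ${high:.1f}BN")
# Roughly $3.4BN-$17.2BN for Facebook and $7.3BN-$36.3BN for Google.
```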

Which would be a significant step up on the sorts of regulatory sanctions tech giants have faced to date in the EU.

Facebook has yet to face any fines under GDPR, for example — over three years since it came into application, despite facing numerous complaints. (Although Facebook-owned WhatsApp was recently fined $267M for transparency failures.)

While Google received an early $57M GDPR fine from France before it moved users to fall under Ireland’s legal jurisdiction — where its adtech has been under formal investigation since 2019 (without any decisions/sanctions as yet).

Mountain View has also faced a number of penalties elsewhere in Europe, though — with France again leading the charge and slapping Google with a $120M fine for dropping tracking cookies without consent (under the EU ePrivacy Directive) last year.

France’s competition watchdog has also gone after Google — issuing a $268M penalty this summer for adtech abuses and a $592M sanction (also this summer) related to requirements to negotiate licensing fees with news publishers over content reuse.

It’s interesting to imagine such stings as a mere amuse-bouche compared to the sanctions EU lawmakers want to be able to hand out under the DMA.

Facebook’s lead EU privacy supervisor hit with corruption complaint

Facebook’s problems with European privacy law could be about to get a whole lot worse. But ahead of what may soon be a major (and long overdue) regulatory showdown over the legality of its surveillance-based business model, Ireland’s Data Protection Commission (DPC) is facing a Facebook-shaped problem of its own: It’s now the subject of a criminal complaint alleging corruption and even bribery in the service of covering its own backside (we paraphrase) and shrinking the public’s understanding of the regulatory problems facing Facebook’s business.

European privacy campaign group noyb has filed the criminal complaint against the Irish DPC, which is Facebook’s lead regulator in the EU for data protection.

noyb is making the complaint under Austrian law — reporting the Irish regulator to the Austrian Office for the Prosecution of Corruption (aka WKStA) after the DPC sought to use what noyb terms “procedural blackmail” to try to gag it and prevent it from publishing documents related to General Data Protection Regulation (GDPR) complaints against Facebook.

The not-for-profit alleges that the Irish regulator sought to pressure it to sign an “illegal” non-disclosure agreement (NDA) in relation to a public procedure — its complaint argues there is no legal basis for such a requirement — accusing the DPC of seeking to coerce it into silence, as Facebook would surely wish, by threatening not to comply with its regulatory duty to hear the complainant unless noyb signed the NDA. Which is quite the (alleged) quid-pro-quo.

“The DPC acknowledges that it has a legal duty to hear us but it now engaged in a form of ‘procedural coercion’,” said noyb chair, Max Schrems, in a statement. “The right to be heard was made conditional on us signing an agreement, to the benefit of the DPC and Facebook. It is nothing but an authority demanding to give up the freedom of speech in exchange for procedural rights.”

The regulator has also demanded noyb remove documents it has previously made public — related to the DPC’s draft decision of a GDPR complaint against Facebook — again without clarifying what legal basis it has to make such a demand.

As noyb points out, it is based in Austria, not Ireland — so is subject to Austrian law, not Irish law. But, regardless, even under Irish law it argues there’s no legal duty for parties to keep documents confidential — pointing out that Section 26 of the Irish Data Protection Act, which was cited by the DPC in this matter, only applies to DPC staff (“relevant person”), not to parties.

“Generally we have very good and professional relationships with authorities. We have not taken this step lightly, but the conduct of the DPC has finally crossed all red lines. They basically deny us all our rights to a fair procedure unless we agree to shut up,” added Schrems.

He went on to warn that “Austrian corruption laws are far reaching” — and to further emphasize: “When an official requests the slightest benefit to conduct a legal duty, the corruption provisions may be triggered. Legally there is no difference between demanding an unlawful agreement or a bottle of wine.”

All of which looks exceptionally awkward for the Irish regulator. Which already, let’s not forget — at the literal start of this year — agreed to “swiftly” finalize another fractious complaint made by Schrems, this one relating to Facebook’s EU-US data transfers, and which dates all the way back to 2013, following noyb bringing a legal procedure.

(But of course there’s still no sign of a DPC resolution of that Facebook complaint either… So, uhhh, ‘Siri: Show me regulatory capture’… )

Last month noyb published a draft decision by the DPC in relation to another (slightly less vintage) complaint against Facebook — which suggested the tech giant’s lead EU data regulator intended not to challenge Facebook’s attempt to use an opaque legal switch to bypass EU rules (by claiming that users are actually in a contract with it to receive targeted ads, ergo GDPR consent requirements do not apply).

The DPC had furthermore suggested a wrist-slap penalty of $36M — for Facebook failing transparency requirements over the aforementioned ‘ad contract’.

That decision remains to be finalized because — under the GDPR’s one-stop-shop mechanism for deciding cross-border complaints — other EU DPAs have a right to object to a lead supervisor’s preliminary decision and can force a different outcome. Which is what noyb is suggesting may be about to happen vis-a-vis this particular Facebook complaint saga.

Winding back slightly, despite the EU’s GDPR being well over three years old (in technical application terms), the DPC has yet to make a single final finding against Facebook proper.

So far it’s only managed one decision against Facebook-owned WhatsApp — which resulted in an inflated financial penalty for transparency failures by the messaging platform after other EU DPAs intervened to object to a (similarly) low-ball draft sanction Ireland had initially suggested. In the end WhatsApp was hit with a fine of $267M — also for breaching GDPR transparency obligations. A notable increase on the DPC’s offer of a fine of up to $56M.

The tech giant is appealing that penalty — but has also said it will be tweaking its privacy policy in Europe in the meanwhile. So it’s a (hard won) win for European privacy advocates — for now.

The WhatsApp GDPR complaint is just the tip, of course. The DPC has been sitting, hen-like, on a raft of data protection complaints against Facebook and other Facebook-owned platforms — including several filed by noyb on the very day the regulation came into technical application all the way back in May 2018.

These ‘forced consent’ complaints by noyb strike at the heart of the headlock Facebook applies to users by not offering them an opt-out from tracking based advertising. Instead the ‘deal’ Facebook (now known as Meta) offers is a take-it or leave-it ‘choice’ — either accept ads or delete your account — despite the GDPR setting a robust standard for what can legally constitute consent that states it must be specific, informed and freely given.

Arm twisting is not allowed. Yet Facebook has been twisting Europeans’ arms before and since the GDPR, all the same.

So the ‘forced consent’ complaints, if they do ever actually get enforced, have the potential to purge the tech giant’s surveillance-based business model once and for all. As, perhaps, does the vintage EU-US data transfers issue. (Certainly it would crank up Facebook’s operational costs if it had to federate its service so that Europeans’ data was stored and processed within the EU to address the risk of US government mass surveillance.)

However, per the draft DPC decision on the forced consent issue, published (by noyb) last month, the Irish regulator appeared to be preparing to (at best) sidestep the crux question of the legality of Facebook’s data mining, writing in a summary: “There is no obligation on Facebook to seek to rely solely on consent for the purposes of legitimising personal data processing where it is offering a contract to a user which some users might assess as one that primarily concerns the processing of personal data. Nor has Facebook purported to rely on consent under the GDPR.”

noyb has previously accused the DPC of holding secret meetings with Facebook around the time it came up with the claimed consent bypass and just as the GDPR was about to come into application, implying the regulator was seeking to support Facebook in finding a workaround for EU law.

The not-for-profit also warned last month that if Facebook’s relabelling “trick” (i.e. switching a claim of ‘consent’ to a claim of ‘contract’) were to be accepted by EU regulators it would undermine the whole of the GDPR — making the much lauded data protection regime trivially easy for data-mining giants to bypass.

Likewise, noyb argues, had it signed the NDA demanded by the DPC it would have “greatly benefited Facebook”.

It would also have helped the DPC by keeping a lid on the awkward detail of lengthy and labyrinthine proceedings, at a time when the regulator is facing rising heat over its inaction against big tech, including from lawmakers on home soil. (Some of whom are now pushing for reform of the Data Protection Commission, including the suggestion that more commissioners should be recruited to remove sole decision-making power from the current incumbent, Helen Dixon.)

“The DPC is continuously under fire by other DPAs, in public inquiries and the media. If an NDA would hinder noyb’s freedom of speech, the DPC’s reputational damage could be limited,” noyb suggests in a press release, before going on to note that had it been granted a benefit by signing an NDA (“in direct exchange for the DPC to conduct its legal duties”) its own staff could have potentially committed a crime under the Austrian Criminal Act.

The not-for-profit instead opted to dial up publicity — and threaten a little disinfecting sunlight — by filing a criminal complaint with the Austrian Office for the Prosecution of Corruption.

It’s essentially telling the DPC to put up a legal defence of its procedural gagging attempts — or, well, shut up.

Here’s Schrems again: “We very much hope that Facebook or the DPC will file legal proceedings against us, to finally clarify that freedom of speech prevails over the scare tactics of a multinational and its taxpayer-funded minion. Unfortunately we must expect that they know themselves that they have no legal basis to take any action, which is why they reverted to procedural blackmail in the first place.”

Nor is noyb alone in receiving correspondence from the DPC that seeks to apply swingeing confidentiality clauses to complainants. TechCrunch has reviewed correspondence sent to the regulator earlier this fall by another complainant, who queries the legal basis for its request to gag disclosure of correspondence and draft reports.

Despite repeated requests for clarification, the DPC appears to have entirely failed, over the course of more than a month, to reply with the legal basis for making such a demand.

This suggests noyb’s experience of scare tactics without legal substance is not unique and backs up its claim that the DPC has questions to answer about how it conducts its office.

We’ll be reaching out to the DPC for comment on the allegations it’s facing.

But what about Facebook? noyb’s press release goes on to predict a “tremendous commercial problem” looming for the data-mining giant — as it says DPC correspondence “shows that other European DPAs have submitted ‘relevant and reasoned objections’ and oppose the DPC’s view” [i.e. in the consent bypass complaint against Facebook].

“If the other DPAs have a majority and ultimately overturn the DPC’s draft decision, Facebook could face a legal disaster, as most commercial use of personal data in the EU since 2018 would be retroactively declared illegal,” noyb suggests, adding: “Given that the other DPAs passed Guidelines in 2019 that are very unfavourable to Facebook’s position, such a scenario is highly likely.”

The not-for-profit has more awkward revelations for the DPC and Facebook in the pipe, too.

It says it’s preparing fresh document releases in the coming weeks — related to correspondence from the DPC and/or Facebook — as a “protest” against attempts to gag it and to silence democratic debate about public procedures.

“On each Sunday in advent, noyb will publish another document, together with a video explaining the documents and an analysis why the use of these documents is fully compliant with all applicable laws,” it notes, adding that what it’s billing as the “advent reading” will be published on noyb.eu (“so tune in!”).

So it looks like the next batch of ‘Facebook Papers’ that Meta would really rather you didn’t see will be dropping soon…


EU’s data protection advisor latest to call for ban on tracking ads

The European Data Protection Board (EDPB), an expert steering body which advises EU lawmakers on how to interpret rules wrapping citizens’ personal data, has warned the bloc’s legislators that a package of incoming digital regulations risks damaging people’s fundamental rights without “decisive action” to amend the suite of proposals.

The reference is to draft rules covering digital platform governance and accountability (the Digital Services Act; DSA); proposals for ex ante rules for Internet gatekeepers (the Digital Markets Act; DMA); the Data Governance Act (DGA), which aims to encourage data reuse as an engine for innovation and AI; and the Regulation on a European approach for Artificial Intelligence (AIR), which sets out a risk-based framework for regulating applications of AI.

The EDPB’s analysis further suggests that the package of pan-EU digital rules updates will be hampered by fragmented oversight and legal inconsistencies — potentially conflicting with existing EU data protection law unless clarified to avoid harmfully inconsistent interpretations.

Most notably, in a statement published today following a plenary meeting yesterday, the EDPB makes a direct call for EU legislators to implement stricter regulation of targeted advertising, in favor of alternatives that do not require the tracking and profiling of Internet users, going on to call for lawmakers to consider “a phase-out leading to a prohibition of targeted advertising on the basis of pervasive tracking”.

Furthermore, the EDPB statement urges that the profiling of children for ad targeting should “overall be prohibited”.

As it happens, the European Parliament’s internal market and consumer protection (IMCO) committee was today holding a hearing to discuss targeted advertising, as MEPs consider amendments to the DSA.

There has been a push by a number of MEPs for an outright ban on tracking-based ads to be added to the DSA package — given rising concern about the myriad harms flowing from surveillance-based ads, from ad fraud to individual manipulation and democratic erosion (to name a few).

However MEPs speaking during the IMCO committee hearing today suggested there would not be overall support in the Parliament to ban tracking ads — despite compelling testimony from a range of speakers articulating the harms of surveillance-based advertising and calling out the adtech industry for misleading lobbying on the issue by seeking to conflate targeting and tracking.

While retail lobbyist Ilya Bruggeman did speak up for tracking and profiling, parroting the big adtech platforms’ claim that SMEs rely on privacy-invasive ads, other speakers at the committee session aligned with civil society in challenging that line.

Johnny Ryan, a former adtech industry insider (now a fellow at the ICCL) who has filed numerous GDPR complaints over real-time bidding (RTB) and its rampant misuse of personal data, dubbing it the biggest security breach in history, kicked off his presentation with a pointed debunking of industry spin, telling MEPs that the issue isn’t, as the title of the session had it, “targeted ads”; rather the problem boils down to “tracking-based ads”.

“You can have targeting, without having tracking,” he told MEPs, warning: “The industry that makes money from tracking wants you to think otherwise. So let’s correct that.”

The direction of travel of the European Parliament on behavioral ads (i.e. tracking-based targeting) in relation to another key digital package, the gatekeeper-targeting DMA, also looks like it will eschew a ban for general users in favor of beefing up consent requirements. Which sounds like great news for purveyors of dark pattern design.

That said, MEPs do appear to be considering a prohibition on tracking and profiling of minors for ad targeting — which raises questions of how that could be implemented without robust age verification also being implemented across all Internet services… Which, er, is not at all the case currently — nor in most people’s favored versions of the Internet. (The UK government might like it though.)

So, if that ends up making it into the final version of the DMA, one way for services to comply/shrink their risk (i.e. of being accused of ad-targeting minors) could be for them to switch off tracking ads for all users by default — unless they really have robustly age-verified a specific user is an adult. (So maybe adtech platforms like Facebook would start requiring users to upload a national ID to use their ‘free’ services in this version of the future… )

In light of MEPs’ tentativeness, the EDPB’s intervention looks significant — although the body does not have lawmaking powers itself.

But by urging EU co-legislators to take “decisive action” it’s firing a clear shot across the Council, Parliament and Commission’s bows to screw their courage to the sticking place and avoid the bear-pit of lobbying self-interest; and remember that alternative forms of (contextually targeted) online advertising are available. And profitable.

“Our concerns consist of three categories: (1) lack of protection of individuals’ fundamental rights and freedoms; (2) fragmented supervision; and (3) risks of inconsistencies,” the Board writes in the statement, going on to warn that it “considers that, without further amendments, the proposals will negatively impact the fundamental rights and freedoms of individuals and lead to significant legal uncertainty that would undermine both the existing and future legal framework”.

“As such, the proposals may fail to create the conditions for innovation and economic growth envisaged by the proposals themselves,” it also warns.

The EDPB’s concerns for citizens’ fundamental rights also encompass the Commission’s proposal to regulate high-risk applications of artificial intelligence, with the body saying the draft AI Regulation does not go far enough to prevent the development of AI systems intended to categorize individuals using their biometrics (e.g. facial recognition) according to ethnicity, gender, political or sexual orientation, or other prohibited grounds of discrimination.

“The EDPB considers that such systems should be prohibited in the EU and calls on the co-legislators to include such a ban in the AIR,” it writes. “Furthermore, the EDPB considers that the use of AI to infer emotions of a natural person is highly undesirable and should be prohibited, except for certain well-specified use-cases, namely for health or research purposes, subject to appropriate safeguards, conditions and limits.”

The Board has also reiterated its earlier call for a ban on the use of AI for remote biometric surveillance in public places, following a joint statement with the European Data Protection Supervisor back in June.

MEPs have also previously voted for a ban on remote biometric surveillance.

The Commission proposal offered a very tepid, caveated restriction which has been widely criticized as insufficient.

“[G]iven the significant adverse effect for individuals’ fundamental rights and freedoms, the EDPB reiterates that the AIR should include a ban on any use of AI for an automated recognition of human features in publicly accessible spaces — such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals — in any context,” the Board writes in the statement.

“The proposed AIR currently allows for the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement in certain cases. The EDPB welcomes the recently adopted EP Resolution where the significant risks are highlighted.”

On oversight, the EDPB sounds concerned about data protection bodies being bypassed by the bloc’s ambitious flotilla of digital regulation updates, urging “complementarity in oversight” to enhance legal certainty, as well as emphasizing the need for DPAs to be provided with “sufficient resources to perform these additional tasks”. (A perennial problem in an age of ever bigger data.)

Legal certainty would also be improved by including explicit references to existing data protection legislation (such as the GDPR and ePrivacy Directive), it argues, to avoid the risk of incoming data packages weakening core concepts of the GDPR such as consent to data processing.

“It also creates the risk that certain provisions could be read as deviating from the GDPR or the ePrivacy Directive. Consequently, certain provisions could easily be interpreted in a manner that is inconsistent with the existing legal framework and subsequently lead to legal uncertainty,” the Board warns.

So far from the EU’s much vaunted digital regulation reboot strengthening protections for citizens — to boost their trust in data-driven services — there is, failing some very key amendments, a risk of death by a thousand cuts (and/or regulatory complexity) to foundational fundamental rights, with potentially ruinous consequences for the bloc’s much proclaimed ‘European values’.

 

For those who want to turn travel into dollars, Fora can help start your next career

Travel is back, at least according to experts who say travel bookings over the Thanksgiving holiday are coming in even higher than pre-pandemic levels.

If you love travel, or are that friend who always plans the best vacations, Fora has a tool for you. The company was co-founded by onefinestay’s Evan Frank, Virtuoso agency owner Henley Vazquez and entrepreneur Jake Peters to redefine what we think of as the travel agency and to give people who want to sell travel, whether as a career or in their spare time, a way to do it.

Frank and Vazquez had known each other for five years when the idea for Fora came up.

“We always talked about doing something together, but this past March, we connected again,” Vazquez said. “I was in Costa Rica and talking about opportunities for women sitting on sidelines due to the pandemic and looking for interesting work. Content creators are circulating around the travel world, and we saw a big opportunity for technology to modernize the industry.”

Travel agencies are a $100 billion market in the U.S., but the industry still largely runs on fax machines and phone calls. Fora is one of the latest startups to attract venture capital for pushing more of the travel industry into the digital age while tapping into the creator economy. Another is Thatch, which took in $3 million in August to enable travel creators to monetize their recommendations.

However, Fora’s approach differs from some of its competitors’: it is actually teaching people to sell travel through training sessions, content authoring tools and negotiated rates at the world’s top hotels.

“A lot of companies sell content or expertise, but we saw a bigger opportunity to empower more people to sell travel,” Frank said. “The booking of travel without a system or knowledge is janky and hard. We are focused on tooling to help our advisors be successful themselves.”

Since its soft launch in August, Fora has already booked $2 million in travel, and today, the company announced $5 million in a seed round. It was led by Forerunner Ventures with participation from Heartcore Capital, Uncommon Capital and individual investors, including Katrina Lake, Gokul Rajaram, Ran Makavy and Ben Rubin.

The company decided to go after venture capital to help build out its technology more quickly. Fora plans to use the new investment to onboard new advisors and to fund product development and go-to-market efforts. Its advisor waitlist has grown to more than 2,000 people, and the company will spend the next few months onboarding and training them.

Meanwhile, Brian O’Malley, general partner at Forerunner Ventures, said that his firm was looking at the creator space and found that what will make Fora successful is having advisors with a circle of friends and a passion for travel.

“In our eyes, that fits the entrepreneur next door versus the person on Instagram getting free stuff,” he added. “The cost of customer acquisition is skyrocketing, and Fora is coming in as a ‘Trojan horse’ by getting networks that already existed. It’s also a next-generation platform that is getting people back into the workforce.”

Cross-selling startup Carro secures $20M to connect brands with each other, influencers

Brands are always looking for new ways to get in front of potential customers, and though influencer marketing has grown, it’s still not easy for brands to find the right person.

That’s where Carro comes in. David Perry and Jason Goldberg co-founded the company in 2015 and built a cross-store selling and influencer commerce platform that is used by more than 7 million influencers and 30,000 brands, like Blendjet, Arizona Iced Tea and Chubbies Shorts.

Carro can be installed from the Shopify App Store and enables partnered brands to sell each other’s products without having to purchase inventory or manage logistics and returns. It also has a feature for brands to discover influencers who already love their products so they can build and maintain a relationship.

“When a customer purchases a bike, for example, they will most likely need a helmet, gloves and other accessories at the same time,” CEO Perry told TechCrunch. “We get them to see what things can go together with what you sell. For example, you sell makeup, but not brushes, so you could add them to your store without having to purchase them for your inventory.”

Goldberg noted that Carro got started when influencer marketing was a relatively new term and people had large followings, but not an easy way to get connected to brands. That has now changed: the global influencer marketing market has more than doubled since 2019 and is poised to be valued at $13.8 billion in 2021.

That growth impacted Carro, too. It grew by over 500% in the last year, Perry said. To meet that demand, the company closed on $20 million of Series B funding. It was led by Alpha Edison, which was joined by PayPal Ventures, GC1 Ventures, Corazon Capital and a group of individuals who use Carro.

Nate Redmond, managing partner at Alpha Edison, said in a written statement that the company “presents a compelling new way for brands to grow, by building direct relationships with customers without spending all of their profits on online advertising. We’ve been impressed with the pace at which the Carro team innovates and utilizes deep customer insights to deliver optimal results for customers.”

The new funding gives Carro $30 million in total funding to date. It will be used to grow Carro’s 50-person team and to fund product development and network expansion.

Meanwhile, the company has more than 1 million products in its network and sees 320 million visitors a month. In the past 12 months, over $10 billion was sold by brands through Carro.

“One thing that makes us happy is that when a smaller merchant, who was packing orders in their garage, joins Carro. They get really busy and can supercharge their business without having to save up tons of money for Facebook ads,” Goldberg said.

Conductor raises $150M at a $525M valuation to build out its search-based, organic marketing technology

Conductor — a marketing technology company that was snapped up by WeWork at the height of the latter company’s expansion ambitions, only then to buy itself out in the wake of WeWork’s collapse — has raised its first round of funding as a once-again independent startup. It has picked up $150 million, money that it will be using to continue investing in its technology and building out its business: an organic marketing platform aimed at SEO, content and web marketing teams, leveraging insights from search traffic to help build more accurate marketing strategies.

Conductor’s CEO and co-founder Seth Besmertnik has confirmed to us that the deal — led by Bregal Sagemount, with other investors not being disclosed — was made at a $525 million post-money valuation. Relatively speaking, that is a big leap considering that the management buyout he led for himself and others at the company was made at a price of $3.5 million, according to data from PitchBook.

About half of the investment, he said, would be in the form of secondary shares that are going to the employee-owners of the business; and half is new equity to put into the business.

For those who might not be familiar with Conductor’s backstory, here’s a brief summary, since it’s relevant to what the startup is doing today:

Conductor’s appeal to WeWork back in 2018 was based in part around WeWork already being one of Conductor’s big customers. The highly capitalised WeWork was using Conductor’s marketing technology to figure out what businesses might be looking for when it came to office space, and it made sure that its marketing was aligned with this to drum up more business for WeWork itself. That proved to be a very successful partnership, enough to give WeWork the idea that if it owned Conductor itself, it could leverage its technology for its existing business customers to help them grow their “virtual” real estate presence as much as their physical one, just as WeWork itself had done.

For Conductor, the deal also made sense, Besmertnik said, because Conductor already counted a number of enterprises among its customers, and WeWork potentially could provide another route to it reaching more of them. 

Those developments, as we now know, never quite came to fruition as the two companies thought they might. Luckily, the acquisition never touched Conductor’s pre-existing business. So when it started to become apparent that employees at the division might get laid off as part of WeWork’s drastic cost cuts, Besmertnik and the existing team hatched a plan to buy out the business to give it a shot at survival. When it went independent again in 2019, Conductor left with its customer list intact, and it has built on it since then. Between then and now, clients that it has added to its list include Microsoft, GlaxoSmithKline, and AT&T, with other customers including Visa, Twitter, Comcast and LG, some 450 big names in all.

At its heart, Conductor’s technology would typically be one of many tools (maybe dozens these days) that a marketer would use to both gather and analyse data to formulate and run campaigns. And to that end, the company already integrates with dozens of other sources and platforms to make that management and use easier for customers. (The list includes insights tools like Dragon Metrics, Google Trends, TalkWalker, DeepCrawl, and SEMrush; project management platforms like Jira, Asana, Trello and WorkFront; and measurement tech from Adobe, Google, Webtrends, and more.)

But what is also interesting is that its approach of focusing on search is possibly more relevant today than ever before: not only are there an increasing number of controls being put in place to safeguard data (whether those are regulatory measures like GDPR, or choices being made by platform providers like Apple), but we are also seeing rapid growth of walled gardens in specific apps. Audiences spend a lot of time in gaming environments or social media platforms like TikTok and Instagram, and that gives marketers a lot less visibility when it comes to understanding what users want, or what they are doing.

Search essentially breaks through that, and Conductor’s belief is that it’s a big enough area that it provides a window into those activities and needs.

“No one ever lies to a search engine,” said Besmertnik. “Whenever someone needs to buy something or look something up, they are searching on Google. Big social media companies may be taking eyeballs away from other forms of media but when you want to act or seek something, search is ingrained in our blood. They still go to search engines.”

The traffic going to search is higher than ever, he points out, in part because of Covid-19 and the shift it brought, with more people carrying out their lives online; that has served as a big fillip to Conductor’s business, and specifically to how companies are leveraging search.

“Covid has hit the accelerator on digital for many companies that were not digital-first before,” he said. “You can’t just have a great digital experience now. It has to be discoverable. It has to be able to be found.”

There are a lot of other martech companies out there that have also discovered the power of search, specifically search-engine optimization specialists like SEMrush, Botify, BrightEdge, DeepCrawl and many more. Conductor’s customer list is one way that it has stepped out from the crowd to appear at the top of the results, so to speak.

“At Bregal Sagemount, we pride ourselves on working with market leaders. The feedback we heard from Conductor’s customers and the market was definitive: Conductor is the leader in organic marketing,” said partner Michael Kosty, in a statement. “We are excited to partner with Seth and the whole Conductor team to advance their technology and their mission to empower brands to transform their wisdom into marketing that helps people.” Kosty is joining the board with this round.