Europe seals a deal on tighter rules for digital services

In the small hours local time, European Union lawmakers secured a provisional deal on a landmark update to rules for digital services operating in the region, clinching political agreement after a final late night/early morning of compromise talks on the detail of what is a major retooling of the bloc’s existing ecommerce rulebook.

The political agreement on the Digital Services Act (DSA) paves the way for formal adoption in the coming weeks and for the legislation’s entry into force, likely later this year. The rules won’t start to apply until 15 months after that, though, so there’s a fairly long lead-in time to allow companies to adapt.

The regulation is wide-ranging, setting out to harmonize content moderation and other governance rules to speed up the removal of illegal content and products. It addresses a grab-bag of consumer protection and privacy concerns, and introduces algorithmic accountability requirements for large platforms to dial up societal accountability around their services, while ‘KYC’ (know your customer) requirements are intended to do the same for online marketplaces.

How effective the package will be is of course tbc but the legislation that was agreed today goes further than the Commission proposal in a number of areas, with, for example, the European Parliament pushing to add in limits on tracking-based advertising.

A prohibition on the use of so-called ‘dark patterns’ for online platforms is also included — but not, it appears, a full blanket ban for all types of digital service (per details of the final text shared with TechCrunch via our sources).

See below for a fuller breakdown of what we know so far about what’s been agreed. 

The DSA was presented as a draft proposal by the Commission back in December 2020, which means it’s taken some 16 months of discussion — looping in the other branches of the EU: the directly elected European Parliament and the Council, which represents EU Member States’ governments — to reach this morning’s accord.

After last month’s deal on the Digital Markets Act (DMA), which selectively targets the most powerful intermediating platforms (aka gatekeepers) with an ex ante, pro-competition regime, EU policy watchers may be forgiven for a little euphoria at the (relative) speed with which substantial updates to digital rules are being agreed.

Big Tech’s lobbying of the EU over this period has been of an unprecedented scale in monetary terms. Notably, giants like Google have also sought to insert themselves into the ‘last mile’ stage of discussions where EU institutions are supposed to shut themselves off from external pressures to reach a compromise, as a report published earlier today by Corporate Europe Observatory underlines. That illustrates what they believe is at stake.

The full impact of Google et al’s lobbying won’t be clear for months or even years. But, at the least, Big Tech’s lobbyists were not successful in entirely blocking the passage of the two major digital regulations, so the EU is saved from an embarrassing repeat of the (stalled) ePrivacy update, which may indicate that regional lawmakers are wising up to the tech industry’s tactics. Or, well, that Big Tech’s promises are not as shiny and popular as they used to be.

The Commission’s mantra for the DSA has always been that the goal is to ensure that what’s illegal offline will be illegal online. And in a video message tweeted out in the small hours local time, a tired but happy-looking EVP, Margrethe Vestager, said it’s “not a slogan anymore that what is illegal offline should also be seen and dealt with online”.

“Now it is a real thing,” she added. “Democracy’s back.”

In a statement, Commission president Ursula von der Leyen added:

“Today’s agreement on the Digital Services Act is historic, both in terms of speed and of substance. The DSA will upgrade the ground-rules for all online services in the EU. It will ensure that the online environment remains a safe space, safeguarding freedom of expression and opportunities for digital businesses. It gives practical effect to the principle that what is illegal offline, should be illegal online. The greater the size, the greater the responsibilities of online platforms. Today’s agreement — complementing the political agreement on the Digital Markets Act last month — sends a strong signal: to all Europeans, to all EU businesses, and to our international counterparts.”

In its own press release, the Council called the DSA “a world first in the field of digital regulation”.

While the parliament said the “landmark rules… effectively tackle the spread of illegal content online and protect people’s fundamental rights in the digital sphere”.

In a statement, its rapporteur for the file, MEP Christel Schaldemose, further suggested the DSA will “set new global standards”, adding: “Citizens will have better control over how their data are used by online platforms and big tech-companies. We have finally made sure that what is illegal offline is also illegal online. For the European Parliament, additional obligations on algorithmic transparency and disinformation are important achievements. These new rules also guarantee more choice for users and new obligations for platforms on targeted ads, including bans to target minors and restricting data harvesting for profiling.”

Other EU lawmakers are fast dubbing the DSA a “European constitution for the Internet”. And it’s hard not to see the gap between the EU and the US on comprehensive digital lawmaking as increasingly gaping.

Vestager’s victory message notably echoes encouragement tweeted out earlier this week by the former US secretary of state, senator, first lady and presidential candidate, Hillary Clinton, who urged Europe to get the DSA across the line and “bolster global democracy before it’s too late”, as she put it, adding: “For too long, tech platforms have amplified disinformation and extremism with no accountability. The EU is poised to do something about it.”

DSA: What’s been agreed?

In their respective press releases trumpeting the deal, the parliament and Council have provided an overview of the key elements of the regulation they’ve agreed.

It’s worth emphasizing that the full and final text hasn’t been published yet — and won’t be for a while. It’s pending legal checks and translation into the bloc’s many languages — which means the full detail of the regulation and the implication of all its nuance remains tbc.

But here’s an overview of what we know so far…

Scope, supervision & penalties

On scope, the Council says the DSA will apply to all online intermediaries providing services in the EU.

The regulation’s obligations are intended to be proportionate to the nature of the services concerned and the number of users — with extra, “more stringent” requirements for “very large online platforms” (aka VLOPs) and very large online search engines (VLOSEs).

Services with more than 45M monthly active users in the EU will be considered VLOPs or VLOSEs. So plenty of services will reach that bar — including, for example, the homegrown music streaming giant Spotify.

“To safeguard the development of start-ups and smaller enterprises in the internal market, micro and small enterprises with under 45 million monthly active users in the EU will be exempted from certain new obligations,” the Council adds.

The Commission itself will be responsible for supervising VLOPs and VLOSEs for the obligations that are specific to them, which is intended to avoid bottlenecks in the oversight and enforcement of larger platforms (such as happened with the EU’s GDPR).

But national agencies at the Member State level will supervise the wider scope of the DSA — so EU lawmakers say this arrangement maintains the country-of-origin principle that’s baked into existing digital rules.

Penalties for breaches of the DSA can scale up to 6% of global annual turnover.
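For a rough sense of how the two headline numbers reported so far interact, here’s a minimal illustrative sketch (in Python, using hypothetical company figures; in reality VLOP designation and fining are formal Commission processes, not a simple formula):

```python
# Illustrative sketch only: the DSA's reported headline thresholds as arithmetic.
# The 45M user bar and 6% penalty cap come from the agreed text as reported;
# the example figures below are hypothetical.

VLOP_THRESHOLD_MAU = 45_000_000  # monthly active users in the EU
MAX_PENALTY_RATE = 0.06          # 6% of global annual turnover

def is_vlop(eu_monthly_active_users: int) -> bool:
    """Does a service cross the 'very large online platform' user bar?"""
    return eu_monthly_active_users >= VLOP_THRESHOLD_MAU

def max_dsa_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a DSA penalty for a given global annual turnover."""
    return global_annual_turnover_eur * MAX_PENALTY_RATE

# Hypothetical platform: 60M EU users, €25B global annual turnover.
print(is_vlop(60_000_000))            # True -> extra VLOP obligations apply
print(f"€{max_dsa_fine(25e9):,.0f}")  # €1,500,000,000 maximum fine
```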

Per the parliament, there will also be a right for recipients of digital services to seek redress for any damages or loss suffered due to infringements by platforms.

Content moderation & marketplace rules

The content moderation measures are focused on harmonizing rules to ensure “swift” removal of illegal content.

This is being done through what the parliament describes as a “clearer ‘notice and action’ procedure” — where “users will be empowered to report illegal content online and online platforms will have to act quickly”, as it puts it.

It also flags support for victims of cyber violence — who it says will be “better protected especially against non-consensual sharing (revenge porn) with immediate takedowns”.

MEPs say fundamental rights are protected from the risk of over-removal (a side effect of the regulation pressuring platforms to act quickly) through “stronger safeguards to ensure notices are processed in a non-arbitrary and non-discriminatory manner and with respect for fundamental rights, including the freedom of expression and data protection”.

The regulation is also intended to ensure swift removal of illegal products/services from online marketplaces. So there are new requirements incoming for ecommerce players.

On this, the Council says the DSA will impose a “duty of care” on marketplaces vis-à-vis sellers who sell products or services on their online platforms.

“Marketplaces will in particular have to collect and display information on the products and services sold in order to ensure that consumers are properly informed,” it notes, although there will be plenty of devil in the detail of the exact provisions.

On this, the parliament says marketplaces will “have to ensure that consumers can purchase safe products or services online by strengthening checks to prove that the information provided by traders is reliable (‘Know Your Business Customer’ principle) and make efforts to prevent illegal content appearing on their platforms, including through random checks”.

Random checks on traders/goods had been pushed for by consumer protection organizations — who had been concerned the measure would be dropped during trilogues — so EU lawmakers appear to have listened to those concerns.

Extra obligations for VLOPs/VLOSEs

These larger platform entities will face scrutiny from the European Commission and Member State agencies over how their algorithms work; the parliament says both will have access to the algorithms of VLOPs.

The DSA also introduces an obligation for very large digital platforms and services to analyse “systemic risks they create” and to carry out “risk reduction analysis”, per the Council.

The analysis must be done annually, which the Council suggests will allow for monitoring of, and reduced risks in, areas such as: the dissemination of illegal content; adverse effects on fundamental rights; manipulation of services having an impact on democratic processes and public security; adverse effects relating to gender-based violence and to minors; and serious consequences for the physical or mental health of users.

Additionally, VLOPs/VLOSEs will be subject to independent audits each year, per the parliament.

Large platforms that use algorithms to determine what content users see (aka “recommender systems”) will have to provide at least one option that is not based on profiling. Many already do, although they often also undermine these choices by applying dark pattern techniques to nudge users away from control over their feeds, so holistic supervision will be needed to meaningfully improve user agency.

There will also be transparency requirements for the parameters of these recommender systems with the goal of improving information for users and any choices they make. Again, the detail will be interesting to see there.

Limits on targeted advertising  

Restrictions on tracking-based advertising appear to have survived the trilogue process with all sides reaching agreement on a ban on processing minors’ data for targeted ads.

This applies to platforms accessible to minors “when they are aware that a user is a minor”, per the Council.

“Platforms will be prohibited from presenting targeted advertising based on the use of minors’ personal data as defined in EU law,” it adds.

A final compromise text shared with TechCrunch by our sources suggests the DSA will stipulate that providers of online platforms should not serve profiling-based advertising “when they are aware with reasonable certainty that the recipient of the service is a minor”.

A restriction on the use of sensitive data for targeted ads has also made it into the text.

The parliament sums this up by saying “targeted advertising is banned when it comes to sensitive data (e.g. based on sexual orientation, religion, ethnicity)”.

The wording of the final compromise text which we’ve seen states that: “Providers of online platforms shall not present advertising to recipients of the service based on profiling within the meaning of Article 4(4) of Regulation 2016/679 [aka, the GDPR] using special categories of personal data as referred to in article 9(1) of Regulation 2016/679.”

Article 4(4) of the GDPR defines ‘profiling’ as: “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements;”.

While the GDPR defines special category data as personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as genetic, biometric and health data, and data on sex life and/or sexual orientation.

So targeting ads based on tracking or inferring users’ sensitive interests is — on paper — facing a hard ban in the DSA.
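To make the shape of those two prohibitions concrete, here’s a minimal sketch of how an ad-serving gate might model the rules as reported (purely hypothetical logic and names, not the legal text; real compliance will turn on how ‘aware with reasonable certainty’ and GDPR profiling get interpreted):

```python
# Hypothetical model of the two reported DSA ad-targeting prohibitions:
# 1) no profiling-based ads when the platform is reasonably certain the user is a minor;
# 2) no profiling-based ads using GDPR Article 9(1) special category data.

# GDPR Article 9(1) special categories, summarized as illustrative labels.
SPECIAL_CATEGORIES = {
    "racial_or_ethnic_origin", "political_opinions", "religious_beliefs",
    "philosophical_beliefs", "trade_union_membership", "genetic_data",
    "biometric_data", "health_data", "sex_life", "sexual_orientation",
}

def may_serve_profiled_ad(likely_minor: bool, targeting_attributes: set) -> bool:
    """Return True only if neither prohibition is triggered."""
    if likely_minor:
        return False  # prohibition 1: no profiling-based ads to minors
    if targeting_attributes & SPECIAL_CATEGORIES:
        return False  # prohibition 2: no special-category profiling
    return True

print(may_serve_profiled_ad(False, {"music_taste", "health_data"}))  # False
print(may_serve_profiled_ad(False, {"music_taste"}))                 # True
print(may_serve_profiled_ad(True, {"music_taste"}))                  # False
```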

Ban on use of dark patterns

A prohibition on dark patterns also made it into the text. But, as we understand it, this only applies to “online platforms” — so it does not look like a blanket ban across all types of apps and digital services.

That is unfortunate. Unethical practices shouldn’t be acceptable no matter the size of the business.

On dark patterns, the parliament says: “Online platforms and marketplaces should not nudge people into using their services, for example by giving more prominence to a particular choice or urging the recipient to change their choice via interfering pop-ups. Moreover, cancelling a subscription for a service should become as easy as subscribing to it.”

The wording of the final compromise text that we’ve seen says that: “Providers of online platforms shall not design, organize or operate their online interfaces in a way that deceives, manipulates or otherwise materially distorts or impairs the ability of recipients of their service to make free and informed decisions” — after which there’s an exemption for practices already covered by Directive 2005/29/EC [aka the Unfair Commercial Practices Directive] and by the GDPR.

The final compromise text we reviewed further notes that the Commission may issue guidance on specific practices, such as platforms giving more prominence to certain choices, repeatedly requesting that a user make a choice they have already made, and making it harder to terminate a service than to sign up for it. So the effectiveness of the dark pattern ban could well come down to how much attention the Commission is willing to give to a massively widespread online problem.

The wording of the associated recital in the final compromise we saw also specifies that the dark pattern ban (only) applies for “intermediary services”.

Crisis mechanism 

An entirely new article was also added to the DSA following Russia’s invasion of Ukraine, and in connection with rising concern around the impact of online disinformation. It creates a crisis response mechanism that will give the Commission extra powers to scrutinize VLOPs/VLOSEs in order to analyze the impact of their activities on the crisis in question.

The EU’s executive will also be able to come up with what the Council bills as “proportionate and effective measures to be put in place for the respect of fundamental rights”.

The mechanism will be activated by the Commission on the recommendation of the board of national Digital Services Coordinators.

EU lawmakers agree data reuse rules to foster AI and R&D

It’s just over a year since the European Union’s executive proposed legislation aimed at encouraging widespread sharing and reuse of industrial data and protected public sector data-sets, in domains such as health, environment, energy, agriculture, mobility, finance, manufacturing, public administration and skills.

Yesterday the bloc’s co-legislators reached agreement on the Data Governance Act (DGA) after trilogue discussions, paving the way for formal adoption once the Council and Parliament vote to approve the final text.

The DGA is intended to create a standardized framework of trusted tools and techniques to encourage data reuse, by setting “secure and privacy-compliant conditions” for sharing data, as the Commission presented it.

The hope is also that the legislation will lead to the creation of common data spaces across strategic domains to support R&D.

Key components of the plan include the creation of a network of trusted and neutral data intermediaries; and an oversight regime to ensure conditions are being complied with — comprised of national monitoring authorities and a new advisory/steering body (aka, the European Data Innovation Board).

The legislation will also bring in international data transfer rules for non-personal data. (Such rules already exist — at least on paper — covering the export of personal data from the EU under the General Data Protection Regulation.)

Getting political agreement is typically the most challenging component of EU lawmaking.

Although the final text of the DGA will still need to be approved by the Council and Parliament before it’s formally adopted. (And, in terms of wider timeframe, the new rules will apply 15 months after the entry into force of the regulation — so likely not before 2023.)

In a press release on the provisional agreement, the Council of the EU said: “[N]egotiators from the Council and the European Parliament reached a provisional agreement on a new law to promote the availability of data and build a trustworthy environment to facilitate its use for research and the creation of innovative new services and products.”

“The [DGA] will create a mechanism to enable the safe reuse of certain categories of public-sector data that are subject to the rights of others. This includes, for example, trade secrets, personal data and data protected by intellectual property rights. Public-sector bodies allowing this type of reuse will need to be properly equipped, in technical terms, to ensure that privacy and confidentiality are fully preserved,” the Council added.

An earlier Open Data Directive, from 2019, does not cover the data types being legislated for here.

EU lawmakers believe Europe has a major advantage in industrial data — and want the DGA to create conditions that will encourage widespread data sharing and reuse. (A Data Act is also incoming which will propose measures intended to encourage business-to-business data sharing and business-to-government data sharing.)

The Commission has also been pushing the notion of “data altruism” and data sharing for the common good, although the overarching goal for the DGA is to foster the development of regional AIs to stoke economic competitiveness and growth.

“Businesses, both small and large, will benefit from new business opportunities as well as from a reduction in costs for acquiring, integrating and processing data, from lower barriers to enter markets, and from a reduction in time-to-market for novel products and services,” it suggests in a Q&A on the DGA.

At the same time the Commission has suggested that the creation of common European “data spaces” in sectors like manufacturing and health will help advance research — which could lead to cures for rare or chronic diseases or at least support evidence-based policymaking and foster other types of societally beneficial developments.

So the DGA co-mingles purely economic goals (via opening access to data) with broad-brush notions of “data altruism” and “data for society”.

This fuzziness has raised some watchful concern among civil society groups — such as the European Consumer Organisation BEUC, which has warned that “a weak definition of altruism in this agreement could allow companies to over-use vague, altruistic grounds to push consumers to share their data”.

Weak enforcement of existing legal protections for EU citizens’ data may also work against the sought-for ‘trust to share’ push.

Commenting in a statement on the trilogue agreement on the DGA yesterday, the Commission’s EVP for digital, Margrethe Vestager, avoided citing any such concerns, merely offering the boosterish claim that: “This Regulation is a first building block for establishing a solid and fair data-driven economy.”

“It is about setting up the right conditions for trustful data sharing in line with our European values and fundamental rights,” she went on, adding: “We are creating a safe environment in which data can be shared across sectors and Member States for the benefit of society and the economy.”

Internal Market commissioner, Thierry Breton, also stuck to the promotional script, saying the agreement on the DGA will allow the bloc to define “a common approach to data sharing”, and further suggesting: “We are facilitating the flow of growing industrial data across sectors and Member States to help Europe become the world’s number one data continent. We are doing so by building trust, putting the individuals and companies who generate data in the driving seat so they remain in control of the data they create. In a nutshell: an open yet sovereign European Single Market for data.”

In its PR, the European Parliament said MEPs secured tighter provisions on “trust and fair access” during the DGA negotiations — aimed at plugging loopholes in the legislation they said would have allowed operators from non-EU countries to abuse the scheme.

MEPs also focused on beefing up the role of the European Data Innovation Board — and on clarifying the scope of the legislation, saying they secured “precise requirements” on which services will fall under the new DGA.

Getting a better deal for SMEs and startups was another parliamentary priority, they said.

What’s been agreed?

The trilogue agreement reached yesterday includes the possibility for exclusive arrangements for the reuse of public-sector data — which the Council said will be possible “when justified and necessary for the provision of a service of general interest”.

However the maximum duration for existing contracts will be 2.5 years and for new contracts 12 months — with MEPs writing that: “Public sector bodies will have to avoid creating exclusive rights for the re-use of certain data, and exclusive agreements should be limited… to make more data available to SMEs and start-ups.”

The co-legislators have also agreed that the Commission will set up a European single access point with a searchable electronic register of public-sector data — which will be made available via national single information points.

The EU is anticipating that the DGA will create a new business model for data intermediation services — to provide a secure environment in which companies or individuals can share data; and which will commit not to use the data for their own ends (but will be able to charge for the transactions they enable).

Such services must be listed in an EU register under the incoming rules.

“For companies, these services can take the form of digital platforms, which will support voluntary data-sharing between companies or facilitate the fulfilment of data-sharing obligations set by law. By using these services, companies will be able to share their data without fear of its being misused or of losing their competitive advantage,” the Council writes.

“For personal data, such services and their providers will help individuals exercise their rights under the [GDPR].  This will help people have full control over their data and allow them to share it with a company they trust. This can be done, for example, by means of novel personal information management tools, such as personal data spaces or data wallets, which are apps that share such data with others, based on the data holder’s consent.”

EU lawmakers are hoping that having dedicated rules to govern data-sharing will encourage both companies and individuals to make data available voluntarily — for altruistic purposes such as medical research projects.

The DGA will therefore create national registers of recognised “data altruism organisations” — aka entities seeking to collect data for objectives of general interest. Such entities must agree to comply with specific rules, and (once registered) will be recognised across the EU.

“This will create the necessary trust in data altruism, encouraging individuals and companies to donate data to such organisations so that it can be used for wider societal good,” the Council suggests.

The DGA will establish a voluntary certification program — with a logo — to make it easier to identify compliant providers of data intermediation services and data altruism organisations.

When the Commission proposed the DGA, commissioners were closely questioned on the issue of international transfers of data. On this, the Council says the Commission may (through secondary legislation) adopt adequacy decisions declaring that specific non-EU countries provide appropriate safeguards for the use of non-personal data transferred from the EU.

This mirrors the procedure attached to the GDPR covering personal data transfers to third countries.

However the Commission’s decisions on that front have been found lacking on a number of occasions: notably, two flagship data transfer agreements with the US were struck down by the CJEU in recent years, which has resulted in increased uncertainty and complexity around international transfers of personal data.

So there are likely to be questions and concerns about what exactly constitutes “unlawful international transfer of or governmental access to non-personal data”, under the incoming legislation.

“These [DGA adequacy] decisions would be similar to adequacy decisions relating to personal data under the GDPR. Such safeguards should be considered to exist when the country in question has equivalent measures in place that ensure a level of protection similar to that provided by EU or member state law,” the Council writes, without delving into any detail.

It goes on to note that the EC may also adopt “model contractual clauses” to support public-sector bodies and re-users in the case of transfers of public-sector data to third countries — suggesting those that want to export data may be encouraged to follow a standardized approach. 

The parliament’s PR, meanwhile, talks of the DGA setting up “common European data spaces in strategic domains” — without making any reference to the international transfer issue.

How many of the touted “European data spaces” may end up hosted in the bloc — i.e. to meet the DGA’s requirements to provide adequate safeguards for the data — isn’t clear.

Although the Commission’s line is that “the free flow of data to third countries should be ensured”.

“We want Europe to create and strengthen its data economy but there is no obligation to store and process data in the EU. Nobody will be prohibited from dealing with the partner of their choice,” it also writes, before segueing into a caveat that: “At the same time, the EU must ensure that any access to EU citizens’ personal data and certain sensitive data is in compliance with its values and legislative framework.”

Driving AI innovation in tandem with regulation

The European Commission announced first-of-its-kind legislation regulating the use of artificial intelligence in April. This unleashed criticism that the regulations could slow AI innovation, hamstringing Europe in its competition with the U.S. and China for leadership in AI.

For example, Andrew McAfee wrote an article titled “EU proposals to regulate AI are only going to hinder innovation.”

Anticipating this criticism and mindful of the example of GDPR, where Europe’s thought-leadership position didn’t necessarily translate into data-related innovation, the EC has tried to address AI innovation directly by publishing a new Coordinated Plan on AI.

Released in conjunction with the proposed regulations, the plan is full of initiatives intended to help the EU become a leader in AI technology. So will the combination of regulation and pro-innovation policies be enough to accelerate the EU toward AI leadership?

AI innovation can be accelerated with the right laws

While the combination is well considered and targets improvements in both regulation and innovation, there is a problem: The pro-innovation initiatives are R&D-focused and not targeted at increasing adoption in the “high-risk” AI use cases to be regulated.

Spurring adoption is a key missing element. Many research studies have shown that well-designed “hard law” regulations can actually increase innovation, especially when employed with incentives that accelerate adoption. If the EC were to follow such a strategy, the EU could become a hotbed of AI innovation.

High-risk AI regulation and investment in innovation

The main thrust of the EC regulations is to place new requirements on “high-risk” AI systems. These include AI systems used for remote biometric identification, public infrastructure management, hiring and employment, creditworthiness assessment, and education, as well as for various public-sector use cases, such as dispatching first responders.

The legislation requires developers of these systems to deploy an AI quality management system that addresses requirements around high-quality data sets, record keeping, transparency, human oversight, accuracy, robustness and security. Providers of AI systems not yet designated as high risk are encouraged to create voluntary codes of conduct to achieve similar goals.
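As a concrete illustration of that obligation’s shape, here’s a minimal sketch of how a provider might track the listed requirements as an internal self-assessment checklist (hypothetical structure and field names; the legislation defines obligations, not a schema):

```python
# Hypothetical self-assessment checklist mirroring the high-risk AI system
# requirements listed above; the field names are illustrative, not legal terms.
from dataclasses import dataclass, fields

@dataclass
class HighRiskAIChecklist:
    high_quality_datasets: bool = False
    record_keeping: bool = False
    transparency: bool = False
    human_oversight: bool = False
    accuracy: bool = False
    robustness: bool = False
    security: bool = False

    def gaps(self) -> list:
        """Names of requirements not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = HighRiskAIChecklist(record_keeping=True, transparency=True)
print(checklist.gaps())
# ['high_quality_datasets', 'human_oversight', 'accuracy', 'robustness', 'security']
```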

It’s clear that the crafters of the proposal were cognizant of the balance between regulation and innovation.

First, the legislation limits the number of AI systems deemed to be high risk, excluding systems that could plausibly have been included, such as insurance, and mostly including AI systems that already have some amount of regulatory oversight, such as employment and lending.

Second, the legislation defines high-level requirements without dictating how they are achieved. It also creates a compliance system based on self-reporting instead of something more onerous.

Finally, the Coordinated Plan is chock-full of R&D-supporting initiatives, including spaces for data-sharing, testing and experimentation facilities, investment in research and AI excellence centers, digital innovation hubs, funding for education, and targeted, programmatic investments in AI for climate change, health, robotics, the public sector, law enforcement and sustainable agriculture.

However, the proposal lacks adoption-driving policies that have led to faster innovation in combination with regulation in other sectors.

A motivating precedent: EV incentives in the U.S.

So how could the EC promote much faster AI innovation while enacting regulatory guardrails? The example of electric vehicles in the United States provides a guide.

The U.S. has become a leader in electric car production because of a combination of entrepreneurship, regulations and smart market creation incentives.

Tesla invigorated the electric car industry with the insight that the new vanguard of electric cars should be desirable, high-performance sports cars.

The Corporate Average Fuel Efficiency (CAFE) regulations created a stick that required the development of more efficient vehicles. Generous tax credits for the purchase of electric vehicles helped directly accelerate vehicle sales without interfering with the natural, competitive market dynamics. The combination of CAFE regulations, tax credits and entrepreneurial companies like Tesla has created such a massive boost to innovation that electric vehicle engines are poised to become less expensive than internal combustion ones.

Getting AI incentives right: Three additional initiatives to pursue

The EC has an opportunity to achieve something similar with AI. Specifically, the EC should consider combining these current regulations with three additional initiatives.

Create tax incentives for companies to build or buy high-risk AI systems that adhere to these regulations. The EC should seek to proactively use AI to help meet economic and societal goals.

For example, some banks are using AI to better assess the creditworthiness of individuals with limited credit histories, while simultaneously working to ensure that banking activities are free from bias. This increases financial inclusion, a goal shared by governments, and represents a win-win AI innovation.

Further reduce uncertainty around EC legislative implementation. Part of this can be done directly by the EC — through the development of more specific standards around AI quality management and fairness. However, there may be even greater value in bringing together a coalition of AI technology providers and user organizations to translate these standards into practical steps for compliance.

For example, the Monetary Authority of Singapore has orchestrated an industry consortium for banks, insurers and AI technology providers called Veritas to achieve similar goals for its Fairness, Ethics, Accountability and Transparency (FEAT) guidelines.

Consider accelerating the adoption of the AI quality management systems that the legislation requires by funding companies to build or buy these systems. There is significant academic and commercial activity already in this space, in areas such as explainability of black box models, assessment of potential discrimination due to data or algorithmic bias, and testing and monitoring of AI systems for their ability to survive significant changes in data.

By creating the conditions to encourage widespread adoption of such technologies, the EC should be able to meet the dual objectives of encouraging innovation and enabling compliance with the new legislation in a sustainable manner.

If the EC assertively reduces uncertainty, promotes the use of regulated, “high-risk” AI and encourages the use of AI quality management techniques, then it has the chance to become the global leader in AI innovation while providing critical protection to its citizens. We should all be pulling for them to be successful, as it would set an example for the world to follow.

Clubhouse, Vimeo, DoubleVerify and others set to sign up to beefed-up disinformation code in the EU

Audio social network Clubhouse, video sharing platform Vimeo and anti-ad fraud startup DoubleVerify are among a clutch of tech companies and organizations preparing to sign up to a beefed-up version of the European Union’s Code of Practice on Online Disinformation, the Commission said today.

Back in May the EU’s executive said it would be strengthening the (still) voluntary code which is aimed at driving joined-up industry action to counter the spread of harmful disinformation online.

It called on smaller digital services and adtech firms to sign up — saying it particularly wanted broader participation, not just the usual platform giants (some of whom were among the first group of signatories).

Facebook, Google, Twitter and Mozilla were among the first clutch of tech platforms to sign up to the EU disinformation code back in 2018, while TikTok joined last summer.

Eight newbies are said to be preparing to agree to the beefed-up code now.

In addition to the three aforementioned startups, the Commission said organisations that provide specific expertise and technical solutions to fight disinformation, such as Avaaz, Globsec, Logically, NewsGuard and WhoTargetsMe, are also preparing to join.

“Substantial additional efforts are needed to reduce the flow of harmful disinformation, as illustrated by recent electoral campaigns and by the signatories’ reports on the COVID-19 disinformation monitoring program,” the EC writes in a press release.

Among the shortcomings the Commission has said it wants the revised code to address are: Inconsistent and incomplete application of the Code across platforms and Member States; gaps in the coverage of the Code’s commitments; a lack of appropriate monitoring mechanism, including key performance indicators; a lack of commitments on access to platforms’ data for research on disinformation; and limited participation from stakeholders, in particular from the advertising sector.

So quite the laundry list, then.

While the intention is to beef up the code, it will remain self-regulated. So whether it will become a “strong EU instrument”, as the Commission puts it — without having any real legal teeth — remains to be seen.

For large internet platforms there will be, if not literal fangs then some threatening growling as compliance with the beefed-up code will be taken into consideration under wider, incoming digital regulations — aka, the bloc’s Digital Services Act — which seeks to increase internet companies’ accountability across a number of areas.

And if self-regulation still fails to move the needle on harmful online disinformation, like bogus information about COVID-19 vaccines or election interference, the Commission could still propose legislation. However it seems reluctant to do that in such a subjective and controversial area, given the risks of being accused of censoring free speech.

Pan-EU rules for political ads transparency are slated for introduction next year, in one targeted piece of planned legislation.

In a statement on the eight new “prospective signatories”, who have joined in the revision process as a beefed-up version of the disinformation code is drafted, Věra Jourová, the VP for values and transparency, said: “I am pleased to see new actors from relevant sectors answering our call and engaging in the overhaul of the Code of Practice. I encourage others, including platforms, messaging services and actors in the online advertising ecosystem, to join as soon as possible to co-shape the process.”

“The strengthened Code cannot fall short of the expectations we have set out in our Guidance from May,” she added. “Online players have a special responsibility regarding spreading and monetising disinformation. They must become more transparent, accountable and safe by design.”

Thierry Breton, the EU commissioner for the internal market, added: “I welcome the fact that more and more online platforms, technology providers, civil society and expert organisations are formally committing to fighting disinformation. The private sector should be ambitious and anticipate the new EU rules on digital services (DSA). Now is the time to show they walk the talk.”

EU considers single rules to regulate vacation rental platforms

European Union lawmakers are consulting on how to regulate the short term rental market across the bloc — asking whether a single set of pan-EU rules or something more locally flavored is needed to rule over platforms like Airbnb.

The Commission says it wants to support an expansion of competition and players in the sector but also respect EU law which allows for Member States to apply local rules based on concerns that are in the public interest.

As it stands, a patchwork of rules and regulations has grown up around short term rental platforms across Europe as cities grapple with the impact of over-tourism from uncontrolled growth of Airbnb et al — and also struggle to extract useful data from platforms whose business model benefits from flipping local housing stock to drive short term rental gains.

This dynamic is triggering the Commission’s customary anxiety around fragmentation of the Single Market. So it’s now asking for views from the industry and interested citizens as it considers whether a sector-specific regulation is needed.

In a summary of its intention for the forthcoming short term rental (STR) initiative, which will be fed by responses to the public consultation, the Commission writes that its aim is “to develop responsible, fair and trusted growth in short-term rentals, as part of a well-balanced tourist ecosystem” — and “ensure a level playing field for all accommodation service providers and to respond to the many requests from interested parties for EU-wide action in this area”.

“This will involve offering balanced solutions for cities, peer and professional short-term rental providers and platforms, while benefiting in particular small and medium-sized firms,” it adds.

The Commission proposed broad rules for the digital sector at the back end of last year — under the Digital Services Act (DSA) — which it has said will level the playing field between online and offline businesses in areas such as illegal content and goods.

However the regulation has been criticized for not going far enough to tackle illegal holiday rentals.

In a position paper earlier this year, for example, the holiday accommodation booking platform, Booking.com, wrote that the Commission’s proposal could be more “ambitious” — saying it “insufficiently addresses illegal short term holiday rentals”.

“The powers for national (or local) authorities to order platforms to remove illegal content (Article 8) and to share information (Article 9) are a step in the right direction. However, enforcement loopholes remain,” it added. “Where short term vacation rentals are subject to an authorization /registration scheme, platforms should be required to ensure that only properly registered / authorized properties are listed. This is in line with the duty of care incumbent upon platforms.”

The Commission is also facing vocal pressure on the issue from heavily touristed cities across Europe — which have been trying to impose limits on the growth of holiday rentals, with mixed success.

Earlier this year, for example, a court in Amsterdam overturned a three-district ban on rentals on platforms like Airbnb which the city authorities had brought in after polling local residents over the impacts on their communities.

But this summer city authorities in Paris trumpeted winning a major case against Airbnb — which was ordered to pay $9.6M for illegal listings.

Governments in Europe have also been pressing the Commission to legislate for access to vacation rental platforms’ data, with the Dutch suggesting last year that it should embed such provisions in the DSA.

The European Parliament has also warned that the “expansive growth of short-term holiday rental in cities and popular tourist destinations is extracting housing from the market, driving up prices and has a negative impact on liveability”. And in a report last year MEPs also said: “We want to give cities more control over short-term rental accommodation platforms and ensure that these platforms share information with the cities, respecting data protection rules.”

Also last year the Commission inked a voluntary agreement with a handful of major platforms to share some aggregated data. However the agreement falls considerably short of the requirements that some individual European cities have imposed on STR platforms operating locally in a bid to control their impacts on communities.

In its consultation on the STR issue, the Commission notes that a number of authorities across the EU have taken steps to try to regulate short term rentals (STRs) — such as by imposing registration requirements and authorisation schemes on hosts, and by putting limits on the number of nights per year properties can be rented out.

“Whilst doing so, some national rules make a difference between so-called ‘peers’ offering STRs in a private capacity occasionally, and those offering STRs in a more professional capacity,” it goes on. “Criteria differ however and not everywhere a distinction is made between peers and professionals. Authorities have also taken a variety of measures to obtain better data from platforms, with a view to better assessing the volume of STRs and facilitating law enforcement.

“The foregoing has resulted in a patchwork of varying rules and regulations.”

“Under EU law, imposing restrictions and conditions on the provision of a service can be justified if necessary to pursue a legitimate public interest objective and proportionate to achieve that end. In spite of important clarifications offered by EU case law, there is still much uncertainty regarding the measures authorities may adopt and enforce as regards hosts and other service providers such as platforms,” the Commission adds.

“EU horizontal regulatory initiatives such as the Digital Services Act aim to impose harmonised obligations on online platforms, including collaborative economy platforms in the area of STRs. However, in order to foster a balanced development of the STR segment in the EU and to address the sector-specific aspects, a dedicated sector-specific initiative may be needed.”

It’s not clear exactly where the Commission will plant its flag on this contentious issue — but its consultation poses a number of questions for stakeholders to respond to in areas such as transparency and data access, as well as asking for views on the benefits and drawbacks of STRs.

The Commission is also seeking suggestions for specific rules and regulations that might be applied to STRs to achieve the sought-for balance between supporting tourism and respecting local public interest, including asking for views on how best to ascertain whether an STR host is a ‘peer’ or a professional operator.

The marketing of platforms like Airbnb typically spins the promise that you’re renting unique and characterful spaces from locals — which differentiates the holiday rental platform business from more traditional tourist fare like hotels.

However the reality of such ‘p2p’ platforms is they quickly attracted a swathe of professional landlords — who could lucratively list and rent entire apartments to tourists, instead of renting the housing at a lower rate to locals on long term contracts. Hence the criticism that STRs hollow out local communities and make city living less affordable for locals. 

In one question from the Commission’s public consultation — which is asking for views on the type of rules and requirements for STRs that “could be acceptable” — it suggests a number of responses, including registration and authorization obligations on all hosts or only on professional hosts; and limits on the number of nights, amount of income generated and number of rooms that can be rented by peers.

At the end of this list it also asks for a response to the suggestion of a “total ban on STRs”.

Elsewhere, the consultation asks for views on whether rules for STRs should be imposed from the top (a “harmonized EU approach”), allowed to be entirely bottom-up (left to local authorities to define), or arrived at via some vaguely defined combination of the two.

It is also concerned with whether there should be different rules for peers and professional hosts, and, if so, how best to define who is a peer and who is a professional.

Other questions in the consultation ask for views on measures “that you think could support innovation in the STR and the development of new business models”. 

The Commission’s consultation runs until December 13, with the EU’s executive slated to publish a legislative proposal for vacation rentals in the first half of next year.

Dutch court finds Uber drivers are employees

Uber has lost another legal challenge in Europe over the employment status of drivers: The Court of Amsterdam, in the Netherlands, has ruled that drivers for Uber are employees, rather than self-employed contractors.

The court also found drivers are covered by an existing collective labor agreement in the country — which pertains to taxi drivers — meaning Uber faces increased costs to comply with the agreement, which sets pay requirements and covers benefits like sick pay. (And it may be liable for paying drivers back pay in some cases.)

The court also ordered Uber to pay €50,000 in costs.

The ride hailing giant has some 4,000 drivers working on its platform in the Dutch capital.

The Amsterdam court rejected Uber’s customary defence that it’s just a technology platform that connects passengers with taxi service providers, finding instead that drivers are only self-employed ‘on paper’.

The judges highlighted the nature of the service being provided by drivers and the fact Uber exerts controls over how they can work and earn through its app and algorithms.

Europe’s top court already ruled back in 2017 that Uber is a transport provider and must comply with local transport laws — so you’d be forgiven for deja vu.

The Dutch lawsuit was filed by the national trade union center, FNV, last year — with the hearing kicking off at the end of June.

In a statement today, the FNV’s VP, Zakaria Boufangacha, said: “This statement shows what we have been saying for years: Uber is an employer and the drivers are employees, so Uber must adhere to the collective labor agreement for Taxi Transport. It is also a signal to The Hague that these types of constructions are illegal and that the law must therefore be enforced.”

Uber has been contacted for a response to the ruling.

At the time of writing the company had not responded — but, per Reuters, Uber said it intends to appeal and “has no plans to employ drivers in the Netherlands”.

In the UK, Uber lost a string of tribunal rulings over its employment classification over a number of years — going on to lose in front of the UK supreme court this February.

Following that ruling, Uber said it would treat drivers in the UK as workers, although disputes remain (such as over its definition of working time). In May, Uber also said it would recognize a UK trade union for the first time.

Elsewhere in Europe, however, the company continues to fight employment lawsuits — and to lobby European Union lawmakers to deregulate platform work…

The EU has said it wants to find a way to improve platform work. However it’s not yet clear what any pan-EU ‘reform’ may look like. 

The Commission has been contacted with questions on its platform work initiative.

“Digital labour platforms are clearly worried, evident through investing heavily on their lobbying power and throwing more resources on the EU level. These companies — including Uber of course — have also recently come together to create a new funding lobby group that specifically targeting to influence policies on platform work,” said Jill Toh, a PhD researcher in data rights at the University of Amsterdam, talking to TechCrunch after the Amsterdam ruling.

“We saw how Uber wielded and amended laws in their Prop 22 campaign in California, and together with other companies in Europe, they’re attempting to do so again. It’s disheartening to see that the Commission in its two consultations on platform worker regulation has only been talking to tech companies and has held no meetings with trade unions or other platform work representatives.”

“All of this is incredibly problematic and concerning especially if the EC consultations result in a directive on platform work. Overall, the wins in the courts are important for workers, but there remains the issue of corporate power and influence in Brussels, as well as the lack of public enforcement to these court decisions,” she added.

Join Greylock’s Asheem Chandna on November 5 at noon PST/3 pm EST/8 pm GMT to discuss the future of enterprise and cybersecurity investing

The world of enterprise software and cybersecurity has taken multiple body blows since COVID-19 demolished the in-person office, flinging employees across the world and forcing companies to adapt to an all-remote productivity model. The shift has required companies to rethink not only collaboration software, but also the infrastructure that powers it and the best way to protect assets once their security perimeters have been destroyed.

The pandemic has also dramatically increased the usage of digital services, forcing cloud providers to keep up with crushing demands for performance and reliability.

In short: there’s never been a better time to be an enterprise investor (or, possibly, a founder).

So I’m excited to announce our next guest in our Extra Crunch Live interview series: Asheem Chandna from Greylock, one of the top enterprise investors of the past two decades, who has worked with multiple important founding teams from whiteboard to IPO. We’re scheduled for Thursday, November 5 at noon PST/3 p.m. EST/8 p.m. GMT (check that daylight savings time math!).

Login details are below the fold for EC members, and if you don’t have an Extra Crunch membership, click through to sign up.

For nearly two decades, Asheem Chandna has invested in enterprise and security startups at Greylock, with massive investment wins in Palo Alto Networks, AppDynamics and Sumo Logic. These days, he continues to invest in cybersecurity with companies like Awake Security and Abnormal Security, data platforms like Rubrik and Delphix, and the stealthy search engine company Neeva. As a leading early-stage investor and mentor in the space, he’s seen a multitude of companies transition from inception to product-market fit to IPO.

We’ll talk about what all the turbulence in enterprise means for the future of startups in the space, how cybersecurity is evolving given the new threat landscape, and how the public markets and their aggressive multiples for Silicon Valley enterprise startups are changing the strategy of venture capitalists. Plus, we’ll talk about company building, developing founders as leaders and more.

Join us next week with Asheem on Thursday, November 5 at noon PST/3 p.m. EST/8 p.m. GMT. Login details and calendar invite are below.


GDPR’s two-year review flags lack of “vigorous” enforcement

It’s more than two years since a flagship update to the European Union’s data protection regime moved into the application phase. Yet the General Data Protection Regulation (GDPR) has been dogged by criticism of a failure of enforcement related to major cross-border complaints — lending weight to critics who claim the legislation has created a moat for dominant multinationals, at the expense of smaller entities.

Today the European Commission responded to that criticism as it gave a long-scheduled assessment of how the regulation is functioning: its first review, two years in.

EU lawmakers’ top-line message is the clear claim that ‘GDPR is working’, with commissioners lauding what they couched as the many positives of this “modern and horizontal piece of legislation”, which they also said has become a “global reference point”. But they conceded there is a “very serious to-do list”, calling for uniformly “vigorous” enforcement of the regulation across the bloc.

So, in other words, GDPR decisions need to flow more smoothly than they have so far.

Speaking at a Commission briefing today, Věra Jourová, Commission VP for values and transparency, said: “The European Data Protection Board and the data protection authorities have to step up their work to create a truly common European culture — providing more coherent and more practical guidance, and work on vigorous but uniform enforcement.

“We have to work together, as the Board and the Member States, to address concerns — in particular those of the small and medium enterprises.”

Justice commissioner, Didier Reynders, also speaking at the briefing, added: “We have to ensure that [GDPR] is applied harmoniously — or at least with the same vigour across the European territory. There may be some nuanced differences but it has to be applied with the same vigour.

“In order for that to happen data protection authorities have to be sufficiently equipped — they have to have the relevant number of staff, the relevant budgets, and there is a clear will to move in that direction.”

Front and center for GDPR enforcement is the issue of resourcing for national data protection authorities (DPAs), who are tasked with providing oversight and issuing enforcement decisions.

Jourová noted today that EU DPAs — taken as a whole — have increased headcount by 42% and budget by 49% between 2016 and 2019.

However that’s an aggregate which conceals major differences in resourcing. A recent report by pro-privacy browser Brave found that half of all national DPAs receive just €5M or less in annual budget from their governments, for example. Brave also found that budget increases peaked around the GDPR coming into application, with governments, two years in, now slowing the rate of increase.

It’s also true that DPA case load isn’t uniform across the bloc, with certain Member States (notably Ireland and Luxembourg) handling many more and/or more complex complaints than others as a result of how many multinationals locate their regional HQs there.

One key issue for GDPR thus relates to how the regulation handles cross-border cases.

A one-stop-shop mechanism was supposed to simplify this process — by having a single regulator (typically in the country where the business has its main establishment) take the lead on complaints that affect users in multiple Member States. Other interested DPAs don’t deal directly with the data processor but they do remain involved — and, once there’s a draft decision, play an important role, since they can raise objections to whatever the lead regulator has decided.

However a lot of friction seems to be creeping into current processes — via technical issues related to sharing data between DPAs, and via the opportunity for additional legal delays.

In the case of big tech, GDPR’s one-stop-shop has resulted in a major backlog around enforcement, with multiple complaints being re-routed via Ireland’s Data Protection Commission (DPC) — which has yet to issue a single decision in a cross-border case, and has more than 20 such investigations ongoing.

Last month Ireland’s DPC trailed looming decisions on Twitter and Facebook — saying it had submitted a draft decision on the Twitter case to fellow DPAs, and expressing hope that the case could be finalized in July.

Its data protection commissioner, Helen Dixon, had previously suggested the first cross-border decisions would be coming in “early” 2020. In the event, we’re past the halfway mark of the year with still no enforcement on show.

This looks especially problematic as there is a counter-example elsewhere in the EU: France’s CNIL managed to issue a decision in a major GDPR case against Google all the way back in January 2019. Last week the country’s top court for administrative law cemented the regulator’s findings, dismissing Google’s appeal. CNIL’s $57M fine remains the largest yet levied against big tech under GDPR.

Asked directly whether the Commission believes Ireland’s DPC is sufficiently resourced — with the questioner noting it has multiple ongoing investigations into Facebook, in particular, with still no decisions taken on the company — Jourová emphasized DPAs are “fully independent”, before adding: “The Commission has no tools to push them to speed up but the cases you mention, especially the cases that relate to big tech, are always complex and they require thorough investigation — and it simply requires more time.”

However CNIL’s example shows effective enforcement against major tech platforms is possible — at least, where there’s a will to take on corporate power. Though France’s relative agility may also have something to do with not having to deal simultaneously with such a massive load of complex cross-border cases.

At the same time, critics point to Ireland’s cosy political relationship with the corporate giants it attracts via low tax rates — which in turn raises plenty of questions when set against the oversized role its DPA has in overseeing most of big tech. The stench of forum shopping is unmistakable.

Criticism of national regulators extends beyond Ireland, though. In the UK, privacy experts have slammed the ICO’s repeated failure to enforce the law against the adtech industry — despite its own assessments finding systemic flouting of the law. The UK remains bound by EU rules until the end of the year — and the ICO is the best resourced DPA in the bloc in terms of budget and headcount (and likely tech expertise too) — which hardly reflects well on the functional state of the regulation.

Despite all this, the Commission continues to present GDPR as a major geopolitical success — claiming, as it did again today, that the EU is ahead of the digital regulatory curve at a time when lawmakers almost everywhere are considering putting harder limits on Internet players.

But there’s only so long it can sell a success on paper. Without consistently “vigorous” enforcement, the whole framework crumbles — so the EU’s executive has serious skin in the game when it comes to GDPR actually doing what it says on the tin.

Pressure is coming from commercial quarters too — not only privacy and consumer rights groups.

Earlier this year, Brave lodged a complaint with the Commission against 27 EU Member States — accusing them of under-resourcing their national data protection watchdogs. It called on the EU executive to launch an infringement procedure against national governments, and refer them to the bloc’s top court if necessary. So startups are banging the drum for enforcement too.

If decision wheels don’t turn on their own, courts may eventually be needed to force Europe’s DPAs to get a move on — albeit, the Commission is still hoping it won’t have to come to that.

“We saw a considerable increase of capacities both in Ireland and Luxembourg,” said Jourová, discussing the DPA resourcing issue. “We saw a sufficient increase in at least half of other Member States DPAs so we have to let them do very responsible and good work — and of course wait for the results.”

Reynders suggested that while there has been an increase in resource for DPAs the Commission may need to conduct a “deeper” analysis — to see if more resource is needed in some Member States, “due to the size of the companies at work in the jurisdiction of such a national authority”.

“We have huge differences between the Member States about the need to react to the requests from the companies. And of course we need to reinforce the cooperation and the co-ordination on cross border issues. We need to be sure that it’s possible for all the national authorities to work together. And in the network of national authorities it’s the case — and with the Board [EDPB] it’s possible to organize that. So we’ll continue to work on it,” he said.

“So it’s not only a question to have the same kind of approach in all the Member States. It’s to be fit to all the demands coming in your jurisdiction and it’s true that in some jurisdictions we have more multinationals and more members of high tech companies than in others.”

“The best answer will be a decision from the Irish data protection authority about important cases,” he added.

We’ve reached out to the Irish DPC and the EDPB for comment on the Commission’s GDPR assessment.

Asked whether the Commission has a list of Member States against which it might instigate infringement proceedings related to the terms of GDPR — which, for example, require governments to provide adequate resourcing to their national DPAs so that they can properly oversee the regulation — Reynders said it doesn’t currently have such a list.

“We have a list of countries where we try to see if it’s possible to reinforce the possibilities for the national authorities to have enough resources — human resources, financial resources, to organize better cross border activities — if at the end we see there’s a real problem about the enforcement of the GDPR in one Member State we will propose to go maybe to the court with an infringement proceeding — but we don’t have, for the moment, a list of countries to organize such a kind of process,” he said.

The commissioners were a lot more comfortable talking up the positives of GDPR, with Jourová noting, with a sphinx-like smile, how three years ago there was “literal panic” and an army of lobbyists warning of a “doomsday” for business and innovation should the legislation pass. “I have good news today — no dooms day was here,” she said.

“Our approach to the GDPR was the right one,” she went on. “It created the more harmonized rules across the Single Market and more and more companies are using GDPR concepts, such as privacy by design and by default, as a competitive differentiation.

“I can say that the philosophy of one continent, one law is very advantageous for European small and medium enterprises who want to operate on the European Single Market.

“In general GDPR has become a truly European trade mark,” she added. “It puts people and their rights at the center. It does not leave everything to the market like in the US. And it does not see data as a means for state supervision, as in China. Our truly European approach to data is the first answer to difficult questions we face as a society.”

She also noted that the regulation served as inspiration for the current Commission’s tech-focused policy priorities — including a planned “human centric approach to AI“.

“It makes us pause before facial recognition technology, for instance, will be fully developed or implemented. And I dare to say that it makes Europe fit for the digital age. On the international side the GDPR has become a reference point — with a truly global convergence movement. In this context we are happy to support trade and safe digital data flows and work against digital protectionism.”

Another success the commissioners credited to the GDPR framework is the region’s relatively swift digital response to the coronavirus — with the regulation helping DPAs to more quickly assess the privacy implications of COVID-19 contacts tracing apps and tools.

Reynders lauded “a certain degree of flexibility in the GDPR” which he said had been able to come into play during the crisis, feeding into discussions around tracing apps — on “how to ensure protection of personal data in the context of such tracing apps linked to public and individual health”.

Under its to-do list, other areas of work the Commission cited today included ensuring DPAs provide more such support for applying the regulation — by producing guidelines on other new technologies. “In various new areas we will have to be able to provide guidance quickly, just as we did on the tracing apps recently,” noted Reynders.

Further increasing public awareness of GDPR and the rights it affords is another Commission focus — though it said more than two-thirds of EU citizens above the age of 16 have at least heard of the GDPR.

But it wants citizens to be able to make what Reynders called “best use” of their rights, perhaps via new applications.

“So the GDPR provides support to innovation in this respect,” he said. “And there’s a lot of work that still needs to be done in order to strengthen innovation.”

“We also have to convince those who may still be reticent about the GDPR. Certain companies, for instance, who have complained about how difficult it is to implement it. I think we need to explain to them what the requirements of the GDPR [are] and how they can implement these,” he added.

Digital mapping of coronavirus contacts will have key role in lifting Europe’s lockdown, says Commission

The European Commission has set out a plan for co-ordinating the lifting of regional coronavirus restrictions that includes a role for digital tools — in what the EU executive couches as “a robust system of reporting and contact tracing”. However it has reiterated that such tools must “fully respect data privacy”.

Last week the Commission made a similar call for a common approach to data and apps for fighting the coronavirus, emphasizing the need for technical measures to be taken to ensure that citizens’ rights and freedoms aren’t torched in the scramble for a tech fix.

Today’s toolbox of measures and principles is the next step in its push to coordinate a pan-EU response.

“Responsible planning on the ground, wisely balancing the interests of protection of public health with those of the functioning of our societies, needs a solid foundation. That’s why the Commission has drawn up a catalogue of guidelines, criteria and measures that provide a basis for thoughtful action,” said EC president Ursula von der Leyen, commenting on the full roadmap in a statement.

“The strength of Europe lies in its social and economic balance. Together we learn from each other and help our European Union out of this crisis,” she added.

Harmonized data gathering and sharing by public health authorities — “on the spread of the virus, the characteristics of infected and recovered persons and their potential direct contacts” — is another key plank of the plan for lifting coronavirus restrictions on citizens within the 27 Member State bloc.

While ‘anonymized and aggregated’ data from commercial sources — such as telcos and social media platforms — is seen as a potential aid to pandemic modelling and forecasting efforts, per the plan.

“Social media and mobile network operators can offer a wealth of data on mobility, social interactions, as well as voluntary reports of mild disease cases (e.g. via participatory surveillance) and/or indirect early signals of disease spread (e.g. searches/posts on unusual symptoms),” it writes. “Such data, if pooled and used in anonymised, aggregated format in compliance with EU data protection and privacy rules, could contribute to improve the quality of modelling and forecasting for the pandemic at EU level.”

The Commission has been leaning on telcos to hand over fuzzy metadata for coronavirus modelling which it wants done by the EU’s Joint Research Centre. It wrote to 19 mobile operators last week to formalize its request, per Euractiv, which reported yesterday that its aim is to have the data exchange system operational ‘as soon as possible’ — with the hope being it will cover all the EU’s member states.
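What “anonymized and aggregated” could mean in practice is easier to see in code. The sketch below is purely illustrative — the per-area device counting and the suppression threshold are our assumptions for the example, not the Commission’s or the JRC’s actual methodology:

```python
from collections import defaultdict

# Illustrative sketch only: one common reading of "anonymized and
# aggregated" mobility data is per-area device counts with small cells
# suppressed, so no published figure can be traced back to a single
# subscriber. The threshold is an assumption, not an official figure.
MIN_CELL_SIZE = 30  # suppress any area observed with fewer devices

def aggregate_mobility(records):
    """records: iterable of (area_code, device_id) pairs from an operator.
    Returns {area_code: device_count}, dropping under-threshold cells."""
    devices_per_area = defaultdict(set)
    for area, device in records:
        devices_per_area[area].add(device)
    return {area: len(devs)
            for area, devs in devices_per_area.items()
            if len(devs) >= MIN_CELL_SIZE}
```

The point of the small-cell suppression is that a count of one or two devices in a sparsely populated area can itself identify someone — aggregation alone is not anonymization.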

Other measures in the wider roadmap include the need for states to expand their coronavirus testing capacity and harmonize testing methodologies — with the Commission today issuing guidelines to support the development of “safe and reliable testing”.

Supporting the reopening of internal and external EU borders is another area of focus, with the executive generally urging a gradual and phased lifting of coronavirus restrictions.

On contacts tracing apps specifically, the Commission writes:

“Mobile applications that warn citizens of an increased risk due to contact with a person tested positive for COVID-19 are particularly relevant in the phase of lifting containment measures, when the infection risk grows as more and more people get in contact with each other. As experienced by other countries dealing with the COVID-19 pandemic, these applications can help interrupt infection chains and reduce the risk of further virus transmission. They should thus be an important element in the strategies put in place by Member States, complementing other measures like increased testing capacities.

“The use of such mobile applications should be voluntary for individuals, based on users’ consent and fully respecting European privacy and personal data protection rules. When using tracing apps, users should remain in control of their data. National health authorities should be involved in the design of the system. Tracing close proximity between mobile devices should be allowed only on an anonymous and aggregated basis, without any tracking of citizens, and names of possibly infected persons should not be disclosed to other users. Mobile tracing and warning applications should be subject to demanding transparency requirements, be deactivated as soon as the COVID-19 crisis is over and any remaining data erased.”

“Confidence in these applications and their respect of privacy and data protection are paramount to their success and effectiveness,” it adds.

Earlier this week Apple and Google announced a collaboration around coronavirus contacts tracing — throwing their weight behind a privacy-sensitive decentralized approach to proximity tracking that would see ephemeral IDs processed locally on devices, rather than being continually uploaded and held on a central server.
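For illustration, here’s a minimal Python sketch of that decentralized pattern — rotating IDs derived from an on-device secret, with matching done locally. All names, parameters and the derivation scheme are assumptions for the sketch, not the actual Apple-Google specification:

```python
import os
import hmac
import hashlib

# Illustrative sketch of a decentralized design -- not the real
# Apple-Google protocol. A phone keeps a secret daily key and derives
# short-lived broadcast IDs from it, so observers can't link the
# Bluetooth beacons they overhear back to one device.
INTERVALS_PER_DAY = 144  # assume IDs rotate roughly every 10 minutes

def new_daily_key() -> bytes:
    """Fresh random secret, generated and kept on the device."""
    return os.urandom(32)

def ephemeral_id(daily_key: bytes, interval: int) -> bytes:
    """Derive the rotating ID broadcast during one time interval."""
    msg = f"EPHEMERAL-ID:{interval}".encode()
    return hmac.new(daily_key, msg, hashlib.sha256).digest()[:16]

def match_locally(observed_ids, diagnosed_daily_key):
    """On-device check: did any ID we heard come from a diagnosed key?

    Only the daily keys of diagnosed users are ever published; each
    phone re-derives the corresponding IDs itself, so raw proximity
    logs never leave the device."""
    candidates = {ephemeral_id(diagnosed_daily_key, i)
                  for i in range(INTERVALS_PER_DAY)}
    return any(eid in candidates for eid in observed_ids)
```

The design choice doing the work here is that only diagnosed users’ keys are ever published; everyone else’s proximity log stays on their phone.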

A similar decentralized infrastructure for Bluetooth-based COVID-19 contacts tracing had already been suggested by a European coalition of privacy and security experts, as we reported last week.

While a separate coalition of European technologists and researchers has been pushing a standardization effort for COVID-19 contacts tracing that they’ve said will support either centralized or decentralized approaches — in the hopes of garnering the broadest possible international backing.

For its part the Commission has urged the use of technologies such as decentralization for COVID-19 contacts tracing to ensure tools align with core EU principles for handling personal data and safeguarding individual privacy, such as data minimization.

However governments in the region are working on a variety of apps and approaches for coronavirus contacts tracing that don’t all look as if they will check a ‘rights respecting’ box…

In a video address last week, Europe’s lead privacy regulator, the EDPS, intervened to call for a “pan-European model ‘COVID-19 mobile application’, coordinated at EU level” — in light of varied tech efforts by Member States which involve the processing of personal data for a claimed public health purpose.

“The use of temporary broadcast identifiers and bluetooth technology for contact tracing seems to be a useful path to achieve privacy and personal data protection effectively,” said Wojciech Wiewiórowski in the address. “Given these divergences, the European Data Protection Supervisor calls for a pan-European model ‘COVID-19 mobile application’, coordinated at EU level. Ideally, coordination with the World Health Organisation should also take place, to ensure data protection by design globally from the start.”

The Commission has not gone so far in today’s plan — calling instead for Member States to ensure their own efforts align with the EU’s existing data protection framework.

Though its roadmap is also heavy on talk of the need for “coordination between Member States to avoid negative effects” — dubbing it “a matter of common European interest”. But, for now, the Commission has issued a list of recommendations; it’s up to Member States to choose to fall in behind them or not.

With the caveat that EU regulators are watching very carefully how states handle citizens’ data.

“Legality, transparency and proportionality are essential for me,” warned Wiewiórowski, ending last week’s intervention on the EU digital response to the coronavirus with a call for “digital solidarity, which should make data working for all people in Europe and especially for the most vulnerable” — and a cry against “the now tarnished and discredited business models of constant surveillance and targeting that have so damaged trust in the digital society”.

An EU coalition of techies is backing a “privacy-preserving” standard for COVID-19 contacts tracing

A European coalition of techies and scientists drawn from at least eight countries, and led by Germany’s Fraunhofer Heinrich Hertz Institute for telecoms (HHI), is working on contacts-tracing proximity technology for COVID-19 that’s designed to comply with the region’s strict privacy rules — officially unveiling the effort today.

China-style individual-level location-tracking of people by states via their smartphones, even for a public health purpose, is hard to imagine in Europe — which has a long history of legal protection for individual privacy. However the coronavirus pandemic is applying pressure to the region’s data protection model, as governments turn to data and mobile technologies to seek help with tracking the spread of the virus, supporting their public health response and mitigating wider social and economic impacts.

Scores of apps are popping up across Europe aimed at attacking coronavirus from different angles. European privacy not-for-profit, noyb, is keeping an updated list of approaches, both led by governments and private sector projects, to use personal data to combat SARS-CoV-2 — with examples so far including contacts tracing, lockdown or quarantine enforcement and COVID-19 self-assessment.

The efficacy of such apps is unclear — but the demand for tech and data to fuel such efforts is coming from all over the place.

In the UK the government has been quick to call in tech giants, including Google, Microsoft and Palantir, to help the National Health Service determine where resources need to be sent during the pandemic. While the European Commission has been leaning on regional telcos to hand over user location data to carry out coronavirus tracking — albeit in aggregated and anonymized form.

The newly unveiled Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project is a response to the coronavirus pandemic generating a huge spike in demand for citizens’ data — intended to offer not just another app, but what’s described as “a fully privacy-preserving approach” to COVID-19 contacts tracing.

The core idea is to leverage smartphone technology to help disrupt the next wave of infections by notifying individuals who have come into close contact with an infected person — via the proxy of their smartphones having been near enough to carry out a Bluetooth handshake. So far so standard. But the coalition behind the effort wants to steer developments in such a way that the EU response to COVID-19 doesn’t drift towards China-style state surveillance of citizens.

While, for the moment, strict quarantine measures remain in place across much of Europe, there may be less imperative for governments to rip up the best-practice rulebook to intrude on citizens’ privacy, given the majority of people are locked down at home. But the looming question is what happens when restrictions on daily life are lifted?

Contacts tracing — as a way to offer a chance for interventions that can break any new infection chains — is being touted as a key component of preventing a second wave of coronavirus infections by some, with examples such as Singapore’s TraceTogether app being eyed up by regional lawmakers.

Singapore does appear to have had some success in keeping a second wave of infections from turning into a major outbreak, via an aggressive testing and contacts-tracing regime. But what works for a small island city-state with a population of less than 6M isn’t necessarily transferable to a trading bloc of 27 nations whose collective population exceeds 500M.

Europe isn’t going to have a single coronavirus tracing app. It’s already got a patchwork. Hence the people behind PEPP-PT offering a set of “standards, technology, and services” to countries and developers to plug into to get a standardized COVID-19 contacts-tracing approach up and running across the bloc.

The other very European flavored piece here is privacy — and privacy law. “Enforcement of data protection, anonymization, GDPR [the EU’s General Data Protection Regulation] compliance, and security” are baked in, is the top-line claim.

“PEPP-PT was explicitly created to adhere to strong European privacy and data protection laws and principles,” the group writes in an online manifesto. “The idea is to make the technology available to as many countries, managers of infectious disease responses, and developers as quickly and as easily as possible.

“The technical mechanisms and standards provided by PEPP-PT fully protect privacy and leverage the possibilities and features of digital technology to maximize speed and real-time capability of any national pandemic response.”

Hans-Christian Boos, one of the project’s co-initiators — and the founder of an AI company called Arago — discussed the initiative with German newspaper Der Spiegel, telling it: “We collect no location data, no movement profiles, no contact information and no identifiable features of the end devices.”

The newspaper reports PEPP-PT’s approach means apps aligning to this standard would generate only temporary IDs — to avoid individuals being identified. Two or more smartphones running an app that uses the tech and has Bluetooth enabled when they come into proximity would exchange their respective IDs — saving them locally on the device in an encrypted form, according to the report.

Der Spiegel writes that should a user of the app subsequently be diagnosed with coronavirus their doctor would be able to ask them to transfer the contact list to a central server. The doctor would then be able to use the system to warn affected IDs they have had contact with a person who has since been diagnosed with the virus — meaning those at risk individuals could be proactively tested and/or self-isolate.

On its website PEPP-PT explains the approach thus:

Mode 1
If a user is not tested or has tested negative, the anonymous proximity history remains encrypted on the user’s phone and cannot be viewed or transmitted by anybody. At any point in time, only the proximity history that could be relevant for virus transmission is saved, and earlier history is continuously deleted.

Mode 2
If the user of phone A has been confirmed to be SARS-CoV-2 positive, the health authorities will contact user A and provide a TAN code to the user that ensures potential malware cannot inject incorrect infection information into the PEPP-PT system. The user uses this TAN code to voluntarily provide information to the national trust service that permits the notification of PEPP-PT apps recorded in the proximity history and hence potentially infected. Since this history contains anonymous identifiers, neither person can be aware of the other’s identity.
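Read as a protocol, that two-mode flow might look something like the following Python sketch — with every class, method and parameter name hypothetical, and the 14-day retention window an assumption for illustration rather than a figure from PEPP-PT’s documentation:

```python
import secrets
from datetime import datetime, timedelta

RETENTION = timedelta(days=14)  # assumed relevance window, not from the spec

class ProximityHistory:
    """Mode 1: the anonymous proximity history stays on the user's phone."""

    def __init__(self):
        self._entries = []  # (timestamp, anonymous_id) pairs, stored locally

    def record(self, anonymous_id):
        self._entries.append((datetime.utcnow(), anonymous_id))
        self._prune()

    def _prune(self):
        # Continuously delete history too old to matter for transmission.
        cutoff = datetime.utcnow() - RETENTION
        self._entries = [(t, i) for t, i in self._entries if t >= cutoff]

    def export_ids(self):
        """Mode 2 only: the user voluntarily shares anonymous IDs."""
        return [anon_id for _, anon_id in self._entries]

class TrustService:
    """Mode 2: health authorities hand a confirmed-positive user a TAN,
    and only a TAN-authorized upload can trigger notifications."""

    def __init__(self):
        self._valid_tans = set()

    def issue_tan(self):
        tan = secrets.token_hex(8)
        self._valid_tans.add(tan)
        return tan

    def ids_to_notify(self, tan, exported_ids):
        # The TAN check is what blocks malware from injecting fake
        # infection reports into the system.
        if tan not in self._valid_tans:
            raise PermissionError("invalid TAN -- upload rejected")
        self._valid_tans.discard(tan)  # single use
        # Only anonymous identifiers come back out -- no names, ever.
        return list(exported_ids)
```

The notable choice is that the trust service only ever handles single-use TANs and anonymous identifiers — “who should be warned” reduces to a list of opaque IDs, never a list of people.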

Providing further detail of what it envisages as “Country-dependent trust service operation”, it writes: “The anonymous IDs contain encrypted mechanisms to identify the country of each app that uses PEPP-PT. Using that information, anonymous IDs are handled in a country-specific manner.”

While on healthcare processing it suggests: “A process for how to inform and manage exposed contacts can be defined on a country by country basis.”

Among the other features of PEPP-PT’s mechanisms the group lists in its manifesto are:

  • Backend architecture and technology that can be deployed into local IT infrastructure and can handle hundreds of millions of devices and users per country instantly.
  • Managing the partner network of national initiatives and providing APIs for integration of PEPP-PT features and functionalities into national health processes (test, communication, …) and national system processes (health logistics, economy logistics, …) giving many local initiatives a local backbone architecture that enforces GDPR and ensures scalability.
  • Certification Service to test and approve local implementations to be using the PEPP-PT mechanisms as advertised and thus inheriting the privacy and security testing and approval PEPP-PT mechanisms offer.

Having a standardized approach that could be plugged into a variety of apps would allow for contacts tracing to work across borders — i.e. even if different apps are popular in different EU countries — an important consideration for the bloc, which has 27 Member States.

However there may be questions about the robustness of the privacy protection designed into the approach — if, for example, pseudonymized data is centralized on a server that doctors can access, there could be a risk of it leaking and being re-identified. And identification of individual device holders would be legally risky.

Europe’s lead data regulator, the EDPS, recently made a point of tweeting to warn an MEP (and former EC digital commissioner) against the legality of applying Singapore-style Bluetooth-powered contacts tracing in the EU — writing: “Please be cautious comparing Singapore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.”

A spokesman for the EDPS told us it’s in contact with data protection agencies of the Member States involved in the PEPP-PT project to collect “relevant information”.

“The general principles presented by EDPB on 20 March, and by EDPS on 24 March are still relevant in that context,” the spokesman added — referring to guidance issued by the privacy regulators last month in which they encouraged anonymization and aggregation should Member States want to use mobile location data for monitoring, containing or mitigating the spread of COVID-19. At least in the first instance.

“When it is not possible to only process anonymous data, the ePrivacy Directive enables Member States to introduce legislative measures to safeguard public security (Art. 15),” the EDPB further noted.

“If measures allowing for the processing of non-anonymised location data are introduced, a Member State is obliged to put in place adequate safeguards, such as providing individuals of electronic communication services the right to a judicial remedy.”

We reached out to the HHI with questions about the PEPP-PT project and were referred to Boos — but at the time of writing had been unable to speak to him.

“The PEPP-PT system is being created by a multi-national European team,” the HHI writes in a press release about the effort. “It is an anonymous and privacy-preserving digital contact tracing approach, which is in full compliance with GDPR and can also be used when traveling between countries through an anonymous multi-country exchange mechanism. No personal data, no location, no Mac-Id of any user is stored or transmitted. PEPP-PT is designed to be incorporated in national corona mobile phone apps as a contact tracing functionality and allows for the integration into the processes of national health services. The solution is offered to be shared openly with any country, given the commitment to achieve interoperability so that the anonymous multi-country exchange mechanism remains functional.”

“PEPP-PT’s international team consists of more than 130 members working across more than seven European countries and includes scientists, technologists, and experts from well-known research institutions and companies,” it adds.

“The result of the team’s work will be owned by a non-profit organization so that the technology and standards are available to all. Our priorities are the well being of world citizens today and the development of tools to limit the impact of future pandemics — all while conforming to European norms and standards.”

PEPP-PT says its technology-focused efforts are being financed through donations — and, per its website, it has adopted the WHO’s standards for such financing, to “avoid any external influence”.

Of course for the effort to be useful it relies on EU citizens voluntarily downloading one of the aligned contacts tracing apps — and carrying their smartphone everywhere they go, with Bluetooth enabled.

Without substantial penetration among regional smartphone users it’s questionable how much of an impact this initiative, or any contacts-tracing technology, could have. Although if such tech were able to break even some infection chains, people might argue it’s not wasted effort.

Notably, there are signs Europeans are willing to contribute to a public healthcare cause by doing their bit digitally — such as a self-reporting COVID-19 tracking app which last week racked up 750,000 downloads in the UK in 24 hours.

But, at the same time, contacts tracing apps are facing scepticism over their ability to contribute to the fight against COVID-19. Not everyone carries a smartphone or knows how to download an app, for instance. There are plenty of people who would fall outside such a digital net.

Meanwhile, although there’s clearly been a big scramble across the region, at both government and grassroots level, to mobilize digital technology for the public health emergency, there’s arguably a greater imperative to direct effort and resources at scaling up coronavirus testing programs — an area where most European countries continue to lag.

Germany — where some of the key backers of the PEPP-PT are from — being the most notable exception.