Streamer Deezer cheers Apple antitrust fine, but calls tech giant’s DMA response ‘deceptive’

Streaming music service Deezer is joining Spotify in cheering the European Union’s €1.84 billion fine imposed on Apple for breaking antitrust rules in the streaming music market. However, the company urges the EU Commission to assess Apple’s response to the Digital Markets Act (DMA), which it says is “deceptive” and “an attempt to bypass European […]


Apple plans to appeal European Commission’s massive antitrust fine favoring Spotify

Apple says it plans to appeal the historic €1.84 billion fine issued today by the European Commission over Apple’s anticompetitive practices in the streaming music market. In a newsroom post, Apple called out Spotify, a company based in Stockholm, Sweden, as the “primary advocate” and “biggest beneficiary” of the EC’s decision — noting the streamer […]


Spotify, Epic Games, and others pen letter to EC, claiming Apple has made a ‘mockery’ of the DMA

Epic Games, Spotify, Proton, 37Signals, and other developers had already signaled their displeasure with how Apple has chosen to adapt its rules to meet the requirements of the new EU regulation, the Digital Markets Act (DMA), calling it “extortion” and “bad-faith” compliance, among other things. Now those companies have formalized their complaints in a letter […]


As possible EC fine nears, Apple claims Spotify is trying to get ‘limitless access’ to its tools without paying

With the European Commission set to rule on Spotify’s complaint focused on competition in the streaming music market, there are hints that the ruling will not be in Apple’s favor. This week, the Financial Times reported the EC will issue its first-ever fine against the tech giant for allegedly breaking EU law over competition in […]


Europe seals a deal on tighter rules for digital services

In the small hours local time, European Union lawmakers secured a provisional deal on a landmark update to rules for digital services operating in the region — grabbing political agreement after a final late night/early morning of compromise talks on the detail of what is a major retooling of the bloc’s existing ecommerce rulebook.

The political agreement on the Digital Services Act (DSA) paves the way for formal adoption in the coming weeks and for the legislation entering into force — likely later this year. The rules won’t start to apply until 15 months after that, though, so there’s a fairly long lead-in time to allow companies to adapt.

The regulation is wide-ranging — setting out to harmonize content moderation and other governance rules to speed up the removal of illegal content and products. It addresses a grab-bag of consumer protection and privacy concerns, as well as introducing algorithmic accountability requirements for large platforms to dial up societal accountability around their services, while ‘KYC’ requirements are intended to do the same for online marketplaces.

How effective the package will be is of course tbc, but the legislation that was agreed today goes further than the Commission’s proposal in a number of areas — with, for example, the European Parliament pushing to add in limits on tracking-based advertising.

A prohibition on the use of so-called ‘dark patterns’ for online platforms is also included — but not, it appears, a full blanket ban for all types of digital service (per details of the final text shared with TechCrunch via our sources).

See below for a fuller breakdown of what we know so far about what’s been agreed. 

The DSA was presented as a draft proposal by the Commission back in December 2020, which means it’s taken some 16 months of discussion — looping in the other branches of the EU: the directly elected European Parliament and the Council, which represents EU Member States’ governments — to reach this morning’s accord.

After last month’s deal on the Digital Markets Act (DMA), which selectively targets the most powerful intermediating platforms (aka gatekeepers) with an ex ante, pro-competition regime, EU policy watchers may be forgiven for a little euphoria at the (relative) speed with which substantial updates to digital rules are being agreed.

Big Tech’s lobbying of the EU over this period has been of an unprecedented scale in monetary terms. Notably, giants like Google have also sought to insert themselves into the ‘last mile’ stage of discussions where EU institutions are supposed to shut themselves off from external pressures to reach a compromise, as a report published earlier today by Corporate Europe Observatory underlines. That illustrates what they believe is at stake.

The full impact of Google et al’s lobbying won’t be clear for months or even years. But, at the least, Big Tech’s lobbyists were not successful in entirely blocking the passage of the two major digital regulations — so the EU is saved from an embarrassing repeat of the (stalled) ePrivacy update, which may indicate that regional lawmakers are wising up to the tech industry’s tactics. Or, well, that Big Tech’s promises are not as shiny and popular as they used to be.

The Commission’s mantra for the DSA has always been that the goal is to ensure that what’s illegal offline will be illegal online. And in a video message tweeted out in the small hours local time, a tired but happy-looking EVP, Margrethe Vestager, said it’s “not a slogan anymore that what’s illegal offline should also be seen and dealt with online”.

“Now it is a real thing,” she added. “Democracy’s back.”

In a statement, Commission president Ursula von der Leyen added:

“Today’s agreement on the Digital Services Act is historic, both in terms of speed and of substance. The DSA will upgrade the ground-rules for all online services in the EU. It will ensure that the online environment remains a safe space, safeguarding freedom of expression and opportunities for digital businesses. It gives practical effect to the principle that what is illegal offline, should be illegal online. The greater the size, the greater the responsibilities of online platforms. Today’s agreement — complementing the political agreement on the Digital Markets Act last month — sends a strong signal: to all Europeans, to all EU businesses, and to our international counterparts.”

In its own press release, the Council called the DSA “a world first in the field of digital regulation”.

While the parliament said the “landmark rules… effectively tackle the spread of illegal content online and protect people’s fundamental rights in the digital sphere”.

In a statement, its rapporteur for the file, MEP Christel Schaldemose, further suggested the DSA will “set new global standards”, adding: “Citizens will have better control over how their data are used by online platforms and big tech-companies. We have finally made sure that what is illegal offline is also illegal online. For the European Parliament, additional obligations on algorithmic transparency and disinformation are important achievements. These new rules also guarantee more choice for users and new obligations for platforms on targeted ads, including bans to target minors and restricting data harvesting for profiling.”

Other EU lawmakers are fast dubbing the DSA a “European constitution for the Internet”. And it’s hard not to see the gap between the EU and the US on comprehensive digital lawmaking as increasingly gaping.

Vestager’s victory message notably echoes encouragement tweeted out earlier this week by the former US secretary of state, senator, first lady and presidential candidate, Hillary Clinton, who urged Europe to get the DSA across the line and “bolster global democracy before it’s too late”, as she put it, adding: “For too long, tech platforms have amplified disinformation and extremism with no accountability. The EU is poised to do something about it.”

DSA: What’s been agreed?

In their respective press releases trumpeting the deal, the parliament and Council have provided an overview of key elements of the regulation they’ve agreed.

It’s worth emphasizing that the full and final text hasn’t been published yet — and won’t be for a while. It’s pending legal checks and translation into the bloc’s many languages — which means the full detail of the regulation and the implication of all its nuance remains tbc.

But here’s an overview of what we know so far…

Scope, supervision & penalties

On scope, the Council says the DSA will apply to all online intermediaries providing services in the EU.

The regulation’s obligations are intended to be proportionate to the nature of the services concerned and the number of users — with extra, “more stringent” requirements for “very large online platforms” (aka VLOPs) and very large online search engines (VLOSEs).

Services with more than 45M monthly active users in the EU will be considered VLOPs or VLOSEs. So plenty of services will reach that bar — including, for example, the homegrown music streaming giant Spotify.

“To safeguard the development of start-ups and smaller enterprises in the internal market, micro and small enterprises with under 45 million monthly active users in the EU will be exempted from certain new obligations,” the Council adds.

The Commission itself will be responsible for supervising VLOPs and VLOSEs for the obligations that are specific to them — which is intended to avoid bottlenecks in the oversight and enforcement of larger platforms (such as happened with the EU’s GDPR).

But national agencies at the Member State level will supervise the wider scope of the DSA — so EU lawmakers say this arrangement maintains the country-of-origin principle that’s baked into existing digital rules.

Penalties for breaches of the DSA can scale up to 6% of global annual turnover.

Per the parliament, there will also be a right for recipients of digital services to seek redress for any damages or loss suffered due to infringements by platforms.

Content moderation & marketplace rules

The content moderation measures are focused on harmonizing rules to ensure “swift” removal of illegal content.

This is being done through what the parliament describes as a “clearer ‘notice and action’ procedure” — where “users will be empowered to report illegal content online and online platforms will have to act quickly”, as it puts it.

It also flags support for victims of cyber violence — who it says will be “better protected especially against non-consensual sharing (revenge porn) with immediate takedowns”.

MEPs say fundamental rights are protected from the risk of over-removal of content (a risk created by the regulation’s pressure on platforms to act quickly) through “stronger safeguards to ensure notices are processed in a non-arbitrary and non-discriminatory manner and with respect for fundamental rights, including the freedom of expression and data protection”.

The regulation is also intended to ensure swift removal of illegal products/services from online marketplaces. So there are new requirements incoming for ecommerce players.

On this, the Council says the DSA will impose a “duty of care” on marketplaces vis-à-vis sellers who sell products or services on their online platforms.

“Marketplaces will in particular have to collect and display information on the products and services sold in order to ensure that consumers are properly informed,” it notes, although there will be plenty of devil in the detail of the exact provisions.

On this, the parliament says marketplaces will “have to ensure that consumers can purchase safe products or services online by strengthening checks to prove that the information provided by traders is reliable (‘Know Your Business Customer’ principle) and make efforts to prevent illegal content appearing on their platforms, including through random checks”.

Random checks on traders/goods had been pushed for by consumer protection organizations — who had been concerned the measure would be dropped during trilogues — so EU lawmakers appear to have listened to those concerns.
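To make the random-checks idea concrete, here is a minimal sketch in Python — with entirely hypothetical names and a made-up sampling rate, since the DSA text prescribes none of this — of how a marketplace might queue a random subset of its traders for a ‘Know Your Business Customer’ re-check:

```python
import random
from dataclasses import dataclass

@dataclass
class Trader:
    trader_id: str
    name: str
    registration_number: str  # e.g. a national business register ID
    verified: bool = False

def sample_for_spot_check(traders: list[Trader], rate: float = 0.05) -> list[Trader]:
    """Pick a random subset of traders whose supplied information
    will be manually re-checked (the 'random checks' in the text)."""
    sample_size = max(1, int(len(traders) * rate))
    return random.sample(traders, k=min(sample_size, len(traders)))

# Example: queue roughly 5% of a marketplace's traders for review.
traders = [Trader(f"t{i}", f"Shop {i}", f"REG-{1000 + i}") for i in range(100)]
for trader in sample_for_spot_check(traders):
    print(f"Review {trader.trader_id} ({trader.registration_number})")
```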

Extra obligations for VLOPs/VLOSEs

These larger platform entities will face scrutiny from the European Commission and Member State agencies over how their algorithms work — with the parliament saying both will have access to the algorithms of VLOPs.

The DSA also introduces an obligation for very large digital platforms and services to analyse “systemic risks they create” and to carry out “risk reduction analysis”, per the Council.

The analysis must be done annually — which the Council suggests will allow for monitoring of, and reduced risks in, areas such as: the dissemination of illegal content; adverse effects on fundamental rights; manipulation of services having an impact on democratic processes and public security; adverse effects related to gender-based violence and to minors; and serious consequences for the physical or mental health of users.

Additionally, VLOPs/VLOSEs will be subject to independent audits each year, per the parliament.

Large platforms that use algorithms to determine what content users see (aka “recommender systems”) will have to provide at least one option that is not based on profiling. Many already do — although they often also undermine these choices by applying dark pattern techniques to nudge users away from control over their feeds, so holistic supervision will be needed to meaningfully improve user agency.

There will also be transparency requirements for the parameters of these recommender systems with the goal of improving information for users and any choices they make. Again, the detail will be interesting to see there.
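For a sense of what this could look like in practice, here is a minimal sketch with hypothetical data structures — not drawn from the DSA text or any real platform’s API — showing a profiling-based ranking, a chronological option that uses no personal data, and plain-language parameter descriptions of the kind the transparency requirement seems to envisage:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Post:
    post_id: str
    created_at: datetime
    topics: set[str] = field(default_factory=set)

def rank_profiled(posts: list[Post], inferred_interests: set[str]) -> list[Post]:
    """Profiling-based option: order posts by overlap with interests
    inferred from the user's past activity."""
    return sorted(posts, key=lambda p: len(p.topics & inferred_interests),
                  reverse=True)

def rank_chronological(posts: list[Post]) -> list[Post]:
    """Non-profiling option: newest first, no personal data used."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

# Parameter transparency: plain-language descriptions a platform could
# surface to users next to the feed toggle.
RECOMMENDER_OPTIONS = {
    "for_you": "Ranked by overlap between post topics and your inferred interests.",
    "latest": "Newest posts first; your activity is not profiled.",
}
```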

Limits on targeted advertising  

Restrictions on tracking-based advertising appear to have survived the trilogue process with all sides reaching agreement on a ban on processing minors’ data for targeted ads.

This applies to platforms accessible to minors “when they are aware that a user is a minor”, per the Council.

“Platforms will be prohibited from presenting targeted advertising based on the use of minors’ personal data as defined in EU law,” it adds.

A final compromise text shared with TechCrunch by our sources suggests the DSA will stipulate that providers of online platforms should not do profile based advertising “when they are aware with reasonable certainty that the recipient of the service is a minor”.

A restriction on the use of sensitive data for targeted ads has also made it into the text.

The parliament sums this up by saying “targeted advertising is banned when it comes to sensitive data (e.g. based on sexual orientation, religion, ethnicity)”.

The wording of the final compromise text which we’ve seen states that: “Providers of online platforms shall not present advertising to recipients of the service based on profiling within the meaning of Article 4(4) of Regulation 2016/679 [aka, the GDPR] using special categories of personal data as referred to in article 9(1) of Regulation 2016/679.”

Article 4(4) of the GDPR defines ‘profiling’ as: “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements;”.

While the GDPR defines special category data as personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as biometric and health data, data on sex life and/or sexual orientation.

So targeting ads based on tracking or inferring users’ sensitive interests is — on paper — facing a hard ban in the DSA.
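Read together, the minors provision and the special-category restriction amount to a gate in front of any profiling-based ad decision. The sketch below is illustrative only; the function and attribute names are invented, and the category set paraphrases Article 9(1) GDPR:

```python
# Special categories of personal data per Article 9(1) GDPR, which the
# final DSA text (as described above) bars from profiling-based ad targeting.
SPECIAL_CATEGORIES = {
    "racial_or_ethnic_origin", "political_opinions", "religious_beliefs",
    "philosophical_beliefs", "trade_union_membership", "genetic_data",
    "biometric_data", "health", "sex_life", "sexual_orientation",
}

def may_serve_profiled_ad(is_minor_with_reasonable_certainty: bool,
                          profiling_attributes: set[str]) -> bool:
    """Return True only if a profiling-based ad is permissible under the
    two restrictions sketched here: no profiling of users the platform
    knows (with reasonable certainty) to be minors, and no profiling on
    special-category data."""
    if is_minor_with_reasonable_certainty:
        return False  # fall back to contextual (non-profiled) advertising
    if profiling_attributes & SPECIAL_CATEGORIES:
        return False  # targeting segment relies on Article 9 data
    return True

# Example: an adult profiled on "hiking" passes; a segment inferring
# "sexual_orientation" is rejected regardless of age.
assert may_serve_profiled_ad(False, {"hiking"}) is True
assert may_serve_profiled_ad(False, {"hiking", "sexual_orientation"}) is False
assert may_serve_profiled_ad(True, {"hiking"}) is False
```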

Ban on use of dark patterns

A prohibition on dark patterns also made it into the text. But, as we understand it, this only applies to “online platforms” — so it does not look like a blanket ban across all types of apps and digital services.

That is unfortunate. Unethical practices shouldn’t be acceptable no matter the size of the business.

On dark patterns, the parliament says: “Online platforms and marketplaces should not nudge people into using their services, for example by giving more prominence to a particular choice or urging the recipient to change their choice via interfering pop-ups. Moreover, cancelling a subscription for a service should become as easy as subscribing to it.”

The wording of the final compromise text that we’ve seen says that: “Providers of online platforms shall not design, organize or operate their online interfaces in a way that deceives, manipulates or otherwise materially distorts or impairs the ability of recipients of their service to make free and informed decisions” — after which there’s an exemption for practices already covered by Directive 2005/29/EC [aka the Unfair Commercial Practices Directive] and by the GDPR.

The final compromise text we reviewed further notes that the Commission may issue guidance on specific practices — such as platforms giving more prominence to certain choices, repeatedly requesting that a user make a choice after they already have, and making it harder to terminate a service than to sign up. So the effectiveness of the dark pattern ban could well come down to how much attention the Commission is willing to give to a massively widespread online problem.

The wording of the associated recital in the final compromise we saw also specifies that the dark pattern ban (only) applies for “intermediary services”.

Crisis mechanism 

An entirely new article was also added to the DSA following Russia’s invasion of Ukraine — and in connection with rising concern around the impact of online disinformation — creating a crisis response mechanism that will give the Commission extra powers to scrutinize VLOPs/VLOSEs in order to analyze the impact of their activities on the crisis in question.

The EU’s executive will also be able to come up with what the Council bills as “proportionate and effective measures to be put in place for the respect of fundamental rights”.

The mechanism will be activated by the Commission on the recommendation of the board of national Digital Services Coordinators.

EU lawmakers agree data reuse rules to foster AI and R&D

It’s just over a year since the European Union’s executive proposed legislation aimed at encouraging widespread sharing and reuse of industrial data and protected public sector data-sets in areas such as health, environment, energy, agriculture, mobility, finance, manufacturing, public administration and skills.

Yesterday the bloc’s co-legislators reached agreement on the Data Governance Act (DGA) after trilogue discussions — paving the way for formal adoption once the Council and Parliament vote to approve the final text.

The DGA is intended to create a standardized framework of trusted tools and techniques to encourage data reuse — by setting “secure and privacy-compliant conditions” for sharing data, as the Commission presented it.

The hope is also that the legislation will lead to the creation of common data spaces across strategic domains to support R&D.

Key components of the plan include the creation of a network of trusted and neutral data intermediaries; and an oversight regime to ensure conditions are being complied with — comprised of national monitoring authorities and a new advisory/steering body (aka, the European Data Innovation Board).

The legislation will also bring in international data transfer rules for non-personal data. (Such rules already exist — at least on paper — covering the export of personal data from the EU under the General Data Protection Regulation.)

Getting political agreement is typically the most challenging component of EU lawmaking.

Although the final text of the DGA will still need to be approved by the Council and Parliament before it’s formally adopted. (And, in terms of wider timeframe, the new rules will apply 15 months after the entry into force of the regulation — so likely not before 2023.)

In a press release on the provisional agreement, the Council of the EU said: “[N]egotiators from the Council and the European Parliament reached a provisional agreement on a new law to promote the availability of data and build a trustworthy environment to facilitate its use for research and the creation of innovative new services and products.”

“The [DGA] will create a mechanism to enable the safe reuse of certain categories of public-sector data that are subject to the rights of others. This includes, for example, trade secrets, personal data and data protected by intellectual property rights. Public-sector bodies allowing this type of reuse will need to be properly equipped, in technical terms, to ensure that privacy and confidentiality are fully preserved,” the Council added.

An earlier Open Data Directive, from 2019, does not cover the data types being legislated for here.

EU lawmakers believe Europe has a major advantage in industrial data — and want the DGA to create conditions that will encourage widespread data sharing and reuse. (A Data Act is also incoming which will propose measures intended to encourage business-to-business data sharing and business-to-government data sharing.)

The Commission has also been pushing the notion of “data altruism” and data sharing for the common good, although the overarching goal for the DGA is to foster the development of regional AIs to stoke economic competitiveness and growth.

“Businesses, both small and large, will benefit from new business opportunities as well as from a reduction in costs for acquiring, integrating and processing data, from lower barriers to enter markets, and from a reduction in time-to-market for novel products and services,” it suggests in a Q&A on the DGA.

At the same time the Commission has suggested that the creation of common European “data spaces” in sectors like manufacturing and health will help advance research — which could lead to cures for rare or chronic diseases or at least support evidence-based policymaking and foster other types of societally beneficial developments.

So the DGA co-mingles purely economic goals (via opening access to data) with broad-brush notions of “data altruism” and “data for society”.

This fuzziness has raised some watchful concern among civil society groups — such as the European Consumer Organisation BEUC, which has warned that “a weak definition of altruism in this agreement could allow companies to over-use vague, altruistic grounds to push consumers to share their data”.

Weak enforcement of existing legal protections for EU citizens’ data may also work against the sought-for ‘trust to share’ push.

Commenting in a statement on the trilogue agreement on the DGA yesterday, the Commission’s EVP for digital, Margrethe Vestager, avoided citing any such concerns — merely offering a boosterish claim that: “This Regulation is a first building block for establishing a solid and fair data-driven economy.”

“It is about setting up the right conditions for trustful data sharing in line with our European values and fundamental rights,” she went on, adding: “We are creating a safe environment in which data can be shared across sectors and Member States for the benefit of society and the economy.”

Internal Market commissioner, Thierry Breton, also stuck to the promotional script, saying the agreement on the DGA will allow the bloc to define “a common approach to data sharing”, and further suggesting: “We are facilitating the flow of growing industrial data across sectors and Member States to help Europe become the world’s number one data continent. We are doing so by building trust, putting the individuals and companies who generate data in the driving seat so they remain in control of the data they create. In a nutshell: an open yet sovereign European Single Market for data.”

In its PR, the European Parliament said MEPs secured tighter provisions on “trust and fair access” during the DGA negotiations — aimed at plugging loopholes in the legislation they said would have allowed operators from non-EU countries to abuse the scheme.

MEPs also focused on beefing up the role of the European Data Innovation Board — and on clarifying the scope of the legislation, saying they secured “precise requirements” on which services will fall under the new DGA.

Getting a better deal for SMEs and startups was another parliamentary priority, they said.

What’s been agreed?

The trilogue agreement reached yesterday includes the possibility for exclusive arrangements for the reuse of public-sector data — which the Council said will be possible “when justified and necessary for the provision of a service of general interest”.

However the maximum duration for existing contracts will be 2.5 years and for new contracts 12 months — with MEPs writing that: “Public sector bodies will have to avoid creating exclusive rights for the re-use of certain data, and exclusive agreements should be limited… to make more data available to SMEs and start-ups.”

The co-legislators have also agreed that the Commission will set up a European single access point with a searchable electronic register of public-sector data — which will be made available via national single information points.

The EU is anticipating that the DGA will create a new business model for data intermediation services — which will provide a secure environment in which companies or individuals can share data, and which will commit not to use the data for their own ends (though they will be able to charge for the transactions they enable).

Such services must be listed in an EU register under the incoming rules.

“For companies, these services can take the form of digital platforms, which will support voluntary data-sharing between companies or facilitate the fulfilment of data-sharing obligations set by law. By using these services, companies will be able to share their data without fear of its being misused or of losing their competitive advantage,” the Council writes.

“For personal data, such services and their providers will help individuals exercise their rights under the [GDPR].  This will help people have full control over their data and allow them to share it with a company they trust. This can be done, for example, by means of novel personal information management tools, such as personal data spaces or data wallets, which are apps that share such data with others, based on the data holder’s consent.”
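To illustrate the consent-gated model the Council describes, here is a minimal sketch of a hypothetical ‘data wallet’ intermediary that releases data only within the scope of an explicit grant from the data holder. The DGA defines obligations, not implementations, so every name here is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    data_holder: str      # the individual whose data is being shared
    recipient: str        # the company the holder has chosen to trust
    categories: set[str]  # e.g. {"fitness", "location"}

@dataclass
class DataWallet:
    """Hypothetical personal data space: stores consent grants and
    releases data only within their scope. Per the DGA model, the
    intermediary may charge for the transaction but must not reuse
    the data for its own ends."""
    consents: list[ConsentRecord] = field(default_factory=list)

    def grant(self, holder: str, recipient: str, categories: set[str]) -> None:
        self.consents.append(ConsentRecord(holder, recipient, categories))

    def may_share(self, holder: str, recipient: str, category: str) -> bool:
        return any(
            c.data_holder == holder and c.recipient == recipient
            and category in c.categories
            for c in self.consents
        )

wallet = DataWallet()
wallet.grant("alice", "research-lab", {"fitness"})
assert wallet.may_share("alice", "research-lab", "fitness")
assert not wallet.may_share("alice", "ad-network", "fitness")
```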

EU lawmakers are hoping that having dedicated rules to govern data-sharing will encourage both companies and individuals to make data available voluntarily — for altruistic purposes such as medical research projects.

The DGA will therefore create national registers of recognised “data altruism organisations” — aka entities seeking to collect data for objectives of general interest. Such entities must agree to comply with specific rules, and (once registered) will be recognised across the EU.

“This will create the necessary trust in data altruism, encouraging individuals and companies to donate data to such organisations so that it can be used for wider societal good,” the Council suggests.

The DGA will establish a voluntary certification program — with a logo — to make it easier to identify compliant providers of data intermediation services and data altruism organisations.

When the Commission proposed the DGA, commissioners were closely questioned on the issue of international transfers of data. On this, the Council says the Commission may (through secondary legislation) adopt adequacy decisions declaring that specific non-EU countries provide appropriate safeguards for the use of non-personal data transferred from the EU.

This mirrors the procedure attached to the GDPR covering personal data transfers to third countries.

However the Commission’s decisions on that front have been shown lacking on a number of occasions — notably, two flagship data transfer agreements with the US were struck down by the CJEU in recent years, which has resulted in increased uncertainty and complexity around international transfers of personal data.

So there are likely to be questions and concerns about what exactly constitutes “unlawful international transfer of or governmental access to non-personal data”, under the incoming legislation.

“These [DGA adequacy] decisions would be similar to adequacy decisions relating to personal data under the GDPR. Such safeguards should be considered to exist when the country in question has equivalent measures in place that ensure a level of protection similar to that provided by EU or member state law,” the Council writes, without delving into any detail.

It goes on to note that the EC may also adopt “model contractual clauses” to support public-sector bodies and re-users in the case of transfers of public-sector data to third countries — suggesting those that want to export data may be encouraged to follow a standardized approach. 

The parliament’s PR, meanwhile, talks of the DGA setting up “common European data spaces in strategic domains” — without making any reference to the international transfer issue.

How many of the touted “European data spaces” may end up hosted in the bloc — i.e. to meet the DGA’s requirements to provide adequate safeguards for the data — isn’t clear.

Although the Commission’s line is that “the free flow of data to third countries should be ensured”.

“We want Europe to create and strengthen its data economy but there is no obligation to store and process data in the EU. Nobody will be prohibited from dealing with the partner of their choice,” it also writes, before segueing into a caveat that: “At the same time, the EU must ensure that any access to EU citizen’s personal data and certain sensitive data is in compliance with its values and legislative framework.”

Driving AI innovation in tandem with regulation

The European Commission announced first-of-its-kind legislation regulating the use of artificial intelligence in April. This unleashed criticism that the regulations could slow AI innovation, hamstringing Europe in its competition with the U.S. and China for leadership in AI.

For example, Andrew McAfee wrote an article titled “EU proposals to regulate AI are only going to hinder innovation.”

Anticipating this criticism and mindful of the example of GDPR, where Europe’s thought-leadership position didn’t necessarily translate into data-related innovation, the EC has tried to address AI innovation directly by publishing a new Coordinated Plan on AI.

Released in conjunction with the proposed regulations, the plan is full of initiatives intended to help the EU become a leader in AI technology. So will the combination of regulation and pro-innovation policies be enough to spur the EU to AI leadership?

AI innovation can be accelerated with the right laws

While the combination is well considered and targets improvements in both regulation and innovation, there is a problem: The pro-innovation initiatives are R&D-focused and not targeted at increasing adoption in the “high-risk” AI use cases to be regulated.

Spurring adoption is a key missing element. Many research studies have shown that well-designed “hard law” regulations can actually increase innovation, especially when employed with incentives that accelerate adoption. If the EC were to follow such a strategy, the EU could become a hotbed of AI innovation.

High-risk AI regulation and investment in innovation

The main thrust of the EC regulations is to place new requirements on “high-risk” AI systems. These include AI systems used for remote biometric identification, public infrastructure management, hiring and employment, creditworthiness assessment, and education, as well as for various public-sector use cases, such as dispatching first responders.

The legislation requires developers of these systems to deploy an AI quality management system that addresses requirements around high-quality data sets, record keeping, transparency, human oversight, accuracy, robustness and security. Providers of AI systems not yet designated as high risk are encouraged to create voluntary codes of conduct to achieve similar goals.
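As a rough illustration of the record-keeping and human-oversight side of such a quality management system, here is a minimal sketch assuming a hypothetical structure — the legislation sets out requirements, not an implementation — that ties dataset provenance, a documented accuracy figure, a named reviewer and per-decision logging to a model version:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """Hypothetical audit record covering several of the listed
    requirements: dataset provenance (data quality), logged decisions
    (record keeping), and a named reviewer (human oversight)."""
    model_name: str
    model_version: str
    training_dataset: str          # provenance of the training data
    accuracy_on_holdout: float     # documented accuracy requirement
    reviewed_by: str               # human overseer who signed off
    decisions: list[dict] = field(default_factory=list)

    def log_decision(self, inputs: dict, output: str) -> None:
        # Record keeping: every automated decision is logged with a timestamp.
        self.decisions.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "inputs": inputs,
            "output": output,
        })

record = ModelRecord("credit-scorer", "1.4.2", "loans-2020-2023-v7",
                     accuracy_on_holdout=0.91, reviewed_by="j.doe")
record.log_decision({"income": 42_000, "history_months": 18}, "approve")
```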

It’s clear that the crafters of the proposal were cognizant of the balance between regulation and innovation.

First, the legislation limits the number of AI systems deemed to be high risk, excluding systems that could plausibly have been included, such as insurance, and mostly including AI systems that already have some amount of regulatory oversight, such as employment and lending.

Second, the legislation defines high-level requirements without dictating how they are achieved. It also creates a compliance system based on self-reporting instead of something more onerous.

Finally, the Coordinated Plan is chock-full of R&D-supporting initiatives, including spaces for data-sharing, testing and experimentation facilities, investment in research and AI excellence centers, digital innovation hubs, funding for education, and targeted, programmatic investments in AI for climate change, health, robotics, the public sector, law enforcement and sustainable agriculture.

However, the proposal lacks adoption-driving policies that have led to faster innovation in combination with regulation in other sectors.

A motivating precedent: EV incentives in the U.S.

So how could the EC promote much faster AI innovation while enacting regulatory guardrails? The example of electric vehicles in the United States provides a guide.

The U.S. has become a leader in electric car production because of a combination of entrepreneurship, regulations and smart market creation incentives.

Tesla invigorated the electric car industry with the insight that the new vanguard of electric cars should be desirable, high-performance sports cars.

The Corporate Average Fuel Economy (CAFE) regulations created a stick that required the development of more efficient vehicles. Generous tax credits for the purchase of electric vehicles helped directly accelerate vehicle sales without interfering with the natural, competitive market dynamics. The combination of CAFE regulations, tax credits and entrepreneurial companies like Tesla has created such a massive boost to innovation that electric vehicle powertrains are poised to become less expensive than internal combustion ones.

Getting AI incentives right: Three additional initiatives to pursue

The EC has an opportunity to achieve something similar with AI. Specifically, the EC should consider combining these current regulations with three additional initiatives.

Create tax incentives for companies to build or buy high-risk AI systems that adhere to these regulations. The EC should seek to proactively use AI to help meet economic and societal goals.

For example, some banks are using AI to better assess the creditworthiness of individuals with limited credit histories, while simultaneously working to ensure that banking activities are free from bias. This increases financial inclusion, a goal shared by governments, and represents a win-win AI innovation.

Further reduce uncertainty around EC legislative implementation. Part of this can be done directly by the EC — through the development of more specific standards around AI quality management and fairness. However, there may be even greater value in bringing together a coalition of AI technology providers and user organizations to translate these standards into practical steps for compliance.

For example, the Monetary Authority of Singapore has orchestrated an industry consortium for banks, insurers and AI technology providers called Veritas to achieve similar goals for its Fairness, Ethics, Accountability and Transparency (FEAT) guidelines.

Consider accelerating the adoption of the AI quality management systems that the legislation requires by funding companies to build or buy these systems. There is significant academic and commercial activity already in this space, in areas such as explainability of black box models, assessment of potential discrimination due to data or algorithmic bias, and testing and monitoring of AI systems for their ability to survive significant changes in data.

By creating the conditions to encourage widespread adoption of such technologies, the EC should be able to meet the dual objectives of encouraging innovation and enabling compliance with the new legislation in a sustainable manner.
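For a flavor of the monitoring piece — testing whether an AI system can survive significant changes in data — here is a minimal sketch using the population stability index, a common drift heuristic; the thresholds are an industry rule of thumb, not anything mandated by the EC proposal:

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI over pre-binned distributions (each list sums to 1.0): a common
    heuristic for detecting that live data has drifted from training data.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant."""
    eps = 1e-6  # avoid log-of-zero on empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

training_dist = [0.25, 0.50, 0.25]   # feature distribution at training time
live_dist = [0.10, 0.40, 0.50]       # same feature observed in production

psi = population_stability_index(training_dist, live_dist)
if psi > 0.25:
    print(f"PSI={psi:.3f}: significant drift, trigger review/retraining")
```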

If the EC assertively reduces uncertainty, promotes the use of regulated, “high-risk” AI and encourages the use of AI quality management techniques, then it has the chance to become the global leader in AI innovation while providing critical protection to its citizens. We should all be pulling for them to be successful, as it would set an example for the world to follow.

Clubhouse, Vimeo, DoubleVerify and others set to sign up to beefed-up disinformation code in the EU

Audio social network Clubhouse, video sharing platform Vimeo and anti-ad fraud startup DoubleVerify are among a clutch of tech companies and organizations preparing to sign up to a beefed-up version of the European Union’s Code of Practice on Online Disinformation, the Commission said today.

Back in May the EU’s executive said it would be strengthening the (still) voluntary code, which is aimed at driving joined-up industry action to counter the spread of harmful disinformation online.

It called on smaller digital services and adtech firms to sign up — saying it particularly wanted broader participation, not just the usual platform giants (some of whom were among the first group of signatories).

Facebook, Google, Twitter and Mozilla were among the first clutch of tech platforms to sign up to the EU disinformation code back in 2018, while TikTok joined last summer.

Eight newbies are said to be preparing to agree to the beefed-up code now.

In addition to the three aforementioned startups, the Commission said organisations that provide specific expertise and technical solutions to fight disinformation, such as Avaaz, Globsec, Logically, NewsGuard and WhoTargetsMe, are also preparing to join.

“Substantial additional efforts are needed to reduce the flow of harmful disinformation, as illustrated by recent electoral campaigns and by the signatories’ reports on the COVID-19 disinformation monitoring program,” the EC writes in a press release.

Among the shortcomings the Commission has said it wants the revised code to address are: inconsistent and incomplete application of the code across platforms and Member States; gaps in the coverage of the code’s commitments; the lack of an appropriate monitoring mechanism, including key performance indicators; a lack of commitments on access to platforms’ data for research on disinformation; and limited participation from stakeholders, in particular from the advertising sector.

So quite the laundry list, then.

While the intention is to beef up the code, it will remain self-regulated. So whether it will become a “strong EU instrument”, as the Commission puts it — without having any real legal teeth — remains to be seen.

For large internet platforms there will be, if not literal fangs, then some threatening growling, as compliance with the beefed-up code will be taken into consideration under wider, incoming digital regulation — aka the bloc’s Digital Services Act — which seeks to increase internet companies’ accountability across a number of areas.

And if self-regulation still fails to move the needle on harmful online disinformation, like bogus information about COVID-19 vaccines or election interference, the Commission could still propose legislation. However, it seems reluctant to do that in such a subjective and controversial area, given the risks of being accused of censoring free speech.

Pan-EU rules for political ads transparency are slated for introduction next year, in one targeted piece of planned legislation.

In a statement on the eight new “prospective signatories”, who have joined in the revision process as a beefed-up version of the disinformation code is drafted, Věra Jourová, the VP for values and transparency, said: “I am pleased to see new actors from relevant sectors answering our call and engaging in the overhaul of the Code of Practice. I encourage others, including platforms, messaging services and actors in the online advertising ecosystem, to join as soon as possible to co-shape the process.”

“The strengthened Code cannot fall short of the expectations we have set out in our Guidance from May,” she added. “Online players have a special responsibility regarding spreading and monetising disinformation. They must become more transparent, accountable and safe by design.”

Thierry Breton, the EU commissioner for the internal market, added: “I welcome the fact that more and more online platforms, technology providers, civil society and expert organisations are formally committing to fighting disinformation. The private sector should be ambitious and anticipate the new EU rules on digital services (DSA). Now is the time to show they walk the talk.”

EU considers single set of rules to regulate vacation rental platforms

European Union lawmakers are consulting on how to regulate the short term rental market across the bloc — asking whether a single set of pan-EU rules or something more locally flavored is needed to govern platforms like Airbnb.

The Commission says it wants to support an expansion of competition and players in the sector but also respect EU law which allows for Member States to apply local rules based on concerns that are in the public interest.

As it stands, a patchwork of rules and regulations has grown up around short term rental platforms across Europe as cities grapple with the impact of over-tourism from uncontrolled growth of Airbnb et al — and also struggle to extract useful data from platforms whose business model benefits from flipping local housing stock to drive short term rental gains.

This dynamic is triggering the Commission’s customary anxiety around fragmentation of the Single Market. So it’s now asking for views from the industry and interested citizens as it considers whether a sector-specific regulation is needed.

In a summary of its intention for the forthcoming short term rental (STR) initiative, which will be fed by responses to the public consultation, the Commission writes that its aim is “to develop responsible, fair and trusted growth in short-term rentals, as part of a well-balanced tourist ecosystem” — and “ensure a level playing field for all accommodation service providers and to respond to the many requests from interested parties for EU-wide action in this area”.

“This will involve offering balanced solutions for cities, peer and professional short-term rental providers and platforms, while benefiting in particular small and medium-sized firms,” it adds.

The Commission proposed broad rules for the digital sector at the back end of last year — under the Digital Services Act (DSA) — which it has said will level the playing field between online and offline businesses in areas such as illegal content and goods.

However the regulation has been criticized for not going far enough to tackle illegal holiday rentals.

In a position paper earlier this year, for example, the holiday accommodation booking platform, Booking.com, wrote that the Commission’s proposal could be more “ambitious” — saying it “insufficiently addresses illegal short term holiday rentals”.

“The powers for national (or local) authorities to order platforms to remove illegal content (Article 8) and to share information (Article 9) are a step in the right direction. However, enforcement loopholes remain,” it added. “Where short term vacation rentals are subject to an authorization /registration scheme, platforms should be required to ensure that only properly registered / authorized properties are listed. This is in line with the duty of care incumbent upon platforms.”
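A registration-gated listing flow of the kind Booking.com describes could be as simple as the following sketch; the register contents and registration-number formats are invented for illustration, and in practice this would be a lookup against each local authority’s scheme:

```python
# Hypothetical register of authorized short-term rentals, keyed by city.
AUTHORIZED_REGISTRATIONS = {
    "amsterdam": {"0363-ABCD-1234", "0363-EFGH-5678"},
    "paris": {"7510100000001"},
}

def may_publish_listing(city: str, registration_number: str | None) -> bool:
    """Allow a listing to go live only if the host supplied a registration
    number that appears in the city's register (where a scheme exists)."""
    register = AUTHORIZED_REGISTRATIONS.get(city.lower())
    if register is None:
        return True  # no registration scheme configured for this city
    return registration_number in register

assert may_publish_listing("Amsterdam", "0363-ABCD-1234")
assert not may_publish_listing("Paris", None)
assert may_publish_listing("Lisbon", None)  # no scheme configured
```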

The Commission is also facing vocal pressure on the issue from heavily touristed cities across Europe — which have been trying to impose limits on the growth of holiday rentals, with mixed success.

Earlier this year, for example, a court in Amsterdam overturned a three-district ban on rentals on platforms like Airbnb which the city authorities had brought in after polling local residents over the impacts on their communities.

But this summer city authorities in Paris trumpeted winning a major case against Airbnb — which was ordered to pay $9.6M for illegal listings.

Governments in Europe have also been pressing the Commission to regulate for access to vacation rental platforms’ data, with the Dutch suggesting last year that it should embed such provisions in the DSA.

The European Parliament has also warned that the “expansive growth of short-term holiday rental in cities and popular tourist destinations is extracting housing from the market, driving up prices and has a negative impact on liveability”. And in a report last year MEPs also said: “We want to give cities more control over short-term rental accommodation platforms and ensure that these platforms share information with the cities, respecting data protection rules.”

Also last year the Commission inked a voluntary agreement with a handful of major platforms to push some aggregated data. However, the agreement falls considerably short of the requirements that some individual European cities have imposed on STR platforms operating locally in a bid to control their impacts on communities.

In its consultation on the STR issue, the Commission notes that a number of authorities across the EU have taken steps to try to regulate short term rentals (STRs) — such as by imposing registration requirements and authorisation schemes on hosts, and by putting limits on the number of nights per year properties can be rented out.

“Whilst doing so, some national rules make a difference between so-called ‘peers’ offering STRs in a private capacity occasionally, and those offering STRs in a more professional capacity,” it goes on. “Criteria differ however and not everywhere a distinction is made between peers and professionals. Authorities have also taken a variety of measures to obtain better data from platforms, with a view to better assessing the volume of STRs and facilitating law enforcement.

“The foregoing has resulted in a patchwork of varying rules and regulations.”

“Under EU law, imposing restrictions and conditions on the provision of a service can be justified if necessary to pursue a legitimate public interest objective and proportionate to achieve that end. In spite of important clarifications offered by EU case law, there is still much uncertainty regarding the measures authorities may adopt and enforce as regards hosts and other service providers such as platforms,” the Commission adds.

“EU horizontal regulatory initiatives such as the Digital Services Act aim to impose harmonised obligations on online platforms, including collaborative economy platforms in the area of STRs. However, in order to foster a balanced development of the STR segment in the EU and to address the sector-specific aspects, a dedicated sector-specific initiative may be needed.”

It’s not clear exactly where the Commission will plant its flag on this contentious issue — but its consultation poses a number of questions for stakeholders to respond to in areas such as transparency and data access, as well as asking for views on the benefits and drawbacks of STRs.

The Commission is also seeking suggestions for specific rules and regulations that might be applied to STRs to achieve the sought-for balance between supporting tourism and respecting local public interest — including asking for views on how best to ascertain whether an STR host is a ‘peer’ or a professional operator.

The marketing of platforms like Airbnb typically spins the promise that you’re renting unique and characterful spaces from locals — which differentiates the holiday rental platform business from more traditional tourist fare like hotels.

However the reality of such ‘p2p’ platforms is they quickly attracted a swathe of professional landlords — who could lucratively list and rent entire apartments to tourists, instead of renting the housing at a lower rate to locals on long term contracts. Hence the criticism that STRs hollow out local communities and make city living less affordable for locals. 

In one question from the Commission’s public consultation — which is asking for views on the type of rules and requirements for STRs that “could be acceptable” — it suggests a number of responses, including registration and authorization obligations on all hosts or only on professional hosts; and limits on the number of nights, amount of income generated and number of rooms that can be rented by peers.

At the end of this list it also asks for a response to the suggestion of a “total ban on STRs”.

Elsewhere, the consultation asks for views on whether rules for STRs should be imposed from the top (a “harmonized EU approach”), allowed to be entirely bottom-up (left to local authorities to define), or some vaguely defined combination of the two.

It is also concerned with whether there should be different rules for peers and professional hosts — and, if so, how best to define who is a peer and who is a professional.

Other questions in the consultation ask for views on measures “that you think could support innovation in the STR and the development of new business models”. 

The Commission’s consultation runs until December 13, with the EU’s executive slated to publish a legislative proposal for vacation rentals in the first half of next year.

Dutch court finds Uber drivers are employees

Uber has lost another legal challenge in Europe over the employment status of drivers: The Court of Amsterdam, in the Netherlands, has ruled that drivers for Uber are employees, rather than self-employed contractors.

The court also found drivers are covered by an existing collective labor agreement in the country — which pertains to taxi drivers — meaning Uber faces increased costs to comply with the agreement, which sets pay requirements and covers benefits like sick pay. (And it may be liable for back pay to drivers in some cases.)

The court also ordered Uber to pay €50,000 in costs.

The ride hailing giant has some 4,000 drivers working on its platform in the Dutch capital.

The Amsterdam court rejected Uber’s customary defence that it’s just a technology platform that connects passengers with taxi service providers — finding instead that drivers are only self employed ‘on paper’.

The judges highlighted the nature of the service being provided by drivers and the fact Uber exerts controls over how they can work and earn through its app and algorithms.

Europe’s top court already ruled back in 2017 that Uber is a transport provider and must comply with local transport laws — so you’d be forgiven for deja vu.

The Dutch lawsuit was filed by the national trade union center, FNV, last year — with the hearing kicking off at the end of June.

In a statement today, the FNV’s VP, Zakaria Boufangacha, said: “This statement shows what we have been saying for years: Uber is an employer and the drivers are employees, so Uber must adhere to the collective labor agreement for Taxi Transport. It is also a signal to The Hague that these types of constructions are illegal and that the law must therefore be enforced.”

Uber has been contacted for a response to the ruling.

At the time of writing the company had not responded — but, per Reuters, Uber said it intends to appeal and “has no plans to employ drivers in the Netherlands”.

In the UK, Uber lost a string of tribunal rulings on its employment classification over a number of years — going on to lose in front of the UK Supreme Court this February.

Following that, Uber said it would treat drivers in the UK as workers, although disputes remain (such as over its definition of working time). In May, Uber also said it would recognize a UK trade union for the first time.

Elsewhere in Europe, however, the company continues to fight employment lawsuits — and to lobby European Union lawmakers to deregulate platform work…

The EU has said it wants to find a way to improve platform work. However it’s not yet clear what any pan-EU ‘reform’ may look like. 

The Commission has been contacted with questions on its platform work initiative.

“Digital labour platforms are clearly worried, evident through investing heavily in their lobbying power and throwing more resources at the EU level. These companies — including Uber of course — have also recently come together to create and fund a new lobby group that specifically aims to influence policies on platform work,” said Jill Toh, a PhD researcher in data rights at the University of Amsterdam, talking to TechCrunch after the Amsterdam ruling.

“We saw how Uber wielded and amended laws in their Prop 22 campaign in California, and together with other companies in Europe, they’re attempting to do so again. It’s disheartening to see that the Commission in its two consultations on platform worker regulation has only been talking to tech companies and has held no meetings with trade unions or other platform work representatives.”

“All of this is incredibly problematic and concerning especially if the EC consultations result in a directive on platform work. Overall, the wins in the courts are important for workers, but there remains the issue of corporate power and influence in Brussels, as well as the lack of public enforcement to these court decisions,” she added.