Join Greylock’s Asheem Chandna on November 5 at noon PST/3 pm EST/8 pm GMT to discuss the future of enterprise and cybersecurity investing

The world of enterprise software and cybersecurity has taken multiple body blows since COVID-19 demolished the in-person office, flinging employees across the world and forcing companies to adapt to an all-remote productivity model. The shift has required companies to rethink not only collaboration software, but also the infrastructure that powers it and the best way to protect assets once their security perimeters have been destroyed.

The pandemic has also dramatically increased the usage of digital services, forcing cloud providers to keep up with crushing demands for performance and reliability.

In short: there’s never been a better time to be an enterprise investor (or, possibly, a founder).

So I’m excited to announce the next guest in our Extra Crunch Live interview series: Asheem Chandna from Greylock, one of the top enterprise investors of the past two decades, who has worked with multiple important founding teams from whiteboard to IPO. We’re scheduled for Thursday, November 5 at noon PST/3 p.m. EST/8 p.m. GMT (check that daylight saving time math!)

Login details are below the fold for EC members, and if you don’t have an Extra Crunch membership, click through to sign up.

For nearly two decades, Asheem Chandna has invested in enterprise and security startups at Greylock, with massive investment wins in Palo Alto Networks, AppDynamics and Sumo Logic. These days, he continues to invest in cybersecurity with companies like Awake Security and Abnormal Security, data platforms like Rubrik and Delphix, and the stealthy search engine company Neeva. As a leading early-stage investor and mentor in the space, he’s seen a multitude of companies transition from inception to product-market fit to IPO.

We’ll talk about what all the turbulence in enterprise means for the future of startups in the space, how cybersecurity is evolving given the new threat landscape and also discuss a bit about how the public markets and their aggressive multiples for Silicon Valley enterprise startups is changing the strategy of venture capitalists. Plus, we’ll talk about company building, developing founders as leaders and more.

Join us next week with Asheem on Thursday, November 5 at noon PST/3 p.m. EST/8 p.m. GMT. Login details and calendar invite are below.

GDPR’s two-year review flags lack of “vigorous” enforcement

It’s more than two years since a flagship update to the European Union’s data protection regime moved into the application phase. Yet the General Data Protection Regulation (GDPR) has been dogged by criticism of a failure of enforcement related to major cross-border complaints — lending weight to critics who claim the legislation has created a moat for dominant multinationals, at the expense of smaller entities.

Today the European Commission responded to that criticism as it delivered a long-scheduled assessment of how the regulation is functioning, its first formal review two years in.

While EU lawmakers’ top-line message is the clear claim that ‘GDPR is working’ — with commissioners lauding the many positives of this “modern and horizontal piece of legislation”, which they also said has become a “global reference point” — they conceded there is a “very serious to-do list”, calling for uniformly “vigorous” enforcement of the regulation across the bloc.

So, in other words, GDPR decisions need to flow more smoothly than they have so far.

Speaking at a Commission briefing today, Věra Jourová, Commission VP for values and transparency, said: “The European Data Protection Board and the data protection authorities have to step up their work to create a truly common European culture — providing more coherent and more practical guidance, and work on vigorous but uniform enforcement.

“We have to work together, as the Board and the Member States, to address concerns — in particular those of the small and medium enterprises.”

Justice commissioner, Didier Reynders, also speaking at the briefing, added: “We have to ensure that [GDPR] is applied harmoniously — or at least with the same vigour across the European territory. There may be some nuanced differences but it has to be applied with the same vigour.

“In order for that to happen data protection authorities have to be sufficiently equipped — they have to have the relevant number of staff, the relevant budgets, and there is a clear will to move in that direction.”

Front and center for GDPR enforcement is the issue of resourcing for national data protection authorities (DPAs), who are tasked with providing oversight and issuing enforcement decisions.

Jourová noted today that EU DPAs — taken as a whole — have increased headcount by 42% and budget by 49% between 2016 and 2019.

However that’s an aggregate which conceals major differences in resourcing. A recent report by pro-privacy browser Brave found that half of all national DPAs receive just €5M or less in annual budget from their governments, for example. Brave also found budget increases peaked for the application of the GDPR — saying, two years in, governments are now slowing the increase.

It’s also true that DPA case load isn’t uniform across the bloc, with certain Member States (notably Ireland and Luxembourg) handling many more and/or more complex complaints than others as a result of how many multinationals locate their regional HQs there.

One key issue for GDPR thus relates to how the regulation handles cross border cases.

A one-stop-shop mechanism was supposed to simplify this process — by having a single regulator (typically in the country where the business has its main establishment) taking a lead on complaints that affect users in multiple Member States, and other interested DPAs not dealing directly with the data processor. But they do remain involved — and, once there’s a draft decision, play an important role as they can raise objections to whatever the lead regulator has decided.

However a lot of friction seems to be creeping into current processes — from technical issues related to sharing data between DPAs, and from the opportunity for additional legal delays.

In the case of big tech, GDPR’s one-stop-shop has resulted in a major backlog around enforcement, with multiple complaints being re-routed via Ireland’s Data Protection Commission (DPC) — which is yet to issue a single decision on a cross-border case, despite having more than 20 such investigations ongoing.

Last month Ireland’s DPC trailed looming decisions on Twitter and Facebook — saying it had submitted a draft decision on the Twitter case to fellow DPAs and expressed hope that case could be finalized in July.

Its data protection commissioner, Helen Dixon, had previously suggested the first cross-border decisions would be coming in “early” 2020. In the event, we’re past halfway through the year with still no enforcement on show.

This looks especially problematic as there is a counter example elsewhere in the EU: France’s CNIL managed to issue a decision in a major GDPR case against Google all the way back in January 2019. Last week the country’s top court for administrative law cemented the regulator’s findings — dismissing Google’s appeal. Its $57M fine against Google remains the largest yet levied against big tech under GDPR.

Asked directly whether the Commission believes Ireland’s DPC is sufficiently resourced — with the questioner noting it has multiple ongoing investigations into Facebook, in particular, with still no decisions taken on the company — Jourová emphasized DPAs are “fully independent”, before adding: “The Commission has no tools to push them to speed up but the cases you mention, especially the cases that relate to big tech, are always complex and they require thorough investigation — and it simply requires more time.”

However CNIL’s example shows effective enforcement against major tech platforms is possible — at least, where there’s a will to take on corporate power. Though France’s relative agility may also have something to do with not having to deal simultaneously with such a massive load of complex cross-border cases.

At the same time, critics point to Ireland’s cosy political relationship with the corporate giants it attracts via low tax rates — which in turn raises plenty of questions when set against the oversized role its DPA has in overseeing most of big tech. The stench of forum shopping is unmistakable.

Criticism of national regulators extends beyond Ireland, though. In the UK, privacy experts have slammed the ICO’s repeated failure to enforce the law against the adtech industry — despite its own assessments finding systemic flouting of the law. The country remains an EU Member State until the end of the year — and the ICO is the best resourced DPA in the bloc in terms of budget and headcount (and likely tech expertise too), which hardly reflects well on the functional state of the regulation.

Despite all this, the Commission continues to present GDPR as a major geopolitical success, claiming — as it did again today — that it’s ahead of the digital regulatory curve globally at a time when lawmakers almost everywhere are considering putting harder limits on Internet players.

But there’s only so long it can sell a success on paper. Without consistently “vigorous” enforcement, the whole framework crumbles — so the EU’s executive has serious skin in the game when it comes to GDPR actually doing what it says on the tin.

Pressure is coming from commercial quarters too — not only privacy and consumer rights groups.

Earlier this year, Brave lodged a complaint with the Commission against 27 EU Member States — accusing them of under resourcing their national data protection watchdogs. It called on the EU executive to launch an infringement procedure against national governments, and refer them to the bloc’s top court if necessary. So startups are banging the drum for enforcement too.

If decision wheels don’t turn on their own, courts may eventually be needed to force Europe’s DPAs to get a move on — albeit, the Commission is still hoping it won’t have to come to that.

“We saw a considerable increase of capacities both in Ireland and Luxembourg,” said Jourová, discussing the DPA resourcing issue. “We saw a sufficient increase in at least half of other Member States DPAs so we have to let them do very responsible and good work — and of course wait for the results.”

Reynders suggested that while there has been an increase in resource for DPAs the Commission may need to conduct a “deeper” analysis — to see if more resource is needed in some Member States, “due to the size of the companies at work in the jurisdiction of such a national authority”.

“We have huge differences between the Member States about the need to react to the requests from the companies. And of course we need to reinforce the cooperation and the co-ordination on cross border issues. We need to be sure that it’s possible for all the national authorities to work together. And in the network of national authorities it’s the case — and with the Board [EDPB] it’s possible to organize that. So we’ll continue to work on it,” he said.

“So it’s not only a question to have the same kind of approach in all the Member States. It’s to be fit to all the demands coming in your jurisdiction and it’s true that in some jurisdictions we have more multinationals and more members of high tech companies than in others.”

“The best answer will be a decision from the Irish data protection authority about important cases,” he added.

We’ve reached out to the Irish DPC and the EDPB for comment on the Commission’s GDPR assessment.

Asked whether the Commission has a list of Member States that it might instigate infringement proceedings against related to the terms of GDPR — which, for example, require governments to provide adequate resourcing to their national DPA in order that they can properly oversee the regulation — Reynders said it doesn’t currently have such a list.

“We have a list of countries where we try to see if it’s possible to reinforce the possibilities for the national authorities to have enough resources — human resources, financial resources, to organize better cross border activities — if at the end we see there’s a real problem about the enforcement of the GDPR in one Member State we will propose to go maybe to the court with an infringement proceeding — but we don’t have, for the moment, a list of countries to organize such a kind of process,” he said.

The commissioners were a lot more comfortable talking up the positives of GDPR, with Jourová noting, with a sphinx-like smile, how three years ago there was “literal panic” and an army of lobbyists warning of a “doomsday” for business and innovation should the legislation pass. “I have good news today — no dooms day was here,” she said.

“Our approach to the GDPR was the right one,” she went on. “It created the more harmonized rules across the Single Market and more and more companies are using GDPR concepts, such as privacy by design and by default, as a competitive differentiation.

“I can say that the philosophy of one continent, one law is very advantageous for European small and medium enterprises who want to operate on the European Single Market.

“In general GDPR has become a truly European trade mark,” she added. “It puts people and their rights at the center. It does not leave everything to the market like in the US. And it does not see data as a means for state supervision, as in China. Our truly European approach to data is the first answer to difficult questions we face as a society.”

She also noted that the regulation served as inspiration for the current Commission’s tech-focused policy priorities — including a planned “human centric approach to AI”.

“It makes us pause before facial recognition technology, for instance, will be fully developed or implemented. And I dare to say that it makes Europe fit for the digital age. On the international side the GDPR has become a reference point — with a truly global convergence movement. In this context we are happy to support trade and safe digital data flows and work against digital protectionism.”

Another success the commissioners credited to the GDPR framework is the region’s relatively swift digital response to the coronavirus — with the regulation helping DPAs to more quickly assess the privacy implications of COVID-19 contacts tracing apps and tools.

Reynders lauded “a certain degree of flexibility in the GDPR” which he said had been able to come into play during the crisis, feeding into discussions around tracing apps — on “how to ensure protection of personal data in the context of such tracing apps linked to public and individual health”.

Under its to-do list, other areas of work the Commission cited today included ensuring DPAs provide more such support related to the application of the regulation by coming out with guidelines related to other new technologies. “In various new areas we will have to be able to provide guidance quickly, just as we did on the tracing apps recently,” noted Reynders.

Further increasing public awareness of GDPR and the rights it affords is another Commission focus — though it said more than two-thirds of EU citizens above the age of 16 have at least heard of the GDPR.

But it wants citizens to be able to make what Reynders called “best use” of their rights, perhaps via new applications.

“So the GDPR provides support to innovation in this respect,” he said. “And there’s a lot of work that still needs to be done in order to strengthen innovation.”

“We also have to convince those who may still be reticent about the GDPR. Certain companies, for instance, who have complained about how difficult it is to implement it. I think we need to explain to them what the requirements of the GDPR [are] and how they can implement these,” he added.

Digital mapping of coronavirus contacts will have key role in lifting Europe’s lockdown, says Commission

The European Commission has set out a plan for co-ordinating the lifting of regional coronavirus restrictions that includes a role for digital tools — in what the EU executive couches as “a robust system of reporting and contact tracing”. However it has reiterated that such tools must “fully respect data privacy”.

Last week the Commission made a similar call for a common approach to data and apps for fighting the coronavirus, emphasizing the need for technical measures to be taken to ensure that citizens’ rights and freedoms aren’t torched in the scramble for a tech fix.

Today’s toolbox of measures and principles is the next step in its push to coordinate a pan-EU response.

“Responsible planning on the ground, wisely balancing the interests of protection of public health with those of the functioning of our societies, needs a solid foundation. That’s why the Commission has drawn up a catalogue of guidelines, criteria and measures that provide a basis for thoughtful action,” said EC president Ursula von der Leyen, commenting on the full roadmap in a statement.

“The strength of Europe lies in its social and economic balance. Together we learn from each other and help our European Union out of this crisis,” she added.

Harmonized data gathering and sharing by public health authorities — “on the spread of the virus, the characteristics of infected and recovered persons and their potential direct contacts” — is another key plank of the plan for lifting coronavirus restrictions on citizens within the 27 Member State bloc.

While ‘anonymized and aggregated’ data from commercial sources — such as telcos and social media platforms — is seen as a potential aid to pandemic modelling and forecasting efforts, per the plan.

“Social media and mobile network operators can offer a wealth of data on mobility, social interactions, as well as voluntary reports of mild disease cases (e.g. via participatory surveillance) and/or indirect early signals of disease spread (e.g. searches/posts on unusual symptoms),” it writes. “Such data, if pooled and used in anonymised, aggregated format in compliance with EU data protection and privacy rules, could contribute to improve the quality of modelling and forecasting for the pandemic at EU level.”

The Commission has been leaning on telcos to hand over fuzzy metadata for coronavirus modelling which it wants done by the EU’s Joint Research Centre. It wrote to 19 mobile operators last week to formalize its request, per Euractiv, which reported yesterday that its aim is to have the data exchange system operational ‘as soon as possible’ — with the hope being it will cover all the EU’s member states.

Other measures included in the wider roadmap are the need for states to expand their coronavirus testing capacity and harmonize testing methodologies — with the Commission today issuing guidelines to support the development of “safe and reliable testing”.

Steps to support the reopening of internal and external EU borders are another area of focus, with the executive generally urging a gradual and phased lifting of coronavirus restrictions.

On contacts tracing apps specifically, the Commission writes:

“Mobile applications that warn citizens of an increased risk due to contact with a person tested positive for COVID-19 are particularly relevant in the phase of lifting containment measures, when the infection risk grows as more and more people get in contact with each other. As experienced by other countries dealing with the COVID-19 pandemic, these applications can help interrupt infection chains and reduce the risk of further virus transmission. They should thus be an important element in the strategies put in place by Member States, complementing other measures like increased testing capacities.

“The use of such mobile applications should be voluntary for individuals, based on users’ consent and fully respecting European privacy and personal data protection rules. When using tracing apps, users should remain in control of their data. National health authorities should be involved in the design of the system. Tracing close proximity between mobile devices should be allowed only on an anonymous and aggregated basis, without any tracking of citizens, and names of possibly infected persons should not be disclosed to other users. Mobile tracing and warning applications should be subject to demanding transparency requirements, be deactivated as soon as the COVID-19 crisis is over and any remaining data erased.”

“Confidence in these applications and their respect of privacy and data protection are paramount to their success and effectiveness,” it adds.

Earlier this week Apple and Google announced a collaboration around coronavirus contacts tracing — throwing their weight behind a privacy-sensitive decentralized approach to proximity tracking that would see ephemeral IDs processed locally on devices, rather than being continually uploaded and held on a central server.
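The decentralized idea can be illustrated with a minimal sketch (the key sizes, function names and 15-minute rotation interval here are illustrative assumptions, not the actual Apple-Google Exposure Notification protocol): each phone derives rotating ephemeral IDs from a locally held daily key, logs the IDs it hears over Bluetooth, and — once the daily keys of diagnosed users are published — re-derives and matches IDs entirely on-device, so the central server never learns who met whom.

```python
import hashlib
import hmac
import secrets

def daily_key() -> bytes:
    # A fresh random key per day; it never leaves the device unless the
    # user is diagnosed and consents to publishing it.
    return secrets.token_bytes(16)

def ephemeral_id(day_key: bytes, interval: int) -> bytes:
    # Rotating ephemeral ID derived from the day key; this is what gets
    # broadcast over Bluetooth, and it is meaningless without the key.
    return hmac.new(day_key, f"EphID-{interval}".encode(), hashlib.sha256).digest()[:16]

# Device A broadcasts; device B records what it hears, locally.
a_key = daily_key()
heard_by_b = {ephemeral_id(a_key, i) for i in range(96)}  # 96 x 15-min intervals

# If A is later diagnosed, A's daily key is published. B re-derives A's
# ephemeral IDs on-device and checks them against its own contact log --
# the matching happens locally, not on a central server.
published_keys = [a_key]
exposed = any(
    ephemeral_id(k, i) in heard_by_b
    for k in published_keys
    for i in range(96)
)
print(exposed)  # prints True
```

The design choice this sketch captures is that the server only ever distributes short random keys of diagnosed users; the sensitive social graph (who was near whom, and when) exists only as encrypted fragments on individual handsets.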

A similar decentralized infrastructure for Bluetooth-based COVID-19 contacts tracing had already been suggested by a European coalition of privacy and security experts, as we reported last week.

While a separate coalition of European technologists and researchers has been pushing a standardization effort for COVID-19 contacts tracing that they’ve said will support either centralized or decentralized approaches — in the hopes of garnering the broadest possible international backing.

For its part the Commission has urged the use of technologies such as decentralization for COVID-19 contacts tracing to ensure tools align with core EU principles for handling personal data and safeguarding individual privacy, such as data minimization.

However governments in the region are working on a variety of apps and approaches for coronavirus contacts tracing that don’t all look as if they will check a ‘rights respecting’ box…

In a video address last week, Europe’s lead privacy regulator, the EDPS, intervened to call for a “pan-European model ‘COVID-19 mobile application’, coordinated at EU level” — in light of varied tech efforts by Member States which involve the processing of personal data for a claimed public health purpose.

“The use of temporary broadcast identifiers and bluetooth technology for contact tracing seems to be a useful path to achieve privacy and personal data protection effectively,” said Wojciech Wiewiórowski on Monday last week. “Given these divergences, the European Data Protection Supervisor calls for a pan-European model “COVID-19 mobile application”, coordinated at EU level. Ideally, coordination with the World Health Organisation should also take place, to ensure data protection by design globally from the start.”

The Commission has not gone so far in today’s plan — calling instead for Member States to ensure their own efforts align with the EU’s existing data protection framework.

Though its roadmap is also heavy on talk of the need for “coordination between Member States to avoid negative effects” — dubbing it “a matter of common European interest”. But, for now, the Commission has issued a list of recommendations; it’s up to Member States to choose to fall in behind them or not.

With the caveat that EU regulators are watching very carefully how states handle citizens’ data.

“Legality, transparency and proportionality are essential for me,” warned Wiewiórowski, ending last week’s intervention on the EU digital response to the coronavirus with a call for “digital solidarity, which should make data working for all people in Europe and especially for the most vulnerable” — and a cry against “the now tarnished and discredited business models of constant surveillance and targeting that have so damaged trust in the digital society”.

An EU coalition of techies is backing a “privacy-preserving” standard for COVID-19 contacts tracing

A European coalition of techies and scientists drawn from at least eight countries, and led by Germany’s Fraunhofer Heinrich Hertz Institute for telecoms (HHI), is working on contacts-tracing proximity technology for COVID-19 that’s designed to comply with the region’s strict privacy rules — officially unveiling the effort today.

China-style individual-level location-tracking of people by states via their smartphones, even for a public health purpose, is hard to imagine in Europe — which has a long history of legal protection for individual privacy. However the coronavirus pandemic is applying pressure to the region’s data protection model, as governments turn to data and mobile technologies to seek help with tracking the spread of the virus, supporting their public health response and mitigating wider social and economic impacts.

Scores of apps are popping up across Europe aimed at attacking coronavirus from different angles. European privacy not-for-profit, noyb, is keeping an updated list of approaches, both led by governments and private sector projects, to use personal data to combat SARS-CoV-2 — with examples so far including contacts tracing, lockdown or quarantine enforcement and COVID-19 self-assessment.

The efficacy of such apps is unclear — but the demand for tech and data to fuel such efforts is coming from all over the place.

In the UK the government has been quick to call in tech giants, including Google, Microsoft and Palantir, to help the National Health Service determine where resources need to be sent during the pandemic. While the European Commission has been leaning on regional telcos to hand over user location data to carry out coronavirus tracking — albeit in aggregated and anonymized form.

The newly unveiled Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project is a response to the coronavirus pandemic generating a huge spike in demand for citizens’ data — intended to offer not just another app, but what’s described as “a fully privacy-preserving approach” to COVID-19 contacts tracing.

The core idea is to leverage smartphone technology to help disrupt the next wave of infections by notifying individuals who have come into close contact with an infected person — via the proxy of their smartphones having been near enough to carry out a Bluetooth handshake. So far so standard. But the coalition behind the effort wants to steer developments in such a way that the EU response to COVID-19 doesn’t drift towards China-style state surveillance of citizens.

While, for the moment, strict quarantine measures remain in place across much of Europe there may be less imperative for governments to rip up the best practice rulebook to intrude on citizens’ privacy, given the majority of people are locked down at home. But the looming question is what happens when restrictions on daily life are lifted?

Contacts tracing — as a way to offer a chance for interventions that can break any new infection chains — is being touted as a key component of preventing a second wave of coronavirus infections by some, with examples such as Singapore’s TraceTogether app being eyed up by regional lawmakers.

Singapore does appear to have had some success in keeping a second wave of infections from turning into a major outbreak, via an aggressive testing and contacts-tracing regime. But what a small island city-state with a population of less than 6M can do isn’t necessarily comparable with a trading bloc of 27 different nations whose collective population exceeds 500M.

Europe isn’t going to have a single coronavirus tracing app. It’s already got a patchwork. Hence the people behind PEPP-PT offering a set of “standards, technology, and services” to countries and developers to plug into to get a standardized COVID-19 contacts-tracing approach up and running across the bloc.

The other very European flavored piece here is privacy — and privacy law. “Enforcement of data protection, anonymization, GDPR [the EU’s General Data Protection Regulation] compliance, and security” are baked in, is the top-line claim.

“PEPP-PT was explicitly created to adhere to strong European privacy and data protection laws and principles,” the group writes in an online manifesto. “The idea is to make the technology available to as many countries, managers of infectious disease responses, and developers as quickly and as easily as possible.

“The technical mechanisms and standards provided by PEPP-PT fully protect privacy and leverage the possibilities and features of digital technology to maximize speed and real-time capability of any national pandemic response.”

Hans-Christian Boos, one of the project’s co-initiators — and the founder of an AI company called Arago — discussed the initiative with German newspaper Der Spiegel, telling it: “We collect no location data, no movement profiles, no contact information and no identifiable features of the end devices.”

The newspaper reports PEPP-PT’s approach means apps aligning to this standard would generate only temporary IDs — to avoid individuals being identified. Two or more smartphones running an app that uses the tech and has Bluetooth enabled when they come into proximity would exchange their respective IDs — saving them locally on the device in an encrypted form, according to the report.

Der Spiegel writes that should a user of the app subsequently be diagnosed with coronavirus their doctor would be able to ask them to transfer the contact list to a central server. The doctor would then be able to use the system to warn affected IDs they have had contact with a person who has since been diagnosed with the virus — meaning those at risk individuals could be proactively tested and/or self-isolate.

On its website PEPP-PT explains the approach thus:

Mode 1
If a user is not tested or has tested negative, the anonymous proximity history remains encrypted on the user’s phone and cannot be viewed or transmitted by anybody. At any point in time, only the proximity history that could be relevant for virus transmission is saved, and earlier history is continuously deleted.

Mode 2
If the user of phone A has been confirmed to be SARS-CoV-2 positive, the health authorities will contact user A and provide a TAN code to the user that ensures potential malware cannot inject incorrect infection information into the PEPP-PT system. The user uses this TAN code to voluntarily provide information to the national trust service that permits the notification of PEPP-PT apps recorded in the proximity history and hence potentially infected. Since this history contains anonymous identifiers, neither person can be aware of the other’s identity.

Providing further detail of what it envisages as “Country-dependent trust service operation”, it writes: “The anonymous IDs contain encrypted mechanisms to identify the country of each app that uses PEPP-PT. Using that information, anonymous IDs are handled in a country-specific manner.”

While on healthcare processing it suggests: “A process for how to inform and manage exposed contacts can be defined on a country by country basis.”

Among the other features of PEPP-PT’s mechanisms the group lists in its manifesto are:

  • Backend architecture and technology that can be deployed into local IT infrastructure and can handle hundreds of millions of devices and users per country instantly.
  • Managing the partner network of national initiatives and providing APIs for integration of PEPP-PT features and functionalities into national health processes (test, communication, …) and national system processes (health logistics, economy logistics, …) giving many local initiatives a local backbone architecture that enforces GDPR and ensures scalability.
  • Certification Service to test and approve local implementations to be using the PEPP-PT mechanisms as advertised and thus inheriting the privacy and security testing and approval PEPP-PT mechanisms offer.

Having a standardized approach that could be plugged into a variety of apps would allow for contacts tracing to work across borders — i.e. even if different apps are popular in different EU countries — an important consideration for the bloc, which has 27 Member States.

However, there may be questions about the robustness of the privacy protection designed into the approach — if, for example, pseudonymized data is centralized on a server that doctors can access, there could be a risk of it leaking and being re-identified. And identification of individual device holders would be legally risky.

Europe’s lead data regulator, the EDPS, recently made a point of tweeting to warn an MEP (and former EC digital commissioner) against the legality of applying Singapore-style Bluetooth-powered contacts tracing in the EU — writing: “Please be cautious comparing Singapore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.”

A spokesman for the EDPS told us it’s in contact with data protection agencies of the Member States involved in the PEPP-PT project to collect “relevant information”.

“The general principles presented by EDPB on 20 March, and by EDPS on 24 March are still relevant in that context,” the spokesman added — referring to guidance issued by the privacy regulators last month in which they encouraged anonymization and aggregation should Member States want to use mobile location data for monitoring, containing or mitigating the spread of COVID-19. At least in the first instance.

“When it is not possible to only process anonymous data, the ePrivacy Directive enables Member States to introduce legislative measures to safeguard public security (Art. 15),” the EDPB further noted.

“If measures allowing for the processing of non-anonymised location data are introduced, a Member State is obliged to put in place adequate safeguards, such as providing individuals of electronic communication services the right to a judicial remedy.”

We reached out to the HHI with questions about the PEPP-PT project and were referred to Boos — but at the time of writing had been unable to speak to him.

“The PEPP-PT system is being created by a multi-national European team,” the HHI writes in a press release about the effort. “It is an anonymous and privacy-preserving digital contact tracing approach, which is in full compliance with GDPR and can also be used when traveling between countries through an anonymous multi-country exchange mechanism. No personal data, no location, no Mac-Id of any user is stored or transmitted. PEPP-PT is designed to be incorporated in national corona mobile phone apps as a contact tracing functionality and allows for the integration into the processes of national health services. The solution is offered to be shared openly with any country, given the commitment to achieve interoperability so that the anonymous multi-country exchange mechanism remains functional.”

“PEPP-PT’s international team consists of more than 130 members working across more than seven European countries and includes scientists, technologists, and experts from well-known research institutions and companies,” it adds.

“The result of the team’s work will be owned by a non-profit organization so that the technology and standards are available to all. Our priorities are the well being of world citizens today and the development of tools to limit the impact of future pandemics — all while conforming to European norms and standards.”

PEPP-PT says its technology-focused efforts are being financed through donations and that, per its website, it has adopted the WHO standards for such financing — to “avoid any external influence”.

Of course for the effort to be useful it relies on EU citizens voluntarily downloading one of the aligned contacts tracing apps — and carrying their smartphone everywhere they go, with Bluetooth enabled.

Without substantial penetration of regional smartphones it’s questionable how much of an impact this initiative, or any contacts tracing technology, could have. Although if such tech were able to break even some infection chains people might argue it’s not wasted effort.

Notably, there are signs Europeans are willing to contribute to a public healthcare cause by doing their bit digitally — such as a self-reporting COVID-19 tracking app which last week racked up 750,000 downloads in the UK in 24 hours.

But, at the same time, contacts tracing apps are facing scepticism over their ability to contribute to the fight against COVID-19. Not everyone carries a smartphone, nor knows how to download an app, for instance. There’s plenty of people who would fall outside such a digital net.

Meanwhile, while there’s clearly been a big scramble across the region, at both government and grassroots level, to mobilize digital technology for a public health emergency cause, there’s arguably a greater imperative to direct effort and resources at scaling up coronavirus testing programs — an area where most European countries continue to lag.

Germany — where some of the key backers of PEPP-PT are from — is the most notable exception.

Tech giants still not doing enough to fight fakes, says European Commission

It’s a year since the European Commission got a bunch of adtech giants together to spill ink on a voluntary Code of Practice to do something — albeit, nothing very quantifiable — as a first step to stop the spread of disinformation online.

Its latest report card on this voluntary effort amounts to a familiar verdict: the platforms could do better.

The Commission said the same in January. And will doubtless say it again. Unless or until regulators grasp the nettle of online business models that profit by maximizing engagement. As the saying goes, lies fly while the truth comes stumbling after. So attempts to shrink disinformation without fixing the economic incentives to spread BS in the first place are mostly dealing in cosmetic tweaks and optics.

Signatories to the Commission’s EU Code of Practice on Disinformation are: Facebook, Google, Twitter, Mozilla, Microsoft and several trade associations representing online platforms, the advertising industry, and advertisers — including the Internet Advertising Bureau (IAB) and World Federation of Advertisers (WFA).

In a press release assessing today’s annual reports, compiled by signatories, the Commission expresses disappointment that no other Internet platforms or advertising companies have signed up since Microsoft joined as a late addition to the Code this year.

“We commend the commitment of the online platforms to become more transparent about their policies and to establish closer cooperation with researchers, fact-checkers and Member States. However, progress varies a lot between signatories and the reports provide little insight on the actual impact of the self-regulatory measures taken over the past year as well as mechanisms for independent scrutiny,” commissioners Věra Jourová, Julian King, and Mariya Gabriel write in a joint statement. [emphasis ours]

“While the 2019 European Parliament elections in May were clearly not free from disinformation, the actions and the monthly reporting ahead of the elections contributed to limiting the space for interference and improving the integrity of services, to disrupting economic incentives for disinformation, and to ensuring greater transparency of political and issue-based advertising. Still, large-scale automated propaganda and disinformation persist and there is more work to be done under all areas of the Code. We cannot accept this as a new normal,” they add.

The risk, of course, is that the Commission’s limp-wristed code risks rapidly cementing a milky jelly of self-regulation in the fuzzy zone of disinformation as the new normal, as we warned when the Code launched last year.

The Commission continues to leave the door open (a crack) to doing something platforms can’t (mostly) ignore — i.e. actual regulation — saying its assessment of the effectiveness of the Code remains ongoing.

But that’s just a dangled stick. At this transitionary point between outgoing and incoming Commissions, it seems content to stay in a ‘must do better’ holding pattern. (Or: “It’s what the Commission says when it has other priorities,” as one source inside the institution put it.)

A comprehensive assessment of how the Code is working is slated as coming in early 2020 — i.e. after the new Commission has taken up its mandate. So, yes, that’s the sound of the can being kicked a few more months on.

Summing up its main findings from signatories’ self-marked ‘progress’ reports, the outgoing Commission says they have reported improved transparency between themselves vs a year ago on discussing their respective policies against disinformation. 

But it flags poor progress on implementing commitments to empower consumers and the research community.

“The provision of data and search tools is still episodic and arbitrary and does not respond to the needs of researchers for independent scrutiny,” it warns. 

This is, ironically, an issue that one of the signatories, Mozilla, has actively criticized others over — including Facebook, whose political ad API it reviewed damningly this year, finding it not fit for purpose and “designed in ways that hinders the important work of researchers, who inform the public and policymakers about the nature and consequences of misinformation”. So, er, ouch.

The Commission is also critical of what it says are “significant” variations in the scope of actions undertaken by platforms to implement “commitments” under the Code, noting that differences in the implementation of platform policy, cooperation with stakeholders, and sensitivity to electoral contexts persist across Member States, as do differences in the EU-specific metrics provided.

But given the Code only ever asked for fairly vague action in some pretty broad areas, without prescribing exactly what platforms were committing themselves to doing, nor setting benchmarks for action to be measured against, inconsistency and variety are really what you’d expect. That, and the can being kicked down the road.

The Code did extract one quasi-firm commitment from signatories — on the issue of bot detection and identification — by getting platforms to promise to “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”.

A year later it’s hard to see clear sign of progress on that goal. Although platforms might argue that what they claim is increased effort toward catching and killing malicious bot accounts before they have a chance to spread any fakes is where most of their sweat is going on that front.

Twitter’s annual report, for instance, talks about what it’s doing to fight “spam and malicious automation strategically and at scale” on its platform — saying its focus is “increasingly on proactively identifying problematic accounts and behaviour rather than waiting until we receive a report”; after which it says it aims to “challenge… accounts engaging in spammy or manipulative behavior before users are ​exposed to ​misleading, inauthentic, or distracting content”.

So, in other words, if Twitter does this perfectly — and catches every malicious bot before it has a chance to tweet — it might plausibly argue that bot labels are redundant. Though it’s clearly not in a position to claim it’s won the spam/malicious bot war yet. Ergo, its users remain at risk of consuming inauthentic tweets that aren’t clearly labeled as such (or even as ‘potentially suspect’ by Twitter). Presumably because these are the accounts that continue slipping under its bot-detection radar.

There’s also nothing in Twitter’s report about it labelling even (non-malicious) bot accounts as bots — for the purpose of preventing accidental confusion (after all satire misinterpreted as truth can also result in disinformation). And this despite the company suggesting a year ago that it was toying with adding contextual labels to bot accounts, at least where it could detect them.

In the event, it has resisted adding any more badges to accounts, while an internal reform of its verification policy for verified account badges was put on pause last year.

Facebook’s report also only makes a passing mention of bots, under a section sub-headed “spam” — where it writes circularly: “Content actioned for spam has increased considerably, since we found and took action on more content that goes against our standards.”

It includes some data-points to back up this claim of more spam squashed — citing a May 2019 Community Standards Enforcement report — where it states that in Q4 2018 and Q1 2019 it acted on 1.8 billion pieces of spam in each of the quarters vs 737 million in Q4 2017; 836 million in Q1 2018; 957 million in Q2 2018; and 1.2 billion in Q3 2018. 

Though it’s lagging on publishing more up-to-date spam data, noting in the report submitted to the EC that: “Updated spam metrics are expected to be available in November 2019 for Q2 and Q3 2019” — i.e. conveniently late for inclusion in this report.

Facebook’s report notes ongoing efforts to put contextual labels on certain types of suspect/partisan content, such as labelling photos and videos which have been independently fact-checked as misleading; labelling state-controlled media; and labelling political ads.

Labelling bots is not discussed in the report — presumably because Facebook prefers to focus attention on self-defined spam-removal metrics vs muddying the water with discussion of how much suspect activity it continues to host on its platform, either through incompetence, lack of resources or because it’s politically expedient for its business to do so.

Labelling all these bots would mean Facebook signposting inconsistencies in how it applies its own policies, in a way that might foreground its own political bias. And there’s no self-regulatory mechanism under the sun that will make Facebook fess up to such double standards.

For now, the Code’s requirement for signatories to publish an annual report on what they’re doing to tackle disinformation looks to be the biggest win so far. Albeit, it’s very loosely bound self-reporting. Some of these ‘reports’ don’t even run to a full page of A4 text — so set your expectations accordingly.

The Commission has published all the reports here. It has also produced its own summary and assessment of them (here).

“Overall, the reporting would benefit from more detailed and qualitative insights in some areas and from further big-picture context, such as trends,” it writes. “In addition, the metrics provided so far are mainly output indicators rather than impact indicators.”

Of the Code generally — as a “self-regulatory standard” — the Commission argues it has “provided an opportunity for greater transparency into the platforms’ policies on disinformation as well as a framework for structured dialogue to monitor, improve and effectively implement those policies”, adding: “This represents progress over the situation prevailing before the Code’s entry into force, while further serious steps by individual signatories and the community as a whole are still necessary.”

EU-US Privacy Shield passes third Commission ‘health check’ — but litigation looms

The third annual review of the EU-US Privacy Shield data transfer mechanism has once again been nodded through by Europe’s executive.

This despite the EU parliament calling last year for the mechanism to be suspended.

The European Commission also issued US counterparts with a compliance deadline last December — saying the US must appoint a permanent ombudsperson to handle EU citizens’ complaints, as required by the arrangement, and do so by February.

This summer the US senate finally confirmed Keith Krach — under secretary of state for economic growth, energy, and the environment — in the ombudsperson role.

The Privacy Shield arrangement was struck between EU and US negotiators back in 2016 — as a rushed replacement for the prior Safe Harbor data transfer pact which in fall 2015 was struck down by Europe’s top court following a legal challenge after NSA whistleblower Edward Snowden revealed US government agencies were liberally helping themselves to digital data from Internet companies.

At heart is a fundamental legal clash between EU privacy rights and US national security priorities.

The intent of the Privacy Shield framework is to paper over those cracks by devising enough checks and balances that the Commission can claim it offers adequate protection for EU citizens’ personal data when taken to the US for processing, despite the lack of a commensurate, comprehensive data protection regime. But critics have argued from the start that the mechanism is flawed.

Even so, around 5,000 companies are now signed up to use Privacy Shield to certify transfers of personal data. So there would be major disruption to businesses were it to go the way of its predecessor — as has looked likely in recent years, since Donald Trump took office as US president.

The Commission remains a staunch defender of Privacy Shield, warts and all, preferring to support data-sharing business as usual than offer a pro-active defence of EU citizens’ privacy rights.

To date it has offered little in the way of objection about how the US has implemented Privacy Shield in these annual reviews, despite some glaring flaws and failures (for example the disgraced political data firm, Cambridge Analytica, was a signatory of the framework, even after the data misuse scandal blew up).

The Commission did lay down one deadline late last year, regarding the ongoing lack of a permanent ombudsperson. So it can now check that box.

It also notes approvingly today that the final two vacancies on the US’ Privacy and Civil Liberties Oversight Board have been filled, meaning it’s fully-staffed for the first time since 2016.

Commenting in a statement, commissioner for justice, consumers and gender equality, Věra Jourová, added: “With around 5,000 participating companies, the Privacy Shield has become a success story. The annual review is an important health check for its functioning. We will continue the digital diplomacy dialogue with our U.S. counterparts to make the Shield stronger, including when it comes to oversight, enforcement and, in a longer-term, to increase convergence of our systems.”

Its press release characterizes US enforcement action related to the Privacy Shield as having “improved” — citing the Federal Trade Commission taking enforcement action in a grand total of seven cases.

It also says vaguely that “an increasing number” of EU individuals are making use of their rights under the Privacy Shield, claiming the relevant redress mechanisms are “functioning well”. (Critics have long suggested the opposite.)

The Commission is recommending further improvements too though, including that the US expand compliance checks such as concerning false claims of participation in the framework.

So presumably there’s a bunch of entirely fake compliance claims going unchecked, as well as actual compliance going under-checked…

“The Commission also expects the Federal Trade Commission to further step up its investigations into compliance with substantive requirements of the Privacy Shield and provide the Commission and the EU data protection authorities with information on ongoing investigations,” the EC adds.

All these annual Commission reviews are just fiddling around the edges, though. The real substantive test for Privacy Shield which will determine its long term survival is looming on the horizon — from a judgement expected from Europe’s top court next year.

In July a hearing took place on a key case that’s been dubbed Schrems II. This is a legal challenge which initially targeted Facebook’s use of another EU data transfer mechanism but has been broadened to include a series of legal questions over Privacy Shield — now with the Court of Justice of the European Union.

There is also a separate litigation directly targeting Privacy Shield that was brought by a French digital rights group which argues it’s incompatible with EU law on account of US government mass surveillance practices.

The Commission’s PR notes the pending litigation — writing that this “may also have an impact on the Privacy Shield”. “A hearing took place in July 2019 in case C-311/18 (Schrems II) and, once the Court’s judgement is issued, the Commission will assess its consequences for the Privacy Shield,” it adds.

So, tl;dr, today’s third annual review doesn’t mean Privacy Shield is out of the legal woods.

The EU will reportedly investigate Apple following anti-competition complaint from Spotify

The spat between Spotify and Apple is going to be the focus of a new investigation from the EU, according to a report from the FT.

The paper reported today that the European Commission (EC), the EU’s regulatory body, plans to launch a competition inquiry around Spotify’s claim that the iPhone-maker uses its position as the gatekeeper of the App Store to “deliberately disadvantage other app developers.”

In a complaint filed to the EC in March, Spotify said Apple has “tilted the playing field” by operating iOS, the platform, and the App Store for distribution, as well as its own Spotify rival, Apple Music.

In particular, Spotify CEO Daniel Ek has said that Apple “locks” developers into its platform, which includes taking a 30 percent cut of in-app spending. Ek also claimed Apple Music has unfair advantages over rivals like Spotify, while he expressed concern that Apple controls communication between users and app publishers, “including placing unfair restrictions on marketing and promotions that benefit consumers.”

Spotify’s announcement was unprecedented — Ek claimed many other developers feel the same way, but do not want to upset Apple by speaking up. The EU is sure to tap into that silent base if the investigation does indeed go ahead as the FT claims.

Apple bit back at Spotify’s claims, but its response was more a rebuttal — or alternative angle — on those complaints. Apple did not directly address any of the demands that Spotify put forward, which include alternative payment options (as offered in the Google Play store) and equal treatment for Apple’s own apps and those from third parties like Spotify.

The EU is gaining a reputation as a tough opponent that’s reining in U.S. tech giants.

Aside from its GDPR initiative, it has a history of taking action on apparent monopolies in tech.

Google was fined €1.49 billion ($1.67 billion) in March of this year over antitrust violations in search ad brokering, for example. Google was also fined a record $5 billion last year over Android abuses, and there have been calls to look into breaking the search company up. Inevitably, Facebook has come under the spotlight for a series of privacy concerns, particularly around elections.

Pressure from the EU has already led the social network to introduce clear terms and conditions around its use of data for advertising, while it may also change its rules limiting overseas ad spending around EU elections following concern from Brussels.

Despite what some in the U.S. may think, the EU’s competition commissioner, Margrethe Vestager, has said publicly that she is against breaking companies up. Instead, Vestager has pledged to regulate data access.

“To break up a company, to break up private property would be very far-reaching and you would need to have a very strong case that it would produce better results for consumers in the marketplace than what you could do with more mainstream tools. We’re dealing with private property. Businesses that are built and invested in and become successful because of their innovation,” she said in an interview at SXSW earlier this year.

Startups Weekly: All these startups are raising big rounds

TechCrunch’s Connie Loizos published some interesting stats on seed and Series A financings this week, courtesy of data collected by Wing Venture Capital. In short, seed is the new Series A and Series A is the new Series B. Sure, we’ve been saying that for a while, but Wing has some clean data to back up those claims.

Years ago, a Series A round was roughly $5 million and a startup at that stage wasn’t expected to be generating revenue just yet, something typically expected upon raising a Series B. Now, those rounds have swelled to $15 million, according to deal data from the top 21 VC firms. And VCs are expecting the startups to be making money off their customers.

“Again, for the old gangsters of the industry, that’s a big shift from 2010, when just 15 percent of seed-stage companies that raised Series A rounds were already making some money,” Connie writes.

As for seed, in 2018, the average startup raised a total of $5.6 million prior to raising a Series A, up from $1.3 million in 2010.

Now on to IPO updates, then a closer look at all the companies raising big rounds. Want more TechCrunch newsletters? Sign up here. Contact me at kate.clark@techcrunch.com or @KateClarkTweets.

Slack iOS logo (2019)

IPO corner

Slack: The workplace communication software provider dropped its S-1 on Friday ahead of a direct listing. That’s when companies sell existing shares directly to the market, allowing them to skip the roadshow and minimize the astronomical fees typically associated with an initial public offering. Here’s the TLDR on financials: Slack reported revenues of $400.6 million in the fiscal year ending January 31, 2019, on losses of $138.9 million. That’s compared to a loss of $140.1 million on revenue of $220.5 million for the year before. Slack’s losses are shrinking (slowly), while its revenues expand (quickly). It’s not profitable yet, but is that surprising?

Uber: The ride-hail giant is fast approaching its IPO, expected as soon as next week. On Friday, the company established an IPO price range of $44 to $50 per share to raise between $7.9 billion and $9 billion at a valuation of approximately $84 billion, significantly lower than the $100 billion previously reported estimations. The most likely outcome is Uber will price above range and all the latest estimates will be way off course. Best to sit back and see how Uber plays it. Oh, and PayPal said it would make a $500 million investment in the company in a private placement, as part of an extension of the partnership between the two.

There are a lot of fascinating companies raising colossal rounds, so I thought I’d dive a bit deeper than I normally do. Bear with me.

Carbon: The poster child for 3D printing has authorized the sale of $300 million in Series E shares, according to a Delaware stock filing uncovered by PitchBook. If Carbon raises the full amount, it could reach a valuation of $2.5 billion. Using its proprietary Digital Light Synthesis technology, the business has brought 3D-printing technology to manufacturing, building high-tech sports equipment, a line of custom sneakers for Adidas and more. It was valued at $1.7 billion by venture capitalists with a $200 million Series D in 2018.

Canoo: The electric vehicle startup formerly known as Evelozcity is on the hunt for $200 million in new capital. Backed by a clutch of private individuals and family offices from China, Germany and Taiwan, the company is hoping to line up the new capital from some more recognizable names as it finalizes supply deals with vendors, according to reporting from TechCrunch’s Jonathan Shieber. The company intends to make its vehicles available through a subscription-based model and currently has 400 employees. Canoo was founded in 2017 after Stefan Krause, a former executive at BMW and Deutsche Bank, and another former BMW executive, Ulrich Kranz, exited Faraday Future amid that company’s struggles.

Starry: The Boston-based wireless broadband internet startup has authorized the sale of Series D shares worth up to $125 million, according to a Delaware stock filing. If Starry closes the full authorized raise it will hold a post-money valuation of $870 million. A spokesperson for the company confirmed it had already raised new capital, but disputed the numbers. The company has already raised more than $160 million from investors, including FirstMark Capital and IAC. The company most recently closed a $100 million Series C this past July.

Selina & Sonder: The Airbnb competitor Sonder is in the process of closing a financing worth roughly $200 million at a $1 billion valuation, reports The Wall Street Journal. Investors including Greylock Partners, Spark Capital and Structure Capital are likely to participate. Sonder is four years old but didn’t emerge from stealth until 2018. The startup, which turns homes into hotels, quickly attracted more than $100 million in venture funding. Meanwhile, another hospitality business called Selina has raised $100 million at an $850 million valuation. The company, backed by Access Industries, Grupo Wiese and Colony Latam Partners, builds living/co-working/activity spaces across the world for digital nomads.

Fresh funds: Mary Meeker has made history with the close of her new fund, Bond Capital, the largest VC fund founded and led by a female investor to date. Bond has $1.25 billion in committed capital. If you remember, Meeker ditched Kleiner Perkins last fall and brought the firm’s entire growth team with her. Kleiner said it was a peaceful split that would allow the firm to focus more on its early-stage efforts, leaving the growth investing to Bond. Fortune, however, reported this week that a power struggle of sorts between Meeker and Mamoon Hamid, who joined recently to reenergize the early-stage side of things, was a larger cause of her exit.

Plus, SOSV, a multi-stage venture firm that was founded as the personal investment vehicle of entrepreneur Sean O’Sullivan after his company went public in 1994, has raised $218 million for its third fund. The vehicle has a $250 million target that SOSV expects to meet. Already, the fund is substantially larger than the firm’s previous vehicle, which closed with $150 million.

A grocery delivery startup crumbles: Honestbee, the online grocery delivery service in Asia, is nearly out of money and trying to offload its business. Despite looking impressive from the outside, the company is currently in crisis mode due to a cash crunch — there’s a lot happening right now. TechCrunch’s Jon Russell dives in deep here.

Extra Crunch: “When it comes to working with journalists, so many people are, frankly, idiots. I have seen reporters yank stories because founders are assholes, play unfairly, or have PR firms that use ridiculous pressure tactics when they have already committed to a story.” Sign up for Extra Crunch for a full list of PR don’ts. Here are some other EC pieces to hit the wire this week:

Equity: If you enjoy this newsletter, be sure to check out TechCrunch’s venture-focused podcast, Equity. In this week’s episode, available here, Crunchbase News editor-in-chief Alex Wilhelm and I chat about Kleiner Perkins, Chinese IPOs and Slack & Uber’s upcoming exits. 

Twitter to offer report option for misleading election tweets

Twitter is adding a dedicated report option that enables users to tell it about misleading tweets related to voting — starting with elections taking place in India and the European Union.

From tomorrow users in India can report tweets they believe are trying to mislead voters — such as disinformation related to the date or location of polling stations; or fake claims about identity requirements for being able to vote — by tapping on the arrow menu of the suspicious tweet and selecting the ‘report tweet’ option and then choosing: ‘It’s misleading about voting’.

Twitter says the tool will go live for the Indian Lok Sabha elections from tomorrow, and will launch in all European Union member states on April 29 — ahead of elections for the EU parliament next month.

The ‘misleading about voting’ option will persist in the list of available choices for reporting tweets for seven days after each election ends, Twitter said in a blog post announcing the feature.

It also said it intends the vote-focused feature to be rolled out to “other elections globally throughout the rest of the year”, without providing further detail on which elections and markets it will prioritize for getting the tool.

“Our teams have been trained and we recently enhanced our appeals process in the event that we make the wrong call,” Twitter added.

In recent months the European Commission has been ramping up pressure on tech platforms to scrub disinformation ahead of elections to the EU parliament — issuing monthly reports on progress, or, well, the lack of it.

This follows a Commission initiative last year which saw major tech and ad platforms — including Facebook, Google and Twitter — sign up to a voluntary Code of Practice on disinformation, committing themselves to take some non-prescribed actions to disrupt the ad revenues of disinformation agents and make political ads more transparent on their platforms.

Another strand of the Code looks to have directly contributed to the development of Twitter’s new ‘misleading about voting’ report option — with signatories committing to:

  • Empower consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content;

In the latest progress report on the Code, which was published by the Commission yesterday but covers steps taken by the platforms in March 2019, it noted some progress made — but said it’s still not enough.

“Further technical improvements as well as sharing of methodology and data sets for fake accounts are necessary to allow third-party experts, fact-checkers and researchers to carry out independent evaluation,” EC commissioners warned in a joint statement.

In the case of Twitter, the company was commended for having made political ad libraries publicly accessible but criticized (along with Google) for not doing more to improve transparency around issue-based advertising.

“It is regrettable that Google and Twitter have not yet reported further progress regarding transparency of issue-based advertising, meaning issues that are sources of important debate during elections,” the Commission said. 

It also reported that Twitter had provided figures on actions undertaken against spam and fake accounts but had failed to explain how these actions relate to activity in the EU.

“Twitter did not report on any actions to improve the scrutiny of ad placements or provide any metrics with respect to its commitments in this area,” it also noted.

The EC says it will assess the Code’s initial 12-month period by the end of 2019 — and take a view on whether it needs to step in and propose regulation to control online disinformation. (Something which some individual EU Member States are already doing.)

Huawei opens a cybersecurity transparency center in the heart of Europe

5G kit maker Huawei opened a Cyber Security Transparency center in Brussels yesterday as the Chinese tech giant continues to try to neutralize suspicion in Western markets that its networking gear could be used for espionage by the Chinese state.

Huawei announced its plan to open a European transparency center last year, but speaking at an opening ceremony for the center yesterday, the company's rotating CEO, Ken Hu, said: "Looking at the events from the past few months, it's clear that this facility is now more critical than ever."

Huawei said the center, which will demonstrate the company's security solutions in areas including 5G, IoT and cloud, aims to provide a platform to enhance communication and "joint innovation" with all stakeholders, as well as providing a "technical verification and evaluation platform for our customers".

“Huawei will work with industry partners to explore and promote the development of security standards and verification mechanisms, to facilitate technological innovation in cyber security across the industry,” it said in a press release.

“To build a trustworthy environment, we need to work together,” Hu also said in his speech. “Both trust and distrust should be based on facts, not feelings, not speculation, and not baseless rumour.

“We believe that facts must be verifiable, and verification must be based on standards. So, to start, we need to work together on unified standards. Based on a common set of standards, technical verification and legal verification can lay the foundation for building trust. This must be a collaborative effort, because no single vendor, government, or telco operator can do it alone.”

The company made a similar plea at Mobile World Congress last week when its rotating chairman, Guo Ping, used a keynote speech to claim its kit is secure and will never contain backdoors. He also pressed the telco industry to work together on creating standards and structures to enable trust.

“Government and the mobile operators should work together to agree what this assurance testing and certification rating for Europe will be,” he urged. “Let experts decide whether networks are safe or not.”

Also speaking at MWC last week the EC’s digital commissioner, Mariya Gabriel, suggested the executive is prepared to take steps to prevent security concerns at the EU Member State level from fragmenting 5G rollouts across the Single Market.

She told delegates at the flagship industry conference that Europe must have “a common approach to this challenge” and “we need to bring it on the table soon”.

Though she did not suggest exactly how the Commission might act.

A spokesman for the Commission confirmed that EC VP Andrus Ansip and Huawei’s Hu met in person yesterday to discuss issues around cybersecurity, 5G and the Digital Single Market — adding that the meeting was held at the request of Hu.

“The Vice-President emphasised that the EU is an open rules based market to all players who fulfil EU rules,” the spokesman told us. “Specific concerns by European citizens should be addressed. We have rules in place which address security issues. We have EU procurement rules in place, and we have the investment screening proposal to protect European interests.”

“The VP also mentioned the need for reciprocity in respective market openness,” he added, further noting: “The College of the European Commission will hold today an orientation debate on China where this issue will come back.”

In a tweet following the meeting Ansip also said: “Agreed that understanding local security concerns, being open and transparent, and cooperating with countries and regulators would be preconditions for increasing trust in the context of 5G security.”

Reuters reports Hu saying the pair had discussed the possibility of setting up a cybersecurity standard along the lines of Europe’s updated privacy framework, the General Data Protection Regulation (GDPR).

Although the Commission did not respond when we asked it to confirm that discussion point.

GDPR was multiple years in the making before European institutions agreed on a final text that could come into force. So if the Commission is keen to act "soon" — per Gabriel's comments on 5G security — to fashion supportive guardrails for next-gen network rollouts, a full-blown regulation seems an unlikely template.

More likely, GDPR is being used by Huawei as a byword for creating consensus around rules that work across an ecosystem of many players by providing standards that different businesses can latch onto in an effort to keep moving.

Hu referenced GDPR directly in his speech yesterday, lauding it as “a shining example” of Europe’s “strong experience in driving unified standards and regulation” — so the company is clearly well-versed in how to flatter hosts.

“It sets clear standards, defines responsibilities for all parties, and applies equally to all companies operating in Europe,” he went on. “As a result, GDPR has become the golden standard for privacy protection around the world. We believe that European regulators can also lead the way on similar mechanisms for cyber security.”

Hu ended his speech with a further industry-wide plea, saying: “We also commit to working more closely with all stakeholders in Europe to build a system of trust based on objective facts and verification. This is the cornerstone of a secure digital environment for all.”

Huawei’s appetite to do business in Europe is not in doubt, though.

The question is whether Europe’s telcos and governments can be convinced to swallow any doubts they might have about spying risks and commit to working with the Chinese kit giant as they roll out a new generation of critical infrastructure.