Google is now publishing coronavirus mobility reports, feeding off users’ location history

Google is giving the world a clearer glimpse of exactly how much it knows about people everywhere — using the coronavirus crisis as an opportunity to repackage its persistent tracking of where users go and what they do as a public good in the midst of a pandemic.

In a blog post today the tech giant announced the publication of what it’s branding ‘COVID-19 Community Mobility Reports’ — an in-house analysis of the far more granular location data it maps and tracks to fuel its ad targeting, product development and wider commercial strategy, repurposed here to showcase aggregated changes in population movements around the world.

The coronavirus pandemic has generated a worldwide scramble for tools and data to inform government responses. In the EU, for example, the European Commission has been leaning on telcos to hand over anonymized and aggregated location data to model the spread of COVID-19.

Google’s data dump looks intended to dangle a similar idea of public policy utility while providing an eyeball-grabbing public snapshot of mobility shifts via data pulled from its global user base.

In terms of actual utility for policymakers, Google’s suggestions are pretty vague. The reports could help government and public health officials “understand changes in essential trips that can shape recommendations on business hours or inform delivery service offerings”, it writes.

“Similarly, persistent visits to transportation hubs might indicate the need to add additional buses or trains in order to allow people who need to travel room to spread out for social distancing,” it goes on. “Ultimately, understanding not only whether people are traveling, but also trends in destinations, can help officials design guidance to protect public health and essential needs of communities.”

The location data Google is making public is similarly fuzzy — to avoid inviting a privacy storm — with the company writing that it’s using “the same world-class anonymization technology that we use in our products every day”.

“For these reports, we use differential privacy, which adds artificial noise to our datasets enabling high quality results without identifying any individual person,” Google writes. “The insights are created with aggregated, anonymized sets of data from users who have turned on the Location History setting, which is off by default.”

“In Google Maps, we use aggregated, anonymized data showing how busy certain types of places are—helping identify when a local business tends to be the most crowded. We have heard from public health officials that this same type of aggregated, anonymized data could be helpful as they make critical decisions to combat COVID-19,” it adds, tacitly linking an existing offering in Google Maps to a coronavirus-busting cause.
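Google doesn’t spell out its parameters, but the differential privacy mechanism it references is well understood: add calibrated random noise so that no single person’s presence or absence meaningfully changes a published number. Here’s a minimal sketch in Python — the epsilon value and sensitivity are illustrative assumptions, not Google’s actual settings:

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    """Release a visit count with Laplace-mechanism differential privacy.

    One person can change a visit count by at most 1 (the sensitivity),
    so noise drawn from Laplace(0, sensitivity/epsilon) makes the released
    value statistically almost indistinguishable whether or not any single
    individual's data is included.
    """
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# A true count of 1,204 park visits might be released as, say, 1,201.7.
print(dp_count(1204))
```

A smaller epsilon means more noise and stronger privacy, at the cost of accuracy — one of the trade-offs external researchers would need the actual parameters to assess.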

The reports consist of per-country or per-state downloads (131 countries covered initially), further broken down into regions/counties, with Google offering an analysis of how community mobility has changed against a baseline average from before COVID-19 arrived to change everything.

So, for example, a March 29 report for the whole of the US shows a 47% drop in retail and recreation activity vs the pre-COVID baseline; a 22% drop in grocery & pharmacy; and a 19% drop in visits to parks and beaches, per Google’s data.

While the same date report for California shows a considerably greater drop in the latter (down 38% compared to the regional baseline); and slightly bigger decreases in both retail and recreation activity (down 50%) and grocery & pharmacy (-24%).

Google says it’s using “aggregated, anonymized data to chart movement trends over time by geography, across different high-level categories of places such as retail and recreation, groceries and pharmacies, parks, transit stations, workplaces, and residential”. The trends are displayed over several weeks, with the most recent information representing 48-to-72 hours prior, it adds.
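The blog post doesn’t detail how the baseline itself is computed, but the headline figures are straightforward percentage changes against a pre-pandemic norm. A toy illustration with invented numbers (not Google’s actual methodology):

```python
import pandas as pd

# Hypothetical daily visit counts for one region and place category
visits = pd.Series(
    [980, 1010, 995, 540, 510],
    index=pd.to_datetime(
        ["2020-02-03", "2020-02-10", "2020-02-17", "2020-03-23", "2020-03-30"]
    ),
)

# Baseline: an average over the pre-pandemic weeks (this sketch just
# averages the February values; Google's exact window isn't specified)
baseline = visits.loc["2020-02"].mean()

# Percent change vs baseline, as displayed in the reports
pct_change = ((visits / baseline) - 1) * 100
print(pct_change.round(1))  # the 2020-03-30 row comes out around -48.7
```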

The company says it’s not publishing the “absolute number of visits” as a privacy step, adding: “To protect people’s privacy, no personally identifiable information, like an individual’s location, contacts or movement, is made available at any point.”

Google’s location mobility report for Italy, which remains the European country hardest hit by the virus, illustrates the extent of the change from lockdown measures applied to the population — with retail & recreation dropping 94% vs Google’s baseline; grocery & pharmacy down 85%; and a 90% drop in trips to parks and beaches.

The same report shows an 87% drop in activity at transit stations; a 63% drop in activity at workplaces; and an increase of almost a quarter (24%) of activity in residential locations — as many Italians stay at home, instead of commuting to work.

It’s a similar story in Spain — another country hard-hit by COVID-19. Though Google’s data for France suggests stay-at-home instructions may not be quite as keenly observed by its users there, with only an 18% increase in activity at residential locations and a 56% drop in activity at workplaces. (Perhaps because the pandemic has so far had a less severe impact on France, although numbers of confirmed cases and deaths continue to rise across the region.)

While policymakers have been scrambling for data and tools to inform their responses to COVID-19, privacy experts and civil liberties campaigners have rushed to voice concerns about the impacts of such data-fuelled efforts on individual rights, while also querying the wider utility of some of this tracking.

Contacts tracing is another area where apps are fast being touted as a potential solution to get the West out of economically crushing population lockdowns — opening up the possibility of people’s mobile devices becoming a tool to enforce lockdowns, as has happened in China.

“Large-scale collection of personal data can quickly lead to mass surveillance,” is the succinct warning of a trio of academics from London’s Imperial College’s Computational Privacy Group, who have compiled their privacy concerns vis-a-vis COVID-19 contacts tracing apps into a set of eight questions app developers should be asking.

Discussing Google’s release of mobile location data for a COVID-19 cause, the head of the group, Yves-Alexandre de Montjoye, gave a general thumbs up to the steps it’s taken to shrink privacy risks. Although he also called for Google to provide more detail about the technical processes it’s using in order that external researchers can better assess the robustness of the claimed privacy protections. Such scrutiny is of pressing importance with so much coronavirus-related data grabbing going on right now, he argues.

“It is all aggregated; they normalize to a specific set of dates; they threshold when there are too few people and on top of this they add noise to make — according to them — the data differentially private. So from a pure anonymization perspective it’s good work,” de Montjoye told TechCrunch, discussing the technical side of Google’s release of location data. “Those are three of the big ‘levers’ that you can use to limit risk. And I think it’s well done.”

“But — especially in times like this when there’s a lot of people using data — I think what we would have liked is more details. There’s a lot of assumptions on thresholding, on how do you apply differential privacy, right?… What kind of assumptions are you making?” he added, querying how much noise Google is adding to the data, for example. “It would be good to have a bit more detail on how they applied [differential privacy]… Especially in times like this it is good to be… overly transparent.”
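Putting de Montjoye’s three “levers” together, a simplified release pipeline might look like the sketch below. The threshold and noise scale are illustrative placeholders — precisely the parameters he’s asking Google to disclose:

```python
import numpy as np
import pandas as pd

MIN_GROUP = 100   # threshold lever: suppress counts from too few people
EPSILON = 0.5     # noise lever: privacy budget for the Laplace mechanism

def release_counts(raw: pd.DataFrame) -> pd.DataFrame:
    """raw has one row per visit, with 'region' and 'category' columns."""
    # Lever 1: aggregate -- individual rows never leave this function
    counts = (
        raw.groupby(["region", "category"]).size().rename("count").reset_index()
    )
    # Lever 2: threshold -- drop groups small enough to risk re-identification
    counts = counts[counts["count"] >= MIN_GROUP]
    # Lever 3: noise -- perturb every released count
    noise = np.random.laplace(scale=1.0 / EPSILON, size=len(counts))
    counts["count"] = (counts["count"] + noise).round().astype(int)
    return counts
```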

While Google’s mobility data release might appear to overlap in purpose with the Commission’s call for EU telco metadata for COVID-19 tracking, de Montjoye points out there are likely to be key differences based on the different data sources.

“It’s always a trade off between the two,” he says. “It’s basically telco data would probably be less fine-grained, because GPS is much more precise spatially and you might have more data points per person per day with GPS than what you get with mobile phone but on the other hand the carrier/telco data is much more representative — it’s not only smartphone, and it’s not only people who have latitude on, it’s everyone in the country, including non smartphone.”

There may be country specific questions that could be better addressed by working with a local carrier, he also suggested. (The Commission has said it’s intending to have one carrier per EU Member State providing anonymized and aggregated metadata.)

On the topical question of whether location data can ever be truly anonymized, de Montjoye — an expert in data reidentification — gave a “yes and no” response, arguing that original location data is “probably really, really hard to anonymize”.

“Can you process this data and make the aggregate results anonymous? Probably, probably, probably yes — it always depends. But then it also means that the original data exists… Then it’s mostly a question of the controls you have in place to ensure the process that leads to generating those aggregates does not contain privacy risks,” he added.

Perhaps a bigger question related to Google’s location data dump is the issue of legal consent to track people in the first place.

While the tech giant claims the data is based on opt-ins to location tracking, the company was fined $57M by France’s data watchdog last year for a lack of transparency over how it uses people’s data.

Then, earlier this year, the Irish Data Protection Commission (DPC) — now the lead privacy regulator for Google in Europe — confirmed a formal probe of the company’s location tracking activity, following a 2018 complaint by EU consumer groups that accuses Google of using manipulative tactics to keep tracking web users’ locations for ad-targeting purposes.

“The issues raised within the concerns relate to the legality of Google’s processing of location data and the transparency surrounding that processing,” said the DPC in a statement in February, announcing the investigation.

The legal questions hanging over Google’s consent to track people likely explains the repeat references in its blog post to people choosing to opt in and having the ability to clear their Location History via settings. (“Users who have Location History turned on can choose to turn the setting off at any time from their Google Account, and can always delete Location History data directly from their Timeline,” it writes in one example.)

In addition to offering up these coronavirus mobility reports — which Google specifies it will continue to publish throughout the crisis — the company says it’s collaborating with “select epidemiologists working on COVID-19 with updates to an existing aggregate, anonymized dataset that can be used to better understand and forecast the pandemic”.

“Data of this type has helped researchers look into predicting epidemics, plan urban and transit infrastructure, and understand people’s mobility and responses to conflict and natural disasters,” it adds.

Using AI responsibly to fight the coronavirus pandemic

The emergence of the novel coronavirus has left the world in turmoil. COVID-19, the disease caused by the virus, has reached virtually every corner of the world, with more than a million cases and over 50,000 deaths worldwide. It is a situation that will affect us all in one way or another.

With the imposition of lockdowns, limitations of movement, the closure of borders and other measures to contain the virus, the operating environment of law enforcement agencies and those security services tasked with protecting the public from harm has suddenly become ever more complex. They find themselves thrust into the middle of an unparalleled situation, playing a critical role in halting the spread of the virus and preserving public safety and social order in the process. In response to this growing crisis, many of these agencies and entities are turning to AI and related technologies for support in unique and innovative ways. Enhancing surveillance, monitoring and detection capabilities is high on the priority list.

For instance, early in the outbreak, Reuters reported a case in China wherein the authorities relied on facial recognition cameras to track a man from Hangzhou who had traveled in an affected area. Upon his return home, the local police were there to instruct him to self-quarantine or face repercussions. Police in China and Spain have also started to use technology to enforce quarantine, with drones being used to patrol and broadcast audio messages to the public, encouraging them to stay at home. People flying to Hong Kong airport receive monitoring bracelets that alert the authorities if they breach the quarantine by leaving their home.

In the United States, a surveillance company announced that its AI-enhanced thermal cameras can detect fevers, while in Thailand, border officers at airports are already piloting a biometric screening system using fever-detecting cameras.

Isolated cases or the new norm?

With the number of cases, deaths and countries on lockdown increasing at an alarming rate, we can assume that these will not be isolated examples of technological innovation in response to this global crisis. In the coming days, weeks and months of this outbreak, we will most likely see more and more AI use cases come to the fore.

While the application of AI can play an important role in seizing the reins in this crisis, and even safeguard officers and officials from infection, we must not forget that its use can raise very real and serious human rights concerns that can be damaging and undermine the trust placed in government by communities. Human rights, civil liberties and the fundamental principles of law may be exposed or damaged if we do not tread this path with great caution. There may be no turning back if Pandora’s box is opened.

In a public statement on March 19, the monitors for freedom of expression and freedom of the media for the United Nations, the Inter-American Commission for Human Rights and the Representative on Freedom of the Media of the Organization for Security and Co-operation in Europe issued a joint statement on promoting and protecting access to and free flow of information during the pandemic, and specifically took note of the growing use of surveillance technology to track the spread of the coronavirus. They acknowledged that there is a need for active efforts to confront the pandemic, but stressed that “it is also crucial that such tools be limited in use, both in terms of purpose and time, and that individual rights to privacy, non-discrimination, the protection of journalistic sources and other freedoms be rigorously protected.”

This is not an easy task, but a necessary one. So what can we do?

Ways to responsibly use AI to fight the coronavirus pandemic

  1. Data anonymization: While some countries are tracking individual suspected patients and their contacts, Austria, Belgium, Italy and the U.K. are collecting anonymized data to study the movement of people in a more general manner. This option still provides governments with the ability to track the movement of large groups, but minimizes the risk of infringing data privacy rights.
  2. Purpose limitation: Personal data that is collected and processed to track the spread of the coronavirus should not be reused for another purpose. National authorities should seek to ensure that the large amounts of personal and medical data collected are used exclusively for public health reasons. This is a concept already in force in Europe, within the context of the European Union’s General Data Protection Regulation (GDPR), but it’s time for this to become a global principle for AI.
  3. Knowledge-sharing and open access data: António Guterres, the United Nations Secretary-General, has insisted that “global action and solidarity are crucial,” and that we will not win this fight alone. This is applicable on many levels, even for the use of AI by law enforcement and security services in the fight against COVID-19. These agencies and entities must collaborate with one another and with other key stakeholders in the community, including the public and civil society organizations. AI use cases and data should be shared, and transparency promoted.
  4. Time limitation: Although the end of this pandemic seems rather far away at this point in time, it will come. When it does, national authorities will need to scale back their newly acquired monitoring capabilities. As Yuval Noah Harari observed in his recent article, “temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon.” We must ensure that these exceptional capabilities are indeed scaled back and do not become the new norm.

Within the United Nations system, the United Nations Interregional Crime and Justice Research Institute (UNICRI) is working to advance approaches to AI such as these. It has established a specialized Centre for AI and Robotics in The Hague and is one of the few international actors dedicated to specifically looking at AI vis-à-vis crime prevention and control, criminal justice, rule of law and security. It assists national authorities, in particular law enforcement agencies, to understand the opportunities presented by these technologies and, at the same time, to navigate their potential pitfalls.

Working closely with the International Criminal Police Organization (INTERPOL), UNICRI has set up a global platform for law enforcement, fostering discussion on AI, identifying practical use cases and defining principles for responsible use. Much work has been done through this forum, but it is still early days, and the path ahead is long.

While the COVID-19 pandemic has illustrated several innovative use cases, as well as the urgency for governments to do their utmost to stop the spread of the virus, it is important not to let fundamental principles, rights and respect for the rule of law be set aside. The positive power and potential of AI is real. It can help those embroiled in fighting this battle to slow the spread of this debilitating disease. It can help save lives. But we must stay vigilant and commit to the safe, ethical and responsible use of AI.

It is essential that, even in times of great crisis, we remain conscious of the duality of AI and strive to advance AI for good.

Collibra nabs another $112.5M at a $2.3B valuation for its big data management platform

GDPR and other data protection and privacy regulations — as well as a significant (and growing) number of data breaches and exposés of companies’ privacy practices — have put a spotlight not just on the vast troves of data that businesses and other organizations hold on us, but also on how they handle it. Today, one of the companies helping them cope with that data trove in a better (and legal) way is announcing a huge round of funding to continue that work. Collibra, which provides tools to manage, warehouse, store and analyse data troves, is today announcing that it has raised $112.5 million in funding, at a post-money valuation of $2.3 billion.

The funding — a Series F from the looks of it — represents a big bump for the startup, which last year raised $100 million at a valuation of just over $1 billion. This latest round was co-led by ICONIQ Capital, Index Ventures, and Durable Capital Partners LP, with previous investors CapitalG (Google’s growth fund), Battery Ventures, and Dawn Capital also participating.

Collibra, originally a spin-out from the Vrije Universiteit Brussel in Belgium, today works with some 450 enterprises and other large organizations — customers include Adobe, Verizon (which owns TechCrunch), insurer AXA, and a number of healthcare providers. Its products cover a range of services focused around company data, including tools to help customers comply with local data protection policies, store data securely, run analytics and more.

These are all tools that have long had a place in enterprise big data IT, but they have become increasingly used and in demand both as data policies have expanded and as the prospects of what can be discovered through big data analytics have become more advanced. With that growth, many companies have realised that they are not in a position to use and store their data in the best possible way, and that is where companies like Collibra step in.

“Most large organizations are in data chaos,” Felix Van de Maele, co-founder and CEO, previously told us. “We help them understand what data they have, where they store it and [understand] whether they are allowed to use it.”

As you would expect with a big IT trend, Collibra is not the only company chasing this opportunity. Competitors include Informatica, IBM, Talend and Egnyte, among a number of others, but the market position of Collibra, and its advanced technology, is what has continued to impress investors.

“Durable Capital Partners invests in innovative companies that have significant potential to shape growing industries and build larger companies,” said Henry Ellenbogen, founder and chief investment officer for Durable Capital Partners LP, in a statement (Ellenbogen is formerly an investment manager at T. Rowe Price, and this is his first investment in Collibra under Durable). “We believe Collibra is a leader in the Data Intelligence category, a space that could have a tremendous impact on global business operations and a space that we expect will continue to grow as data becomes an increasingly critical asset.”

“We have a high degree of conviction in Collibra and the importance of the company’s mission to help organizations benefit from their data,” added Matt Jacobson, general partner at ICONIQ Capital and Collibra board member, in his own statement. “There is an increasing urgency for enterprises to harness their data for strategic business decisions. Collibra empowers organizations to use their data to make critical business decisions, especially in uncertain business environments.”

An EU coalition of techies is backing a “privacy-preserving” standard for COVID-19 contacts tracing

A European coalition of techies and scientists drawn from at least eight countries, and led by Germany’s Fraunhofer Heinrich Hertz Institute for telecoms (HHI), is working on contacts-tracing proximity technology for COVID-19 that’s designed to comply with the region’s strict privacy rules — officially unveiling the effort today.

China-style individual-level location-tracking of people by states via their smartphones, even for a public health purpose, is hard to imagine in Europe — which has a long history of legal protection for individual privacy. However, the coronavirus pandemic is applying pressure to the region’s data protection model, as governments turn to data and mobile technologies to seek help with tracking the spread of the virus, supporting their public health response and mitigating wider social and economic impacts.

Scores of apps are popping up across Europe aimed at attacking coronavirus from different angles. European privacy not-for-profit, noyb, is keeping an updated list of approaches, both led by governments and private sector projects, to use personal data to combat SARS-CoV-2 — with examples so far including contacts tracing, lockdown or quarantine enforcement and COVID-19 self-assessment.

The efficacy of such apps is unclear — but the demand for tech and data to fuel such efforts is coming from all over the place.

In the UK the government has been quick to call in tech giants, including Google, Microsoft and Palantir, to help the National Health Service determine where resources need to be sent during the pandemic. While the European Commission has been leaning on regional telcos to hand over user location data to carry out coronavirus tracking — albeit in aggregated and anonymized form.

The newly unveiled Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project is a response to the coronavirus pandemic generating a huge spike in demand for citizens’ data. It’s intended to offer not just another app — but what’s described as “a fully privacy-preserving approach” to COVID-19 contacts tracing.

The core idea is to leverage smartphone technology to help disrupt the next wave of infections by notifying individuals who have come into close contact with an infected person — via the proxy of their smartphones having been near enough to carry out a Bluetooth handshake. So far so standard. But the coalition behind the effort wants to steer developments in such a way that the EU response to COVID-19 doesn’t drift towards China-style state surveillance of citizens.

While, for the moment, strict quarantine measures remain in place across much of Europe, there may be less imperative for governments to rip up the best practice rulebook to intrude on citizens’ privacy, given the majority of people are locked down at home. But the looming question is what happens when restrictions on daily life are lifted?

Contacts tracing — as a way to offer a chance for interventions that can break any new infection chains — is being touted as a key component of preventing a second wave of coronavirus infections by some, with examples such as Singapore’s TraceTogether app being eyed up by regional lawmakers.

Singapore does appear to have had some success in keeping a second wave of infections from turning into a major outbreak, via an aggressive testing and contacts-tracing regime. But what works for a small island city-state with a population of less than 6M doesn’t necessarily translate to a trading bloc of 27 different nations whose collective population exceeds 500M.

Europe isn’t going to have a single coronavirus tracing app. It’s already got a patchwork. Hence the people behind PEPP-PT offering a set of “standards, technology, and services” to countries and developers to plug into to get a standardized COVID-19 contacts-tracing approach up and running across the bloc.

The other very European flavored piece here is privacy — and privacy law. “Enforcement of data protection, anonymization, GDPR [the EU’s General Data Protection Regulation] compliance, and security” are baked in, is the top-line claim.

“PEPP-PT was explicitly created to adhere to strong European privacy and data protection laws and principles,” the group writes in an online manifesto. “The idea is to make the technology available to as many countries, managers of infectious disease responses, and developers as quickly and as easily as possible.

“The technical mechanisms and standards provided by PEPP-PT fully protect privacy and leverage the possibilities and features of digital technology to maximize speed and real-time capability of any national pandemic response.”

Hans-Christian Boos, one of the project’s co-initiators — and the founder of an AI company called Arago — discussed the initiative with German newspaper Der Spiegel, telling it: “We collect no location data, no movement profiles, no contact information and no identifiable features of the end devices.”

The newspaper reports PEPP-PT’s approach means apps aligning to this standard would generate only temporary IDs — to avoid individuals being identified. Two or more smartphones running an app that uses the tech and has Bluetooth enabled when they come into proximity would exchange their respective IDs — saving them locally on the device in an encrypted form, according to the report.

Der Spiegel writes that should a user of the app subsequently be diagnosed with coronavirus, their doctor would be able to ask them to transfer the contact list to a central server. The doctor would then be able to use the system to warn the affected IDs that they have had contact with a person who has since been diagnosed with the virus — meaning those at-risk individuals could be proactively tested and/or self-isolate.

On its website PEPP-PT explains the approach thus:

Mode 1
If a user is not tested or has tested negative, the anonymous proximity history remains encrypted on the user’s phone and cannot be viewed or transmitted by anybody. At any point in time, only the proximity history that could be relevant for virus transmission is saved, and earlier history is continuously deleted.

Mode 2
If the user of phone A has been confirmed to be SARS-CoV-2 positive, the health authorities will contact user A and provide a TAN code to the user that ensures potential malware cannot inject incorrect infection information into the PEPP-PT system. The user uses this TAN code to voluntarily provide information to the national trust service that permits the notification of PEPP-PT apps recorded in the proximity history and hence potentially infected. Since this history contains anonymous identifiers, neither person can be aware of the other’s identity.

Providing further detail of what it envisages as “Country-dependent trust service operation”, it writes: “The anonymous IDs contain encrypted mechanisms to identify the country of each app that uses PEPP-PT. Using that information, anonymous IDs are handled in a country-specific manner.”

While on healthcare processing it suggests: “A process for how to inform and manage exposed contacts can be defined on a country by country basis.”
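PEPP-PT hadn’t published a full technical specification at the time of writing, so the following Python sketch is only a schematic of the general approach described above — the ID derivation scheme, rotation window and matching flow are placeholder assumptions, not the project’s actual design:

```python
import hashlib
import os
import time

ROTATION_SECONDS = 15 * 60  # rotate broadcast IDs so a device can't be
                            # tracked over time via a stable identifier

def ephemeral_id(device_secret: bytes, now: float) -> bytes:
    """Derive a short-lived anonymous ID from a device secret and time epoch."""
    epoch = int(now // ROTATION_SECONDS)
    return hashlib.sha256(device_secret + epoch.to_bytes(8, "big")).digest()[:16]

# Phones A and B exchange ephemeral IDs over Bluetooth when in proximity
# and store them locally in an encrypted history; nothing leaves the device.
secret_a, secret_b = os.urandom(32), os.urandom(32)
id_b = ephemeral_id(secret_b, time.time())
proximity_history_a = {id_b}  # A logs the IDs it has been near

# If A is later diagnosed, a health authority issues a TAN code and A
# voluntarily uploads its proximity history; the national trust service
# then notifies the apps behind those IDs -- neither user learns the
# other's identity.
print("notify holder of id_b:", id_b in proximity_history_a)
```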

Among the other features of PEPP-PT’s mechanisms the group lists in its manifesto are:

  • Backend architecture and technology that can be deployed into local IT infrastructure and can handle hundreds of millions of devices and users per country instantly.
  • Managing the partner network of national initiatives and providing APIs for integration of PEPP-PT features and functionalities into national health processes (test, communication, …) and national system processes (health logistics, economy logistics, …) giving many local initiatives a local backbone architecture that enforces GDPR and ensures scalability.
  • Certification Service to test and approve local implementations to be using the PEPP-PT mechanisms as advertised and thus inheriting the privacy and security testing and approval PEPP-PT mechanisms offer.

Having a standardized approach that could be plugged into a variety of apps would allow for contacts tracing to work across borders — i.e. even if different apps are popular in different EU countries — an important consideration for the bloc, which has 27 Member States.

However, there may be questions about the robustness of the privacy protection designed into the approach — if, for example, pseudonymized data is centralized on a server that doctors can access, there could be a risk of it leaking and being re-identified. And identification of individual device holders would be legally risky.

Europe’s lead data regulator, the EDPS, recently made a point of tweeting to warn an MEP (and former EC digital commissioner) against the legality of applying Singapore-style Bluetooth-powered contacts tracing in the EU — writing: “Please be cautious comparing Singapore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.”

A spokesman for the EDPS told us it’s in contact with data protection agencies of the Member States involved in the PEPP-PT project to collect “relevant information”.

“The general principles presented by EDPB on 20 March, and by EDPS on 24 March are still relevant in that context,” the spokesman added — referring to guidance issued by the privacy regulators last month in which they encouraged anonymization and aggregation should Member States want to use mobile location data for monitoring, containing or mitigating the spread of COVID-19. At least in the first instance.

“When it is not possible to only process anonymous data, the ePrivacy Directive enables Member States to introduce legislative measures to safeguard public security (Art. 15),” the EDPB further noted.

“If measures allowing for the processing of non-anonymised location data are introduced, a Member State is obliged to put in place adequate safeguards, such as providing individuals of electronic communication services the right to a judicial remedy.”

We reached out to the HHI with questions about the PEPP-PT project and were referred to Boos — but at the time of writing had been unable to speak to him.

“The PEPP-PT system is being created by a multi-national European team,” the HHI writes in a press release about the effort. “It is an anonymous and privacy-preserving digital contact tracing approach, which is in full compliance with GDPR and can also be used when traveling between countries through an anonymous multi-country exchange mechanism. No personal data, no location, no Mac-Id of any user is stored or transmitted. PEPP-PT is designed to be incorporated in national corona mobile phone apps as a contact tracing functionality and allows for the integration into the processes of national health services. The solution is offered to be shared openly with any country, given the commitment to achieve interoperability so that the anonymous multi-country exchange mechanism remains functional.”

“PEPP-PT’s international team consists of more than 130 members working across more than seven European countries and includes scientists, technologists, and experts from well-known research institutions and companies,” it adds.

“The result of the team’s work will be owned by a non-profit organization so that the technology and standards are available to all. Our priorities are the well being of world citizens today and the development of tools to limit the impact of future pandemics — all while conforming to European norms and standards.”

PEPP-PT says its technology-focused efforts are being financed through donations. Per its website, it has adopted the WHO standards for such financing — to “avoid any external influence”.

Of course for the effort to be useful it relies on EU citizens voluntarily downloading one of the aligned contacts tracing apps — and carrying their smartphone everywhere they go, with Bluetooth enabled.

Without substantial uptake across the region’s smartphones, it’s questionable how much of an impact this initiative, or any contacts-tracing technology, could have. Although if such tech were able to break even some infection chains, people might argue it’s not wasted effort.

Notably, there are signs Europeans are willing to contribute to a public healthcare cause by doing their bit digitally — such as a self-reporting COVID-19 tracking app which last week racked up 750,000 downloads in the UK in 24 hours.

But, at the same time, contacts tracing apps are facing scepticism over their ability to contribute to the fight against COVID-19. Not everyone carries a smartphone, nor knows how to download an app, for instance. There are plenty of people who would fall outside such a digital net.

Meanwhile, although there’s clearly been a big scramble across the region, at both government and grassroots level, to mobilize digital technology for a public health emergency cause, there’s arguably greater imperative to direct effort and resources at scaling up coronavirus testing programs — an area where most European countries continue to lag.

Germany — where some of the key backers of the PEPP-PT are from — being the most notable exception.

What does a pandemic say about the tech we’ve built?

There’s a joke* being reshared on chat apps that takes the form of a multiple choice question — asking who’s the leading force in workplace digital transformation? The red-lined punchline is not the CEO or CTO but: C) COVID-19.

There’s likely more than a grain of truth underpinning the quip. The novel coronavirus is pushing a lot of metaphorical buttons right now. ‘Pause’ buttons for people and industries, as large swathes of the world’s population face quarantine conditions that can resemble house arrest. The majority of offline social and economic activities are suddenly off limits.

Such major pauses in our modern lifestyle may even turn into a full reset, over time. The world as it was, where mobility of people has been all but taken for granted — regardless of the environmental costs of so much commuting and indulged wanderlust — may never return to ‘business as usual’.

If global leadership rises to the occasion then the coronavirus crisis offers an opportunity to rethink how we structure our societies and economies — to make a shift towards lower carbon alternatives. After all, how many physical meetings do you really need when digital connectivity is accessible and reliable? As millions more office workers log onto the day job from home, that number suddenly seems vanishingly small.

COVID-19 is clearly strengthening the case for broadband to be a utility — as so much more activity is pushed online. Even social media seems to have a genuine community purpose during a moment of national crisis when many people can only connect remotely, even with their nearest neighbours.

Hence the reports of people stuck at home flocking back to Facebook to sound off in the digital town square. Now that the actual high street is off limits, the vintage social network is experiencing a late second wind.

Facebook understands this sort of higher societal purpose already, of course. Which is why it’s been so proactive about building features that nudge users to ‘mark yourself safe’ during extraordinary events like natural disasters, major accidents and terrorist attacks. (Or indeed why it encouraged politicians to get into bed with its data platform in the first place — no matter the cost to democracy.)

In less fraught times, Facebook’s ‘purpose’ can be loosely summed up as ‘killing time’. But with ever more sinkholes being drilled by the attention economy, that’s a function under ferocious and sustained attack.

Over the years the tech giant has responded by engineering ways to rise back to the top of the social heap — including spying on and buying up competition, or directly cloning rival products. It’s been pulling off this trick, by hook or by crook, for over a decade. Albeit this time Facebook can’t take any credit for the traffic uptick; a pandemic is nature’s dark pattern design.

What’s most interesting about this virally disrupted moment is how much of the digital technology that’s been built out online over the past two decades could very well have been designed for living through just such a dystopia.

Seen through this lens, VR should be having a major moment. A face computer that swaps out the stuff your eyes can actually see with a choose-your-own-digital-adventure of virtual worlds to explore, all from the comfort of your living room? What problem are you fixing, VR? Well, the conceptual limits of human lockdown in the face of a pandemic quarantine right now, actually…

Virtual reality has never been a compelling proposition vs the rich and textured opportunity of real life, except within very narrow and niche bounds. Yet all of a sudden here we all are — with our horizons drastically narrowed and real-life news that’s ceaselessly harrowing. So it might yet end up as the wry punchline to another multiple choice joke: ‘My next vacation will be: A) Staycation, B) The spare room, C) VR escapism.’

It’s videoconferencing that’s actually having the big moment, though. Turns out even a pandemic can’t make VR go viral. Instead, long lapsed friendships are being rekindled over Zoom group chats or Google Hangouts. And Houseparty — a video chat app — has seen surging downloads as barflies seek out alternative night life with their usual watering-holes shuttered.

Bored celebs are TikToking. Impromptu concerts are being livestreamed from living rooms via Instagram and Facebook Live. All sorts of folks are managing social distancing and the stress of being stuck at home alone (or with family) by distant socializing — signing up to remote book clubs and discos; joining virtual dance parties and exercise sessions from bedrooms. Taking a few classes together. The quiet pub night with friends has morphed seamlessly into a bring-your-own-bottle group video chat.

This is not normal — but nor is it surprising. We’re living in the most extraordinary time. And it seems a very human response to mass disruption and physical separation (not to mention the trauma of an ongoing public health emergency that’s killing thousands of people a day) to reach for even a moving pixel of human comfort. Contactless human contact is better than none at all.

Yet the fact all these tools are already out there, ready and waiting for us to log on and start streaming, should send a dehumanizing chill down society’s backbone.

It underlines quite how much consumer technology is being designed to reprogram how we connect with each other, individually and in groups, in order that uninvited third parties can cut a profit.

Back in the pre-COVID-19 era, a key concern being attached to social media was its ability to hook users and encourage passive feed consumption — replacing genuine human contact with voyeuristic screening of friends’ lives. Studies have linked the tech to loneliness and depression. Now that we’re literally unable to go out and meet friends, the loss of human contact is real and stark. So being popular online in a pandemic really isn’t any kind of success metric.

Houseparty, for example, self-describes as a “face to face social network” — yet it’s quite the literal opposite; you’re foregoing face-to-face contact if you’re getting virtually together in app-wrapped form.

Meanwhile, the implication of Facebook’s COVID-19 traffic bump is that the company’s business model thrives on societal disruption and mainstream misery. Which, frankly, we knew already. Data-driven adtech is another way of saying it’s been engineered to spray you with ad-flavored dissatisfaction by spying on what you get up to. The coronavirus just hammers the point home.

The fact we have so many high-tech tools on tap for forging digital connections might feel like amazing serendipity in this crisis — a freemium bonanza for coping with terrible global trauma. But such bounty points to a horrible flip side: it’s the attention economy that’s infectious and insidious. Before ‘normal life’ plunged off a cliff, all this sticky tech was labelled ‘everyday use’, not ‘break out in a global emergency’.

It’s never been clearer how these attention-hogging apps and services are designed to disrupt and monetize us; to embed themselves in our friendships and relationships in a way that’s subtly dehumanizing; re-routing emotion and connections; nudging us to swap in-person socializing for virtualized fuzz that’s designed to be data-mined and monetized by the same middlemen who’ve inserted themselves unasked into our private and social lives.

Captured and recompiled in this way, human connection is reduced to a series of dilute and/or meaningless transactions. The platforms deploy armies of engineers to knob-twiddle and pull strings to maximize ad opportunities, no matter the personal cost.

It’s also no accident that we’re seeing more of the vast and intrusive underpinnings of surveillance capitalism emerge, as the COVID-19 emergency rolls back some of the obfuscation that’s used to shield these business models from mainstream view in more normal times. The trackers are rushing to seize and colonize an opportunistic purpose.

Tech and ad giants are falling over themselves to get involved with offering data or apps for COVID-19 tracking. They’re already in the mass surveillance business so there’s likely never felt like a better moment than the present pandemic for the big data lobby to press the lie that individuals don’t care about privacy, as governments cry out for tools and resources to help save lives.

First the people-tracking platforms dressed up attacks on human agency as ‘relevant ads’. Now the data industrial complex is spinning police-state levels of mass surveillance as pandemic-busting corporate social responsibility. How quick the wheel turns.

But platforms should be careful what they wish for. Populations that find themselves under house arrest with their phones playing snitch might be just as quick to round on high tech gaolers as they’ve been to sign up for a friendly video chat in these strange and unprecedented times.

Oh and Zoom (and others) — more people might actually read your ‘privacy policy’ now they’ve got so much time to mess about online. And that really is a risk.

*Source is a private Twitter account called @MBA_ish

Maybe we shouldn’t use Zoom after all

Now that we’re all stuck at home thanks to the coronavirus pandemic, video calls have gone from a novelty to a necessity. Zoom, the popular videoconferencing service, seems to be doing better than most and has quickly become one of the most popular options going — if not the most popular.

But should it be?

Zoom’s recent popularity has also shone a spotlight on the company’s security protections and privacy promises. Just today, The Intercept reported that Zoom video calls are not end-to-end encrypted, despite the company’s claims that they are.

And Motherboard reports that Zoom is leaking the email addresses of “at least a few thousand” people because personal addresses are treated as if they belong to the same company.
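The reported culprit is a directory-style feature that treats everyone sharing an email domain as colleagues — a heuristic that breaks down for personal addresses on smaller providers. A toy illustration of that flawed logic (the domains and skip-list are invented for the example):

```python
from collections import defaultdict

# An allowlist of known consumer email providers is only as good as its
# coverage; smaller providers' domains fall through the cracks.
PUBLIC_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}

def build_directory(emails: list[str]) -> dict[str, list[str]]:
    """Group users by email domain, skipping known public providers."""
    directory: dict[str, list[str]] = defaultdict(list)
    for email in emails:
        domain = email.split("@")[1].lower()
        if domain not in PUBLIC_DOMAINS:
            directory[domain].append(email)  # treated as one 'company'
    return directory

# Strangers who happen to use the same small provider end up grouped
# together -- and able to see each other's addresses:
print(build_directory(["alice@smallisp.example", "bob@smallisp.example"]))
```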

These are just the latest examples of the company having to spend the past year mopping up after a barrage of headlines examining its practices and misleading marketing. To wit:

  • Apple was forced to step in to secure millions of Macs after a security researcher found Zoom failed to disclose that it installed a secret web server on users’ Macs, which Zoom failed to remove when the client was uninstalled. The researcher, Jonathan Leitschuh, said the web server meant any malicious website could activate the webcam of a Mac with Zoom installed without the user’s permission. The researcher declined a bug bounty payout because Zoom wanted Leitschuh to sign a non-disclosure agreement, which would have prevented him from disclosing details of the bug.
  • Zoom was quietly sending data to Facebook about a user’s Zoom habits — even when the user does not have a Facebook account. Motherboard reported that the iOS app was notifying Facebook when a user opened the app, their device model, which phone carrier they were using, and more. Zoom removed the code in response, but not fast enough to prevent a class action lawsuit or New York’s attorney general from launching an investigation.
  • Zoom came under fire again for its “attendee tracking” feature, which, when enabled, lets a host check if participants are clicking away from the main Zoom window during a call.
  • A security researcher found that Zoom uses a “shady” technique to install its Mac app without user interaction — “the same tricks that are being used by macOS malware,” the researcher said.
  • On the bright side and to some users’ relief, we reported that it is in fact possible to join a Zoom video call without having to download or use the app. But Zoom’s “dark patterns” don’t make it easy to start a video call using just your browser.
  • Zoom has faced questions over its lack of transparency on law enforcement requests it receives. Access Now, a privacy and rights group, called on Zoom to release the number of requests it receives, just as Amazon, Google, Microsoft and many more tech giants report on a semi-annual basis.
  • Then there’s Zoombombing, where trolls take advantage of open or unprotected meetings and poor default settings to take over screen-sharing and broadcast porn or other explicit material. The FBI this week warned users to adjust their settings to avoid trolls hijacking video calls.
  • And Zoom tightened its privacy policy this week after it was criticized for allowing Zoom to collect information about users’ meetings — like videos, transcripts and shared notes — for advertising.

There are many more privacy-focused alternatives to Zoom. Motherboard noted several options, but they all have their pitfalls. FaceTime and WhatsApp are end-to-end encrypted, but FaceTime works only on Apple devices and WhatsApp is limited to just four video callers at a time. A lesser known video calling platform, Jitsi, is not end-to-end encrypted but it’s open source — so you can look at the code to make sure there are no backdoors — and it works across all devices and browsers. You can run Jitsi on a server you control for greater privacy.

In fairness, Zoom is not inherently bad and there are many reasons why Zoom is so popular. It’s easy to use, reliable and for the vast majority it’s incredibly convenient.

But Zoom’s misleading claims give users a false sense of security and privacy. Whether it’s hosting a virtual happy hour or a yoga class, or using Zoom for therapy or government cabinet meetings, everyone deserves privacy.

Now more than ever Zoom has a responsibility to its users. For now, Zoom at your own risk.

No proof of a Houseparty breach, but its privacy policy is still gatecrashing your data

Houseparty has been a smashing success with people staying home during the coronavirus pandemic who still want to connect with friends.

The group video chat app’s games and other bells and whistles raise it above the more mundane Zooms and Hangouts (fun only in their names, otherwise pretty serious tools used by companies, schools and others who just need to work) when it comes to creating engaged leisure time, amid a climate where all of these services are seeing a huge surge in growth.

All that looked like it could possibly fall apart for Houseparty and its new owner Epic Games when a series of reports appeared Monday claiming Houseparty was breached, and that malicious hackers were using users’ data to access their accounts on other apps such as Spotify and Netflix.

Houseparty was swift to deny the reports, and even went so far as to claim — without evidence — that it was investigating indications that the “breach” was a “paid commercial smear to harm Houseparty,” offering a $1 million reward to whoever could prove its theory.

For now, there is no proof that there was a breach, nor proof that there was a paid smear campaign, and when we reached out to ask Houseparty and Epic about this investigation, a spokesperson said: “We don’t have anything to add here at the moment.”

But that doesn’t mean that Houseparty doesn’t have privacy issues.

As the old saying goes, “if the product is free, you are the product.” In the case of the free app Houseparty, the publisher details a 12,000+ word privacy policy covering any and all uses of the data it might collect by way of you logging on to or using its service, laying out the many ways it might use that data for promotional or commercial purposes.

There are some clear lines in the policy about what it won’t use. For example, while phone numbers might get shared for tech support, with partnerships that you opt into, to link up contacts to talk with and to authenticate you, “we will never share your phone number or the phone numbers of third parties in your contacts with anyone else.”

But beyond that, there are provisions in there that could see Houseparty selling anonymized and other data, leading Ray Walsh of research firm ProPrivacy to describe it as a “privacy nightmare.”

“Anybody who decides to use the Houseparty application to stay in contact during quarantine needs to be aware that the app collects a worrying amount of personal information,” he said. “This includes geolocation data, which could, in theory, be used to map the location of each user. A closer look at Houseparty’s privacy policy reveals that the firm promises to anonymize and aggregate data before it is shared with the third-party affiliates and partners it works with. However, time and time again, researchers have proven that previously anonymized data can be re-identified.”

There are ways around this for the proactive. Walsh notes that users can go into the settings to select “private mode” to “lock” rooms they use to stop people from joining unannounced or uninvited; switch locations off; use fake names and birthdates; disconnect all other social apps; and launch the app on iOS with a long press to “sneak into the house” without notifying all your contacts.

But with a consumer app, it’s a long shot to assume that most people, and the younger users who are especially interested in Houseparty, will go through all of these extra steps to secure their information.

Telco metadata grab is for modelling COVID-19 spread, not tracking citizens, says EC

As part of its response to the public health emergency triggered by the COVID-19 pandemic, the European Commission has been leaning on Europe’s telcos to share aggregate location data on their users.

“The Commission kick-started a discussion with mobile phone operators about the provision of aggregated and anonymised mobile phone location data,” it said today.

“The idea is to analyse mobility patterns including the impact of confinement measures on the intensity of contacts, and hence the risks of contamination. This would be an important — and proportionate — input for tools that are modelling the spread of the virus, and would also allow to assess the current measures adopted to contain the pandemic.”

“We want to work with one operator per Member State to have a representative sample,” it added. “Having one operator per Member State also means the aggregated and anonymised data could not be used to track individual citizens, that is also not at all the intention. Simply because not all have the same operator.

“The data will only be kept as long as the crisis is ongoing. We will of course ensure the respect of the ePrivacy Directive and the GDPR.”

Earlier this week Politico reported that commissioner Thierry Breton held a conference with carriers, including Deutsche Telekom and Orange, asking for them to share data to help predict the spread of the novel coronavirus.

Europe has become a second epicenter of the disease, with high rates of infection in countries including Italy and Spain — where there have been thousands of deaths apiece.

The European Union’s executive is understandably keen to bolster national efforts to combat the virus, although it’s less clear exactly how aggregated mobile location data can help — especially as more EU citizens are confined to their homes under national quarantine orders. (Police patrols and CCTV already offer a means of confirming whether or not people are generally moving around.)

Nonetheless, EU telcos have already been sharing aggregate data with national governments.

Such as Orange in France which is sharing “aggregated and anonymized” mobile phone geolocation data with Inserm, a local health-focused research institute — to enable them to “better anticipate and better manage the spread of the epidemic”, as a spokeswoman put it.

“The idea is simply to identify where the populations are concentrated and how they move before and after the confinement in order to be able to verify that the emergency services and the health system are as well armed as possible, where necessary,” she added. “For instance, at the time of confinement, more than 1 million people left the Paris region and at the same time the population of Ile de Ré increased by 30%.

“Other uses of this data are possible and we are currently in discussions with the State on all of these points. But, it must be clear, we are extremely vigilant with regards to concerns and respect for privacy. Moreover, we are in contact with the CNIL [France’s data protection watchdog]… to verify that all of these points are addressed.”

Germany’s Deutsche Telekom is also providing what a spokesperson dubbed “anonymized swarm data” to national health authorities to combat the coronavirus.

“European mobile operators are also to make such anonymized mass data available to the EU Commission at its request,” the spokesperson told us. “In fact, we will first provide the EU Commission with a description of data we have sent to German health authorities.”

It’s not entirely clear whether the Commission’s intention is to pool data from such existing local efforts — or whether it’s asking EU carriers for a different, universal data-set to be shared with it during the COVID-19 emergency.

When we asked about this it did not provide an answer. Although we understand discussions are ongoing with operators — and that it’s the Commission’s aim to work with one operator per Member State.

The Commission has said the metadata will be used for modelling the spread of the virus and for looking at mobility patterns to analyze and assess the impact of quarantine measures.

A spokesman emphasized that individual-level tracking of EU citizens is not on the cards.

“The Commission is in discussions with mobile operators’ associations about the provision of aggregated and anonymised mobile phone location data,” the spokesman for Breton told us.

“These data permit to analyse mobility patterns including the impact of confinement measures on the intensity of contacts and hence the risks of contamination. They are therefore an important and proportionate tool to feed modelling tools for the spread of the virus and also assess [whether] the current measures adopted to contain the Coronavirus pandemic are effective.”

“These data do not enable tracking of individual users,” he added. “The Commission is in close contact with the European Data Protection Supervisor (EDPS) to ensure the respect of the ePrivacy Directive and the GDPR.”

At this point there’s no set date for the system to be up and running — although we understand the aim is to get data flowing asap. The intention is also to use datasets that go back to the start of the epidemic, with data-sharing ongoing until the pandemic is over — at which point we’re told the data will be deleted.

Breton hasn’t had to lean very hard on EU telcos to share data for a crisis cause.

Earlier this week Mats Granryd, director general of operator association the GSMA, tweeted that its members are “committed to working with the European Commission, national authorities and international groups to use data in the fight against COVID-19 crisis”.

Although he added an important qualifier: “while complying with European privacy standards”.

Europe’s data protection framework means there are limits on how people’s personal data can be used — even during a public health emergency. And while the legal frameworks do quite rightly bake in flexibility for a pressing public purpose, like the COVID-19 pandemic, that does not mean individuals’ privacy rights automatically go out the window.

Individual tracking of mobile users for contact tracing — such as Israel’s government is doing — is unimaginable at the pan-EU level, at least unless the regional situation deteriorates drastically.

One privacy lawyer we spoke to last week suggested such a level of tracking and monitoring across Europe would be akin to a “last resort”. Though individual EU countries are choosing to respond differently to the crisis — with Poland, for example, giving quarantined people a choice between regular police check-ups or uploading geotagged selfies to prove they’re not breaking lockdown.

Meanwhile the UK, a former EU member, has reportedly chosen to invite in Palantir, the controversial US surveillance-as-a-service tech firm, to carry out resource tracking for its National Health Service during the coronavirus crisis.

Under pan-EU law (which the UK remains subject to, until the end of the Brexit transition period), the rule of thumb is that extraordinary data-sharing — such as the Commission asking telcos to share user location data during a pandemic — must be “temporary, necessary and proportionate”, as digital rights group Privacy International recently noted.

This explains why Breton’s request is for “anonymous and aggregated” location data. And why, in background comments to reporters, the claim is that any shared data sets will be deleted at the end of the pandemic.

Not every EU lawmaker appears entirely aware of all the legal limits, however.

Today the bloc’s lead privacy regulator, data protection supervisor (EDPS) Wojciech Wiewiórowski, could be seen tweeting cautionary advice at one former commissioner, Andrus Ansip (now an MEP) — after the latter publicly eyed up a Bluetooth-powered contacts tracing app deployed in Singapore.

“Please be cautious comparing Singapore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder,” wrote Wiewiórowski.

So it remains to be seen whether pressure will mount for more privacy-intrusive surveillance of EU citizens if regional rates of infection continue to grow.

As we reported earlier this week, governments or EU institutions seeking to make use of mobile phone data to help with the response to the coronavirus must comply with the EU’s ePrivacy Directive — which covers the processing of mobile location data.

The ePrivacy Directive allows for Member States to restrict the scope of the rights and obligations related to location metadata privacy, and retain such data for a limited time — when such restriction constitutes “a necessary, appropriate and proportionate measure within a democratic society to safeguard national security (i.e. State security), defence, public security, and the prevention, investigation, detection and prosecution of criminal offences or of unauthorised use of the electronic communication system” — and a pandemic seems a clear example of a public security issue.

Thing is, the ePrivacy Directive is an old framework. The previous college of commissioners had intended to replace it alongside an update to the EU’s broader personal data protection framework — the General Data Protection Regulation (GDPR) — but failed to reach agreement.

This means there’s some potential mismatch. For example the ePrivacy Directive does not include the same level of transparency requirements as the GDPR.

Perhaps understandably, then, since news of the Commission’s call for carrier metadata emerged, concerns have been raised about the scope and limits of the data sharing. Earlier this week, for example, MEP Sophie in’t Veld wrote to Breton asking for more information on the data grab — including querying exactly how the data will be anonymized.

The EDPS confirmed to us that the Commission consulted it on the proposed use of telco metadata.

A spokesman for the regulator pointed to a letter sent by Wiewiórowski to the Commission, following the latter’s request for guidance on monitoring the “spread” of COVID-19.

In the letter the EDPS impresses on the Commission the importance of “effective” data anonymization — which means it’s in effect saying a technique that does genuinely block re-identification of the data must be used. (There are plenty of examples of ‘anonymized’ location data being shown by researchers to be trivially easy to reidentify, given how many individual tells such data typically contains, like home address and workplace address.)

“Effective anonymisation requires more than simply removing obvious identifiers such as phone numbers and IMEI numbers,” warns the EDPS, adding too that aggregated data “can provide an additional safeguard”.
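The Commission has not published its method, but a minimal sketch of the kind of aggregation-plus-suppression safeguard the EDPS gestures at might look like the following — where the threshold, region names and records are all assumptions for illustration: count users per region per day, then suppress any cell whose count falls below a minimum group size, so that small groups cannot be singled out.

```python
from collections import Counter

K = 10  # hypothetical minimum group size; the real threshold is not public

# Hypothetical per-user records: (user_id, region, day)
records = [
    ("u1", "Lombardy", "2020-03-20"),
    ("u2", "Lombardy", "2020-03-20"),
    ("u3", "Ile-de-France", "2020-03-20"),
]

def aggregate(rows, k=K):
    """Count users per (region, day) cell and drop cells smaller than k."""
    counts = Counter((region, day) for _user, region, day in rows)
    # Releasing a count of 1 or 2 effectively singles out individuals,
    # which is exactly what "effective anonymisation" must prevent.
    return {cell: n for cell, n in counts.items() if n >= k}

print(aggregate(records))  # with this toy data every cell is suppressed: {}
```

Suppression alone is not a silver bullet — repeated releases over time can still leak information, which is why the EDPS frames aggregation as “an additional safeguard” rather than a substitute for effective anonymization.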

We also asked the Commission for more details on how the data will be anonymized and the level of aggregation that would be used — but it told us it could not provide further information at this stage. 

So far we understand that the anonymization and aggregation process will be undertaken before data is transferred by operators to a Commission science and research advisory body, called the Joint Research Centre (JRC) — which will perform the data analytics and modelling.

The results — in the form of predictions of propagation and so on — will then be shared by the Commission with EU Member States authorities. The datasets feeding the models will be stored on secure JRC servers.

The EDPS is equally clear on the Commission’s commitments vis-a-vis securing the data.

“Information security obligations under Commission Decision 2017/464 still apply [to anonymized data], as do confidentiality obligations under the Staff Regulations for any Commission staff processing the information. Should the Commission rely on third parties to process the information, these third parties have to apply equivalent security measures and be bound by strict confidentiality obligations and prohibitions on further use as well,” writes Wiewiórowski.

“I would also like to stress the importance of applying adequate measures to ensure the secure transmission of data from the telecom providers. It would also be preferable to limit access to the data to authorised experts in spatial epidemiology, data protection and data science.”

Data retention — or rather the need for prompt destruction of data sets after the emergency is over — is another key piece of the guidance.

“I also welcome that the data obtained from mobile operators would be deleted as soon as the current emergency comes to an end,” writes Wiewiórowski. “It should be also clear that these special services are deployed because of this specific crisis and are of temporary character. The EDPS often stresses that such developments usually do not contain the possibility to step back when the emergency is gone. I would like to stress that such solution should be still recognised as extraordinary.”

It’s interesting to note the EDPS is very clear on “full transparency” also being a requirement, both of purpose and “procedure”. So we should expect more details to be released about how the data is being effectively rendered unidentifiable.

“Allow me to recall the importance of full transparency to the public on the purpose and procedure of the measures to be enacted,” writes Wiewiórowski. “I would also encourage you to keep your Data Protection Officer involved throughout the entire process to provide assurance that the data processed had indeed been effectively anonymised.”

The EDPS has also requested to see a copy of the data model. At the time of writing the spokesman told us it’s still waiting to receive that.

“The Commission should clearly define the dataset it wants to obtain and ensure transparency towards the public, to avoid any possible misunderstandings,” Wiewiórowski added in the letter.

A Norwegian school quit Zoom after a naked man ‘guessed’ the meeting link

A school in Norway has stopped using Zoom, the popular video conferencing service, after a naked man apparently “guessed” the link to a video lesson.

According to Norwegian state broadcaster NRK, the man exposed himself in front of several young children over the video call. The theory, according to the report, is that the man guessed the meeting ID and joined the video call.

One expert quoted in the story said some people actively go “looking” for such meeting links.

Last year security researchers told TechCrunch that malicious users could access and listen in to Zoom video meetings by cycling through different permutations of meeting IDs in bulk. The researchers said the flaw, tested on both Zoom and Webex calls, worked because many meetings were not protected by a passcode.
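The math behind that finding is straightforward. As a back-of-envelope sketch — the ID format, meeting counts and probing rate below are assumptions for illustration, not the researchers’ actual figures — a 9-digit numeric meeting ID gives only a billion possibilities, so if even a small fraction of IDs map to live, passcode-free meetings, automated guessing turns up open calls surprisingly fast:

```python
id_space = 10 ** 9          # assumed 9-digit numeric meeting IDs
live_meetings = 100_000     # hypothetical number of active, passcode-free calls
guesses_per_second = 100    # hypothetical automated probing rate

hit_probability = live_meetings / id_space
expected_hits_per_hour = guesses_per_second * 3600 * hit_probability

print(f"Chance a single guess hits a live meeting: {hit_probability:.4%}")
print(f"Expected open meetings found per hour: {expected_hits_per_hour:.0f}")
# ~36 open meetings per hour under these assumptions, which is why
# requiring passcodes by default closes off the attack.
```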

Zoom later changed the settings so that private rooms are password protected by default.

Schools and workplaces across the world are embracing remote teaching and working as the number of those infected by the coronavirus, which causes the disease known as COVID-19, continues to climb. There are some 523,000 confirmed cases of COVID-19 across the world as of Thursday, according to data provided by Johns Hopkins University. Norway currently has over 3,300 confirmed cases.

More than 80% of the world’s population is said to be under some kind of lockdown designed to limit the spread of the coronavirus and prevent health systems from being overrun.

The ongoing global lockdown has forced companies to embrace their staff working from home, pushing Zoom to become the go-to video conferencing platform for not only remote workers but also for recreation, like book clubs and happy hours.

Some found out the hard way that not setting up Zoom the correct way can lead to “Zoombombing,” where trolls jump into public calls, hijack screens and display obscene imagery to unsuspecting guests.