France releases contact-tracing app StopCovid on Android

After weeks of intense debate, the French government has released its contact-tracing app StopCovid. While it was supposed to be released on both iOS and Android 6 hours ago, it is only available on the Play Store right now.

It’s unclear what the bottleneck is on iOS, as the government sent me a beta version of the iOS app last week ahead of today’s final release. I’ve reached out to the French government but haven’t heard back.

France isn’t relying on Apple and Google’s contact-tracing API. A group of researchers and private companies have worked on a separate, centralized architecture. The server assigns you a permanent ID (a pseudonym) and sends a list of ephemeral IDs derived from that permanent ID to your phone.

Like most contact-tracing apps, StopCovid relies on Bluetooth Low Energy to build a comprehensive list of other app users you’ve interacted with for more than a few minutes. If you’re using the app, it collects the ephemeral IDs of other app users around you.

The implementation diverges from here. If you’re using StopCovid and you’re diagnosed COVID-19-positive, your doctor, hospital or testing facility will hand you a QR code or a string of letters and numbers. You can choose to open the app and enter that code to share the list of ephemeral IDs of people you’ve interacted with over the past two weeks.

The server back end then flags all those ephemeral IDs as belonging to people who have potentially been exposed to the coronavirus. On the server again, each user is associated with a risk score. If it goes above a certain threshold, the user receives a notification. The app then recommends that you get tested and follow official instructions.
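The centralized design can be sketched roughly as follows. Note this is a simplified illustration, not the real ROBERT specification: the actual protocol defines its own key schedule and epoch encoding, and the HMAC construction, key names and truncation length here are assumptions for illustration only.

```python
import hmac
import hashlib

def derive_ephemeral_ids(server_key: bytes, permanent_id: bytes, epochs: int) -> list[bytes]:
    """Derive one short-lived pseudonym per time epoch from a permanent ID.

    Hypothetical sketch: HMAC-SHA-256 truncated to 8 bytes stands in for
    ROBERT's real derivation. Only the server, which holds the key, can map
    an observed ephemeral ID back to a permanent ID (and thus to a user).
    """
    return [
        hmac.new(server_key, permanent_id + epoch.to_bytes(4, "big"),
                 hashlib.sha256).digest()[:8]
        for epoch in range(epochs)
    ]

# The server pushes a batch like this to the phone, which then broadcasts
# one ephemeral ID at a time over Bluetooth Low Energy.
ids = derive_ephemeral_ids(b"server-secret", b"user-42", epochs=96)
```

The key property this models is that ephemeral IDs look random to other phones, yet remain linkable server-side, which is exactly why the architecture is described as centralized.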

From the very beginning, France’s contact-tracing protocol ROBERT has raised some privacy concerns. Hundreds of French academics signed a letter asking for safeguards. In particular, they said it was unclear whether a contact-tracing app would even be useful when it comes to fighting the coronavirus outbreak.

France’s data protection watchdog CNIL stated that the app was complying with the regulatory framework in France. But CNIL also added a long list of recommendations to protect the privacy of the users of the app.

Last week, the parliament voted in favor of the release of the contact-tracing app. But it was a complicated debate, with a lot of misconceptions and some interesting objections.

Now, let’s see how people react to the release of the app and whether they actually download it. The government said there won’t be any negative consequences if you’re not using StopCovid, nor any privileges if you’re using it.

There’s one thing for sure — not releasing the iOS app at the same time as the Android app introduces some confusion. The most downloaded app on the iOS App Store in France is currently STOP COVID19 CAT, a health information app from the government of Catalonia.

Decrypted: iOS 13.5 jailbreak, FBI slams Apple, VCs talk cybersecurity

It was a busy week in security.

Newly released documents shared exclusively with TechCrunch show that U.S. immigration authorities used a controversial cell phone snooping technology known as a “stingray” hundreds of times in the past three years. Also, if you haven’t updated your Android phone in a while, now would be a good time to check. That’s because a brand-new security vulnerability was found — and patched. The bug, if exploited, could let a malicious app impersonate a legitimate one in order to steal passwords.

Here’s more from the week.


THE BIG PICTURE

Every iPhone now has a working jailbreak

Researchers find exposed data on millions of users of quiz app, TVSmiles

TVSmiles, a Berlin-based mobile native advertising app whose users earn digital currency in exchange for engaging with branded content such as quizzes, apps and videos, has suffered a data breach.

Security firm UpGuard disclosed in a report today that it found an unsecured Amazon S3 bucket online last month — containing personal and device data tied to millions of the app’s users. According to TVSmiles’ marketing material, the quiz app has up to three million users.

The storage bucket UpGuard found exposed to the Internet contained a 306 GB PostgreSQL database backup with “unencrypted personally identifiable information matched to individual users, profiling insights about users’ interests based on quiz responses, associations to smart devices, and accounts and login details for TVSmiles’ business relationships”, according to its report.

UpGuard writes that 261 database tables were present in the exposed repository — including a “core_users” table consisting of more than 6.6 million rows. Of the entries that had an email address tied to them, UpGuard says it found 901,000 unique emails.

The exposed backup file appears to date back to August 2017.

Screengrab: UpGuard

After UpGuard reported the breach to TVSmiles in an email sent May 13, the Berlin-based company responded on May 15, writing that the repository “has been immediately secured” (UpGuard says it independently confirmed this).

TVSmiles co-founder, Gaylord Zach, added in this email to UpGuard that it would “further investigate the contents of the exposed data to take further actions”.

Reached for comment on the incident today, Zach confirmed UpGuard’s report and also confirmed that the exposed repository had been accidentally left unsecured for years.

He said internal analysis of available logs has found no unauthorized access besides UpGuard’s access of the data, adding that TVSmiles has yet to notify users of the incident — but is planning a communication to users within its mobile app and a blog post on its website.  

“Our analysis has revealed that the data consists of a database backup that was created in 2017 and mistakenly stored in a cloud storage repository provided within the cloud hosting environment,” Zach told us. “Allegedly this backup was created as a safety measure ahead of performed maintenance work. Further investigation revealed three independent but severe policy breaches: 1.) The backup was stored in plain format where all backups should have been encrypted; 2.) The affected repository was provisioned as a code repository and never intended to store data; 3.) The affected repository was intended for private use within the organization and never intended to be publicly available.

“The very unfortunate combination of these three factors resulted in the long period that this data remained stored without discovery.”

TVSmiles reported the breach to the German data protection authorities — filing its report on May 17, per Zach.

Europe’s General Data Protection Regulation (GDPR) requires data controllers to report breaches of personal data that pose a risk to people’s rights and freedoms to a supervisory authority within 72 hours of discovery.

“We are very thankful to UpGuard unveiling this exposure before it has led to material data breaches and harm to our users. We are very much embarrassed by this unnecessary exposure of user data. It is a strong reminder to every developer to do regular security checks and house keeping in order to avoid these incidents,” he added.

Screengrab: UpGuard

Clicks for data

TVSmiles’ business participates in a data-fuelled digital ad ecosystem that operates by linking user IDs to devices, digital activity and tracked interests, building individual profiles for the purpose of targeting screen users with advertising.

Hence the interactive content that the TVSmiles quiz app encourages users to engage with — rewarding activity with a proprietary digital currency (called ‘Smiles’) that can be exchanged for discount vouchers on products in its shop or directly for cash — functions both as direct marketing material to drive deeper engagement with branded content, and as a data harvesting tool in its own right, enabling the business to gather deeper insights on users’ interests, which can in turn be monetized via user profiling and ad targeting.

Such insights enable TVSmiles to plug into a wider digital advertising ecosystem in which mobile users are profiled and tracked at scale across multiple apps, services and devices in order that targeted ads can follow eyeballs as they go — all powered by the background profiling of people’s digital activity and inferred interests.

According to Crunchbase the quiz app has raised a total of $12.6M in funding since being founded around seven years ago when it was pitching itself as a second screen app for TV viewers. It went on to launch its own ad platform, called Kwizzard, which packages ads into “a native, gamified ad format” — with the aim of luring app users to engage with quiz-based ad campaigns.

Given the nature of TVSmiles’ business — and a wider problematic lack of transparency around how the adtech industry functions — this data breach is a fascinating and unnerving glimpse of the breadth and depth of data harvesting that routinely goes on in the background of ad-supported digital services.

Even an app with a relatively small user base (single digit millions) can be sitting atop a massive repository of tracking data.

The online ad industry also continues to face major questions over the legal basis it claims for processing large volumes of personal data under the European Union’s data protection regime.

 

A master database plus access tokens

In terms of the types of data exposed in this breach, UpGuard said the 306 GB PostgreSQL database backup contained “centralized information” about users of the app, alongside what it describes as “large amounts of internal system and partnership information necessary for any business participating in the modern online advertising ecosystem”.

TVSmiles’ LinkedIn page reports the app having in excess of 2M users in Germany and the U.K. Per Google’s Play Store, the TVSmiles app has had in excess of 1M downloads to date, and while Apple’s App Store does not break out a ballpark figure for downloads, a video on the Play Store app page makes reference to 3M users — so it’s possible the 6.6M figure relates to total downloads over the app’s lifetime since launch back in September 2013.

Zach told us that the discrepancy between the user figures is a result of TVSmiles being a much smaller business now than it was in mid-2017 — when it was spending a lot on marketing and had more active users, including as a result of operating in the UK market (which it left in 2018).

“In general we are now a much smaller organisation compared to 2017,” he added.

Other tables in the repository were found by UpGuard to contain considerably more entries — such as a “tracking_token” table, with more than 235 million entry rows.

“A table in this database called “user_core” contained six million rows, with many users having their “country” field marked for other territories throughout Europe, making this data consistent with being a master database for TVSmiles at the time,” it writes in the report. “The user_core table contained fields for email address, fb_user, fb_access_token, first and last name, gender, date of birth, address, phone number, password, and others.”

UpGuard told us that the user_core table had password hashes filled in for 626,000 of the rows. It said these passwords appear to have been hashed with SHA-256 — a fast, general-purpose algorithm that, especially when used without salting, is known to be vulnerable to brute forcing — and therefore offer little protection against malicious attackers.

It added that it was able to locate three out of three random hashed passwords it checked in publicly available indexes which are easily searchable online — meaning these password hashes had already been reversed (which in turn suggests they may have been used elsewhere before; or else are commonly guessable).

It also found Facebook user IDs (“fb_user”) and access tokens (“fb_access_token”) stored in the repository for some of the listed TVSmiles users — presumably for those who used a Facebook account to login to the app.

“Not all data points were present for all users – for example, the Facebook specific fields would likely only be present for users who had connected with their Facebook identity, and users who had authenticated via Facebook would not inherently need to create a password for the app due to the functionality of that authentication method,” UpGuard suggests.

The exposed repository contained more than 312,000 access tokens tied to Facebook IDs, according to its analysis.

Screengrab: UpGuard

It also found a large collection of personal data stored in a table related to end user devices — which it said were linked to tracking tokens, ad IDs and user rewards.

“A table called “device_core” contains 7.5 million rows tied to physical devices,” UpGuard writes. “These devices have unique device ids, access tokens, and mappings to the user ids of their owners. Those device ids, in turn, are then relevant to a “tracking_token” table consisting of 235 million entry rows.

“The rows in the tracking_token table include fields such as campaign_id, placement_id, user_payout, and challenge_id, building up a picture of the TVSmiles activity – like which ads and activities users responded to – on each device – which can then be linked back to the user.”
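The table structure UpGuard describes can be sketched as a miniature schema. The table names (`device_core`, `tracking_token`) follow the report, but the columns shown and the sample data are assumptions for illustration; the point is simply that a join over the shared device ID ties each ad interaction back to a user.

```python
import sqlite3

# Hypothetical miniature of the schema described in UpGuard's report.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE device_core (device_id TEXT PRIMARY KEY, user_id INTEGER);
CREATE TABLE tracking_token (device_id TEXT, campaign_id TEXT, user_payout REAL);
INSERT INTO device_core VALUES ('dev-1', 42);
INSERT INTO tracking_token VALUES ('dev-1', 'campaign-snacks', 0.05);
""")

# Each tracked ad interaction links back to a user via the device mapping.
rows = con.execute("""
SELECT d.user_id, t.campaign_id, t.user_payout
FROM tracking_token t JOIN device_core d ON t.device_id = d.device_id
""").fetchall()
# rows -> [(42, 'campaign-snacks', 0.05)]
```

At the scale in the report (235 million tracking rows against 7.5 million devices), this one join is what turns anonymous-looking activity logs into per-person behavioral histories.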

Other personal data found in the repository includes precise location data — “users’ latitude and longitude” — with a related admin view configured for a database named “full device info”, which UpGuard says highlights “the “tracker_name,” a token value, and the nearest weather station”.

It also found a collection of “insights” related to TVSmiles users — listed in the form of “intents, interests, and other psychographic qualities”.

“These subjects ranged from consumer goods (e.g. books, video games, furniture, and clothing) to the user’s education and more esoteric interests,” the report notes.

Redacted screengrab: UpGuard

As well as storing (unencrypted) personal data attached to millions of users of the TVSmiles app, and hashed passwords for more than half a million of these entries, the exposed repository was found to contain information related to a number of the company’s own business clients — also tied to access tokens.

“It is reasonable to interpret these names as business clients, who have paid to publish ads on TVSmiles or have access to insights gleaned from end-user app interaction,” UpGuard writes of the “business_clients” table.

“These business users’ hashed passwords, phone numbers, email addresses, names, and other data points were also present. Conversely, TVSmiles’ own credentials for interacting with vendors necessary to provide the TVSmiles platform, like ad exchanges, fraud detection platforms, and email communication scheduling, were also present.”

UpGuard suggests that a hacker who stumbled across the unsecured bucket may have been able to use the tokens to gain access to a number of additional services where they could potentially obtain further user data — such as by exporting contact data; accessing or sending mail via a third party service; or reading historic service information and performance metrics.

“If this database had been located by malicious entities, prior to UpGuard discovering it and sending appropriate notification, the possibility exists that such credentials could have allowed an attacker to impersonate TVSmiles and collect additional information about arbitrary targets from those other platforms and service providers,” it adds.

Zach confirmed the data contained “legacy access tokens” — but said they stem from a deprecated login methodology that had since been replaced with an OAuth-based sign-on service.

“The data originates from August 2017. Any contained access tokens would therefore have expired by now,” he told us, saying TVSmiles has not yet notified any business partners on account of seeing “no major risk based on the nature and age of the exposed tokens”.

“We would however not hesitate to contact and take action if we have underestimated or overseen risks,” he added. 

 

Questions of consent

After reviewing UpGuard’s report, Wolfie Christl, an EU-based privacy researcher who focuses on adtech and data-driven surveillance, called for EU data protection agencies to launch an immediate investigation.

“This is a massive data breach. But it is about more than that. It provides a glimpse into an opaque industry consisting of thousands of companies that secretly harvest extensive personal information on millions of people for business purposes,” he told TechCrunch.

“According to the leaked database, the company has digital profiles on 6M people and 7.5M devices. It seems that they linked names, email addresses and phone numbers to device identifiers, social media accounts, and to all kinds of behavioral data.

“Data protection authorities in Germany — and perhaps in other European countries — must immediately start an investigation. In addition to the data breach, they must examine whether the company, and its affiliates and partners, processed this extensive amount of personal data in a lawful way. Did they have a legal basis to process it?”

Screengrab: UpGuard

“The wider issue is that, two years after the GDPR came into full force, it has still not been enforced in major areas,” Christl added. “We still see large-scale misuse of personal information all over the digital world, from platforms to digital marketing to mobile apps. EU authorities should have acted years ago, they must do so now.”

In its privacy policy, TVSmiles states that it only uses app users’ personal data “to the extent that this is legally permissible or you have given your consent… for the purposes of advertising, market research or the needs-based design of our offer” [text translated using Google Translate].

“We are obtaining user consent to the use of data and have created a dedicated section within our app to obtain consent like location data, advertising identifier, sharing of personal data with advertising partners,” Zach told us on this, adding that consent information is provided to “various advertising and tracking partners” — assuming users agree to be tracked via responses to its consent flows (shown below).

Screenshots: TVSmiles

References to a number of third party adtech companies can be found in TVSmiles’ repository, per UpGuard, suggesting it was making use of their services for data structuring, enrichment and monetization purposes — including Adex, a data management platform and marketplace whose website touts the “easy selling and purchase of data”; Adjust, a mobile measurement and fraud-prevention firm geared towards mobile marketing; mobile app monetization company, Fyber; and product user behavior analytics platform, Mixpanel.

Another interesting component in this story is how TVSmiles’ business straddles the TV and online advertising realms. Its business began, more than half a decade ago, with a firm focus on the notion of being a ‘second screen’ app for TV viewers — including by using audio technology to automatically identify TV ads in order to serve related in-app content. This means it’s forged links with traditional media giants.

Back in 2014, for instance, it inked a marketing partnership for its app in Austria with European media giant ProSiebenSat.1’s marketing subsidiary, SevenOne Media. At the time, ProSiebenSat.1 PULS 4’s MD, Michael Stix, billed the move as a “strategic step” to integrate brand communication on the second screen, lauding the tie-up as a way to offer advertising customers “additional novel touchpoints” for their target group.

But the rise of smart TVs and digital sign-ins has paved the way for deeper linking of Internet activity and TV viewing. Especially as traditional mass media giants have been looking for ways to diversify their media businesses, with more competition for viewers’ eyeballs than ever before.

Behind all these screens a complex mass of adtech pipes is exchanging data linked to individual users — trading IDs and insights to join dots and serve targeted ads. So connected “touchpoints” are now very much integral, not secondary.

UpGuard found labels (see below screengrab) in the exposed TVSmiles repository that refer to “seven_pass” — aka 7Pass, a single sign-on solution for all of ProSiebenSat.1’s digital services.

An FAQ on TVSmiles’ website confirms TVSmiles users are able to use the 7Pass service to log in to the app.

Screengrab: UpGuard

In its privacy policy, TVSmiles states that “pseudonymized” data of users of the 7Pass login is sent to ProSiebenSat.1 Digital & its affiliates and to other affiliated companies of ProSiebenSat.1 Media SE — including quiz response data.

“In addition, survey data collected and provided by you via survey cards in the app are also transmitted pseudonymised to ProSiebenSat.1 Digital & Adjacent GmbH and other affiliated companies of ProSiebenSat.1 Media SE in order to enable you to use special quiz cards in the app, bring in more smilies and be able to offer special promotions in cooperation with ProSiebenSat.1,” it adds.

Of course, much like weak password hashes, “pseudonymised” personal data can be trivially easy to re-identify — such as by unifying tracking IDs.
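The re-identification risk is easy to demonstrate in miniature. In this hypothetical sketch (the dataset names, IDs and email addresses are all invented), two “pseudonymous” datasets become fully identifying the moment they share a stable tracking ID:

```python
# "Pseudonymised" interest profiles, keyed only by an advertising ID.
pseudonymous_quiz_data = [
    {"ad_id": "idfa-7f3a", "interest": "video games"},
    {"ad_id": "idfa-9c21", "interest": "furniture"},
]

# A second dataset, e.g. held by an ad partner, maps the same IDs to identities.
identity_data = {
    "idfa-7f3a": {"email": "alice@example.com"},
    "idfa-9c21": {"email": "bob@example.com"},
}

# One join on the shared ID and the pseudonymity is gone.
reidentified = [
    {**row, **identity_data[row["ad_id"]]}
    for row in pseudonymous_quiz_data
    if row["ad_id"] in identity_data
]
# Each "pseudonymous" interest profile now carries a real email address.
```

This is why regulators treat pseudonymised data as personal data under the GDPR: as long as a linking key exists somewhere, the protection is only as strong as the separation between datasets.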

Asked about the 7Pass service, Zach said TVSmiles had replaced its legacy user management with ProSiebenSat.1’s digital sign-on service — claiming the main objective was “to leverage a trustworthy well maintained sign-on service by a larger partner and remove the need of self managed credentials and access tokens”. 

“Given the sensitivity of user data and access credentials it seems like a wise choice in light of this case,” he added. 

In a more recent business development, TVSmiles sold its development division and adtech to a company called PubNative in December 2019. PubNative is a mobile SSP and programmatic ad exchange owned by a German holding company called MGI Media that’s made a large number of media and adtech acquisitions in recent years (as well as being majority owner of free-to-play gamesmaker, Gamigo).

At the time of this “acqui-hire” TVSmiles and PubNative suggested an ongoing business partnership. “As we recently branched into Connected TV, PubNative’s advanced tech stack supports our continued growth and allows us to expand our business internationally. Advancements on demand-side business development will be introduced gradually across the entire product line,” said Zach in a press statement at the end of last year.

Asked about the nature of the business relationship between TVSmiles and PubNative, Zach confirmed it sold “certain people and technology” to PubNative but retained its mobile apps and user base, adding: “No user data has been shared with PubNative and they have no involvement in this case.”

However he confirmed TVSmiles uses advertising technology provided by PubNative. 

“This technology (SDK) is built into the TVSmiles app. Data sharing is limited to those authorized by user consent for advertising use,” he added.

A static analysis by Exodus suggests the TVSmiles app contains more than 40 trackers — including PubNative’s. This plus the fact the TVSmiles repository was found by UpGuard to be storing precise user location data is interesting in light of a separate report, published in January, by Norway’s Consumer Council (NCC) — which delved into how the adtech industry non-transparently exploits app users’ data.

The NCC report identified PubNative as one of the entities receiving GPS data from a number of apps it tested (NB: it did not test the TVSmiles app). The Council found a majority of apps that it did test transmitted data to entities it characterized as “unexpected third parties” — meaning users were not being clearly informed about who was getting their information and what was being done with it, in its view.

Other SDKs contained in the TVSmiles app, per Exodus and a list of software suppliers in TVSmiles’ privacy policy, include Facebook Ads, Analytics and Places; Google Ads, Analytics, DoubleClick & more; and Twitter MoPub. Also present: A longer list of smaller adtech and mobile marketing/monetization players, from AdBuddiz to Vungle.

“Looking through the Exodus report most of these trackers stem from advertising technology that is being used within TVSmiles app,” Zach also told us.

Facebook finally makes it way easier to trash your old posts

Facebook is introducing a new tool to help users batch-delete old posts and shrink their digital footprint on the aging social network.

Called “Manage Activity,” the new feature lets users prune their posts in bulk, making it less of a headache to delete content aging badly or anything else unnecessary that’s built up from years of using the platform. The feature will be available to some users on the Facebook app today and will roll out more broadly in the next few weeks.

“Whether you’re entering the job market after college or moving on from an old relationship, we know things change in people’s lives, and we want to make it easy for you to curate your presence on Facebook to more accurately reflect who you are today,” Facebook wrote in the tool’s announcement.

Anyone who’d like to batch-delete or archive old content will be able to search their entire trove of Facebook posts using filters for dates, people tagged, and content type (photo, video, text updates, et cetera). In a preview of the tool, it looked like a vastly more useful way to control aging content without having to manually scroll through years of old posts.

Users skittish about getting rid of content permanently can opt to archive their old posts rather than deleting them outright. Archived posts stick around in a kind of purgatory, remaining viewable to their creator, like the Stories archive feature on Instagram. Deleted posts will hang out for 30 days before being wiped and users can either restore them or manually delete them from there.

In the past, users were stuck either batch-deleting old posts manually or installing third-party browser add-ons, which are notoriously rife with malware.

While it’s actually fairly shocking Facebook didn’t already have this tool, the platform’s privacy controls have a history of being somewhat fussy and difficult to navigate. Facebook has made improvements—not all voluntary—to its user privacy controls in recent years, particularly as more users wake up to the concerns of sharing vast amounts of personal data online. Old content poses similar problems and can also be a goldmine for anyone looking to compromise an account, whether for harassment purposes, identity theft or whatever else.

As social networks age, old posts and tagged content build up, like a kind of digital plaque. For privacy purposes, scraping that stuff off regularly and cleaning things up is a good idea. And while you can’t truly pry any information you’ve given up online away from companies like Facebook, getting more control over personal data that’s already out there is probably the next best thing.

Twitter, Reddit challenge US rules forcing visa applicants to disclose their social media handles

Twitter and Reddit have filed an amicus brief in support of a lawsuit challenging a U.S. government rule change compelling visa applicants to disclose their social media handles.

The lawsuit, brought by the Knight First Amendment Institute at Columbia University, the Brennan Center for Justice, and law firm Simpson Thacher & Bartlett, seeks to undo both the State Department’s requirement that visa applicants must disclose their social media handles prior to obtaining a U.S. visa, as well as related rules over the retention and dissemination of those records.

Last year, the State Department began asking visa applicants for their current and former social media usernames, a move that affects millions of non-citizens applying to travel to the United States each year. The rule change was part of the Trump administration’s effort to expand its “enhanced” screening protocols. At the time, it was reported that the information would be used if the State Department determines that “such information is required to confirm identity or conduct more rigorous national security vetting.”

In a filing supporting the lawsuit, both Twitter and Reddit said the social media policies “unquestionably chill a vast quantity of speech” and that the rules violate the First Amendment rights “to speak anonymously and associate privately.”

Twitter and Reddit, which collectively have more than 560 million users, said their users — many of whom don’t use their real names on their platforms — are forced to “surrender their anonymity in order to travel to the United States,” which “violates the First Amendment rights to speak anonymously and associate privately.”

“Twitter and Reddit vigorously guard the right to speak anonymously for people on their platforms, and anonymous individuals correspondingly communicate on these platforms with the expectation that their identities will not be revealed without a specific showing of compelling need,” the brief said.

“That expectation allows the free exchange of ideas to flourish on these platforms.”

Jessica Herrera-Flanigan, Twitter’s policy chief for the Americas, said the social media rule “infringes both of those rights and we are proud to lend our support on these critical legal issues.” Reddit’s general counsel Ben Lee called the rule an “intrusive overreach” by the government.

It’s not known how many, if any, visa applicants have been denied a visa because of their social media content. But since the social media rule went into effect, cases have emerged of approved visa holders denied entry to the U.S. over other people’s social media postings. Ismail Ajjawi, a then 17-year-old freshman at Harvard University, was turned away at Boston Logan International Airport after U.S. border officials searched his phone and took issue with social media postings by Ajjawi’s friends — and not his own.

Abed Ayoub, legal and policy director at the American-Arab Anti-Discrimination Committee, told TechCrunch at the time that Ajjawi’s case was not isolated. A week later, TechCrunch learned of another man who was denied entry to the U.S. because of a WhatsApp message sent by a distant acquaintance.

A spokesperson for the State Department did not immediately comment on news of the amicus brief.

German federal court squashes consent opt-outs for non-functional cookies

Yet another stake through the dark-patterned heart of consentless online tracking. Following a key cookie consent ruling by Europe’s top court last year, Germany’s Federal Court (BGH) has today handed down its own ‘Planet49’ decision — overturning an earlier appeal ruling in which judges in a district court had allowed a pre-checked box to stand for consent.

That clearly now won’t wash even in Germany, where there had been confusion over the interpretation of a local law which had suggested an opt-out for non-functional cookies might be legally valid in some scenarios. Instead, the federal court ruling aligns with last October’s CJEU decision (which we reported on in detail here).

The ‘Planet49’ legal challenge was originally lodged by vzbv, a German consumer rights organization, which had complained about a lottery website, Planet49, that — back in 2013 — had required users to consent to the storage of cookies in order to play a promotional game. (Whereas EU law generally requires consent to be freely given and purpose limited if it’s to be legally valid.)

In a statement today following the BGH’s decision, board member Klaus Müller said: “This is a good judgment for consumers and their privacy. Internet users are again given more decision-making authority and transparency. So far, it has been common practice in this country for website providers to track, analyze, and market the interests and behaviors of users until they actively contradict them. This is no longer possible. If a website operator wants to screen his users, he must at least ask for permission beforehand. This clarification was long overdue.”

There is one looming wrinkle, however, in the shape of Europe’s ePrivacy reform — a piece of legislation which deals with online tracking. In recent years, European institutions have failed to reach agreement on an update to this — with negotiations ongoing and lobbyists seeking ways to dilute Europe’s strict consent standard.

Should any future reform of ePrivacy weaken the rules on tracking consent, it could undo hard-won progress in securing European citizens’ rights under the General Data Protection Regulation (GDPR), which deals with personal data more broadly.

vzbv’s statement warns about this possibility, with the consumer rights group urging the EU to “ensure that the currently negotiated European ePrivacy Regulation does not weaken these strict regulations”.

“We reject the Croatian Presidency’s proposal to allow user tracking in the future on the legal basis of a balance of interests,” added Müller. “The end devices of the consumers allow a deep insight into complex emotional, political and social aspects of a person. Protecting this privacy is a great asset. We therefore require tight and clear rules for user tracking for advertising purposes. This may only be permitted with consent or under strict conditions defined in the law.”

In the meantime, there will be legal pressure on data controllers in Germany to clean up any fuzzy cookie notices and ensure they comply with consent requirements.

“As the implementation of these new requirements are easily visible (and technically identifiable) on the website, incompliance bears a high risk of cease-and-desist and supervisory procedures,” warns law firm TaylorWessing in a blog post commenting on the BGH decision.

Separately today, another long-running legal challenge brought by vzbv against the social networking giant Facebook — for allegedly failing to gain proper consent to process user data related to games hosted on its app platform, back in 2012 — is set to get even longer after the BGH sought a referral on a legal question to Europe’s top court.

The German federal court is seeking clarification on whether consumer protection organizations can bring a lawsuit before the country’s civil courts seeking redress for data protection breaches. “This question is controversial in the case law of the instance courts and the legal literature,” the court notes in a press release.

We’ve reached out to Facebook for comment on the CJEU referral.

French contact-tracing app StopCovid passes first vote

Following a debate in the National Assembly, the lower house of the French parliament, deputies have voted in favor of the release of contact-tracing app StopCovid and the decree related to the app.

While a vote in the parliament wasn’t a mandatory step, the government wants to rally as many people as possible around the contact-tracing app. At first, French President Emmanuel Macron said there would be a debate, but not necessarily followed by a vote. The government then reversed its stance and said deputies would vote.

“If members of the parliament vote against the release of the application, we won’t release StopCovid,” France’s digital minister Cédric O said in a radio interview earlier today.

It’s still unclear whether a contact-tracing app is effective. But one thing is for sure — the app would be useless if only a small fraction of people living in France chose to download it. Hence today’s debate.

There were two important points discussed in the National Assembly. First, is StopCovid a surveillance app, and is there a risk when it comes to privacy? Second, is StopCovid useful and effective?

Privacy

“Tomorrow, I want to be free to download or not to download the application. I want to be free to protect myself,” France’s minister of health Olivier Véran said. That doesn’t really address the privacy risks of a contact-tracing app, but it’s true that the government changed the decree at the last minute to say that there won’t be any negative consequences if you’re not using StopCovid, nor any privileges if you are.

“StopCovid isn’t a project for peacetime. It’s a project for a historical crisis — it wouldn’t exist without it and it’s not going to exist after it,” Cédric O said.

StopCovid relies on Bluetooth, like most contact-tracing apps. But a group of research institutes and private companies have worked on a home-grown solution that doesn’t rely on Apple and Google’s contact-tracing API. It is based on a centralized contact-tracing protocol that computes matches on a central server. It isn’t anonymous but pseudonymous.

It has been a controversial topic over the past few weeks. Cédric O defended France’s centralized solution by saying it guarantees the digital sovereignty of the country.

“22 countries have chosen to develop a contact-tracing app that relies on the interface developed by Apple and Google. 22 countries, but not France and the U.K. And it’s not a coincidence because those two countries also have nuclear weapons,” Cédric O said.

Paula Forteza, a deputy from the same party who recently created a separate parliamentary group due to recent disagreements, rightly pointed out that it isn’t as straightforward as that.

“No, the debates on the centralized and decentralized design of the protocol don’t overlap with debates on digital sovereignty and reliance on tech giants,” Forteza said.

Effectiveness

Once again, the government and opponents didn’t have the same take on the potential effectiveness of an optional contact-tracing app.

“The app is systematically and linearly effective as soon as a few percentage points activate it,” Cédric O said.

“It’s ineffective because 50 to 60% of French people have to install the application. It’s ineffective because 25% of French people don’t have a smartphone, unless you have decided to offer them one,” leader of far-left party La France Insoumise Jean-Luc Mélenchon said.

On this point, nobody really had a clear answer. Contact tracing using Bluetooth is still uncharted territory. Moreover, on iOS, you’ll have to keep the app open in the foreground for Bluetooth scanning to work.
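
The adoption dispute can be made concrete with a back-of-the-envelope model (my own simplification for illustration, not any analysis cited in the debate): a contact can only be traced if both people involved run the app, so under random mixing the traceable share of contacts grows with the square of the adoption rate.

```python
# Toy model (an illustration, not the government's or Mélenchon's analysis):
# a contact is only traceable if BOTH parties run the app, so under random
# mixing the traceable share of contacts is roughly the adoption rate squared.

def traceable_contact_fraction(adoption_rate: float) -> float:
    """Approximate fraction of contacts where both sides have the app."""
    return adoption_rate ** 2

for adoption in (0.10, 0.30, 0.60):
    share = traceable_contact_fraction(adoption)
    print(f"{adoption:.0%} adoption -> ~{share:.0%} of contacts traceable")
```

On this crude model, coverage grows quadratically rather than linearly with adoption, which is part of why estimates of the install base needed for the app to matter vary so widely.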

“The only ones who are able to decide whether this application is effective, useful or not useful are epidemiologists,” Cédric O concluded after the debate.

Next steps

Later today, the upper house of the French parliament, the Senate, will also discuss the pros and cons of StopCovid and vote. After that, StopCovid will be released on the App Store and Play Store early next week.

D-ID, the Israeli company that digitally de-identifies faces in videos and still images, raises $13 million

If only Facebook had been using the kind of technology that TechCrunch Startup Battlefield alumnus D-ID was pitching, it could have avoided exposing all of our faces to privacy-destroying software services like Clearview AI.

At least, that’s the pitch that D-ID’s founder and chief executive, Gil Perry, makes when he’s talking about the significance of his startup’s technology.

D-ID, which stands for de-identification, is a pretty straightforward service that masks some highly involved and very advanced technology: it blurs digital images so they can’t be cross-referenced to determine someone’s identity.

It’s a technology whose moment has come as governments and private companies around the world ramp up their use of surveillance technologies as the world adjusts to a new reality in the wake of the COVID-19 epidemic.

“Governments around the world and organizations have used this new reality basically as an excuse for mass surveillance,” says Perry. His own government has used a track and trace system that monitors interactions between Israeli citizens using cell phone location data to determine whether anyone had been in contact with a person who had COVID-19.

While awareness of the issue may be increasing among consumers and regulators alike, the damage has, in many cases, already been done. Social media companies have already had their troves of images scraped by companies like Clearview AI, HighQ, and NTechLabs, and much of our personal information is already circulating online.

D-ID is undeterred. Founded by Perry and two other members of the Israeli army’s cybersecurity and offensive cyber unit, 8200, Sella Blondheim and Eliran Kuta, D-ID thinks the need for anonymizing technologies will continue to expand — thanks to new privacy legislation in Europe and certain states in the U.S. 

Meanwhile the company is also exploring other applications for its technology. The services that D-ID uses to mask and blur faces can also be used to create deepfakes of images and video.

The market for these types of digital manipulations is still in its earliest days, according to Perry. Still, the company’s pitch managed to persuade lead investor AXA Ventures, and backers including Pitango, Y Combinator, AI Alliance, Hyundai, Omron, Maverick (U.S.), and Mindset to participate in the company’s $13 million round.

D-ID already sees demand coming from automakers who want to use the technology to anonymize their driver-monitoring systems — enabling them to record drivers’ reactions, but not any personally identifying information. Security technologies that monitor for threats are another potential customer, according to the company: closed-circuit television can monitor a physical space without collecting the identifying information of people entering and exiting buildings.

The technical wizardry that D-ID has mastered is impressive — and a necessary defensive tool to ensure privacy in the modern world, according to its founders. Consumers are demanding it, according to D-ID’s chief executive. “Privacy awareness and the importance of privacy enhancing technologies have increased,” Perry said.

AI can battle coronavirus, but privacy shouldn’t be a casualty

South Korea has successfully slowed down the spread of coronavirus. Alongside widespread quarantine measures and testing, the country’s innovative use of technology is credited as a critical factor in combating the spread of the disease. As Europe and the United States struggle to cope, many governments are turning to AI tools to both advance medical research and manage public health, now and in the long term: technical solutions for contact tracing, symptom tracking, immunity certificates and other applications are underway. These technologies are certainly promising, but they must be implemented in ways that do not undermine human rights.

Seoul has extensively and intrusively collected the personal data of its citizens, analyzing millions of data points from credit card transactions, CCTV footage and cellphone geolocation data. South Korea’s Ministry of the Interior and Safety even developed a smartphone app that shares with officials the GPS data of self-quarantined individuals. If those in quarantine cross the “electronic fence” of their assigned area, the app alerts officials. The implications for privacy and security of such widespread surveillance are deeply concerning.

South Korea is not alone in leveraging personal data in containment efforts. China, Iran, Israel, Italy, Poland, Singapore, Taiwan and others have used location data from cellphones for various applications tasked with combating coronavirus. Supercharged with artificial intelligence and machine learning, this data can not only be used for social control and monitoring, but also to predict travel patterns, pinpoint future outbreak hot spots, model chains of infection or project immunity.

Implications for human rights and data privacy reach far beyond the containment of COVID-19. Introduced as short-term fixes to the immediate threat of coronavirus, widespread data-sharing, monitoring and surveillance could become fixtures of modern public life. Under the guise of shielding citizens from future public health emergencies, temporary applications may become normalized. At the very least, government decisions to hastily introduce immature technologies — and in some cases to oblige citizens by law to use them — set a dangerous precedent.

Nevertheless, such data- and AI-driven applications could be useful advances in the fight against coronavirus, and personal data — anonymized and unidentifiable — offers valuable insights for governments navigating this unprecedented public health emergency. The White House is reportedly in active talks with a wide array of tech companies about how they can use anonymized, aggregate-level location data from cellphones. The U.K. government is in discussion with cellphone operators about using location and usage data. And even Germany, which usually champions data rights, introduced a controversial app that uses data donations from fitness trackers and smartwatches to determine the geographical spread of the virus.

Big tech, too, is rushing to the rescue. Google makes available “Community Mobility Reports” for more than 140 countries, which offer insights into mobility trends in places such as retail and recreation, workplaces and residential areas. Apple and Google are collaborating on contact tracing and have just launched a developer toolkit, including an API. Facebook is rolling out “local alerts” features that allow municipal governments, emergency response organizations and law enforcement agencies to communicate with citizens based on their location.

It is evident that data revealing the health and geolocation of citizens is as personal as it gets. The potential benefits weigh heavy, but so do concerns about the abuse and misuse of these applications. There are safeguards for data protection — perhaps the most advanced being the European GDPR — but during times of national emergency, governments hold the right to grant exceptions. And frameworks for the lawful and ethical use of AI in democracies are much less developed — if they exist at all.

There are many applications that could help governments enforce social controls, predict outbreaks and trace infections — some of them more promising than others. Contact-tracing apps are at the center of government interest in Europe and the U.S. at the moment. Decentralized Privacy-Preserving Proximity Tracing, or “DP3T,” approaches that use Bluetooth may offer a secure and decentralized protocol for consenting users to share data with public health authorities. Already, the European Commission has released guidance for contact-tracing applications that favors such decentralized approaches. Whether centralized or not, EU member states will evidently need to comply with the GDPR when implementing such tools.
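
The decentralized idea behind DP3T-style designs can be sketched in a few lines: infected users voluntarily publish their ephemeral identifiers, and each phone downloads that list and does the matching locally, so the server never learns who met whom. This toy version (my own simplification) uses random identifiers in place of DP3T’s rotating, seed-derived ephemeral IDs, and ignores contact duration and signal strength:

```python
import os

# Toy sketch of decentralized (DP3T-style) exposure matching. Real DP3T
# derives rotating ephemeral IDs from daily seeds and factors in contact
# duration and Bluetooth signal attenuation; IDs here are random placeholders.

class Phone:
    def __init__(self):
        self.own_ids = [os.urandom(16) for _ in range(14)]  # e.g. one ID per day
        self.observed = set()  # ephemeral IDs heard from nearby phones

    def record_contact(self, other_id: bytes):
        self.observed.add(other_id)

    def check_exposure(self, published_infected_ids) -> bool:
        # Matching happens ON THE DEVICE: the server only republishes IDs
        # voluntarily uploaded by diagnosed users, never the contact graph.
        return any(eph in self.observed for eph in published_infected_ids)

alice, bob = Phone(), Phone()
alice.record_contact(bob.own_ids[0])      # Alice's phone hears Bob's beacon
published = bob.own_ids                   # Bob tests positive and uploads his IDs
print(alice.check_exposure(published))    # True: Alice learns of exposure locally
print(bob.check_exposure(alice.own_ids))  # False: Bob never recorded Alice
```

The design choice the Commission favors is visible here: the server acts as a dumb bulletin board, and the social graph of contacts never leaves the handsets.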

Austria, Italy and Switzerland have announced they plan to use the decentralized frameworks developed by Apple and Google. Germany, after ongoing public debate and stern warnings from privacy experts, recently ditched plans for a centralized app, opting for a decentralized solution instead. But France and Norway are using centralized systems where sensitive personal data is stored on a central server.

The U.K. government, too, has been experimenting with an app that uses a centralized approach and is currently being tested on the Isle of Wight: the app, developed by NHSX, the digital unit of the National Health Service, will allow health officials to reach out directly and personally to potentially infected people. To this point, it remains unclear how the collected data will be used and whether it will be combined with other sources of data. Under current provisions, the U.K. is still bound to comply with the GDPR until the end of the Brexit transition period in December 2020.

Aside from government-led efforts, worryingly, a plethora of apps and websites for contact tracing and other forms of outbreak control are mushrooming, asking citizens to volunteer their personal data yet offering little — if any — privacy and security features, let alone functionality. Certainly well-intentioned, these tools often come from hobby developers and amateur hackathons.

Sorting the wheat from the chaff is not an easy task, and our governments are most likely not equipped to accomplish it. At this point, artificial intelligence, and especially its use in governance, is still new to public agencies. Put on the spot, regulators struggle to evaluate the legitimacy and wider-reaching implications of different AI systems for democratic values. In the absence of sufficient procurement guidelines and legal frameworks, governments are ill-prepared to make these decisions now, when they are most needed.

And worse yet, once AI-driven applications are let out of the box, it will be difficult to roll them back, not unlike increased safety measures at airports after 9/11. Governments may argue that they require data access to avoid a second wave of coronavirus or another looming pandemic.

Regulators are unlikely to generate special new terms for AI during the coronavirus crisis, so at the very least we need to proceed with a pact: all AI applications developed to tackle the public health crisis must end up as public applications, with the data, algorithms, inputs and outputs held for the public good by public health researchers and public science agencies. Invoking the coronavirus pandemic as a pretext for breaking privacy norms and a reason to fleece the public of valuable data can’t be allowed.

We all want sophisticated AI to assist in delivering a medical cure and managing the public health emergency. Arguably, the short-term risks AI poses to personal privacy and human rights pale in light of the loss of human lives. But when coronavirus is under control, we’ll want our personal privacy back and our rights reinstated. If governments and firms in democracies are going to tackle this problem and keep institutions strong, we all need to see how the apps work, the public health data needs to end up with medical researchers and we must be able to audit and disable tracking systems. AI must, over the long term, support good governance.

The coronavirus pandemic is a public health emergency of most pressing concern that will deeply impact governance for decades to come. And it also sheds a powerful spotlight on gaping shortcomings in our current systems. AI is arriving now with some powerful applications in stock, but our governments are ill-prepared to ensure its democratic use. Faced with the exceptional impacts of a global pandemic, quick and dirty policymaking is insufficient to ensure good governance, but may be the best solution we have.

France’s data protection watchdog reviews contact-tracing app StopCovid

France’s data protection watchdog CNIL has released its second review of StopCovid, the contact-tracing app backed by the French government. The CNIL says there’s no major issue with the technical implementation and legal framework around StopCovid, with some caveats.

France isn’t relying on Apple and Google’s contact-tracing API. Instead, a group of research institutes and private companies have worked on a separate solution.

At the heart of StopCovid, there’s a centralized contact-tracing protocol called ROBERT. It relies on a central server to assign a permanent ID and generate ephemeral IDs attached to this permanent ID. Your phone collects the ephemeral IDs of other app users around you. When somebody is diagnosed COVID-19-positive, the server receives all the ephemeral IDs associated with people with whom they’ve interacted. If one or several of your ephemeral IDs get flagged, you receive a notification.
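
The flow described above can be sketched in a few lines of Python. This is a much-simplified illustration: ROBERT’s real key schedule, message formats and risk scoring are far more involved, and every name, parameter and threshold here is mine, not the protocol’s.

```python
import hashlib
import hmac
import os
import secrets

# Much-simplified sketch of a centralized, ROBERT-style flow.
# All identifiers and the risk threshold are illustrative placeholders.

class Server:
    def __init__(self):
        self.key = os.urandom(32)   # server-side secret key
        self.risk = {}              # permanent pseudonym -> risk score
        self.eph_to_perm = {}       # ephemeral ID -> permanent pseudonym

    def register(self):
        """Assign a permanent pseudonym and derive ephemeral IDs from it."""
        perm = secrets.token_hex(8)
        self.risk[perm] = 0
        ephs = [
            hmac.new(self.key, f"{perm}:{day}".encode(), hashlib.sha256).hexdigest()[:16]
            for day in range(14)  # e.g. one ephemeral ID per day
        ]
        for eph in ephs:
            self.eph_to_perm[eph] = perm
        return perm, ephs

    def report_contacts(self, observed_ephs):
        """A diagnosed user uploads the ephemeral IDs their phone observed.
        The server, not the phone, maps them back to pseudonyms and raises
        each contact's risk score."""
        for eph in observed_ephs:
            perm = self.eph_to_perm.get(eph)
            if perm is not None:
                self.risk[perm] += 1

    def is_notified(self, perm, threshold=1):
        return self.risk[perm] >= threshold

server = Server()
alice_perm, alice_ephs = server.register()
bob_perm, bob_ephs = server.register()

# Bob's phone heard one of Alice's ephemeral IDs over Bluetooth; Bob is
# later diagnosed and chooses to upload the IDs his phone collected.
server.report_contacts([alice_ephs[0]])
print(server.is_notified(alice_perm))  # True: Alice is warned of exposure
print(server.is_notified(bob_perm))    # False
```

The trade-off is visible in the sketch: the server holds the mapping from ephemeral IDs back to permanent pseudonyms, which is exactly why trust in the operator matters in a centralized design.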

ROBERT has been a controversial topic, as it isn’t an anonymous system — it relies on pseudonymization. That means you have to trust that your government isn’t collecting too much information and doesn’t plan to attach names to permanent IDs.

But the CNIL says that ROBERT focuses on exposed users instead of users who are diagnosed COVID-19-positive — it is “a choice that protects the privacy of those persons,” the agency says. The CNIL also says that ROBERT tries to minimize data collection as much as possible.

A couple of weeks ago, Inria released a small portion of the source code that is going to power StopCovid. The research institute originally said that some parts wouldn’t be open-sourced. The CNIL contested this decision; Inria has now reversed its stance and the government promises that everything will be released, eventually.

The StopCovid development team is also launching a bug bounty program in partnership with YesWeHack following recommendations from France’s national cybersecurity agency (ANSSI).

On the legal front, the draft decree excludes data aggregation in general. For instance, the government won’t be able to generate a heat map based on StopCovid data — StopCovid doesn’t collect your location anyway.

The CNIL says that the government promises that there won’t be any negative consequence if you’re not using StopCovid, nor any privilege if you’re using it. The government also promises that you’ll be able to delete pseudonymized data from the server. All of this is still ‘to be confirmed’ with the final decree.

Finally, the CNIL recommends some changes when it comes to informing users about data collection and data retention — it’s hard to understand what happens with your data right now. There should be some specific wording for underage people and their parents as well.

In other news, the government has sent me some screenshots showing what the app looks like on iOS.

France’s digital minister, Cédric O, will be in front of parliament members tomorrow to debate the pros and cons of StopCovid. It’s going to be interesting to see whether the French government has managed to convince parliament members that a contact-tracing app is useful to fight the spread of COVID-19.