Facebook will pay $650 million to settle class action suit centered on Illinois privacy law

Facebook was ordered to pay $650 million Friday for running afoul of an Illinois law designed to protect the state’s residents from invasive privacy practices.

That law, the Biometric Information Privacy Act (BIPA), is a powerful state measure that’s tripped up tech companies in recent years. The suit against Facebook was first filed in 2015, alleging that Facebook’s practice of tagging people in photos using facial recognition without their consent violated state law.

Roughly 1.6 million Illinois residents will receive at least $345 each under the final settlement, approved in California federal court. The final figure is $100 million higher than the $550 million Facebook proposed in 2020, which a judge deemed inadequate. Facebook disabled its automatic facial recognition tagging feature in 2019, making it opt-in instead and addressing some of the privacy criticisms echoed by the Illinois class action suit.

A cluster of lawsuits accused Microsoft, Google and Amazon of breaking the same law last year after Illinois residents’ faces were used to train their facial recognition systems without explicit consent.

The Illinois privacy law has tangled up some of tech’s giants, but BIPA has even more potential to impact smaller companies with questionable privacy practices. The controversial facial recognition software company Clearview AI now faces its own BIPA-based class action lawsuit in the state after the company failed to dodge the suit by pushing it out of state courts.

A $650 million settlement would be enough to crush any normal company, though Facebook can brush it off much like it did with the FTC’s record-setting $5 billion penalty in 2019. But the Illinois law isn’t without teeth. For Clearview, it was enough to make the company pull out of business in the state altogether.

The law can’t punish a behemoth like Facebook in the same way, but it is one piece in a regulatory puzzle that poses an increasing threat to the way tech’s data brokers have done business for years. With regulators and lawmakers at the federal and state level proposing aggressive measures to rein in tech, the landmark Illinois law provides a compelling framework that other states could copy and paste. And if big tech thinks navigating federal oversight will be a nightmare, a patchwork of aggressive state laws governing how tech companies do business on a state-by-state basis is an alternate regulatory future that could prove even less palatable.

 

Jamaica’s JamCOVID pulled offline after third security lapse exposed travelers’ data

Jamaica’s JamCOVID app and website were taken offline late on Thursday following a third security lapse, which exposed quarantine orders on more than half a million travelers to the island.

JamCOVID was set up last year to help the government process travelers arriving on the island. Quarantine orders are issued by the Jamaican Ministry of Health, and instruct travelers to stay in their accommodation for two weeks to prevent the spread of COVID-19.

These orders contain the traveler’s name and the address of where they are ordered to stay.

But a security researcher told TechCrunch that the quarantine orders were publicly accessible from the JamCOVID website and were not protected with a password. Although the files were accessible from anyone’s web browser, the researcher asked not to be named for fear of legal repercussions from the Jamaican government.

More than 500,000 quarantine orders were exposed, some dating back to March 2020.

TechCrunch shared these details with the Jamaica Gleaner, which was first to report on the security lapse after the news outlet verified the data spillage with local cybersecurity experts.

Amber Group, which was contracted to build and maintain the JamCOVID coronavirus dashboard and immigration service, pulled the service offline a short time after TechCrunch and the Jamaica Gleaner contacted the company on Thursday evening. JamCOVID’s website was replaced with a holding page that said the site was “under maintenance.” At the time of publication, the site had returned.

Amber Group’s chief executive Dushyant Savadia did not return a request for comment.

Matthew Samuda, a minister in Jamaica’s Ministry of National Security, also did not respond to a request for comment or our questions — including whether the Jamaican government plans to continue its contract or relationship with Amber Group.

This is the third security lapse involving JamCOVID in the past two weeks.

Last week, Amber Group secured an exposed cloud storage server hosted on Amazon Web Services that was left open and public, despite containing more than 70,000 negative COVID-19 lab results and over 425,000 immigration documents authorizing travel to the island. Savadia said in response that there were “no further vulnerabilities” with the app. Days later, the company fixed a second security lapse after leaving a file containing private keys and passwords for the service on the JamCOVID server.

The Jamaican government has repeatedly defended Amber Group, which says it provided the JamCOVID technology to the government “for free.” Amber Group’s Savadia has previously been quoted as saying that the company built the service in “three days.”

In a statement on Thursday, Jamaica’s prime minister Andrew Holness said JamCOVID “continues to be a critical element” of the country’s immigration process and that the government was “accelerating” a migration of the JamCOVID database — though specifics were not given.

Mozilla beefs up anti-cross-site tracking in Firefox, as Chrome still lags on privacy

Mozilla has further beefed up anti-tracking measures in its Firefox browser. In a blog post yesterday it announced that Firefox 86 has an extra layer of anti-cookie tracking built into the enhanced tracking protection (ETP) strict mode — which it’s calling ‘Total Cookie Protection’.

This “major privacy advance”, as it bills it, prevents cross-site tracking by siloing third party cookies per website.

Mozilla likens this to having a separate cookie jar for each site — so, for example, Facebook cookies aren’t stored in the same tub as cookies for that sneaker website where you bought your latest kicks, and so on.

The new layer of privacy wrapping “provides comprehensive partitioning of cookies and other site data between websites in Firefox”, explains Mozilla.
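To make the idea concrete, here is a minimal conceptual sketch in Python of double-keyed cookie storage — it is not Firefox’s implementation, and the site names are made up — showing why a tracker embedded on two different sites ends up with two separate, unlinkable cookie jars.

```python
# Conceptual sketch of per-site ("partitioned") cookie storage, not Firefox code.
from collections import defaultdict


class PartitionedCookieStore:
    def __init__(self):
        # Cookies are keyed by (top-level site, cookie origin), so the same
        # third party gets a different jar under every site it is embedded in.
        self.jars = defaultdict(dict)

    def set_cookie(self, top_level_site, cookie_origin, name, value):
        self.jars[(top_level_site, cookie_origin)][name] = value

    def get_cookies(self, top_level_site, cookie_origin):
        # A tracker embedded on site A cannot read the jar it filled on site B.
        return dict(self.jars[(top_level_site, cookie_origin)])


store = PartitionedCookieStore()
store.set_cookie("news.example", "tracker.example", "uid", "abc123")
# The same tracker embedded on a different site sees an empty, separate jar:
print(store.get_cookies("shoes.example", "tracker.example"))  # {}
```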

Along with another anti-tracking feature it announced last month — targeting so-called ‘supercookies’, aka sneaky trackers that store user IDs in “increasingly obscure” parts of the browser (like Flash storage, ETags, and HSTS flags), i.e. where it’s difficult for users to delete or block them — the features combine to “prevent websites from being able to ‘tag’ your browser, thereby eliminating the most pervasive cross-site tracking technique”, per Mozilla.
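As an illustration of why those storage locations are attractive to trackers, here is a minimal sketch — using only Python’s standard library, with made-up values, and not any real tracker’s code — of ETag-based tracking: the server mints a unique ETag for each new visitor, and the browser echoes it back in the If-None-Match header on later visits, acting as an identifier that survives cookie clearing.

```python
# Minimal demonstration of an ETag used as a "supercookie" (illustrative only).
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer


class ETagTracker(BaseHTTPRequestHandler):
    def do_GET(self):
        # A returning browser revalidates its cached copy by echoing the ETag.
        visitor_id = self.headers.get("If-None-Match")
        if visitor_id is None:
            visitor_id = uuid.uuid4().hex  # mint a new identifier for a new visitor
        self.send_response(200)
        self.send_header("ETag", visitor_id)  # the browser caches this value
        self.send_header("Cache-Control", "private, max-age=31536000")
        self.end_headers()
        self.wfile.write(f"tracked as {visitor_id}".encode())


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), ETagTracker).serve_forever()
```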

There’s a “limited exception” for cross-site cookies when they are needed for non-tracking purposes — Mozilla gives the example of popular third-party login providers.

“Only when Total Cookie Protection detects that you intend to use a provider, will it give that provider permission to use a cross-site cookie specifically for the site you’re currently visiting. Such momentary exceptions allow for strong privacy protection without affecting your browsing experience,” it adds.

Tracker blocking has long been an arms race against the adtech industry’s determination to keep surveilling web users — and thumbing its nose at the notion of consent to spy on people’s online business — with adtech pouring resources into devising fiendish new techniques to keep watching what Internet users are doing. But this battle has stepped up in recent years as browser makers have been taking a tougher pro-privacy/anti-tracker stance.

Mozilla, for example, started making tracker blocking the default back in 2018 — going on to make ETP the default in Firefox in 2019, blocking cookies from companies identified as trackers by its partner, Disconnect.

Apple’s Safari browser, meanwhile, added an ‘Intelligent Tracking Prevention’ (ITP) feature in 2017 — applying machine learning to identify trackers and segregate cross-site tracking data in order to protect users’ browsing history from third-party eyes.

Google has also put the cat among the adtech pigeons by announcing a planned phasing out of support for third party cookies in Chrome — which it said would be coming within two years back in January 2020 — although it’s still working on this ‘privacy sandbox’ project, as it calls it (now under the watchful eye of UK antitrust regulators).

Google has been making privacy-strengthening noises since 2019, as the rest of the browser market has responded to concern about online privacy.

In April last year it rolled back a change that had made it harder for sites to access third-party cookies, citing concerns that sites would be unable to perform essential functions during the pandemic — though the rollout resumed in July. But it’s fair to say that the adtech giant remains the laggard when it comes to executing on its claimed plan to beef up privacy.

Given Chrome’s marketshare, that leaves most of the world’s web users exposed to more tracking than they would be if they used a different, more privacy-proactive browser.

And as Mozilla’s latest anti-cookie tracking feature shows, the race to outwit adtech’s allergy to privacy (and consent) isn’t the sort that has a finish line. So being slow to offer privacy protection arguably isn’t very different from not offering much privacy protection at all.

To wit: One worrying development — on the non-cookie-based tracking front — is detailed in this new paper by a group of privacy researchers who conducted an analysis of CNAME tracking (a DNS-based anti-tracking evasion technique) and found that use of the sneaky method had grown by around a fifth in just under two years.

The technique has been raising mainstream concerns about ‘unblockable’ web tracking since around 2019 — when developers spotted the technique being used in the wild by a French newspaper website. Since then use has been rising, per the research.

In a nutshell, the CNAME tracking technique cloaks the tracker by injecting it into the first-party context of the visited website — embedding its content through a subdomain of the site that is actually an alias for the tracker’s domain.

“This scheme works thanks to a DNS delegation. Most often it is a DNS CNAME record,” writes one of the paper authors, privacy and security researcher Lukasz Olejnik, in a blog post about the research. “The tracker technically is hosted in a subdomain of the visited website.

“Employment of such a scheme has certain consequences. It kind of fools the fundamental web security and privacy protections — to think that the user is wilfully browsing the tracker website. When a web browser sees such a scheme, some security and privacy protections are relaxed.”

Don’t be fooled by the use of the word ‘relaxed’ — as Olejnik goes on to emphasize that the CNAME tracking technique has “substantial implications for web security and privacy”. Such as browsers being tricked into treating a tracker as legitimate first-party content of the visited website (which, in turn, unlocks “many benefits”, such as access to first-party cookies — which can then be sent on to remote, third-party servers controlled by the trackers so the surveilling entity can have its wicked way with the personal data).

So the risk is that a chunk of the clever engineering work being done to protect privacy by blocking trackers can be sidelined by getting under the anti-trackers’ radar.

The researchers found one (infamous) tracker provider, Criteo, reverting its tracking scripts to the custom CNAME cloak scheme when it detected the Safari web browser in use — as, presumably, a way to circumvent Apple’s ITP.

There are further concerns over CNAME tracking too: The paper details how, as a consequence of current web architecture, the scheme “unlocks a way for broad cookie leaks”, as Olejnik puts it — explaining how the upshot of the technique being deployed can be “many unrelated, legitimate cookies” being sent to the tracker subdomain.

Olejnik documented this concern in a study back in 2014 — but he writes that the problem has now exploded: “As the tip of the iceberg, we found broad data leaks on 7,377 websites. Some data leaks happen on almost every website using the CNAME scheme (analytics cookies commonly leak). This suggests that this scheme is actively dangerous. It is harmful to web security and privacy.”

The researchers found cookies leaking on 95% of the studied websites.

They also report finding leaks of cookies set by other third-party scripts, suggesting leaked cookies would in those instances allow the CNAME tracker to track users across websites.

In some instances they found that leaked information contained private or sensitive information — such as a user’s full name, location, email address and (in an additional security concern) authentication cookie.

The paper goes on to raise a number of web security concerns, such as when CNAME trackers are served over HTTP not HTTPS, which they found happened often, and could facilitate man-in-the-middle attacks.

Defending against the CNAME cloaking scheme will require some major browsers to adopt new tricks, per the researchers — who note that while Firefox (global marketshare circa 4%) does offer a defence against the technique, Chrome does not.

Engineers on the WebKit engine that underpins Apple’s Safari browser have also been working on making enhancements to ITP aimed at counteracting CNAME tracking.

In a blog post last November, ITP engineer John Wilander wrote that, as a defence against the sneaky technique, “ITP now detects third-party CNAME cloaking requests and caps the expiry of any cookies set in the HTTP response to 7 days. This cap is aligned with ITP’s expiry cap on all cookies created through JavaScript.”

The Brave browser also announced changes last fall aimed at combating CNAME cloaking.

“In version 1.25.0, uBlock Origin gained the ability to detect and block CNAME-cloaked requests using Mozilla’s terrific browser.dns API. However, this solution only works in Firefox, as Chromium does not provide the browser.dns API. To some extent, these requests can be blocked using custom DNS servers. However, no browsers have shipped with CNAME-based adblocking protection capabilities available and on by default,” it wrote.

“In Brave 1.17, Brave Shields will now recursively check the canonical name records for any network request that isn’t otherwise blocked using an embedded DNS resolver. If the request has a CNAME record, and the same request under the canonical domain would be blocked, then the request is blocked. This solution is on by default, bringing enhanced privacy protections to millions of users.”

But the browser with the largest marketshare, Chrome, has work to do, per the researchers, who write:

Because Chrome does not support a DNS resolution API for extensions, the [uBlock version 1.25 under Firefox] defense could not be applied to this browser. Consequently, we find that four of the CNAME-based trackers (Oracle Eloqua, Eulerian, Criteo, and Keyade) are blocked by uBlock Origin on Firefox but not on the Chrome version.
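For a sense of what that extension-level defence looks like in practice, here is a minimal sketch — assuming the dnspython package, with hypothetical domain names and a one-entry blocklist — that approximates the CNAME-uncloaking check uBlock Origin performs on Firefox via the browser.dns API: resolve a request’s hostname, and block it if the canonical name points at a known tracker domain.

```python
# Minimal CNAME-uncloaking check (illustrative; requires the dnspython package).
import dns.resolver

TRACKER_DOMAINS = {"tracker-example.net"}  # hypothetical blocklist entry


def is_cname_cloaked_tracker(hostname: str) -> bool:
    """Return True if hostname is an alias (CNAME) for a known tracker domain."""
    try:
        answers = dns.resolver.resolve(hostname, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False  # no CNAME record, so not cloaked by this particular trick
    for record in answers:
        canonical = str(record.target).rstrip(".")
        if any(canonical == d or canonical.endswith("." + d) for d in TRACKER_DOMAINS):
            return True
    return False


# e.g. a first-party-looking subdomain that secretly aliases a tracker:
#   metrics.news-site.example  ->  CNAME  ->  collect.tracker-example.net
print(is_cname_cloaked_tracker("metrics.news-site.example"))
```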

A race to reverse engineer Clubhouse raises security concerns

As live audio chat app Clubhouse ascends in popularity around the world, concerns about its data practices also grow.

The app is currently only available on iOS, so some developers set out in a race to create Android, Windows and Mac versions of the service. While these endeavors may not be ill-intentioned, the fact that it takes programmers little effort to reverse engineer and fork Clubhouse — that is, when developers create new software based on its original code — is sounding an alarm about the app’s security.

The common goal of these unofficial apps, as of now, is to broadcast Clubhouse audio feeds in real-time to users who cannot access the app otherwise because they don’t have an iPhone. One such effort is called Open Clubhouse, which describes itself as a “third-party web application based on flask to play Clubhouse audio.” The developer confirmed to TechCrunch that Clubhouse blocked its service five days after its launch without providing an explanation.

“[Clubhouse] asks a lot of information from users, analyzes those data and even abuses them. Meanwhile, it restricts how people use the app and fails to give them the rights they deserve. To me, this constitutes monopoly or exploitation,” said Open Clubhouse’s developer nicknamed AiX.

Clubhouse could not be immediately reached for comment on this story.

AiX wrote the program “for fun” and wanted it to broaden Clubhouse’s access to more people. Another similar effort came from a developer named Zhuowei Zhang, who created Hipster House to let those without an invite browse rooms and users, and those with an invite to join rooms as a listener though they can’t speak — Clubhouse is invite-only at the moment. Zhang stopped developing the project, however, after noticing a better alternative.

These third-party services, despite their innocuous intentions, can be exploited for surveillance purposes, as Jane Manchun Wong, a researcher known for uncovering upcoming features in popular apps through reverse engineering, noted in a tweet.

“Even if the intent of that webpage is to bring Clubhouse to non-iOS users, without a safeguard, it could be abused,” said Wong, referring to a website rerouting audio data from Clubhouse’s public rooms.

Clubhouse lets people create public chat rooms, which are available to any user who joins before a room reaches its maximum capacity, and private rooms, which are only accessible to room hosts and users authorized by the hosts.

But not all users are aware of the open nature of Clubhouse’s public rooms. During its brief window of availability in China, the app was flooded with mainland Chinese users debating politically sensitive issues from Taiwan to Xinjiang, which are heavily censored in Chinese cyberspace. Some vigilant Chinese users speculated about the possibility of being questioned by the police for delivering sensitive remarks. While no such event has been publicly reported, the Chinese authorities have banned the app since February 8.

Clubhouse’s design is by nature at odds with the state of communication it aims to achieve. The app encourages people to use their real identity — registration requires a phone number and an existing user’s invite. Inside a room, everyone can see who else is there. This setup instills trust and comfort in users when they speak as if speaking at a networking event.

But the third-party apps that are able to extract Clubhouse’s audio feeds show that the app isn’t even semi-public: It’s public.

More troublesome is that users can “ghost listen,” as developer Zerforschung found. That is, users can hear a room’s conversation without having their profile displayed to the room participants. Eavesdropping is made possible by establishing communication directly with Agora, a service provider employed by Clubhouse. As multiple security researchers found, Clubhouse relies on Agora’s real-time audio communication technology. Sources have also confirmed the partnership to TechCrunch.

Some technical explanation is needed here. When a user joins a chatroom on Clubhouse, the app makes a request to Agora’s infrastructure, as the Stanford Internet Observatory discovered. To make the request, the user’s phone contacts Clubhouse’s application programming interface (API), which then creates “tokens” — credentials that authorize an action — to establish a communication pathway for the app’s audio traffic.

Now, the problem is there can be a disconnect between Clubhouse and Agora, allowing the Clubhouse end, which manages user profiles, to be inactive while the Agora end, which transmits audio data, remains active, as technology analyst Daniel Sinclair noted. That’s why users can continue to eavesdrop on a room without having their profile displayed to the room’s participants.
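A heavily simplified sketch of that two-leg flow is below. The endpoint path, field names and helper function are illustrative assumptions, not Clubhouse’s or Agora’s documented APIs; the point is simply that the profile-listing leg (Clubhouse) and the audio leg (Agora) are separate, which is what makes the “ghost listening” gap possible.

```python
# Illustrative sketch of a two-leg join flow; endpoint and field names are hypothetical.
import requests


def join_room(clubhouse_api: str, room_id: str, user_token: str):
    # Leg 1: ask the Clubhouse backend for an Agora token for this room.
    resp = requests.post(
        f"{clubhouse_api}/join_channel",          # hypothetical endpoint
        headers={"Authorization": user_token},
        json={"channel": room_id},
    )
    agora_token = resp.json()["token"]            # hypothetical response field

    # Leg 2: use that token to subscribe to the Agora audio channel directly.
    # If this leg stays active while the Clubhouse leg is dropped (so no profile
    # appears in the room's member list), audio keeps flowing — the "ghost
    # listening" scenario researchers described.
    return connect_to_agora(room_id, agora_token)


def connect_to_agora(channel: str, token: str):
    """Placeholder for the real-time audio client connection (SDK call omitted)."""
    raise NotImplementedError
```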

The Agora partnership has sparked other forms of worries. The company, which operates mainly from the U.S. and China, noted in its IPO prospectus that its data may be subject to China’s cybersecurity law, which requires network operators in China to assist police investigations. That possibility, as the Stanford Internet Observatory points out, is contingent on whether Clubhouse stores its data in China.

While the Clubhouse API is banned in China, the Agora API appears unblocked. Tests by TechCrunch find that users currently need a VPN to join a room, an action managed by Clubhouse, but can listen to the room conversation, which is facilitated by Agora, with the VPN off. What’s the safest way for China-based users to access the app, given the official attitude is that it should not exist? It’s also worth noting that the app was not available on the Chinese App Store even before its ban, and Chinese users had downloaded the app through workarounds.

The Clubhouse team may have been overwhelmed by data questions over the past few days, but these early observations from researchers and hackers may push it to fix its vulnerabilities sooner, paving the way for it to grow beyond its several million loyal users and $1 billion valuation.

Following backlash, WhatsApp to roll out in-app banner to better explain its privacy update

Last month, Facebook-owned WhatsApp announced it would delay enforcement of its new privacy terms, following a backlash from confused users which later led to a legal challenge in India and various regulatory investigations. WhatsApp users had misinterpreted the privacy updates as an indication that the app would begin sharing more data — including their private messages — with Facebook. Today, the company is sharing the next steps it’s taking to try to rectify the issue and clarify that’s not the case.

The mishandling of the privacy update on WhatsApp’s part led to widespread confusion and misinformation. In reality, WhatsApp had been sharing some information about its users with Facebook since 2016, following its acquisition by Facebook.

But the backlash is a solid indication of how much user trust Facebook has squandered. People immediately suspected the worst, and millions fled to alternative messaging apps, like Signal and Telegram, as a result.

Following the outcry, WhatsApp attempted to explain that the privacy update was actually focused on optional business features on the app, which allow businesses to see the content of messages between them and the end user, and give those businesses permission to use that information for their own marketing purposes, including advertising on Facebook. WhatsApp also said it labels conversations with businesses that are using hosting services from Facebook to manage their chats with customers, so users are aware.


In the weeks since the debacle, WhatsApp says it spent time gathering user feedback and listening to concerns from people in various countries. The company found that users wanted assurance that WhatsApp was not reading their private messages or listening to their conversations, and that their communications were end-to-end encrypted. Users also said they wanted to know that WhatsApp wasn’t keeping logs of who they were messaging or sharing contact lists with Facebook.

These latter concerns seem valid, given that Facebook recently made its messaging systems across Facebook, Messenger and Instagram interoperable. One has to wonder when similar integrations will make their way to WhatsApp.

Today, WhatsApp says it will roll out new communications to users about the privacy update, following the Status update it offered back in January aimed at clarifying points of confusion.


In a few weeks, WhatsApp will begin to roll out a small, in-app banner that will ask users to re-review the privacy policies — a change the company said users have shown to prefer over the pop-up, full-screen alert it displayed before.

When users click on “to review,” they’ll be shown a deeper summary of the changes, including added details about how WhatsApp works with Facebook. The changes stress that WhatsApp’s update doesn’t impact the privacy of users’ conversations, and reiterate the information about the optional business features.

Eventually, WhatsApp will begin to remind users to review and accept its updates to keep using WhatsApp. According to its prior announcement, it won’t be enforcing the new policy until May 15.


Users will still need to be aware that their communications with businesses are not as secure as their private messages. This impacts a growing number of WhatsApp users, 175 million of whom now communicate with businesses on the app, WhatsApp said in October.

In today’s blog post about the changes, WhatsApp also took a big swipe at rival messaging apps that used the confusion over the privacy update to draw in WhatsApp’s fleeing users by touting their own app’s privacy.

“We’ve seen some of our competitors try to get away with claiming they can’t see people’s messages – if an app doesn’t offer end-to-end encryption by default that means they can read your messages,” WhatsApp’s blog post read.

This seems to be a comment directed specifically towards Telegram, which often touts its “heavily encrypted” messaging app as a more private alternative. But Telegram doesn’t offer end-to-end encryption by default, as apps like WhatsApp and Signal do. It uses “transport layer” encryption that protects the connection from the user to the server, a Wired article citing cybersecurity professionals explained in January. When users want an end-to-end encrypted experience for their one-on-one chats, they can enable the “secret chats” feature instead. (And this feature isn’t even available for group chats.)

In addition, WhatsApp fought back against the characterization that it’s somehow less safe because it has some limited data on users.

“Other apps say they’re better because they know even less information than WhatsApp. We believe people are looking for apps to be both reliable and safe, even if that requires WhatsApp having some limited data,” the post read. “We strive to be thoughtful on the decisions we make and we’ll continue to develop new ways of meeting these responsibilities with less information, not more,” it noted.

Jamaica’s immigration website exposed thousands of travelers’ data

A security lapse by a Jamaican government contractor has exposed immigration records and COVID-19 test results for hundreds of thousands of travelers who visited the island over the past year.

The Jamaican government contracted Amber Group to build the JamCOVID19 website and app, which the government uses to publish daily coronavirus figures and allows residents to self-report their symptoms. The contractor also built the website to pre-approve travel applications to visit the island during the pandemic, a process that requires travelers to upload a negative COVID-19 test result before they board their flight if they come from high-risk countries, including the United States.

But a cloud storage server storing those uploaded documents was left unprotected and without a password, and was publicly spilling out files onto the open web.

Many of the victims whose information was found on the exposed server are Americans.

The data is now secure after TechCrunch contacted Amber Group’s chief executive Dushyant Savadia, who did not comment when reached prior to publication.

The storage server, hosted on Amazon Web Services, was set to public. It’s not known for how long the data was unprotected, but the server contained more than 70,000 negative COVID-19 lab results, over 425,000 immigration documents authorizing travel to the island — which included the traveler’s name, date of birth and passport numbers — and over 250,000 quarantine orders dating back to June 2020, when Jamaica reopened its borders to visitors after the pandemic’s first wave. The server also contained more than 440,000 images of travelers’ signatures.
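For readers wondering what “set to public” means in practice, exposures like this typically come down to bucket-level access settings. Below is a minimal sketch — assuming the boto3 AWS SDK and a hypothetical bucket name, and not a description of Amber Group’s actual setup — of the “block public access” configuration that prevents objects in an S3 bucket from being readable by anyone on the open web.

```python
# Illustrative hardening of an S3 bucket's public-access settings (boto3 assumed).
import boto3

s3 = boto3.client("s3")

BUCKET = "example-travel-uploads"  # hypothetical bucket name

# Turn on all four "Block Public Access" switches for the bucket, so neither
# ACLs nor bucket policies can make its objects publicly readable.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Sanity check: print the effective configuration.
print(s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"])
```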

Two U.S. travelers whose lab results were among the exposed data told TechCrunch that they uploaded their COVID-19 results through the Visit Jamaica website before their travel. Once lab results are processed, travelers receive a travel authorization that they must present before boarding their flight.

Both of these documents, as well as quarantine orders that require visitors to shelter in place and several passports, were on the exposed storage server.

Travelers who are staying outside Jamaica’s so-called “resilient corridor,” a zone that covers a large portion of the island’s population, are told to install the app built by Amber Group, which tracks their location and is monitored by the Ministry of Health to ensure visitors stay within the corridor. The app also requires that travelers record short “check-in” videos with a daily code sent by the government, along with their name and any symptoms.

The server exposed more than 1.1 million of those daily updating check-in videos.

An airport information flyer given to travelers arriving in Jamaica. Travelers may be required to install the JamCOVID19 app to allow the government to monitor their location and to require video check-ins. (Image: Jamaican government)

The server also contained dozens of daily timestamped spreadsheets named “PICA,” likely referring to Jamaica’s Passport, Immigration and Citizenship Agency, though these files were restricted by access permissions. The permissions on the storage server itself, however, were set so that anyone had full control of the files inside, such as allowing them to be downloaded or deleted altogether. (TechCrunch did neither, as doing so would be unlawful.)

Stephen Davidson, a spokesperson for the Jamaican Ministry of Health, did not comment when reached, or say if the government planned to inform travelers of the security lapse.

Savadia founded Amber Group in 2015 and soon launched its vehicle-tracking system, Amber Connect.

According to one report, Amber’s Savadia said the company developed JamCOVID19 “within three days” and made it available to the Jamaican government in large part for free. The contractor is billing other countries, including Grenada and the British Virgin Islands, for similar implementations, and is said to be looking for other government customers outside the Caribbean.

Savadia would not say what measures his company put in place to protect the data of paying governments.

Jamaica has recorded at least 19,300 coronavirus cases on the island to date, and more than 370 deaths.



Facebook fined again in Italy for misleading users over what it does with their data

Facebook has been fined again by Italy’s competition authority — this time the penalty is €7 million (~$8.4M) — for failing to comply with an earlier order related to how it informs users about the commercial uses it makes of their data.

The AGCM began investigating certain commercial practices by Facebook back in 2018, including the information it provided to users at sign up and the lack of an opt out for advertising. Later the same year it went on to fine Facebook €10M for two violations of the country’s Consumer Code.

But the watchdog’s action did not stop there. It went on to launch further proceedings against Facebook in 2020 — saying the tech giant was still failing to inform users “with clarity and immediacy” about how it monetizes their data.

“Facebook Ireland Ltd. and Facebook Inc. have not complied with the warning to remove the incorrect practice on the use of user data and have not published the corrective declaration requested by the Authority,” the AGCM writes in a press release today (issued in Italian; which we’ve translated with Google Translate).

The authority said Facebook is still misleading users who register on its platform by not informing them — “immediately and adequately” — at the point of sign up that it will collect and monetize their personal data. Instead it found Facebook emphasizes its service’s ‘gratuitousness’.

“The information provided by Facebook was generic and incomplete and did not provide an adequate distinction between the use of data necessary for the personalization of the service (with the aim of facilitating socialization with other users) and the use of data to carry out targeted advertising campaigns,” the AGCM goes on.

It had already fined Facebook €5M over the same issue of failing to provide adequate information about its use of people’s data. But the watchdog also ordered the company to correct the practice and publish an “amendment” notice on its website and apps for users in Italy — neither of which Facebook has done, per the regulator.

Facebook, meanwhile, has been fighting the AGCM’s order via the Italian legal system — making a petition to the Council of State.

A hearing of Facebook’s appeal against the non-compliance proceedings took place in September last year and a decision is still pending.

Reached for comment on AGCM’s action, a Facebook spokesperson told us: “We note the Italian Competition Authority’s announcement today, but we await the Council of State decision on our appeal against the Authority’s initial findings.”

“Facebook takes privacy extremely seriously and we have already made changes, including to our Terms of Service, to further clarify how Facebook uses data to provide its service and to provide tailored advertising,” it added.

Last year, at the time the AGCM instigated further proceedings against it, Facebook told us it had amended the language of its terms of service back in 2019 — to “further clarify” how it makes money, as it put it.

However while the tech giant appears to have removed a direct “claim of gratuity” it had previously been presenting users at the point of registration, the Italian watchdog is still not happy with how far it’s gone in its presentation to new users — saying it’s still not being “immediate and clear” enough in how it provides information on the collection and use of their data for commercial purposes.

The authority points out that this is key information for people to weigh up in deciding whether or not to join Facebook — given the economic value Facebook gains via the transfer of their personal data.

For its part, Facebook argues that it’s fair to describe a service as ‘free’ if there’s no monetary charge for use. Although it has also made changes to how it describes this value exchange to users — including dropping its former slogan that “Facebook is free and always will be” in favor of some fuzzier phrasing.

On the arguably more salient legal point that Facebook is also appealing — related to the lack of a direct opt out for Facebook users to prevent their data being used for targeted ads — Facebook denies there’s any lack of consent to see here, claiming it does not give any user information to third parties unless the person has chosen to share their information and give consent.

Rather it says this consent process happens off its own site, on a case-by-case basis, i.e. when people decide whether or not to install third-party apps or use Facebook Login to log into third-party websites etc. — and where, it argues, they will be asked by those third parties whether they want Facebook to share their data.

(Facebook’s lead data supervisor in Europe, Ireland’s DPC, has an open investigation into Facebook on exactly this issue of so-called ‘forced consent’ — with complaints filed the moment Europe’s General Data Protection Regulation began being applied in May 2018.)

The tech giant also flags on-site tools and settings it does offer its own users — such as ‘Why Am I Seeing This Ad’, ‘Ads Preferences’ and ‘Manage Activity’ — which it claims increase transparency and control for Facebook users.

It also points to the ‘Off Facebook Activity‘ setting it launched last year — which shows users some information about which third party services are sending their data to Facebook and lets them disconnect that information from their account. Though there’s no way for users to request the third party delete their data via Facebook. (That requires going to each third party service individually to make a request.)

Last year a German court ruled against a consumer rights challenge to Facebook’s use of the self-promotional slogan that its service is “free and always will be” — on the grounds that the company does not require users to literally hand over monetary payments in exchange for using the service. Although the court found against Facebook on a number of other issues bundled into the challenge related to how it handles user data.

In another interesting development last year, Germany’s federal court also unblocked a separate legal challenge to Facebook’s use of user data which has been brought by the country’s competition watchdog. If that landmark challenge prevails Facebook could be forced to stop combining user data across different services and from the social plug-ins and tracking pixels it embeds in third parties’ digital services.

The company is also now facing rising challenges to its unfettered use of people’s data via the private sector, with Apple set to switch on an opt-in consent mechanism for app tracking on iOS this spring. Browser makers have also been long stepping up action against consentless tracking — including Google, which is working on phasing out support for third party cookies on Chrome.

 

TikTok hit with consumer, child safety and privacy complaints in Europe

TikTok is facing a fresh round of regulatory complaints in Europe where consumer protection groups have filed a series of coordinated complaints alleging multiple breaches of EU law.

The European Consumer Organisation (BEUC) has lodged a complaint against the video sharing site with the European Commission and the bloc’s network of consumer protection authorities, while consumer organisations in 15 countries have alerted their national authorities and urged them to investigate the social media giant’s conduct, BEUC said today.

The complaints include claims of unfair terms, including in relation to copyright and TikTok’s virtual currency; concerns around the type of content children are being exposed to on the platform; and accusations of misleading data processing and privacy practices.

Details of the alleged breaches are set out in two reports associated with the complaints: One covering issues with TikTok’s approach to consumer protection, and another focused on data protection and privacy.

Child safety

On child safety, the report accuses TikTok of failing to protect children and teenagers from hidden advertising and “potentially harmful” content on its platform.

“TikTok’s marketing offers to companies who want to advertise on the app contributes to the proliferation of hidden marketing. Users are for instance triggered to participate in branded hashtag challenges where they are encouraged to create content of specific products. As popular influencers are often the starting point of such challenges the commercial intent is usually masked for users. TikTok is also potentially failing to conduct due diligence when it comes to protecting children from inappropriate content such as videos showing suggestive content which are just a few scrolls away,” the BEUC writes in a press release.

TikTok has already faced a regulatory intervention in Italy this year in response to child safety concerns — in that instance after the death of a ten-year-old girl in the country. Local media had reported that the child died of asphyxiation after participating in a ‘black out’ challenge on TikTok — triggering the emergency intervention by the DPA.

Soon afterwards TikTok agreed to reissue an age gate to verify the age of every user in Italy, although the check merely asks the user to input a date of birth to confirm their age, so it seems trivially easy to circumvent.

In the BEUC’s report, the consumer rights group draws attention to TikTok’s flimsy age gate, writing that: “In practice, it is very easy for underage users to register on the platform as the age verification process is very loose and only self-declaratory.”

And while it notes TikTok’s privacy policy claims the service is “not directed at children under the age of 13” the report cites a number of studies that found heavy use of TikTok by children under 13 — with BEUC suggesting that children in fact make up “a very big part” of TikTok’s user base.

From the report:

In France, 45% of children below 13 have indicated using the app. In the United Kingdom, a 2020 study from the Office for Telecommunications (OFCOM) revealed that 50% of children between eight and 15 upload videos on TikTok at least weekly. In Czech Republic, a 2019 study found out that TikTok is very popular among children aged 11-12. In Norway, a news article reported that 32% of children aged 10-11 used TikTok in 2019. In the United States, The New York Times revealed that more than one-third of daily TikTok users are 14 or younger, and many videos seem to come from children who are below 13. The fact that many underage users are active on the platform does not come as a surprise as recent studies have shown that, on average, a majority of children owns mobile phones earlier and earlier (for example, by the age of seven in the UK).

A recent EU-backed study also found that age checks on popular social media platforms are “basically ineffective” as they can be circumvented by children of all ages simply by lying about their age.

Terms of use

Another issue raised by the complaints centers on a claim of unfair terms of use — including in relation to copyright, with BEUC noting that TikTok’s T&Cs give it an “irrevocable right to use, distribute and reproduce the videos published by users, without remuneration”.

A virtual currency feature it offers is also highlighted as problematic in consumer rights terms.

TikTok lets users purchase digital coins which they can use to buy virtual gifts for other users (which can in turn be converted by the user back to fiat). But BEUC says its ‘Virtual Item Policy’ contains “unfair terms and misleading practices” — pointing to how it claims an “absolute right” to modify the exchange rate between the coins and the gifts, thereby “potentially skewing the financial transaction in its own favour”.

While TikTok displays the price to buy packs of its virtual coins there is no clarity over the process it applies for the conversion of these gifts into in-app diamonds (which the gift-receiving user can choose to redeem for actual money, remitted to them via PayPal or another third party payment processing tool).

“The amount of the final monetary compensation that is ultimately earned by the content provider remains obscure,” BEUC writes in the report, adding: “According to TikTok, the compensation is calculated ‘based on various factors including the number of diamonds that the user has accrued’… TikTok does not indicate how much the app retains when content providers decide to convert their diamonds into cash.”

“Playful at a first glance, TikTok’s Virtual Item Policy is highly problematic from the point of view of consumer rights,” it adds.

Privacy

On data protection and privacy, the social media platform is also accused of a whole litany of “misleading” practices — including (again) in relation to children. Here the complaint accuses TikTok of failing to clearly inform users about what personal data is collected, for what purpose, and for what legal reason — as is required under Europe’s General Data Protection Regulation (GDPR).

Other issues flagged in the report include the lack of any opt-out from personal data being processed for advertising (aka ‘forced consent’ — something tech giants like Facebook and Google have also been accused of); the lack of explicit consent for processing sensitive personal data (which has special protections under GDPR); and an absence of security and data protection by design, among other issues.

We’ve reached out to the Irish Data Protection Commission (DPC), which is TikTok’s lead supervisor for data protection issues in the EU, about the complaint and will update this report with any response.

France’s data watchdog, the CNIL, already opened an investigation into TikTok last year — prior to the company shifting its regional legal base to Ireland (meaning data protection complaints must now be funnelled through the Irish DPC via the GDPR’s one-stop-shop mechanism — adding to the regulatory backlog).

Jef Ausloos, a postdoc researcher who worked on the legal analysis of TikTok’s privacy policy for the data protection complaints, told TechCrunch that researchers had been ready to file data protection complaints a year ago — at a time when the platform had no age check at all — but it suddenly made major changes to how it operates.

Ausloos suggests such sudden massive shifts are a deliberate tactic to evade regulatory scrutiny of data-exploiting practices — as “constant flux” can have the effect of derailing and/or resetting research work being undertaken to build a case for enforcement — also pointing out that resource-strapped regulators may be reluctant to bring cases against companies ‘after the fact’ (i.e. if they’ve since changed a practice).

The upshot of breaches that keep iterating is that repeat violations of the law may never face enforcement.

It’s also true that a frequent refrain of platforms at the point of being called out (or called up) on specific business practices is to claim they’ve since changed how they operate — seeking to use that as a defence to limit the impact of regulatory enforcement or indeed a legal ruling. (Aka: ‘Move fast and break regulatory accountability’.)

Nonetheless, Ausloos says the complainants’ hope now is that the two years of documentation undertaken on the TikTok case will help DPAs build cases.

Commenting on the complaints in a statement, Monique Goyens, DG of BEUC, said: “In just a few years, TikTok has become one of the most popular social media apps with millions of users across Europe. But TikTok is letting its users down by breaching their rights on a massive scale. We have discovered a whole series of consumer rights infringements and therefore filed a complaint against TikTok.

“Children love TikTok but the company fails to keep them protected. We do not want our youngest ones to be exposed to pervasive hidden advertising and unknowingly turned into billboards when they are just trying to have fun.

“Together with our members — consumer groups from across Europe — we urge authorities to take swift action. They must act now to make sure TikTok is a place where consumers, especially children, can enjoy themselves without being deprived of their rights.”

Reached for comment on the complaints, a TikTok spokesperson told us:

Keeping our community safe, especially our younger users, and complying with the laws where we operate are responsibilities we take incredibly seriously. Every day we work hard to protect our community which is why we have taken a range of major steps, including making all accounts belonging to users under 16 private by default. We’ve also developed an in-app summary of our Privacy Policy with vocabulary and a tone of voice that makes it easier for teens to understand our approach to privacy. We’re always open to hearing how we can improve, and we have contacted BEUC as we would welcome a meeting to listen to their concerns.

Minneapolis bans its police department from using facial recognition software

Minneapolis voted Friday to ban the use of facial recognition software for its police department, growing the list of major cities that have implemented local restrictions on the controversial technology. After an ordinance on the ban was approved earlier this week, 13 members of the city council voted in favor of the ban, with no opposition.

The new ban will block the Minneapolis Police Department from using any facial recognition technology, including software by Clearview AI. That company sells access to a large database of facial images, many scraped from major social networks, to federal law enforcement agencies, private companies and a number of U.S. police departments. The Minneapolis Police Department is known to have a relationship with Clearview AI, as is the Hennepin County Sheriff’s Office, which will not be restricted by the new ban.

The vote is a landmark decision in the city that set off racial justice protests around the country after a Minneapolis police officer killed George Floyd last year. The city has been in the throes of police reform ever since, leading the nation by pledging to defund the city’s police department in June before backing away from that commitment into more incremental reforms later that year.

Banning the use of facial recognition is one targeted measure that can rein in emerging concerns about aggressive policing. Many privacy advocates are concerned not only that AI-powered facial recognition systems would disproportionately target communities of color, but that the tech has demonstrated technical shortcomings in discerning non-white faces.

Cities around the country are increasingly looking to ban the controversial technology and have implemented restrictions in many different ways. In Portland, Oregon, new laws passed last year block city bureaus from using facial recognition but also forbid private companies from deploying the technology in public spaces. Previous legislation in San Francisco, Oakland and Boston restricted city governments from using facial recognition systems, though didn’t include a similar provision for private companies.

Sweden’s data watchdog slaps police for unlawful use of Clearview AI

Sweden’s data protection authority, the IMY, has fined the local police authority €250,000 ($300k+) for unlawful use of the controversial facial recognition software, Clearview AI, in breach of the country’s Criminal Data Act.

As part of the enforcement the police must conduct further training and education of staff in order to avoid any future processing of personal data in breach of data protection rules and regulations.

The authority has also been ordered to inform people whose personal data was sent to Clearview — when confidentiality rules allow it to do so, per the IMY.

Its investigation found that the police had used the facial recognition tool on a number of occasions and that several employees had used it without prior authorization.

Earlier this month Canadian privacy authorities found Clearview had breached local laws when it collected photos of people to plug into its facial recognition database without their knowledge or permission.

“IMY concludes that the Police has not fulfilled its obligations as a data controller on a number of accounts with regards to the use of Clearview AI. The Police has failed to implement sufficient organisational measures to ensure and be able to demonstrate that the processing of personal data in this case has been carried out in compliance with the Criminal Data Act. When using Clearview AI the Police has unlawfully processed biometric data for facial recognition as well as having failed to conduct a data protection impact assessment which this case of processing would require,” the Swedish data protection authority writes in a press release.

The IMY’s full decision can be found here (in Swedish).

“There are clearly defined rules and regulations on how the Police Authority may process personal data, especially for law enforcement purposes. It is the responsibility of the Police to ensure that employees are aware of those rules,” added Elena Mazzotti Pallard, legal advisor at IMY, in a statement.

The fine (SEK2.5M in local currency) was decided on the basis of an overall assessment, per the IMY, though it falls quite a way short of the maximum possible under Swedish law for the violations in question — which the watchdog notes would be SEK10M. (The authority’s decision notes that not knowing the rules or having inadequate procedures in place are not a reason to reduce a penalty fee so it’s not entirely clear why the police avoided a bigger fine.)

The data authority said it was not possible to determine what had happened to the data of the people whose photos the police authority had sent to Clearview — such as whether the company still stored the information. So it has also ordered the police to take steps to ensure Clearview deletes the data.

The IMY said it investigated the police’s use of the controversial technology following reports in local media.

Just over a year ago, US-based Clearview AI was revealed by the New York Times to have amassed a database of billions of photos of people’s faces — including by scraping public social media postings and harvesting people’s sensitive biometric data without individuals’ knowledge or consent.

European Union data protection law puts a high bar on the processing of special category data, such as biometrics.

Ad hoc use by police of a commercial facial recognition database — with seemingly zero attention paid to local data protection law — evidently does not meet that bar.

Last month it emerged that the Hamburg data protection authority had instigated proceedings against Clearview following a complaint by a German resident over consentless processing of his biometric data.

The Hamburg authority cited Article 9 (1) of the GDPR, which prohibits the processing of biometric data for the purpose of uniquely identifying a natural person, unless the individual has given explicit consent (or for a number of other narrow exceptions which it said had not been met) — thereby finding Clearview’s processing unlawful.

However the German authority only made a narrow order for the deletion of the individual complainant’s mathematical hash values (which represent the biometric profile).

It did not order deletion of the photos themselves. It also did not issue a pan-EU order banning the collection of any European resident’s photos as it could have done and as European privacy campaign group, noyb, had been pushing for.

noyb is encouraging all EU residents to use forms on Clearview AI’s website to ask the company for a copy of their data and ask it to delete any data it has on them, as well as to object to being included in its database. It also recommends that individuals who find Clearview holds their data submit a complaint against the company with their local DPA.

European Union lawmakers are in the process of drawing up a risk-based framework to regulate applications of artificial intelligence — with draft legislation expected to be put forward this year although the Commission intends it to work in concert with data protections already baked into the EU’s General Data Protection Regulation (GDPR).

Earlier this month the controversial facial recognition company was ruled illegal by Canadian privacy authorities — who warned they would “pursue other actions” if the company does not follow recommendations that include stopping the collection of Canadians’ data and deleting all previously collected images.

Clearview said it had stopped providing its tech to Canadian customers last summer.

It is also facing a class action lawsuit in the U.S. citing Illinois’ biometric protection laws.

Last summer the UK and Australian data protection watchdogs announced a joint investigation into Clearview’s personal data handling practices. That probe is ongoing.