Facebook to exclude US users from some privacy enhancements

There’s no way to sugarcoat this message: Facebook’s founder Mark Zuckerberg believes North American users of his platform deserve a lower data protection standard than people everywhere else in the world.

In a phone interview with Reuters yesterday, Mark Zuckerberg declined to commit to universally implementing changes to the platform that are necessary to comply with the European Union’s incoming General Data Protection Regulation (GDPR).

Rather, he said the company was working on a version of the law that would bring some European privacy guarantees worldwide — declining to specify to the reporter which parts of the law would not extend worldwide.

“We’re still nailing down details on this, but it should directionally be, in spirit, the whole thing,” Reuters quotes Zuckerberg on the GDPR question.

This is a subtle shift of line. Facebook’s leadership has previously implied the product changes it’s making to comply with GDPR’s incoming data protection standard would be extended globally.

Back in January, COO Sheryl Sandberg said the company would be rolling out “a new privacy center globally” — putting “the core privacy settings for Facebook in one place and make it much easier for people to manage their data”.

A spokeswoman for Facebook confirmed to TechCrunch today that the changes it revealed late last month — including finally reducing its historical settings sprawl from 20 screens to just one — were what Sandberg was talking about in those earlier comments. Ergo, even those basic tweaks are a direct result of the EU regulation.

However that universal privacy center looks to be just one portion of the changes Facebook needs to make to comply with the new EU standard. And not all these changes are going to be made available to US and Canadian Facebook users — per Zuckerberg’s remarks.

In a blog about the new privacy center late last month, Facebook flagged additional incoming changes to its terms of service — including “commitments” to users, and the language it uses to explain how it’s processing people’s data.

It said these incoming changes would be “about transparency”.

And indeed transparency is a key underlying principle of GDPR, which places requirements on data controllers to clearly explain to people what personal data they intend to collect and for what exact purpose — in order to gain informed consent for processing the data (or, if not consent, another valid basis is required for the data processing to be legal).

What’s less clear is exactly which portions of GDPR Facebook believes it can safely withhold from users on its platform without accidentally mishandling the personal data of an international user — say, someone visiting or living in the US — and thereby running the risk of privacy complaints and, ultimately, financial sanctions (penalties for violations can be very large under GDPR).

Facebook did not respond to additional questions about its GDPR compliance intentions so we can but speculate at this stage.

It’s a risky strategy even in pure PR terms. As we wrote in January in our GDPR explainer: “[S]ome US companies might prefer to swallow the hassle and expense of fragmenting their data handling processes… But doing so means managing multiple data regimes. And at very least runs the risk of bad PR if you’re outed as deliberately offering a lower privacy standard to your home users vs customers abroad.”

Safe to say, the calls for equal application of GDPR in the US have started already…

On the speculation front, consent under GDPR for processing personal data means offering individuals “genuine choice and control”, as the UK’s data watchdog explains it. So perhaps Facebook isn’t comfortable about giving North American users that kind of autonomy to revoke specific consents at will.

Or maybe Zuckerberg is unwilling to let Americans ask for their personal data in an adequately portable form — so they could go and plug it into a rival service. (Though it does already let users download their data.)

Or it could be that Facebook isn’t comfortable with what GDPR has to say about profiling — which is, after all, the core of the company’s ad targeting business model.

The regulation’s transparency requirements do extend to profiling — meaning Facebook will need to inform (at least its international) users they are being profiled when they use the platform, and explain what it means for them.

So perhaps Zuckerberg thinks Americans might balk if they really understood how pervasively Facebook tracks them once it has to explain exactly what it’s doing — as indeed some Facebook users did recently, when they found out Messenger had been logging their call and SMS metadata, for example.

The EU regulation also places some restrictions on the practice of using data to profile individuals if the data is sensitive — such as health data, political beliefs, religious affiliation and so on — requiring an even higher standard of explicit consent for doing so.

And of course, with the Cambridge Analytica data misuse scandal, we’ve seen how massive amounts of Facebook data were expressly used to try to infer US voters’ political beliefs.

Let’s not forget that Facebook itself ploughs its own resources into engaging politicians to use its platform for campaigning too. So perhaps it’s worried it might risk losing this chunk of elite business in the US if American Facebook users have to give explicit consent to their political leanings being fair game for ad targeting purposes. (And when many people would probably say ‘no thanks Mark; that’s none of your business’.)

But, as I say, we can but speculate what kind of GDPR carve outs Zuckerberg has planned for users on his home turf at this stage. The regulation comes into force on May 25 — so Facebookers don’t have long to wait to play a game of ‘spot the privacy standard discrepancy’.

What’s most curious about the Facebook founder demurring on a universal application of GDPR is the timing of it — in the midst of arguably the company’s biggest ever privacy scandal.

And if he feels North Americans’ privacy can be treated as a backburner consideration even now — by revealing plans to give domestic Facebook users second-tier privacy status below everyone else in the rest of the world — well, you have to question the authenticity of his recent apology for the “mistakes” that he claimed led to the Cambridge Analytica scandal.

Facebook was actually warned over app permissions in 2011, as we’ve reported before. Yet it did not shut down the developer access that was used to pass personal data on 50M+ Facebook users to Cambridge Analytica until mid-2015. So, frankly, if that was a mistake, it was a very, very slow-moving one.

Some might say it looks rather more like reluctance to comply with data protection standards.

Here’s one of the core architects of GDPR — European MEP Jan Philipp Albrecht — asking the key question now: How long will consumers in North America take being put in privacy coach class? Over to you…

Grindr hit with privacy complaint in Europe over sharing user data

The Norwegian Consumer Council has filed a privacy complaint about Grindr, arguing it’s in breach of national and European data protection laws after it emerged the dating app has been sharing personal information about its users with third parties.

As we reported earlier, Norwegian research outfit SINTEF analyzed the app’s traffic and found that — if set — a user’s HIV status is included in packets sent to two app optimization firms, Apptimize and Localytics.

This data was sent via an encrypted transmission. But users were not informed their HIV status was being shared.

Grindr has claimed HIV status data is being shared only for testing and platform optimization purposes — and that the third parties in question are “under strict contractual terms that provide for the highest level of confidentiality, data security, and user privacy”.

But, in SINTEF’s assessment, it is not strictly necessary to transmit such data for analytics and functionality testing (A/B testing) purposes.

As well as HIV statuses, SINTEF found Grindr transmits a raft of other personal data points to third party ad firms — this time via unencrypted transmissions — namely: precise GPS position, gender, age, “tribe” (aka group-affiliation, e.g. trans, bear), intention (e.g. friends, relationship), ethnicity, relationship status, language and device characteristics.
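To make concrete what a traffic audit of this kind flags, here’s a minimal Python sketch of the sort of check an analyst might run against an intercepted analytics payload. Note the field names and example values are hypothetical — they are not taken from SINTEF’s actual capture.

```python
# Keys an audit might treat as sensitive personal data under European law.
# Hypothetical names, chosen to mirror the categories SINTEF reported.
SENSITIVE_KEYS = {"hiv_status", "ethnicity", "tribe", "lat", "lon"}

def flag_sensitive(payload: dict) -> set:
    """Return the sensitive field names present in an intercepted payload."""
    return SENSITIVE_KEYS & payload.keys()

# A made-up payload of the shape an app might send to an analytics firm
example = {
    "lat": 59.91, "lon": 10.75,          # precise GPS position
    "age": 34,
    "hiv_status": "positive",
    "device": "Android 7.0",
}

print(sorted(flag_sensitive(example)))   # ['hiv_status', 'lat', 'lon']
```

The point of such a check is not sophistication — it’s that once the traffic is intercepted, spotting sensitive categories in the payload is trivial.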

The Council objects to the sharing of both highly sensitive HIV statuses and other personal information with third parties without Grindr gaining explicit user consent for the data sharing.

“Information about sexual orientation and health status is regarded as sensitive personal data according to European law, and has to be treated with great care. In our opinion, Grindr fails to do so,” said Finn Myrstad, director of digital services at the Council in a statement on its action.

“We expect the company to ensure that its users receive both the privacy protection and security that they are entitled to. This also applies to how the information is used by Grindr’s service partners.”

The Council argues that transmitting sensitive personal data to third parties for ad purposes falls outside the original purposes of the data collection — thereby constituting a breach of the principle of purpose limitation.

To be legal under European law Grindr would need to gain separate and clear consent from users for their personal info to be shared, it argues.

“If such data sharing is to be in accordance with European law, the service has to obtain a separate and clearly given consent from the user. Grindr, who only mention sharing user data in their privacy policy, does not obtain clear consent,” Myrstad asserts.

The Council is basing its complaint on the published report from the technical test by SINTEF (available on Github) and Grindr’s privacy policy, dated August 9, 2017.

In the complaint it also takes aim at Grindr for adding what it describes as an “unfortunate” disclaimer to its privacy policy which warns users their personal data may be processed in other countries — “including the United States, where laws regarding Personal Data may be less stringent than the laws in your country”.

“The Consumer Council regard this disclaimer as unfortunate, especially when Grindr is transferring sensitive personal data about European users. European users of the app have the right to have their personal data protected according to European law,” it writes. “The Consumer Council cannot see that Grindr is registered under the trans-Atlantic data transfer agreement Privacy Shield, which is meant to ensure that personal data that is transferred to the United States is protected in line with European data protection law. The Consumer Council see this as a cause for concern regarding whether the privacy rights of European Grindr users are sufficiently respected.”

It also argues Grindr is not gaining sufficient consent from users to their personal data being processed because the app asks for consent to the terms of service as a whole — “without individual elements being emphasized or singled out”.

“In the view of the Consumer Council, information about sensitive personal data being shared with third parties should not be hidden away in long terms of service and privacy policies. The Consumer Council cannot see that Grindr fulfill the conditions for gathering an informed and explicitly given consent,” it adds.

We’ve reached out to Grindr for comment and will update this story with any response.

It was not consent, it was concealment 

Facebook’s response to the clutch of users who are suddenly woke — triggered by the Facebook data misuse scandal and #DeleteFacebook backlash to delve into their settings — to the fact the social behemoth is, quietly and continuously, harvesting sensitive personal data about them and their friends tells you everything you need to know about the rotten state of tech industry ad-supported business models.

“People have to expressly agree to use this feature,” the company wrote in a defensively worded blog post at the weekend, defending how it tracks some users’ SMS and phone call metadata — a post it had the impressive brass neck to self-describe as a “fact check”.

“Call and text history logging is part of an opt-in feature for people using Messenger or Facebook Lite on Android. This helps you find and stay connected with the people you care about, and provides you with a better experience across Facebook.”

So, tl;dr, if you’re shocked to see what Facebook knows about you, well, that’s your own dumb fault because you gave Facebook permission to harvest all that personal data.

Not just Facebook either, of course. A fair few Android users appear to be having a similarly rude awakening about how Google’s mobile platform (and apps) slurp location data pervasively — at least unless the user is very, very careful to lock everything down.

But the difficulty of A) knowing exactly what data is being collected for what purposes and B) finding the cunningly concealed/intentionally obfuscated master setting which will nix all the tracking is by design, of course.

Privacy hostile design.

No accident then that Facebook has just given its settings pages a haircut — as it scrambles to rein in user outrage over the still snowballing Cambridge Analytica data misuse scandal — consolidating user privacy controls onto one screen instead of the full TWENTY they had been scattered across before.


Insert your ‘stable door being bolted’ GIF of choice right here.

Another example of Facebook’s privacy hostile design: As my TC colleague Romain Dillet pointed out last week, the company deploys misleading wording during the Messenger onboarding process which is very clearly intended to push users towards clicking on a big blue “turn on” (data-harvesting) button — inviting users to invite the metaphorical Facebook vampire over the threshold so it can perpetually suck data.

Facebook does this by implying that if they don’t bare their neck and “turn on” the continuous contacts uploading they somehow won’t be able to message any of their friends…

An image included with Facebook’s statement.

That’s complete nonsense of course. But opportunistic emotional blackmail is something Facebook knows a bit about — having been previously caught experimenting on users without their consent to see if it could affect their mood.

Add to that, the company has scattered its social plugins and tracking pixels all around the World Wide Web, enabling it to expand its network of surveillance signals — again, without it being entirely obvious to Internet users that Facebook is watching and recording what they are doing and liking outside its walled garden.

According to pro-privacy search engine DuckDuckGo, Facebook’s trackers are on around a quarter of the top million websites. While Google’s are on a full ~three-quarters.
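As a rough illustration of how that kind of tracker prevalence gets measured, here’s a hedged Python sketch that scans a page’s HTML for known tracker domains. The domain list here is a tiny hypothetical sample; real audits like DuckDuckGo’s rely on much larger curated blocklists and actual crawls.

```python
# A tiny, illustrative sample of tracker fingerprints — not a real blocklist.
TRACKER_DOMAINS = [
    "facebook.net",            # Facebook SDK / pixel loader
    "facebook.com/tr",         # Facebook tracking pixel endpoint
    "google-analytics.com",    # Google Analytics
    "doubleclick.net",         # Google ad serving
]

def find_trackers(html: str) -> list:
    """Return the sample tracker domains referenced in a page's HTML."""
    return sorted({d for d in TRACKER_DOMAINS if d in html})

# Hypothetical page embedding a Facebook pixel and Google Analytics
page = """
<script src="https://connect.facebook.net/en_US/fbevents.js"></script>
<img src="https://www.facebook.com/tr?id=123&ev=PageView" />
<script src="https://www.google-analytics.com/analytics.js"></script>
"""

print(find_trackers(page))
```

A production crawler would fetch millions of pages and match against full blocklist entries, of course — the point is only that the per-page detection itself is straightforward.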

So you don’t even have to be a user to be pulled into this surveillance dragnet.

In its tone-deaf blog post trying to defang user concerns about its SMS/call metadata tracking, Facebook doesn’t go into any meaningful detail about exactly why it wants this granular information — merely writing vaguely that: “Contact importers are fairly common among social apps and services as a way to more easily find the people you want to connect with.”

It’s certainly not wrong that other apps and services have also been sucking up your address book.

But that doesn’t make the fact Facebook has been tracking who you’re calling and messaging — how often/for how long — any less true or horrible.

This surveillance is controversial not because Facebook gained permission to data mine your phone book and activity — which, technically speaking, it will have done, via one of the myriad socially engineered, fuzzily worded permission pop-ups starring cutesy looking cartoon characters.

But rather because the consent was not informed.

Or to put it more plainly, Facebookers had no idea what they were agreeing to let the company do.

Which is why people are so horrified now to find what the company has been routinely logging — and potentially handing over to third parties on its ad platform.

Phone calls to your ex? Of course Facebook can see them. Texts to the number of a health clinic you entered into your phonebook? Sure. How many times you phoned a law firm? Absolutely. And so on and on it goes.

This is the rude awakening that no number of defensive ‘fact checks’ from Facebook — nor indeed defensive tweet storms from current CSO Alex Stamos — will be able to smooth away.

“There are long-standing issues with organisations of all kinds, across multiple sectors, misapplying, or misunderstanding, the provisions in data protection law around data subject consent,” says data protection expert Jon Baines, an advisor at UK law firm Mishcon de Reya LLP and also chair of NADPO, when we asked what the Facebook-Cambridge Analytica data misuse scandal says about how broken the current system of online consent is.

“The current European Data Protection Directive (under which [the UK] Data Protection Act sits) says that consent means any freely given specific and informed indication of their wishes by which a data subject signifies agreement to their personal data being processed. In a situation under which a data subject legitimately later claims that they were unaware what was happening with their data, it is difficult to see how it can reasonably be said that they had “consented” to the use.”

Ironically, given recent suggestions by defunct Facebook rival Path’s founder of a possible reboot to cater to the #DeleteFacebook crowd — Path actually found itself in an uncomfortable privacy hotseat all the way back in 2012, when it was discovered to have been uploading users’ address book information without asking for permission to do so.

Having been caught with its fingers in the proverbial cookie jar, Path apologized and deleted the data.

The irony is that while Path suffered a moment of outrage, Facebook is only facing a major privacy backlash now — after it’s spent so many years calmly sucking up people’s contacts data, also without them being aware because Facebook nudged them to think they needed to tap that big blue ‘turn on’ button.

Exploiting users’ trust — and using a technicality to unhook people’s privacy — is proving pretty costly for Facebook right now though.

And the risks of attempting to hoodwink consent out of your users are about to step up sharply too, at least in Europe.

Baines points out that the EU’s updated privacy framework, GDPR, tightens the existing privacy standard — adding the words “clear affirmative act” and “unambiguous” to consent requirements.

More importantly, he notes it introduces “more stringent requirements, and certain restrictions, which are not, or are not explicit, in current law, such as the requirement to be able to demonstrate that a data subject has given (valid) consent” (emphasis his).

“Consent must also now be separable from other written agreements, and in an intelligible and easily accessible form, using clear and plain language. If these requirements are enforced by data protection supervisory authorities and the courts, then we could well see a significant shift in habits and practices,” he adds.

The GDPR framework is also backed up by a new regime of major penalties for data protection violations which can scale up to 4% of a company’s global turnover.

And the risk of fines that large will be much harder for companies to ignore — so playing fast and loose with data, and moving fast and breaking things (as Facebook used to say), doesn’t sound so smart anymore.

As I wrote back in 2015, the online privacy lie is unraveling.

It’s taken a little longer than I’d hoped, for sure. But here we are in 2018 — and it’s not just the #MeToo movement that’s turned consent into a buzzword.

FTC confirms probe into Facebook data misuse scandal

The Federal Trade Commission (FTC) has confirmed it’s investigating Facebook’s privacy practices in light of the data misuse scandal currently engulfing the company.

In a statement regarding reported concerns about Facebook’s privacy practices today, Tom Pahl, acting director of the Federal Trade Commission’s Bureau of Consumer Protection, said:

The FTC is firmly and fully committed to using all of its tools to protect the privacy of consumers. Foremost among these tools is enforcement action against companies that fail to honor their privacy promises, including to comply with Privacy Shield, or that engage in unfair acts that cause substantial injury to consumers in violation of the FTC Act. Companies who have settled previous FTC actions must also comply with FTC order provisions imposing privacy and data security requirements. Accordingly, the FTC takes very seriously recent press reports raising substantial concerns about the privacy practices of Facebook. Today, the FTC is confirming that it has an open non-public investigation into these practices.

On March 17, reports in the New York Times and the Observer of London suggested the Trump campaign-linked, UK-based political consultancy Cambridge Analytica had been able to buy data on 50 million Facebook users in 2015 to use for microtargeting political advertising.

The data had been gathered via a third party survey app running on Facebook’s platform and using its API to pull data on the friends of the 270,000 respondents who answered the survey.

At the weekend Facebook took out full page newspaper ads in the US and UK apologizing for failing to protect users’ information.

Billions have been wiped off the company’s share price since the scandal broke.

The company’s share price also fell more than 5% today, on the news of the FTC announcement.

The consumer watchdog previously investigated Facebook over deceptive privacy claims, back in 2011.

And in 2012, as part of an FTC settlement, Facebook promised it would give users “clear and prominent notice” and get their consent before sharing their information beyond their privacy settings.

It also agreed to two decades of biennial audits.

Facebook avoided a financial penalty at the time, though if the FTC goes on to decide the company violated that earlier agreement it could certainly be on the hook for fines — fines which the Washington Post has suggested could scale to trillions of dollars if the FTC’s $40,000-per-violation penalty is multiplied by the 50M Facebook users whose data was passed to Cambridge Analytica.
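For the record, the Post’s trillions figure is simple arithmetic — the maximum per-violation penalty multiplied by the number of affected users:

```python
# Back-of-the-envelope version of the Washington Post's estimate
penalty_per_violation = 40_000        # USD, maximum FTC penalty per violation
affected_users = 50_000_000           # users whose data reached Cambridge Analytica

theoretical_max = penalty_per_violation * affected_users
print(f"${theoretical_max:,}")        # $2,000,000,000,000 — i.e. $2 trillion
```

That’s a theoretical ceiling rather than a plausible fine, but it shows why even a small fraction of the per-violation maximum would dwarf any penalty Facebook has faced before.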

Whatever any eventual financial penalty might be, the reputational damage Facebook is suffering as a result of this snowballing data handling scandal is bad enough, adding to calls for policymakers to regulate social media platforms.

We’ve reached out to Facebook for comment on the FTC’s announcement — at the time of writing the company had not responded.

See more of our coverage on the Facebook-Cambridge Analytica scandal here:

Mozilla pulls ads off Facebook over data access concerns

Mozilla has announced it’s suspending its advertising on Facebook in the wake of the Cambridge Analytica privacy controversy — saying it is concerned the current default privacy settings remain risky, and that it has decided to take a fresh look at Facebook’s app permissions following the latest user data handling scandal.

This week the New York Times and The Observer of London reported that a researcher’s app had pulled personal information on about 270,000 Facebook users and 50 million of their friends back in 2015, and then passed that data haul to political consulting firm Cambridge Analytica in violation of Facebook’s policies.

Facebook’s policies previously allowed developers to siphon off app users’ Facebook friends data — permissions Facebook tightened up in 2014 “to dramatically reduce data access”, as founder Mark Zuckerberg has now claimed — though evidently not dramatically enough for Mozilla.

Mozilla writes: “This news caused us to take a closer look at Facebook’s current default privacy settings given that we support the platform with our advertising dollars. While we believe there is still more to learn, we found that its current default settings leave access open to a lot of data – particularly with respect to settings for third party apps.”

It is also running a petition calling for Facebook to lock down app permission settings to ensure users’ privacy is “protected by default”, saying the current default settings “leave a lot of questions and a lot of data flying around”.

“Facebook’s current app permissions leave billions of its users vulnerable without knowing it,” it writes. “If you play games, read news or take quizzes on Facebook, chances are you are doing those activities through third-party apps and not through Facebook itself. The default permissions that Facebook gives to those third parties currently include data from your education and work, current city and posts on your timeline.

“We’re asking Facebook to change its policies to ensure third parties can’t access the information of the friends of people who use an app.”

Mozilla says it will “consider returning” to advertising on Facebook when — or presumably if — the company makes adequate changes to bolster default privacy settings.

“We are encouraged that Mark Zuckerberg has promised to improve the privacy settings and make them more protective. When Facebook takes stronger action in how it shares customer data, specifically strengthening its default privacy settings for third party apps, we’ll consider returning,” it writes. “We look forward to Facebook instituting some of the things that Zuckerberg promised today.”

We’ve reached out to Facebook for comment on Mozilla’s action and will update this story with any response.

At the time of writing Mozilla had not responded to questions about the move.

Even setting aside the current Facebook-Cambridge Analytica data handling scandal, big privacy-related changes are incoming to Facebook thanks to the European Union’s updated data protection framework, GDPR, which will apply from May 25 to any company that processes EU citizens’ personal data.

As part of those changes — and as Facebook tries to comply with the new EU privacy standard — in January the company announced it would be rolling out a new privacy center globally that would put core privacy settings in one place. That one-stop hub is yet to launch but must arrive before May 25.

Also in January Facebook published a set of privacy principles — including grand claims that: “We help people understand how their data is used”; “We design privacy into our products from the outset”; “We work hard to keep your information secure”; “You own and can delete your information”; and “We are accountable”.

Given the last of its published principles, it will be interesting to see which executive Facebook chooses to send to testify in front of Congress — to explain things like how it failed to protect the privacy of ~50M users and didn’t even inform people their data had been siphoned off for illicit purposes.

Asked by CNN whether he will personally testify, Zuckerberg said he will do so “if it’s the right thing to do”. So we’ll soon find out how much that privacy accountability ‘principle’ is really worth.