Facebook’s latest ‘transparency’ tool doesn’t offer much — so we went digging

Just under a month ago Facebook switched on global availability of a tool which affords users a glimpse into the murky world of tracking that its business relies upon to profile users of the wider web for ad targeting purposes.

Facebook is not going boldly into transparent daylight — but rather offering what privacy rights advocacy group Privacy International has dubbed “a tiny sticking plaster on a much wider problem”.

The problem it’s referring to is the lack of active and informed consent for mass surveillance of Internet users via background tracking technologies embedded into apps and websites, including as people browse outside Facebook’s own content garden.

The dominant social platform is also only offering this feature in the wake of the 2018 Cambridge Analytica data misuse scandal, when Mark Zuckerberg faced awkward questions in Congress about the extent of Facebook’s general web tracking. Since then policymakers around the world have dialled up scrutiny of how its business operates — and realized there’s a troubling lack of transparency in and around adtech generally and Facebook specifically.

Facebook’s tracking pixels and social plugins — aka the share/like buttons that pepper the mainstream web — have created a vast tracking infrastructure which silently informs the tech giant of Internet users’ activity, even when a person hasn’t interacted with any Facebook-branded buttons.
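
To make the mechanism concrete: a tracking pixel is just a tiny resource fetched from the tracker’s domain, and the request itself carries the signal. Below is a simplified, hypothetical sketch of the general pattern (the endpoint and parameter names are invented for illustration; this is not Facebook’s actual code):

```typescript
// Simplified sketch of a pixel-style tracker reporting a page visit.
// Endpoint and parameter names are hypothetical; real pixels differ.
function firePixel(trackerOrigin: string, eventName: string): void {
  const params = new URLSearchParams({
    ev: eventName,              // e.g. "PageView"
    url: window.location.href,  // the page the visitor is reading
    ref: document.referrer,     // where they came from
    ts: Date.now().toString(),
  });
  // Creating the 1x1 image triggers an HTTP GET to the tracker's
  // domain. Crucially, the browser attaches any cookies it already
  // holds for that domain, which is how a visit can be tied to an
  // existing identity without the visitor clicking anything.
  const img = new Image(1, 1);
  img.src = `${trackerOrigin}/tr?${params.toString()}`;
}

firePixel("https://tracker.example", "PageView");
```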

Facebook claims this is just ‘how the web works’. And other tech giants are similarly engaged in tracking Internet users (notably Google). But as a platform with 2.2BN+ users Facebook has stolen a march on the lion’s share of its rivals when it comes to harvesting people’s data and building out a global database of person profiles.

Its dominant position in the adtech ecosystem also means it’s the one being fed with intel by the data brokers and publishers who deploy tracking tech to try to survive in such a skewed system.

Meanwhile the opacity of online tracking means the average Internet user is none the wiser that Facebook can be following what they’re browsing all over the Internet. Questions of consent loom very large indeed.

Facebook is also able to track people’s usage of third party apps if a person chooses a Facebook login option, which the company encourages developers to implement in their apps — the carrot being a lower friction sign-in choice vs requiring users to create yet another login credential.

The price for this ‘convenience’ is data and user privacy, as the Facebook login gives the tech giant a window into third party app usage.

The company has also used a VPN app it bought and badged as a security tool to glean data on third party app usage — though it’s recently stepped back from the Onavo app after a public backlash (though that did not stop it running a similar tracking program targeted at teens).

Background tracking is how Facebook’s creepy ads function (it prefers to call such behaviorally targeted ads ‘relevant’) — and how they have functioned for years.

Yet it’s only in recent months that it’s offered users a glimpse into this network of online informers — by providing limited information about the entities that are passing tracking data to Facebook, as well as some limited controls.

From ‘Clear History’ to ‘Off-Facebook Activity’

Originally briefed in May 2018, at the crux of the Cambridge Analytica scandal, as a ‘Clear History’ option, this has since been renamed ‘Off-Facebook Activity’ — a label so bloodless and devoid of ‘call to action’ that the average Facebook user, should they stumble upon it buried deep in unlovely settings menus, would more likely move along than feel moved to carry out a privacy purge.

(For the record you can access the setting here — but you do need to be logged into Facebook to do so.)

The other problem is that Facebook’s tool doesn’t actually let you purge your browsing history; it just delinks it from your Facebook ID. There is no option to actually clear your browsing history via its button, which is another reason for the name switch. So, no, Facebook hasn’t built a clear history ‘button’.

“While we welcome the effort to offer more transparency to users by showing the companies from which Facebook is receiving personal data, the tool offers little way for users to take any action,” said Privacy International this week, criticizing Facebook for “not telling you everything”.

As the saying goes, a little knowledge can be a dangerous thing. So a little transparency implies — well — anything but clarity. And Privacy International sums up the Off-Facebook Activity tool with an apt oxymoron — describing it as “a new window to the opacity”.

“This tool illustrates just how impossible it is for users to prevent external data from being shared with Facebook,” it writes, warning with emphasis: “Without meaningful information about what data is collected and shared, and what are the ways for the user to opt-out from such collection, Off-Facebook activity is just another incomplete glimpse into Facebook’s opaque practices when it comes to tracking users and consolidating their profiles.”

It points out, for instance, that the information provided here is limited to a “simple name” — thereby preventing the user from “exercising their right to seek more information about how this data was collected”, which EU users at least are entitled to.

“As users we are entitled to know the name/contact details of companies that claim to have interacted with us. If the only thing we see, for example, is the random name of an artist we’ve never heard before (true story), how are we supposed to know whether it is their record label, agent, marketing company or even them personally targeting us with ads?” it adds.

Another criticism is that Facebook provides only limited information about each data transfer — with Privacy International noting some events are marked under a cryptic “CUSTOM” label; and that Facebook provides “no information regarding how the data was collected by the advertiser (Facebook SDK, tracking pixel, like button…) and on what device, leaving users in the dark regarding the circumstances under which this data collection took place”.

“Does Facebook really display everything they process/store about those events in the log/export?” queries privacy researcher Wolfie Christl, who tracks the adtech industry’s tracking techniques. “They have to, because otherwise they don’t fulfil their SAR [Subject Access Request] obligations [under EU law].”

Christl notes Facebook makes users jump through an additional “download” hoop in order to view data on tracked events — and even then, as Privacy International points out, it gives up only a limited view of what has actually been tracked…

“For example, why doesn’t Facebook list the specific sites/URLs visited? Do they infer data from the domains e.g. categories? If yes, why is this not in the logs?” Christl asks.

We reached out to Facebook with a number of questions, including why it doesn’t provide more detail by default. It responded with this statement attributed to a spokesperson:

We offer a variety of tools to help people access their Facebook information, and we’ve designed these tools to comply with relevant laws, including GDPR. We disagree with this [Privacy International] article’s claims and would welcome the chance to discuss them with Privacy International.

Facebook also said it’s continuing to develop the information it surfaces through the Off-Facebook Activity tool — and said it welcomes feedback on this.

We also asked it about the legal bases it uses to process people’s information that’s been obtained via its tracking pixels and social plug-ins. It did not provide a response to those questions.

Six names, many questions…

When the company launched the Off-Facebook Activity tool a snap poll of available TechCrunch colleagues showed very diverse tallies (which also may not show the most recent activity, per other Facebook caveats) — ranging from one colleague with an eye-watering 1,117 entities (likely down to doing a lot of app testing); to several with a few hundred apiece; to a couple with counts in the middle tens.

In my case I had just six. But from my point of view — as an EU citizen with a suite of rights related to privacy and data protection; and as someone who aims to practice good online privacy hygiene, including having a very locked down approach to using Facebook (never using its mobile app for instance) — it was still six too many. I wanted to find out how these entities had circumvented my attempts not to be tracked.

And, in the case of the first one in the list, who on earth it was…

Turns out the entry was a cloudfront.net subdomain — part of Amazon Web Services’ CloudFront content delivery network. But I had to go searching online myself to figure out that the owner of that particular subdomain is (now) a company called Nativo.

Facebook’s list provided only very bare bones information. I also clicked to delink the first entity, since it immediately looked so weird, and found that by doing that Facebook wiped all the entries — which meant I was unable to retain access to what little additional info it had provided about the respective data transfers.

Undeterred I set out to contact each of the six companies directly with questions — asking what data of mine they had transferred to Facebook and what legal basis they thought they had for processing my information.

(On a practical level six names looked like a sample size I could at least try to follow up manually — but remember I was the TechCrunch exception; imagine trying to request data from 1,117 companies, or 450 or even 57, which were the lengths of some of my colleagues’ lists.)

This process took about a month and a lot of back and forth/chasing up. It likely only yielded as much info as it did because I was asking as a journalist; an average Internet user may have had a tougher time getting attention on their questions — though, under EU law, citizens have a right to request a copy of personal data held on them.

Eventually, I was able to obtain confirmation that tracking pixels and Facebook share buttons had been involved in my data being passed to Facebook in certain instances. Even so I remain in the dark on many things. Such as exactly what personal data Facebook received.

In one case I was told by a listed company that it doesn’t know itself what data was shared — only Facebook knows because it’s implemented the company’s “proprietary code”. (Insert your own ‘WTAF’ there.)

The legal side of these transfers also remains highly opaque. From my point of view I would not intentionally consent to any of this tracking — but in some instances the entities involved claim that (my) consent was (somehow) obtained (or implied).

In other cases they said they are relying on a legal basis in EU law that’s referred to as ‘legitimate interests’. However this requires a balancing test to be carried out to ensure a business use does not have a disproportionate impact on individual rights.

I wasn’t able to ascertain whether such tests had ever been carried out.

Meanwhile, since Facebook is also making use of the tracking information from its pixels and social plug-ins (and seemingly in more granular form, since some entities claimed they only get aggregate, not individual, data), Christl suggests it’s unlikely such a balancing test would be easy to pass, for that tiny little ‘platform giant’ reason.

Notably he points out Facebook’s Business Tool terms state that it makes use of so-called “event data” to “personalize features and content and to improve and secure the Facebook products” — including for “ads and recommendations”; for R&D purposes; and “to maintain the integrity of and to improve the Facebook Company Products”.

In a section of its legal terms covering the use of its pixels and SDKs Facebook also puts the onus on the entities implementing its tracking technologies to gain consent from users prior to doing so in relevant jurisdictions that “require informed consent” for tracking cookies and similar — giving the example of the EU.

“You must ensure, in a verifiable manner, that an end user provides the necessary consent before you use Facebook Business Tools to enable us to store and access cookies or other information on the end user’s device,” Facebook writes, pointing users of its tools to its Cookie Consent Guide for Sites and Apps for “suggestions on implementing consent mechanisms”.
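
In practice, complying with that requirement means gating the tag behind the user’s choice. Here is a minimal sketch of the pattern, where `hasUserConsented` stands in for whatever check a site’s consent management platform actually exposes (the tag URL is a placeholder):

```typescript
// `hasUserConsented` stands in for the site's CMP; not a real API.
declare function hasUserConsented(purpose: "advertising"): boolean;

// Inject a third-party tag only after the user has actively agreed.
// Per the CJEU's Planet49 ruling, tracking cookies may not be set in
// the EU before the user gives active consent.
function loadTrackingTag(src: string): void {
  const script = document.createElement("script");
  script.async = true;
  script.src = src;
  document.head.appendChild(script);
}

if (hasUserConsented("advertising")) {
  loadTrackingTag("https://tracker.example/pixel.js"); // placeholder URL
}
```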

Christl flags the contradiction between Facebook’s claim that users of its tracking tech need to gain prior consent vs the claims made to me by some of these entities that they don’t need to, because they’re relying on ‘legitimate interests’.

“Using LI as a legal basis is even controversial if you use a data analytics company that reliably processes personal data strictly on behalf of you,” he argues. “I guess, industry lawyers try to argue for a broader applicability of LI, but in the case of FB business tools I don’t believe that the balancing test (a businesses legitimate interests vs. the impact on the rights and freedoms of data subjects) will work in favor of LI.”

Those entities relying on legitimate interests as a legal base for tracking would still need to offer a mechanism where users can object to the processing — and I couldn’t immediately see such a mechanism in the cases in question.

One thing is crystal clear: Facebook itself does not provide a mechanism for users to object to its processing of tracking data nor opt out of targeted ads. That remains a long-standing complaint against its business in the EU which data protection regulators are still investigating.

One more thing: Non-Facebook users continue to have no way of learning what data of theirs is being tracked and transferred to Facebook. Only Facebook users have access to the Off-Facebook Activity tool, for example. Non-users can’t even access a list.

Facebook has defended its practice of tracking non-users around the Internet as necessary for unspecified ‘security purposes’. It’s an inherently disproportionate argument of course. The practice also remains under legal challenge in the EU.

Tracking the trackers

SimpleReach (aka d8rk54i4mohrb.cloudfront.net)

What is it? A California-based analytics platform (now owned by Nativo) used by publishers and content marketers to measure how well their content/native ads perform on social media. The product began life in the early noughties as a simple tool for publishers to recommend similar content at the bottom of articles before the startup pivoted — aiming to become ‘the PageRank of social’ — offering analytics tools for publishers to track engagement around content in real-time across the social web (plugging into platform APIs). It also built statistical models to predict which pieces of content will be the most social and where, generating a proprietary per article score. SimpleReach was acquired by Nativo last year to complement analytics tools the latter already offered for tracking content on the publisher/brand’s own site.

Why did it appear in your Off-Facebook Activity list? Given it’s a b2b product it does not have a visible consumer brand of its own. And, to my knowledge, I have never visited its own website prior to investigating why it appeared in my Off-Facebook Activity list. Clearly, though, I must have visited a site (or sites) that are using its tracking/analytics tools. Of course an Internet user has no obvious way to know this — unless they’re actively using tools to monitor which trackers are tracking them.

In a further quirk, neither the SimpleReach (nor Nativo) brand names appeared in my Off-Facebook Activity list. Rather a domain name was listed — d8rk54i4mohrb.cloudfront.net — which looked at first glance weird/alarming.

I figured out it is owned by SimpleReach by using a tracker analytics service.

Once I knew the name I was able to connect the entry to Nativo — via news reports of the acquisition — which led me to an entity I could direct questions to.  

What happened when you asked them about this? There was a bit of back and forth and then they sent a detailed response to my questions in which they claim they do not share any data with Facebook — “or perform ‘off site activity’ as described on Facebook’s activity tool”.

They also suggested that their domain had appeared as a result of their tracking code being implemented on a website I had visited which had also implemented Facebook’s own trackers.

“Our technology allows our Data Controllers to insert other tracking pixels or tags, using us as a tag manager that delivers code to the page. It is possible that one of our customers added a Facebook pixel to an article you visited using our technology. This could lead Facebook to attribute this pixel to our domain, though our domain was merely a ‘carrier’ of the code,” they told me.

In terms of the data they collect, they said this: “The only Personal Data that is collected by the SimpleReach Analytics tag is your IP Address and a randomly generated id.  Both of these values are processed, anonymized, and aggregated in the SimpleReach platform and not made available to anyone other than our sub-processors that are bound to process such data only on our behalf. Such values are permanently deleted from our system after 3 months. These values are used to give our customers a general idea of the number of users that visited the articles tracked.”

So, again, they suggested the reason why their domain appeared in my Off-Facebook Activity list is a combination of Nativo/SimpleReach’s tracking technologies being implemented on a site where Facebook’s retargeting pixel is also embedded — which then resulted in data about my online activity being shared with Facebook (which Facebook then attributes as coming from SimpleReach’s domain).

Commenting on this, Christl agreed it sounds as if publishers “somehow attach Facebook pixel events to SimpleReach’s cloudfront domain”.

“SimpleReach probably doesn’t get data from this. But the question is 1) is SimpleReach perhaps actually responsible (if it happens in the context of their domain); 2) The Off-Facebook activity is a mess (if it contains events related to domains whose owners are not web or app publishers).”

Nativo offered to determine whether they hold any personal information associated with the unique identifier they have assigned to my browser if I could send them this ID. However I was unable to locate such an ID (see below).

In terms of legal base to process my information the company told me: “We have the right to process data in accordance with provisions set forth in the various Data Processor agreements we have in place with Data Controllers.”

Nativo also suggested that the Offsite Activity in question might have predated its purchase of the SimpleReach technology — which occurred on March 20, 2019 — saying any activity prior to this would mean my query would need to be addressed directly with SimpleReach, Inc. which Nativo did not acquire. (However in this case the activity registered on the list was dated later than that.)

Here’s what they said on all that in full:

Thank you for submitting your data access request.  We understand that you are a resident of the European Union and are submitting this request pursuant to Article 15(1) of the GDPR.  Article 15(1) requires “data controllers” to respond to individuals’ requests for information about the processing of their personal data.  Although Article 15(1) does not apply to Nativo because we are not a data controller with respect to your data, we have provided information below that will help us in determining the appropriate Data Controllers, which you can contact directly.

First, for details about our role in processing personal data in connection with our SimpleReach product, please see the SimpleReach Privacy Policy.  As the policy explains in more detail, we provide marketing analytics services to other businesses – our customers.  To take advantage of our services, our customers install our technology on their websites, which enables us to collect certain information regarding individuals’ visits to our customers’ websites. We analyze the personal information that we obtain only at the direction of our customer, and only on that customer’s behalf.

SimpleReach is an analytics tracker tool (Similar to Google Analytics) implemented by our customers to inform them of the performance of their content published around the web.  “d8rk54i4mohrb.cloudfront.net” is the domain name of the servers that collect these metrics.  We do not share data with Facebook or perform “off site activity” as described on Facebook’s activity tool.  Our technology allows our Data Controllers to insert other tracking pixels or tags, using us as a tag manager that delivers code to the page.  It is possible that one of our customers added a Facebook pixel to an article you visited using our technology.  This could lead Facebook to attribute this pixel to our domain, though our domain was merely a “carrier” of the code.

The SimpleReach tool is implemented on articles posted by our customers and partners of our customers.  It is possible you visited a URL that has contained our tracking code.  It is also possible that the Offsite Activity you are referencing is activity by SimpleReach, Inc. before Nativo purchased the SimpleReach technology. Nativo, Inc. purchased certain technology from SimpleReach, Inc. on March 20, 2019, but we did not purchase the SimpleReach, Inc. entity itself, which remains a separate entity unaffiliated with Nativo, Inc. Accordingly, any activity that occurred before March 20, 2019 pre-dates Nativo’s use of the SimpleReach technology and should be addressed directly with SimpleReach, Inc. If, for example, TechCrunch was a publisher partner of SimpleReach, Inc. and had SimpleReach tracking code implemented on TechCrunch articles or across the TechCrunch website prior to March 20, 2019, any resulting data collection would have been conducted by SimpleReach, Inc., not by Nativo, Inc.

As mentioned above, our tracking script collects and sends information to our servers based on the articles it is implemented on. The only Personal Data that is collected by the SimpleReach Analytics tag is your IP Address and a randomly generated id.  Both of these values are processed, anonymized, and aggregated in the SimpleReach platform and not made available to anyone other than our sub-processors that are bound to process such data only on our behalf. Such values are permanently deleted from our system after 3 months.  These values are used to give our customers a general idea of the number of users that visited the articles tracked.

We do not, nor have we ever, shared ANY information with Facebook with regards to the information we collect from the SimpleReach Analytics tag, be it Personal Data or otherwise. However, as mentioned above, it is possible that one of our customers added a Facebook retargeting pixel to an article you visited using our technology. If that is the case, we would not have received any information collected from such pixel or have knowledge of whether, and to what extent, the customer shared information with Facebook. Without more information, we are unable to determine the specific customer (if any) on behalf of which we may have processed your personal information. However, if you send us the unique identifier we have assigned to your browser… we can determine whether we have any personal information associated with such browser on behalf of a customer controller, and, if we have, we can forward your request on to the controller to respond directly to your request.

As a Data Processor we have the right to process data in accordance with provisions set forth in the various Data Processor agreements we have in place with Data Controllers.  This type of agreement is designed to protect Data Subjects and ensure that Data Processors are held to the same standards that both the GDPR and the Data Controller have put forth.  This is the same type of agreement used by all other analytics tracking tools (as well as many other types of tools) such as Google Analytics, Adobe Analytics, Chartbeat, and many others.

I also asked Nativo to confirm whether Insider.com (see below) is a customer of Nativo/SimpleReach.

The company told me it could not disclose this “due to confidentiality restrictions” and would only reveal the identity of customers if “required by applicable law”.

Again, it said that if I provided the “unique identifier” assigned to my browser it would be “happy to pull a list of personal information the SimpleReach/Nativo systems currently have stored for your unique identifier (if any), including the appropriate Data Controllers”. (“If we have any personal data collected from you on behalf of Insider.com, it would come up in the list of DataControllers,” it suggested.)

I checked multiple browsers that I use on multiple devices but was unable to locate an ID attached to a SimpleReach cookie. So I also asked whether this might appear attached to any other cookie.

Their response:

Because our data is either pseudonymized or anonymized, and we do not record of any other pieces of Personal Data about you, it will not be possible for us to locate this data without the cookie value.  The SimpleReach user cookie is, and has always been, in the “__srui” cookie under the “.simplereach.com” domain or any of its sub-domains. If you are unable to locate a SimpleReach user cookie by this name on your browser, it may be because you are using a different device or because you have cleared your cookies (in which case we would no longer have the ability to map any personal data we have previously collected from you to your browser or device). We do have other cookies (under the domains postrelease.com, admin.nativo.com, and cloud.nativo.com) but those cookies would not be related to the appearance of SimpleReach in the list of Off Site Activity on your Facebook account, per your original inquiry.
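
For reference, here is a rough sketch of the kind of cookie lookup involved, which may also explain why I came up empty: `document.cookie` only exposes cookies scoped to the site you are currently browsing, so a cookie set for .simplereach.com would not surface on other sites (you would have to dig through the browser’s devtools storage panel instead):

```typescript
// Read a named cookie visible to the current page, or null if absent.
// Note: a cookie scoped to ".simplereach.com" is only visible via
// document.cookie on pages served from that domain.
function getCookie(name: string): string | null {
  for (const pair of document.cookie.split("; ")) {
    const [key, ...rest] = pair.split("=");
    if (key === name) return decodeURIComponent(rest.join("="));
  }
  return null;
}

console.log(getCookie("__srui")); // the SimpleReach user ID cookie
```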

What did you learn from their inclusion in the Off-Facebook Activity list? There appeared to be a correlation between this domain and a publisher, Insider.com, which also appeared in my Off-Facebook Activity list — as both logged events bear the same date; plus Insider.com is a publisher so would fall into the right customer category for using Nativo’s tool.

Given those correlations I was able to guess Insider.com is a customer of Nativo (I confirmed this when I spoke to Insider.com) — so Facebook’s tool is able to leak relational inferences about the tracking industry by surfacing/mapping business connections that might not otherwise be evident.

Insider.com

What is it? A New York-based business media company which owns brands such as Business Insider and Markets Insider

Why did it appear in your Off-Facebook Activity list? I imagine I clicked on a technology article that appeared in my Facebook News Feed, or elsewhere while I was logged into Facebook

What happened when you asked them about this? After about a week of radio silence an employee in Insider.com’s legal department got in touch to say they could discuss the issue on background.

This person told me the information in the Off-Facebook Activity tool came from the Facebook share button which is embedded on all articles it runs on its media websites. They confirmed that the share button can share data with Facebook regardless of whether the site visitor interacts with the button or not.

In my case I certainly would not have interacted with the Facebook share button. Nonetheless data was passed, simply by virtue of the article page itself loading.

Insider.com said the Facebook share button widget is integrated into its sites using a standard set-up that Facebook intends publishers to use. If the share button is clicked information related to that action would be shared with Facebook and would also be received by Insider.com (though, in this scenario, it said it doesn’t get any personalized information — but rather gets aggregate data).

Facebook can also automatically collect other information when a user visits a webpage which incorporates its social plug-ins.

Asked whether Insider.com knows what information Facebook receives via this passive route the company told me it does not — noting the plug-in runs proprietary Facebook code. 
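
It is worth spelling out why merely loading a page can share data. A social plugin is typically an iframe (or script) served from the platform’s own domain, with the host page’s URL baked into the request, so the browser contacts the platform, cookies attached, the moment the page renders. A simplified, hypothetical sketch of that embed pattern (not Facebook’s actual plugin code):

```typescript
// Simplified sketch of a social-plugin embed. Rendering the iframe
// triggers a request to the platform's domain which carries (a) the
// URL of the page being read and (b) any cookies the browser holds
// for that platform; no click on the button is required.
function embedShareButton(container: HTMLElement): void {
  const iframe = document.createElement("iframe");
  iframe.src =
    "https://social-platform.example/plugins/share?href=" + // placeholder
    encodeURIComponent(window.location.href);
  iframe.width = "80";
  iframe.height = "20";
  container.appendChild(iframe);
}
```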

Asked how it’s collecting consent from users for their data to be shared passively with Facebook, Insider.com said its Privacy Policy stipulates users consent to sharing their information with Facebook and other social media sites. It also said it uses the legal ground known as legitimate interests to provide functionality and derive analytics on articles.

In the active case (of a user clicking to share an article) Insider.com said it interprets the user’s action as consent.

Insider.com confirmed it uses SimpleReach/Nativo analytics tools, meaning site visitor data is also being passed to Nativo when a user lands on an article. It said consent for this data-sharing is included within its consent management platform (it uses a CMP made by Forcepoint) which asks site visitors to specify their cookie choices.

Here site visitors can choose for their data not to be shared for analytics purposes (which Insider.com said would prevent data being passed).

I usually apply all cookie consent opt outs, where available, so I’m a little surprised Nativo/SimpleReach was passed my data from an Insider.com webpage. Either I failed to click the opt out one time or failed to respond to the cookie notice and data was passed by default.

It’s also possible I did opt out but data was passed anyway — research has found that a proportion of cookie notifications ignore user choices and pass data regardless (unintentionally or otherwise).

Follow up questions I sent to Insider.com after we talked:

1) Can you confirm whether Insider has performed a legitimate interests assessment?
2) Does Insider have a site mechanism where users can object to the passive data transfer to Facebook from the share buttons?

Insider.com did not respond to my additional questions.

What did you learn from their inclusion in the Off-Facebook Activity list? That Insider.com is a customer of Nativo/SimpleReach.

Rei.com

What is it? A Washington-based ecommerce website selling outdoor gear

Why did it appear in your Off-Facebook Activity list? I don’t recall ever visiting their site prior to looking into why it appeared in the list so I’m really not sure

What happened when you asked them about this? After saying it would investigate, the company followed up with a statement, rather than detailed responses to my questions, in which it claims it does not hold any personal data associated with — presumably — my TechCrunch email, since it did not ask me what data to check against.

It also appeared to be claiming that it uses Facebook tracking pixels/tags on its website, without explicitly saying as much, writing that: “Facebook may collect information about your interactions with our websites and mobile apps and reflect that information to you through their Off-Facebook Activity tool.”

It claims it has no access to this information — which it says is “pseudonymous to us” — but suggested that if I have a Facebook account Facebook could link any browsing on Rei’s site to my Facebook identity and therefore track my activity.

The company also pointed me to a Facebook Help Center post where the company names some of the activities that might have resulted in Rei’s website sending activity data on me to Facebook (which it could then link to my Facebook ID) — although Facebook’s list is not exhaustive (included are: “viewing content”, “searching for an item”, “adding an item to a shopping cart” and “making a donation” among other activities the company tracks by having its code embedded on third parties’ sites).

Here’s Rei’s statement in full:

Thank you for your patience as we looked into your questions.  We have checked our systems and determined that REI does not maintain any personal data associated with you based on the information you provided.  Note, however, that Facebook may collect information about your interactions with our websites and mobile apps and reflect that information to you through their Off-Facebook Activity tool. The information that Facebook collects in this manner is pseudonymous to us — meaning we cannot identify you using the information and we do not maintain the information in a manner that is linked to your name or other identifying information. However, if you have a Facebook account, Facebook may be able to match this activity to your Facebook account via a unique identifier unavailable to REI. (Funnily enough, while researching this I found TechCrunch in MY list of Off-Facebook activity!)

For a complete list of activities that could have resulted in REI sharing pseudonymous information about you with Facebook, this Facebook Help Center article may be useful.  For a detailed description of the ways in which we may collect and share customer information, the purposes for which we may process your data, and rights available to EEA residents, please refer to our Privacy Policy.  For information about how REI uses cookies, please refer to our Cookie Policy.

As a follow up question I asked Rei to tell me which Facebook tools it uses, pointing out that: “Given that, just because you aren’t (as I understand it) directly using my data yourself that does not mean you are not responsible for my data being transferred to Facebook.”

The company did not respond to that point.

I also previously asked Rei.com to confirm whether it has any data sharing arrangements with the publisher of Rock & Ice magazine (see below). And, if so, to confirm the processes involved in data being shared. Again, I got no response to that.

What did you learn from their inclusion in the Off-Facebook Activity list? Given that Rei.com appeared alongside Rock & Ice on the list — both displaying the same date and just one activity apiece — I surmised they have some kind of data-sharing arrangement. They are also both outdoors brands so there would be obvious commercial ‘synergies’ to underpin such an arrangement.

That said, neither would confirm a business relationship to me. But Facebook’s list heavily implies there is some background data-sharing going on.

Rock & Ice magazine 

What is it? A climbing magazine produced by a California-based publisher, Big Stone Publishing

Why did it appear in your Off-Facebook Activity list? I imagine I clicked on a link to a climbing-related article in my Facebook feed or else visited Rock & Ice’s website while I was logged into Facebook in the same browser session

What happened when you asked them about this? The publisher ignored my initial email query; after I followed up I received a brief response — which read:

The Rock and Ice website is opt in, where you have to agree to terms of use to access the website. I don’t know what private data you are saying Rock and Ice shared, so I can’t speak to that. The site terms are here. As stated in the terms you can opt out.

Following up, I asked about the provision in the Rock & Ice website’s cookie notice which states: “By continuing to use our site, you agree to our cookies” — asking whether it’s passing data without waiting for the user to signal their consent.

(Relevant: In October Europe’s top court issued a ruling that active consent is necessary for tracking cookies, so you can’t drop cookies prior to a user giving consent for you to do so.)

The publisher responded:

You have to opt in and agree to the terms to use the website. You may opt out of cookies, which is covered in the terms. If you do not want the benefits of these advertising cookies, you may be able to opt-out by visiting: http://www.networkadvertising.org/optout_nonppii.asp.

If you don’t want any cookies, you can find extensions such as Ghostery or the browser itself to stop and refuse cookies. By doing so though some websites might not work properly.

I followed up again to point out that I’m not asking about the options to opt in or opt out but, rather, the behavior of the website if the visitor does not provide a consent response yet continues browsing — asking for confirmation Rock & Ice’s site interprets this state as consent and therefore sends data.

The publisher stopped responding at that point.

Earlier I had asked it to confirm whether its website shares visitor data with Rei.com. (As noted above, the two appeared with the same date on the list, which suggests data may be being passed between them.) I did not get a response to that question either.

What did you learn from their inclusion in the Off-Facebook Activity list? That the magazine appears to have a data-sharing arrangement with outdoor retailer Rei.com, given how the pair appeared at the same point in my list. However neither would confirm this when I asked.

MatterHackers

What is it? A California-based retailer focused on 3D printing and digital manufacturing

Why did it appear in your Off-Facebook Activity list? I honestly have no idea. I have never to my knowledge visited their site prior to investigating why they should appear on my Off Site Activity list.

I remain pretty interested to know how/why they managed to track me. I can only surmise I clicked on some technology-related content in my Facebook feed, either intentionally or by accident.

What happened when you asked them about this? They first asked me for confirmation that they were on my list. After I had sent a screenshot, they followed up to say they would investigate. I pushed again after hearing nothing for several weeks. At this point they asked for additional information from the Off-Facebook Activity tool — namely more granular metrics, such as a time and date per event and some label information — to help with tracking down this particular data-exchange.

I had previously provided them with the date (as it appears in the screenshot) but it’s possible to download an additional level of information about data transfers which includes per event time/date-stamps and labels/tags, such as “VIEW_CONTENT”.

However, as noted above, I had previously selected and deleted one item off of my Off-Facebook Activity list, after which Facebook’s platform had immediately erased all entries and associated metrics. There was no obvious way I could recover access to that information.

“Without this information I would speculate that you viewed an article or product on our site — we publish a lot of ‘How To’ content related to 3D printing and other digital manufacturing technologies — this information could have then been captured by Facebook via Adroll for ad retargeting purposes,” a MatterHackers spokesman told me. “Operationally, we have no other data sharing mechanism with Facebook.”

Subsequently, the company confirmed it implements Facebook’s tracking pixel on every page of its website.

Of the pixel, Facebook writes that it enables website owners to track “conversions” (i.e. website actions); create custom audiences which segment site visitors by criteria Facebook can identify and match across its user-base, allowing the site owner to target ads via Facebook’s platform at non-customers with a similar profile/criteria to the existing customers browsing its site; and create dynamic ads, where a template ad gets populated with product content based on tracking data for that particular visitor.
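
For context, once the pixel’s base snippet is installed on a page it exposes a global `fbq` function, which the site then calls to report standard events. The calls look along these lines; the pixel ID and product values below are placeholders, not MatterHackers’ actual instrumentation:

```typescript
// Sketch of standard pixel event calls (IDs and values are placeholders).
declare function fbq(...args: unknown[]): void;

fbq("init", "000000000000000"); // placeholder pixel ID
fbq("track", "PageView");       // fired on every page load

// A conversion-style event describing what the visitor viewed; this
// is the kind of signal used for custom audiences and dynamic ads.
fbq("track", "ViewContent", {
  content_ids: ["sku-1234"], // placeholder product ID
  content_type: "product",
  value: 19.99,
  currency: "USD",
});
```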

Regarding the legal base for the data sharing, MatterHackers had this to say: “MatterHackers is not an EU entity, nor do we conduct business in the EU and so have not undertaken GDPR compliance measures. CCPA [California’s Consumer Privacy Act] will likely apply to our business as of 2021 and we have begun the process of ensuring that our website will be in compliance with those regulations as of January 1st.”

I pointed out that GDPR is extraterritorial in scope — and can apply to non-EU based entities, such as if they’re monitoring individuals in the EU (as in this case).

Also likely relevant: A ruling last year by Europe’s top court found sites that embed third party plug-ins such as Facebook’s like button are jointly responsible for the initial data processing — and must either obtain informed consent from site visitors prior to data being transferred to Facebook, or be able to demonstrate a legitimate interest legal basis for processing this data.

Nonetheless it’s still not clear what legal base the company is relying on for implementing the tracking pixel and passing data on EU Facebook users.

When asked about this, MatterHackers COO Kevin Pope told me:

While we appreciate the sentiment of GDPR, in this case the EU lacks the legal standing to pursue an enforcement action. I’m sure you can appreciate the potential negative consequences if any arbitrary country (or jurisdiction) were able to enforce legal penalties against any website simply for having visitors from that country. Techcrunch would have been fined to oblivion many times over by China or even Thailand (for covering the King in a negative light). In this way, the attempted overreach of the GDPR’s language sets a dangerous precedent.
To provide a little more detail – MatterHackers, at the time of your visit, wouldn’t have known that you were from the EU until we cross-referenced your session with  Facebook, who does know. At that point you would have been filtered from any advertising by us. MatterHackers makes money when our (U.S.) customers buy 3D printers or materials and then succeed at using them (hence the how-to articles), we don’t make any money selling advertising or data.
Given that Facebook does legally exist in the EU and does have direct revenues from EU advertisers, it’s entirely appropriate that Facebook should comply with EU regulations. As a global solution, I believe more privacy settings options should be available to its users. However, given Facebook’s business model, I wouldn’t expect anything other than continued deflection (note the careful wording on their tool) and avoidance from them on this issue.

What did you learn from their inclusion in the Off-Facebook Activity List? I found out that an ecommerce company I had never heard of had been tracking me.

Wallapop

What is it? A Barcelona-based peer-to-peer marketplace app that lets people list secondhand stuff for sale and/or to search for things to buy in their proximity. Users can meet in person to carry out a transaction paying in cash or there can be an option to pay via the platform and have an item posted

Why did it appear in your Off-Facebook Activity list? This was the only digital activity that appeared in the list that was something I could explain — figuring out I must have used a Facebook sign-in option when using the Wallapop app to buy/sell. I wouldn’t normally use Facebook sign-in but for trust-based marketplaces there may be user benefits to leveraging network effects.

What happened when you asked them about this? After my query was booted around a bit, a PR company that works with Wallapop responded asking to talk through what information I was trying to ascertain.

After we chatted they sent this response — attributed to sources from Wallapop:

Same as it happens with other apps, wallapop can appear on our users’ Facebook Off Site Activity page if they have interacted in any way with the platform while they were logged in their Facebook accounts. Some interaction examples include logging in via Facebook, visiting our website or having both apps opened and logged.

As other apps do, wallapop only shares activity events with Facebook to optimize users’ ad experience. This includes if a user is registered in wallapop, if they have uploaded an item or if they have started a conversation. Under no circumstance wallapop shares with Facebook our users’ personal data (including sex, name, email address or telephone number).

At wallapop, we are thoroughly committed with the security of our community and we do a safe treatment of the data they choose to share with us, in compliance with EU’s General Data Protection Regulation. Under no circumstance these data are shared with third parties without explicit authorization.

I followed up to ask for further details about these “activity events” — asking whether, for instance, Wallapop shares messaging content with Facebook as well as letting the social network know which items a user is chatting about.

“Under no circumstance the content of our users’ messages is shared with Facebook,” the spokesperson told me. “What is shared is limited to the fact that a conversation has been initiated with another user in relation to a specific item, this is, activity events. Under no circumstance we would share our users’ personal information either.”

Of course the point is Facebook is able to link all app activity with the user ID it already has — so every piece of activity data being shared is personal data.
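
To make “activity events” concrete: Facebook’s SDKs let an app report named in-app actions, which the platform can then tie to the logged-in user’s identity. A sketch of the pattern using the JavaScript SDK’s app-events interface (the event names and parameters here are illustrative, not Wallapop’s actual code):

```typescript
// Sketch of app-event logging via Facebook's SDK. Event names and
// parameters are illustrative; they are not Wallapop's actual code.
declare const FB: {
  AppEvents: {
    logEvent(
      name: string,
      valueToSum?: number | null,
      params?: Record<string, string>
    ): void;
  };
};

// Each call ties a named in-app action to the user's identity.
FB.AppEvents.logEvent("listed_item", null, { category: "furniture" });
FB.AppEvents.logEvent("started_conversation");
```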

I also asked what legal base Wallapop relies on to share activity data with Facebook. They said the legal basis is “explicit consent given by users” at the point of signing up to use the app.

“Wallapop collects explicit consent from our users and at any time they can exercise their rights to their data, which include the modification of consent given in the first place,” they said.

“Users give their explicit consent by clicking in the corresponding box when they register in the app, where they also get the chance to opt out and not do it. If later on they want to change the consent they gave in first instance, they also have that option through the app. All the information is clearly available on our Privacy Policy, which is GDPR compliant.”

“At wallapop we take our community’s privacy and security very seriously and we follow recommendations from the Spanish Data Protection Agency,” it added.

What did you learn from their inclusion in the Off-Facebook Activity list? Not much more than I would have already guessed — i.e. that using a Facebook sign-in option in a third party app grants the social media giant a high degree of visibility into your activity within another service.

In this case the Wallapop app registered the most activity events of all six of the listed apps, displaying 13 vs only one apiece for the others — so it gave a bit of a suggestive glimpse into the volume of third party app data that can be passed if you opt to open a Facebook login wormhole into a separate service.

Forensic Architecture redeploys surveillance state tech to combat state-sponsored violence

The specter of constant surveillance hangs over all of us in ways we don’t even fully understand, but it is also possible to turn the tools of the watchers against them. Forensic Architecture is exhibiting several long-term projects at the Museum of Art and Design in Miami that use the omnipresence of technology as a way to expose crimes and violence by oppressive states.

Over seven years Eyal Weizman and his team have performed dozens of investigations into instances of state-sponsored violence, from drone strikes to police brutality. Often these events are minimized at all levels by the state actors involved, denied or no-commented until the media cycle moves on. But sometimes technology provides ways to prove a crime was committed and occasionally even cause the perpetrator to admit it — hoisted by their own electronic petard.

Sometimes the evidence comes from actual state-deployed kit, like body cameras or public records, but the team also uses private information co-opted by state authorities to track individuals, like digital metadata from messages and location services.

For instance, when Chicago police shot and killed Harith Augustus in 2018, the department released some footage of the incident, saying that it “speaks for itself.” But Forensic Architecture’s close inspection of the body cam footage, and cross-referencing with other materials, makes it obvious that the police violated numerous rules (including in the operation of the body cams) in their interaction with him, escalating the situation and ultimately killing a man who by all indications — except the official account — was attempting to comply. It also helped bring to light additional footage which was either mistakenly or deliberately left out of a FOIA release.

In another situation, a trio of Turkish migrants seeking asylum in Greece were shown, by analysis of their WhatsApp messages, images, and location and time stamps, to have entered Greece and been detained by Greek authorities before being “pushed back” by unidentified masked escorts, having been afforded no legal recourse to asylum processes or the like. This is one example of several recently that appear to be private actors working in concert with the state to deprive people of their rights.

Situated testimony for survivors

I spoke with Weizman before the opening of this exhibition in Miami, where some of the latest investigations are being shown off. (Shortly after our interview he would be denied entry to the U.S. to attend the opening, with a border agent explaining that this denial was algorithmically determined; we’ll come back to this.)

The original motive for creating Forensic Architecture, he explained, was to elicit testimony from those who had experienced state violence.

“We started using this technique when in 2013 we met a drone survivor, a German woman who had survived a drone strike in Pakistan that killed several relatives of hers,” Weizman explained. “She has wanted to deliver testimony in a trial regarding the drone strike, but like many survivors her memory was affected by the trauma she has experienced. The memory of the event was scattered, it had lacunae and repetitions, as you often have with trauma. And her condition is like many who have to speak out in human rights work: The closer you get to the core of the testimony, the description of the event itself, the more it escapes you.”

The approach they took to help this woman, and later many others, jog her own memory was something called “situated testimony.” Essentially it amounts to exposing the person to media from the experience, allowing them to “situate” themselves in that moment. This is not without its own risks.

“Of course you must have the appropriate trauma professionals present,” Weizman said. “We only bring people who are willing to participate and perform the experience of being again at the scene as it happened. Sometimes details that would not occur to someone to be important come out.”

A digital reconstruction of a drone strike’s explosion was recreated physically for another exhibition.

But it’s surprising how effective it can be, he explained. One case exposed American involvement hitherto undisclosed.

“We were researching a Cameroon special forces detention center where torture and death in custody occurred, for Amnesty International,” he explained. “We asked detainees to describe to us simply what was outside the window. How many trees, or what else they could see.” Such testimony could help place their exact location and orientation in the building and lead to more evidence, such as cameras across the street facing that room.

“And sitting in a room based on a satellite image of the area, one told us: ‘yes, there were two trees, and one was over by the fence where the American soldiers were jogging.’ We said, ‘wait, what, can you repeat that?’ They had been interviewed many times and never mentioned American soldiers,” Weizman recalled. “When we heard there were American personnel, we found Facebook posts from service personnel who were there, and were able to force the transfer of prisoners there to another prison.”

Weizman noted that the organization only goes where help is requested, and does not pursue what might be called private injustices, as opposed to public.

“We require an invitation — to be invited into this by communities that face state violence. We’re not a forensic agency, we’re a counter-forensic agency. We only investigate crimes by state authorities.”

Using virtual reality: “Unparalleled. It’s almost tactile.”

In the latest of these investigations, being exhibited for the first time at MOAD, the team brought virtual reality into their situated testimony work. While VR has proven somewhat less compelling than most would like on the entertainment front, it turns out to work quite well in this context.

“We worked with an Israeli whistleblower soldier regarding testimony of violence he committed against Palestinians,” Weizman said. “It has been denied by the Israeli prime minister and others, but we have been able to find Palestinian witnesses to that case, and put them in VR so we could cross reference them. We had victim and perpetrator testifying to the same crime in the same space, and their testimonies can be overlaid on each other.”

Dean Issacharoff – the soldier accused by Israel of giving false testimony – describes the moment he illegally beat a Palestinian civilian. (Caption and image courtesy of Forensic Architecture)

One thing about VR is that the sense of space is very real; if the environment is built accurately, things like sight-lines and positional audio can be extremely true to life. If someone says they saw the event occur here, but the state says it was here, and a camera this far away saw it at this angle… these incomplete accounts can be added together to form something more factual, and assembled into a virtual environment.

“That project is the first use of VR interviews we have done — it’s still in a very experimental stage. But it didn’t involve fatalities, so the level of trauma was a bit more controlled,” Weizman explained. “We have learned that the level and precision we can arrive at in reconstructing an incident is unparalleled. It’s almost tactile; you can walk through the space, you can see every object: guns, cars, civilians. And you can populate it until the witness is satisfied that this is what they experienced. I think this is a first, definitely in forensic terms, as far as uses of VR.”

A photogrammetry-based reconstruction of the area of Hebron where the incident took place.

In video of the situated testimony, you can see witnesses describing locations more exactly than they likely or even possibly could have without the virtual reconstruction. “I stood with the men at exactly that point,” says one, gesturing towards an object he recognized, then pointing upwards: “There were soldiers on the roof of this building, where the writing is.”

Of course it is not the digital recreation itself that forces the hand of those involved, but the incontrovertible facts it exposes. No one would ever have known that the U.S. had a presence at that detainment facility, and the country had no reason to say it did. The testimony wouldn’t even have been enough, except that it put the investigators onto a line of inquiry that produced data. And in the case of the Israeli whistleblower, the situated testimony defies official accounts that the organization he represented had lied about the incident.

Avoiding “product placement” and tech incursion

Sophie Landres, MOAD’s Curator of Public Programs and Education, was eager to add that the museum is not hosting this exhibit as a way to highlight how wonderful technology is. It’s important to put the technology and its uses in context rather than try to dazzle people with its capabilities. You may find yourself playing into someone else’s agenda that way.

“For museum audiences, this might be one of their first encounters with VR deployed in this way. The companies that manufacture these technologies know that people will have their first experiences with this tech in a cultural or entertainment context, and they’re looking for us to put a friendly face on these technologies that have been created to enable war and surveillance capitalism,” she told me. “But we’re not interested in having our museum be a showcase for product placement without having a serious conversation about it. It’s a place where artists embrace new technologies, but also where they can turn it towards existing power structures.”

Boots on backs mean this is not an advertisement for VR headsets or 3D modeling tools.

She cited a tongue-in-cheek definition of “mixed reality” referring to both digital crossover into the real world and the deliberate obfuscation of the truth at a greater scale.

“On the one hand you have mixing the digital world and the real, and on the other you have the mixed reality of the media environment, where there’s no agreement on reality and all these misinformation campaigns. What’s important about Forensic Architecture is they’re not just presenting evidence of the facts, but also the process used to arrive at these truth claims, and that’s extremely important.”

In openly presenting the means as well as the ends, Weizman and his team avoid succumbing to what he calls the “dark epistemology” of the present post-truth era.

“The arbitrary logic of the border”

As mentioned earlier, Weizman was denied entry to the U.S. for reasons unknown, but possibly related to the network of politically active people with whom he has associated for the sake of his work. Disturbingly, his wife and children were also stopped while entering the country a day before him and separated at the airport for questioning.

In a statement issued publicly afterwards, Weizman dissected the event.

In my interview the officer informed me that my authorization to travel had been revoked because the “algorithm” had identified a security threat. He said he did not know what had triggered the algorithm but suggested that it could be something I was involved in, people I am or was in contact with, places to which I had traveled… I was asked to supply the Embassy with additional information, including fifteen years of travel history, in particular where I had gone and who had paid for it. The officer said that Homeland Security’s investigators could assess my case more promptly if I supplied the names of anyone in my network whom I believed might have triggered the algorithm. I declined to provide this information.

This much we know: we are being electronically monitored for a set of connections – the network of associations, people, places, calls, and transactions – that make up our lives. Such network analysis poses many problems, some of which are well known. Working in human rights means being in contact with vulnerable communities, activists and experts, and being entrusted with sensitive information. These networks are the lifeline of any investigative work. I am alarmed that relations among our colleagues, stakeholders, and staff are being targeted by the US government as security threats.

This incident exemplifies – albeit in a far less intense manner and at a much less drastic scale – critical aspects of the “arbitrary logic of the border” that our exhibition seeks to expose. The racialized violations of the rights of migrants at the US southern border are of course much more serious and brutal than the procedural difficulties a UK national may experience, and these migrants have very limited avenues for accountability when contesting the violence of the US border.

The works being exhibited, he said, “seek to demonstrate that we can invert the forensic gaze and turn it against the actors—police, militaries, secret services, border agencies—that usually seek to monopolize information. But in employing the counter-forensic gaze one is also exposed to higher level monitoring by the very state agencies investigated.”

Forensic Architecture’s investigations are ongoing; you can keep up with them at the organization’s website. And if you’re in Miami, drop by MOAD to see some of the work firsthand.

Google’s new T&Cs include a Brexit ‘easter egg’ for UK users

Google has buried a major change in legal jurisdiction for its UK users in a wider update to its terms and conditions announced today, which it says is intended to make its conditions of use clearer for all users.

It says the update to its T&Cs is the first major revision since 2012, and that it wanted to ensure the terms reflect its current products and applicable laws.

Google says it undertook a major review of the terms, similar to the revision of its privacy policy in 2018, when the EU’s General Data Protection Regulation started being applied. But while it claims the new T&Cs are easier for users to understand — rewritten using simpler language and a clearer structure — there are no other changes involved, such as to how it handles people’s data.

“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.

Users of Google products are being asked to review and accept the new terms before March 31 when they are due to take effect.

Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.

However Google disputes there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data; no change to their privacy settings; and no change to the way it treats their information as a result of the move.

We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).

“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”

Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.

“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”

“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.

Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift, Burns suggested that will largely depend on Google.

So — in other words — Brexit means, er, trust Google to look after your data.

“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.

“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”

Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.

The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.

So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.

It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)

Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weasely way of saying it will do exactly what the law requires.

Google confirmed that GDPR will continue to apply for UK users during the transition period between the old and new terms. After that it said UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.

Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future… 😬

We also asked why it hasn’t chosen to make a UK subsidiary the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which raises the question: when did the UK suddenly become the 51st American state?

Returning to the wider T&Cs revision, Google said it’s making the changes in response to litigation in the European Union targeted at its terms.

This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.

In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.

Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.

In another change being bundled with the new T&Cs Google has added a description about how its business works to the About Google page — where it explains its business model and how it makes money.

Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.

Though it could be using all that personal stuff to help it build new products it can serve ads alongside.

Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine settings an opt-out does exist.

The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.

Google gobbling Fitbit is a major privacy risk, warns EU data protection advisor

The European Data Protection Board (EDPB) has intervened to raise concerns about Google’s plan to scoop up the health and activity data of millions of Fitbit users — at a time when the company is under intense scrutiny over how extensively it tracks people online and for antitrust concerns.

Google confirmed its plan to acquire Fitbit last November, saying it would pay $7.35 per share for the wearable maker in an all-cash deal that valued Fitbit, and therefore the activity, health, sleep and location data it can hold on its more than 28M active users, at ~$2.1 billion.

Regulators are in the process of considering whether to allow the tech giant to gobble up all this data.

Google, meanwhile, is in the process of dialling up its designs on the health space.

In a statement issued after a plenary meeting this week the body that advises the European Commission on the application of EU data protection law highlights the privacy implications of the planned merger, writing: “There are concerns that the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.”

Just this month the Irish Data Protection Commission (DPC) opened a formal investigation into Google’s processing of people’s location data — finally acting on GDPR complaints filed by consumer rights groups as early as November 2018, which argue the tech giant uses deceptive tactics to manipulate users in order to keep tracking them for ad-targeting purposes.

We’ve reached out to the Irish DPC — which is the lead privacy regulator for Google in the EU — to ask if it shares the EDPB’s concerns.

The latter’s statement goes on to reiterate the importance of EU regulators assessing what it describes as the “longer-term implications for the protection of economic, data protection and consumer rights whenever a significant merger is proposed”.

It also says it intends to remain “vigilant in this and similar cases in the future”.

The EDPB includes a reminder that Google and Fitbit have obligations under Europe’s General Data Protection Regulation to conduct a “full assessment of the data protection requirements and privacy implications of the merger” — and do so in a transparent way, under the regulation’s principle of accountability.

“The EDPB urges the parties to mitigate the possible risks of the merger to the rights to privacy and data protection before notifying the merger to the European Commission,” it also writes.

We reached out to Google for comment but at the time of writing it had not responded, nor answered a question asking what commitments it will be making to Fitbit users regarding the privacy of their data.

Fitbit has previously claimed that users’ “health and wellness data will not be used for Google ads”.

However big tech has a history of subsequently steamrollering founder claims that ‘nothing will change’. (See, for example, Facebook’s WhatsApp U-turn on data-linking.)

“The EDPB will consider the implications that this merger may have for the protection of personal data in the European Economic Area and stands ready to contribute its advice on the proposed merger to the Commission if so requested,” the advisory body adds.

We’ve also reached out to the European Commission’s competition unit for a response to the EDPB’s statement.

UCLA backtracks on plan for campus facial recognition tech

After expressing interest in processing campus security camera footage with facial recognition software, UCLA is backing down.

In a letter to Evan Greer of Fight for the Future, a digital privacy advocacy group, UCLA Administrative Vice Chancellor Michael Beck announced the institution would abandon its plans in the face of a backlash from its student body.

“We have determined that the potential benefits are limited and are vastly outweighed by the concerns of the campus community,” Beck wrote.

The decision, deemed a “major victory” for privacy advocates, came as students partnered with Fight for the Future to plan a national day of protest on March 2. UCLA’s interest in facial recognition was a controversial departure from the stance of many elite universities, including MIT, Brown, and New York University, which have confirmed they have no intention of implementing the surveillance technology.

UCLA student newspaper the Daily Bruin reported on the school’s interest in facial recognition tech last month, as the university proposed the addition of facial recognition software in a revision of its security camera policy. According to the Daily Bruin, the technology would have been used to screen individuals from restricted campus areas and to identify anyone flagged with a “stay-away order” prohibiting them from being on university grounds. The proposal faced criticism in a January town hall meeting on campus with 200 attendees and momentum against the surveillance technology built from there.

“We hope other universities see that they will not get away with these policies,” Matthew William Richard, UCLA student and vice chair of UCLA’s Campus Safety Alliance, said of the decision. “… Together we can demilitarize and democratize our campuses.”

Lack of big tech GDPR decisions looms large in EU watchdog’s annual report

The lead European Union privacy regulator for most of big tech has put out its annual report which shows another major bump in complaints filed under the bloc’s updated data protection framework, underlining the ongoing appetite EU citizens have for applying their rights.

But what the report doesn’t show is any firm enforcement of EU data protection rules vis-a-vis big tech.

The report leans heavily on stats to illustrate the volume of work piling up on desks in Dublin. But it’s light on decisions on highly anticipated cross-border cases involving tech giants including Apple, Facebook, Google, LinkedIn and Twitter.

The General Data Protection Regulation (GDPR) began being applied across the EU in May 2018 — so is fast approaching its second birthday. Yet its file of enforcements where tech giants are concerned remains very light — even for companies with a global reputation for ripping away people’s privacy.

This despite Ireland having a large number of open cross-border investigations into the data practices of platform and adtech giants — some of which originated from complaints filed right at the moment GDPR came into force.

In the report the Irish Data Protection Commission (DPC) notes it opened a further six statutory inquiries in relation to “multinational technology companies’ compliance with the GDPR” — bringing the total number of major probes to 21. So its ‘big case’ file continues to stack up. (It’s added at least two more since then, with a probe of Tinder and another into Google’s location tracking opened just this month.)

The report is a lot less keen to trumpet the fact that its tally of decisions on cross-border cases to date remains a big fat zero.

Though, just last week, the DPC made a point of publicly raising “concerns” about Facebook’s approach to assessing the data protection impacts of a forthcoming product in light of GDPR requirements to do so — an intervention that resulted in a delay to the regional launch of Facebook’s Dating product.

This discrepancy (cross-border cases: 21 – Irish DPC decisions: 0), plus rising anger from civil rights groups, privacy experts, consumer protection organizations and ordinary EU citizens over the paucity of flagship enforcement around key privacy complaints is clearly piling pressure on the regulator. (Examples of big tech GDPR enforcement do exist elsewhere; France’s CNIL, which fined Google €50M last year, is one.)

In its defence the DPC does have a horrifying case load, as illustrated by other stats it’s keen to spotlight — such as receiving a total of 7,215 complaints in 2019, a 75% increase on the total number (4,113) received in 2018. A full 6,904 of those were dealt with under the GDPR (while 311 complaints were filed under the Data Protection Acts 1988 and 2003).

There were also 6,069 data security breaches notified to it, per the report — representing a 71% increase on the total number (3,542) recorded in 2018.

A full 457 cross-border processing complaints were also received in Dublin via the GDPR’s One-Stop-Shop mechanism. (This is the device the Commission came up with for the ‘lead regulator’ approach that’s baked into GDPR and which has landed Ireland in the regulatory hot seat. tl;dr: other data protection agencies are passing Dublin A LOT of paperwork.)

The DPC necessarily has to do back and forth on cross-border cases, as it liaises with other interested regulators. All of which, you can imagine, creates a rich opportunity for lawyered up tech giants to inject extra friction into the oversight process — by asking to review and query everything. [Insert the sound of a can being hoofed down the road]

Meanwhile the agency that’s supposed to regulate most of big tech (and plenty else) — which writes in the annual report that it increased its full time staff from 110 to 140 last year — did not get all the funding it asked for from the Irish government.

So it also has the hard cap of its own budget to reckon with (just €15.3M in 2019) vs — for example — the $46.1BN in revenue Google’s parent Alphabet reported for the final quarter of 2019 alone. So, er, do the math: that’s a gap of more than three orders of magnitude.

Nonetheless the pressure is firmly now on Ireland for major GDPR enforcements to flow.

One year of major enforcement inaction could be filed under ‘bedding in’; but two years in without any major decisions would not be a good look. (It has previously said the first decisions will come early this year — so seems to be hoping to have something to show for GDPR’s 2nd birthday.)

Some of the high profile complaints crying out for regulatory action include behaviorally targeted ads served via real-time bidding programmatic advertising (which the UK data watchdog conceded more than half a year ago is rampantly unlawful); cookie consent banners (which remain a Swiss cheese of non-compliance); and adtech platforms cynically forcing consent from users by requiring they agree to being microtargeted with ads to access the (‘free’) service. (Thing is, GDPR stipulates that consent as a legal basis must be freely given and can’t be bundled with other stuff, so…)

Full disclosure: TechCrunch’s parent company, Verizon Media (née Oath), is also under ongoing investigation by the DPC — which is looking at whether it meets GDPR’s transparency requirements under Articles 12-14 of the regulation.

Seeking to put a positive spin on 2019’s total lack of a big tech privacy reckoning, commissioner Helen Dixon writes in the report: “2020 is going to be an important year. We await the judgment of the CJEU in the SCCs data transfer case; the first draft decisions on big tech investigations will be brought by the DPC through the consultation process with other EU data protection authorities, and academics and the media will continue the outstanding work they are doing in shining a spotlight on poor personal data practices.”

In further remarks to the media Dixon said: “At the Data Protection Commission, we have been busy during 2019 issuing guidance to organisations, resolving individuals’ complaints, progressing larger-scale investigations, reviewing data breaches, exercising our corrective powers, cooperating with our EU and global counterparts and engaging in litigation to ensure a definitive approach to the application of the law in certain areas.

“Much more remains to be done in terms of both guiding on proportionate and correct application of this principles-based law and enforcing the law as appropriate. But a good start is half the battle and the DPC is pleased at the foundations that have been laid in 2019. We are already expanding our team of 140 to meet the demands of 2020 and beyond.”

One notable date this year also falls when GDPR turns two: a Commission review of how the regulation is functioning is due in May.

That’s one deadline that may help to concentrate minds on issuing decisions.

Per the DPC report, the largest category of complaints it received last year fell under ‘access request’ issues — whereby data controllers are failing to give up (all) people’s data when asked — which amounted to 29% of the total; followed by disclosure (19%); fair processing (16%); e-marketing complaints (8%); and right to erasure (5%).

On the security front, the vast bulk of notifications received by the DPC related to unauthorised disclosure of data (aka breaches) — with a total across the private and public sector of 5,188 vs just 108 for hacking (though the second largest category was actually lost or stolen paper, with 345).

There were also 161 notifications of phishing; 131 notifications of unauthorized access; 24 notifications of malware; and 17 of ransomware.

Europe sets out plan to boost data reuse and regulate “high risk” AIs

European Union lawmakers have set out a first bundle of proposals for a new digital strategy for the bloc, one that’s intended to drive digitalization across all industries and sectors — and enable what Commission president Ursula von der Leyen has described as ‘A Europe fit for the Digital Age’.

It could also be summed up as a ‘scramble for AI’, with the Commission keen to rub out barriers to the pooling of massive European data sets in order to power a new generation of data-driven services as a strategy to boost regional competitiveness vs China and the U.S.

Pushing for the EU to achieve technological sovereignty is a key plank of von der Leyen’s digital policy plan for the 27-Member State bloc.

Presenting the latest on her digital strategy to press in Brussels today, she said: “We want the digital transformation to power our economy and we want to find European solutions in the digital age.”

The top-line proposals are:

AI

  • Rules for “high risk” AI systems, such as in health, policing, or transport, requiring that such systems are “transparent, traceable and guarantee human oversight”
  • A requirement that unbiased data is used to train high-risk systems so that they “perform properly, and to ensure respect of fundamental rights, in particular non-discrimination”
  • Consumer protection rules so authorities can “test and certify” data used by algorithms in a similar way to existing rules that allow for checks to be made on products such as cosmetics, cars or toys
  • A “broad debate” on the circumstances where remote use of biometric identification could be justified
  • A voluntary labelling scheme for lower risk AI applications
  • The creation of an EU governance structure to ensure a framework for compliance with the rules and avoid fragmentation across the bloc

Data

  • A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing, which the Commission says will establish “practical, fair and clear rules on data access and use, which comply with European values and rights such as personal data protection, consumer protection and competition rules” 
  • A push to make public sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation
  • Support for cloud infrastructure platforms and systems to support the data reuse goals. The Commission says it will contribute to investments in European High Impact projects on European data spaces and trustworthy and energy efficient cloud infrastructures
  • Sector-specific actions to build European data spaces that focus on specific areas such as industrial manufacturing, the green deal, mobility or health

The full data strategy proposal can be found here.

While the Commission’s white paper on AI “excellence and trust” is here.

Next steps will see the Commission taking feedback on the plan — as it kicks off public consultation on both proposals.

A final draft is slated for the end of the year, after which the various EU institutions will have their chance to chip into (or chip away at) the plan. So how much policy survives for the long haul remains to be seen.

Tech for good

At a press conference following von der Leyen’s statement Margrethe Vestager, the Commission EVP who heads up digital policy, and Thierry Breton, commissioner for the internal market, went into some of the detail around the Commission’s grand plan for “shaping Europe’s digital future”.

The digital policy package is meant to define how we shape Europe’s digital future “in a way that serves us all”, said Vestager.

The strategy aims to unlock access to “more data and good quality data” to fuel innovation and underpin better public services, she added.

The Commission’s digital EVP Margrethe Vestager discussing the AI whitepaper

Collectively, the package is about embracing the possibilities AI creates while managing the risks, she also said, adding that: “The point obviously is to create trust, rather than fear.”

She noted that the two policy pieces being unveiled by the Commission today, on AI and data, form part of a more wide-ranging digital and industrial strategy whole with additional proposals still to be set out.

“The picture that will come when we have assembled the puzzle should illustrate three objectives,” she said. “First that technology should work for people and not the other way round; it is first and foremost about purpose. The development, the deployment, the uptake of technology must work in the same direction to make a real positive difference in our daily lives.

“Second that we want a fair and competitive economy — a full Single Market where companies of all sizes can compete on equal terms, where the road from garage to scale up is as short as possible. But it also means an economy where the market power held by a few incumbents cannot be used to block competition. It also means an economy where consumers can take it for granted that their rights are being respected and profits are being taxed where they are made.”

Thirdly, she said the Commission plan would support “an open, democratic and sustainable society”.

“This means a society where citizens can control the data that they provide, where digital platforms are accountable for the contents that they feature… This is a fundamental thing — that while we use new digital tools, use AI as a tool, that we build a society based on our fundamental rights,” she added, trailing a forthcoming democracy action plan.

Digital technologies must also actively enable the green transition, said Vestager — pointing to the Commission’s pledge to achieve carbon neutrality by 2050. Digital, satellite, GPS and sensor data would be crucial to this goal, she suggested.

“More than ever a green transition and digital transition goes hand in hand.”

On the data package Breton said the Commission will launch a European and industrial cloud platform alliance to drive interest in building the next gen platforms he said would be needed to enable massive big data sharing across the EU — tapping into 5G and edge computing.

“We want to mobilize up to €2BN in order to create and mobilize this alliance,” he said. “In order to run this data you need to have specific platforms… Most of this data will be created locally and processed locally — thanks to 5G critical network deployments but also locally to edge devices. By 2030 we expect on the planet to have 500BN connected devices… and of course all the devices will exchange information extremely quickly. And here of course we need to have specific mini cloud or edge devices to store this data and to interact locally with the AI applications embedded on top of this.

“And believe me the requirement for these platforms are not at all the requirements that you see on the personal b2c platform… And then we need of course security and cyber security everywhere. You need of course latencies. You need to react in terms of millisecond — not tenths of a second. And that’s a totally different infrastructure.”

“We have everything in Europe to win this battle,” he added. “Because no one has expertise of this battle and the foundation — industrial base — than us. And that’s why we say that maybe the winner of tomorrow will not be the winner of today or yesterday.”

Trustworthy artificial intelligence

On AI Vestager said the major point of the plan is “to build trust” — by using a dual push to create what she called “an ecosystem of excellence” and another focused on trust.

The first piece includes a push by the Commission to stimulate funding, including in R&D and support for research such as by bolstering skills. “We need a lot of people to be able to work with AI,” she noted, saying it would be essential for small and medium sized businesses to be “invited in”.

On trust the plan aims to use risk to determine how much regulation is involved, with the most stringent rules being placed on what it dubs “high risk” AI systems. “That could be when AI tackles fundamental values, it could be life or death situation, any situation that could cause material or immaterial harm or expose us to discrimination,” said Vestager.

To scope this the Commission approach will focus on sectors where such risks might apply — such as energy and recruitment.

If an AI product or service is identified as posing a risk then the proposal is for an enforcement mechanism to test that the product is safe before it is put into use. These proposed “conformity assessments” for high risk AI systems include a number of obligations Vestager said are based on suggestions by the EU’s High Level Expert Group on AI — which put out a slate of AI policy recommendations last year.

The four requirements attached to this bit of the proposals are: 1) that AI systems should be trained using data that “respects European values and rules” and that a record of such data is kept; 2) that an AI system should provide “clear information to users about its purpose, its capabilities but also its limits” and that it be clear to users when they are interacting with an AI rather than a human; 3) AI systems must be “technically robust and accurate in order to be trustworthy”; and 4) they should always ensure “an appropriate level of human involvement and oversight”.
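The white paper doesn’t prescribe any format for that record of training data, so what follows is purely a hypothetical sketch (none of these field names come from the Commission’s text) of the kind of machine-readable provenance record requirement 1 implies:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical illustration only: the Commission mandates no schema.
@dataclass
class TrainingDataRecord:
    dataset_name: str
    source: str                   # where the data was obtained
    collected_on: date            # when it was gathered
    lawful_basis: str             # e.g. "consent" under GDPR Art. 6
    contains_personal_data: bool
    known_gaps: list = field(default_factory=list)  # e.g. under-represented groups

record = TrainingDataRecord(
    dataset_name="triage-notes-2019",
    source="partner hospitals, pseudonymized exports",
    collected_on=date(2019, 6, 1),
    lawful_basis="consent",
    contains_personal_data=True,
    known_gaps=["patients under 18 under-represented"],
)
```

Whatever shape such records take, auditors would need them to be consistent across vendors for the proposed conformity assessments to be more than paperwork.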

Obviously there are big questions about how such broad-brush requirements will be measured and stood up (as well as actively enforced) in practice.

If an AI product or service is not identified as high risk Vestager noted there would still be regulatory requirements in play — such as the need for developers to comply with existing EU data protection rules.

In her press statement, Commission president von der Leyen highlighted a number of examples of how AI might power a range of benefits for society — from “better and earlier” diagnosis of diseases like cancer to helping with her parallel push for the bloc to be carbon neutral by 2050, such as by enabling precision farming and smart heating — emphasizing that such applications rely on access to big data.

“Artificial intelligence is about big data,” she said. “Data, data and again data. And we all know that the more data we have the smarter our algorithms. This is a very simple equation. Therefore it is so important to have access to data that are out there. This is why we want to give our businesses but also the researchers and the public services better access to data.”

“The majority of data we collect today are never ever used even once. And this is not at all sustainable,” she added. “In these data we collect that are out there lies an enormous amount of precious ideas, potential innovation, untapped potential we have to unleash — and therefore we follow the principle that in Europe we have to offer data spaces where you can not only store your data but also share with others. And therefore we want to create European data spaces where businesses, governments and researchers can not only store their data but also have access to other data they need for their innovation.”

She too stressed the need for AI regulation, including to guard against the risk of biased algorithms — saying “we want citizens to trust the new technology”. “We want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible, human centric approach to artificial intelligence,” she added.

She said the planned restrictions on high risk AI would apply in fields such as healthcare, recruitment, transportation, policing and law enforcement — and potentially others.

“We will be particularly careful with sectors where essential human interests and rights are at stake,” she said. “Artificial intelligence must serve people. And therefore artificial intelligence must always comply with people’s rights. This is why a person must always be in control of critical decisions and so called ‘high risk AI’ — this is AI that potentially interferes with people’s rights — have to be tested and certified before they reach our single market.”

“Today’s message is that artificial intelligence is a huge opportunity in Europe, for Europe. We do have a lot but we have to unleash this potential that is out there. We want this innovation in Europe,” von der Leyen added. “We want to encourage our businesses, our researchers, the innovators, the entrepreneurs, to develop artificial intelligence and we want to encourage our citizens to feel confident to use it in Europe.”

Towards a rights-respecting common data space

The European Commission has been working on building what it dubs a “data economy” for several years at this point, plugging into its existing Digital Single Market strategy for boosting regional competitiveness.

Its aim is to remove barriers to the sharing of non-personal data within the single market. The Commission has previously worked on regulation to ban most data localization, as well as setting out measures to encourage the reuse of public sector data and open up access to scientific data.

Healthcare data sharing has also been in its sights, with policies to foster interoperability around electronic health records, and it’s been pushing for more private sector data sharing — both b2b and business-to-government.

“Every organisation should be able to store and process data anywhere in the European Union,” it wrote in 2018. It has also called the plan a “common European data space“. Aka “a seamless digital area with the scale that will enable the development of new products and services based on data”.

The focus on freeing up the flow of non-personal data is intended to complement the bloc’s long-standing rules on protecting personal data. The General Data Protection Regulation (GDPR), which came into force in 2018, has reinforced EU citizens’ rights around the processing of their personal information — updating and bolstering prior data protection rules.

The Commission views GDPR as a major success story by merit of how it’s exported conversations about EU digital standards to a global audience.

But it’s fair to say that back home enforcement of the GDPR remains a work in progress, some 21 months in — with many major cross-border complaints attached to how tech and adtech giants are processing people’s data still sitting on the desk of the Irish Data Protection Commission where multinationals tend to locate their EU HQ as a result of favorable corporate tax arrangements.

The Commission’s simultaneous push to encourage the development of AI arguably risks heaping further pressure on the GDPR — as both private and public sectors have been quick to see model-making value locked up in citizens’ data.

Already across Europe there are multiple examples of companies and/or state authorities working on building personal data-fuelled diagnostic AIs for healthcare; using machine learning for risk scoring of benefits claimants; and applying facial recognition as a security aid for law enforcement, to give three examples.

There has also been controversy fast following such developments. Including around issues such as proportionality and the question of consent to legally process people’s data — both under GDPR and in light of EU fundamental privacy rights as well as those set out in the European Convention on Human Rights.

Only this month a Dutch court ordered the state to cease use of a blackbox algorithm for assessing the fraud risk of benefits claimants on human rights grounds — objecting to a lack of transparency around how the system functions and therefore also “insufficient” controllability.

The von der Leyen Commission, which took up its five-year mandate in December, is alive to rights concerns about how AI is being applied, even as it has made it clear it intends to supercharge the bloc’s ability to leverage data and machine learning technologies — eyeing economic gains.

Commission president, Ursula von der Leyen, visiting the AI Intelligence Center in Brussels (via the EC’s EbS Live AudioVisual Service)

The Commission president committed to publishing proposals to regulate AI within the first 100 days — saying she wants a European framework to steer application to ensure powerful learning technologies are used ethically and for the public good.

But a leaked draft of the plan to regulate AI last month suggested it would step back from imposing even a temporary ban on the use of facial recognition technology — leaning instead towards tweaks to existing rules and sector/app specific risk-assessments and requirements.

It’s clear there are competing views at the top of the Commission on how much policy intervention is needed on the tech sector.

Breton has previously voiced opposition to regulating AI — telling the EU parliament just before he was confirmed in post that he “won’t be the voice of regulating AI”.

While Vestager has been steady in her public backing for a framework to govern how AI is applied, talking at her hearing before the EU parliament of the importance of people’s trust and Europe having its own flavor of AI that must “serve humans” and have “a purpose”.

“I don’t think that we can be world leaders without ethical guidelines,” she said then. “I think we will lose it if we just say no let’s do as they do in the rest of the world — let’s pool all the data from everyone, no matter where it comes from, and let’s just invest all our money.”

At the same time Vestager signalled a willingness to be pragmatic in the scope of the rules and how they would be devised — emphasizing the need for speed and agreeing the Commission would need to be “very careful not to over-regulate”, suggesting she’d accept a core minimum to get rules up and running.

The Commission is encouraging EU citizens to put questions about the digital strategy for Vestager to answer tomorrow, in a live Q&A at 17.45 CET on Facebook, Twitter and LinkedIn — using the hashtag #DigitalEU

Platform liability

There is more to come from the Commission on the digital policy front — with a Digital Services Act in the works to update pan-EU liability rules around Internet platforms.

That proposal is slated to be presented later this year and both commissioners said today that details remain to be worked out. The possibility that the Commission will propose rules to more tightly regulate online content platforms already has content farming adtech giants like Facebook cranking up their spin cycles.

During today’s press conference Breton said he would always push for what he dubbed “shared governance” but he warned several times that if platforms don’t agree on an acceptable way forward “we will have to regulate” — saying it’s not for European society to adapt to the platforms but for them to adapt to the EU.

“We will do this within the next eight months. It’s for sure. And everybody knows the rules,” he said. “Of course we’re entering here into dialogues with these platforms and like with any dialogue we don’t know exactly yet what will be the outcome. We may find at the end of the day a good coherent joint strategy which will fulfil our requirements… regarding the responsibilities of the platform. And by the way this is why personally when I meet with them I will always prefer a shared governance. But we have been extremely clear if it doesn’t work then we will have to regulate.”

Internal market commissioner, Thierry Breton

Ring slightly overhauls security and privacy, but it’s still not enough

Security camera maker Ring is updating its service to improve account security and give users more control over privacy. But this is yet another update that makes the overall experience only slightly better; the Amazon-owned company is still not doing enough to protect its users.

First, Ring is reversing its stance on two-factor authentication: it is now mandatory — you can’t even opt out. The next time you log in to your Ring account, you’ll receive a six-digit code via email or text message to confirm your login request.

This is very different from what Ring founder Jamie Siminoff told me at CES in early January:

“So now, we’re going one step further, which is for two-factor authentication. We really want to make it an opt-out, not an opt-in. You still want to let people opt out of it because there are people that just don’t want it. You don’t want to force it, but you want to make it as forceful as you can be without hurting the customer experience.”

Security experts all say that sending you a code by text message isn’t perfect. It’s better than no form of two-factor authentication, but text messages are not secure. They’re also tied to your phone number. That’s why SIM-swapping attacks are on the rise.

As for sending you a code via email, it really depends on your email account. If you haven’t enabled two-factor authentication on your email account, then Ring’s implementation of two-factor authentication is basically worthless. Ring should let you use app-based two-factor with the ability to turn off other methods in your account.
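App-generated codes avoid the phone network entirely: an authenticator app derives the six digits locally from a shared secret and the current time, per the public TOTP standard (RFC 6238), so there’s nothing for a SIM-swapper to intercept. A minimal sketch of that algorithm, using only Python’s standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code from a shared secret (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period           # 30-second time step
    msg = struct.pack(">Q", counter)               # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret for illustration only, not a real credential.
print(totp("JBSWY3DPEHPK3PXP"))
```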

And that doesn’t solve Ring’s password issues. As Motherboard originally found out, Ring doesn’t prevent you from using a weak password and reusing passwords that have been compromised in security breaches from third-party services.

A couple of weeks ago, TechCrunch’s Zack Whittaker was able to create a Ring account with “12345678” and “password” as the password. He created another account with “password” a few minutes ago.
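Screening out breached passwords is a solved problem, too. Have I Been Pwned’s public range API uses k-anonymity: only the first five characters of the password’s SHA-1 hash ever leave the client, so the service being queried never sees the password itself. A sketch of the kind of signup check Ring could run (error handling omitted):

```python
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    """How often a password appears in known breaches, per the
    Have I Been Pwned range API. Only a 5-char hash prefix is sent."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if times_pwned("12345678") > 0:
    print("Reject this password at signup.")  # this one shows up millions of times
```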

When it comes to privacy, the EFF called out Ring’s app as it shares a ton of information with third-party services, such as branch.io, mixpanel.com, appsflyer.com and facebook.com. Worse, Ring doesn’t require meaningful consent from the user.

You can now opt out of third-party services that help Ring serve personalized advertising. As for analytics, Ring is temporarily removing most third-party analytics services from its apps (but not all). The company plans on adding a menu to opt out of third-party analytics services in a future update.

Enabling third-party trackers and letting you opt out later isn’t GDPR compliant. So I hope the onboarding experience changes too; the company shouldn’t enable these features without proper consent in the first place.

Ring could have used this opportunity to adopt a far stronger stance on privacy. The company sells devices that you set up in your garden, your living room and sometimes even your bedroom. Users certainly don’t want third-party companies to learn more about their interactions with Ring’s services. But it seems like Ring’s motto is still: “If we can do it, why shouldn’t we do it.”

Class action suit against Clearview AI cites Illinois law that cost Facebook $550M

Just two weeks ago Facebook settled a lawsuit alleging violations of privacy laws in Illinois for the considerable sum of $550 million. Now controversial startup Clearview AI, which has gleefully admitted to scraping and analyzing the data of millions, is the target of a new lawsuit citing similar violations.

Clearview made waves earlier this year with a business model seemingly predicated on wholesale abuse of public-facing data on Twitter, Facebook, Instagram and so on. If your face is visible to a web scraper or public API, Clearview either has it or wants it and will be submitting it for analysis by facial recognition systems.

Just one problem: That’s illegal in Illinois, and you ignore this to your peril, as Facebook found.

The lawsuit, filed yesterday on behalf of several Illinois citizens and first reported by Buzzfeed News, alleges that Clearview “actively collected, stored and used Plaintiffs’ biometrics — and the biometrics of most of the residents of Illinois — without providing notice, obtaining informed written consent or publishing data retention policies.”

Not only that, but this biometric data has been licensed to many law enforcement agencies, including within Illinois itself.

All this is allegedly in violation of the Biometric Information Privacy Act, a 2008 law that has proven to be remarkably long-sighted and resistant to attempts by industry (including, apparently, by Facebook while it fought its own court battle) to water it down.

The lawsuit (filed in New York, where Clearview is based) is at its very earliest stages: so far a judge has been assigned and summonses sent to Clearview and CDW Government, the intermediary for selling its services to law enforcement. It’s impossible to say how it will play out at this point, but the success of the Facebook suit and the similarity of the two cases (essentially the automatic and undisclosed ingestion of photos by a facial recognition engine) suggest that this one has legs.

The scale of any penalty is difficult to predict, and would likely depend largely on disclosure by Clearview of the number and nature of the photos it has analyzed of those protected by BIPA.

Even if Clearview were to immediately delete all the information it has on citizens of Illinois, it would still likely be liable for its previous acts. A federal judge in Facebook’s case wrote: “the development of face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests,” and is therefore actionable. That’s a strong precedent and the similarities are undeniable — not that they won’t be denied.

You can read the text of the complaint here.

Surprise! Audit finds automated license plate reader programs are a privacy nightmare

Automated license plate readers, ALPRs, would be controversial even if they were responsibly employed by the governments that run them. Unfortunately, and to no one’s surprise, the way they actually operate is “deeply disturbing and confirm[s] our worst fears about the misuse of this data,” according to an audit of the programs instigated by a Californian legislator.

“What we’ve learned today is that many law enforcement agencies are violating state law, are retaining personal data for lengthy periods of time, and are disseminating this personal data broadly. This state of affairs is totally unacceptable,” said California State Senator Scott Wiener (D-SF), who called for the audit of these programs. The four agencies audited were the LAPD, Fresno PD, and the Marin and Sacramento County Sheriff’s Departments.

The inquiry revealed that the programs can barely justify their existence and do not seem to have, let alone follow, best practices for security and privacy:

  • Los Angeles alone stores 320 million license plate images, 99.9 percent of which were not being sought by law enforcement at the time of collection.
  • Those images were shared with “hundreds” of other agencies but there was no record of how this was justified legally or accomplished properly.
  • None of the agencies has a privacy policy in line with requirements established in 2016. Three could not adequately explain access and oversight permissions, or how and when data would or could be destroyed, “and the remaining agency has not developed a policy at all.”
  • There were almost no policies or protections regarding account creation and use, and the agencies have never audited their own systems.
  • Three of the agencies store their images and data with a cloud vendor, the contract for which had inadequate protections, if any, for that data.

In other words, “there is significant cause for alarm,” the press release stated. As the programs appear to violate state law they may be prosecuted; and as existing law appears to be inadequate to the task of regulating them, new laws must be proposed, Wiener said, adding that he is working on it.

The full report can be read here.