Can predictive analytics be made safe for humans?

Massive-scale predictive analytics is a relatively new phenomenon, one that challenges both decades of law and consumer thinking about privacy.

As a technology, it may well save thousands of lives in applications like predictive medicine, but used carelessly, it may also do harm — preventing thousands from getting loans, for instance, if an underwriting algorithm is biased against certain users.

I chatted with Dennis Hirsch a few weeks ago about the challenges posed by this new data economy. Hirsch is a professor of law at Ohio State and head of its Program on Data and Governance. He’s also affiliated with the university’s Risk Institute.

“Data ethics is the new form of risk mitigation for the algorithmic economy,” he said. In a post-Cambridge Analytica world, every company has to assess what data it has on its customers and mitigate the risk of harm. How to do that, though, is at the cutting edge of the new field of data governance, which investigates the processes and policies through which organizations manage their data.

You’re reading the Extra Crunch Daily. Like this newsletter? Subscribe for free to follow all of our discussions and debates.

“Traditional privacy regulation asks whether you gave someone notice and gave them a choice,” he explains. That principle is the bedrock for Europe’s GDPR law, and for the patchwork of laws in the U.S. that protect privacy. It’s based around the simplistic idea that a datum — such as a customer’s address — shouldn’t be shared with, say, a marketer without that user’s knowledge. Privacy is about protecting the address book, so to speak.

The rise of “predictive analytics,” though, has completely demolished such privacy legislation. Predictive analytics is a fuzzy term, but it essentially means interpreting raw data and drawing new conclusions through inference. This is the story of the famous Target case, in which the retailer recommended pregnancy-related goods to women who had certain patterns of purchases. As Charles Duhigg explained at the time:

Many shoppers purchase soap and cotton balls, but when someone suddenly starts buying lots of scent-free soap and extra-big bags of cotton balls, in addition to hand sanitizers and washcloths, it signals they could be getting close to their delivery date.
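To make the inference concrete, here is a purely illustrative toy sketch — not Target’s actual model; the products, weights, and threshold are invented for this example — of how “surface” purchases can be scored into a sensitive prediction:

```python
# Toy illustration: inferring a sensitive attribute from "surface" purchase data.
# The signal products and weights below are invented for illustration; real
# systems use statistical models trained on far more data points.

PREGNANCY_SIGNALS = {
    "unscented soap": 2.0,
    "cotton balls (large)": 1.5,
    "hand sanitizer": 1.0,
    "washcloths": 1.0,
}

def pregnancy_score(basket):
    """Sum the weights of any signal products found in a shopper's basket."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in basket)

# A shopper shares only mundane items — but the combination is revealing.
basket = ["unscented soap", "cotton balls (large)", "hand sanitizer", "bread"]
score = pregnancy_score(basket)
is_flagged = score >= 4.0  # arbitrary threshold for this toy example
```

The point Hirsch makes is visible even in this crude sketch: no single item in the basket discloses anything sensitive, yet the aggregate does.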

Predictive analytics is difficult to predict. Hirsch says “I don’t think any of us are going to be intelligent enough to understand predictive analytics.” Talking about customers, he said “They give up their surface items — like cotton balls and unscented body lotion — they know they are sharing that, but they don’t know they are giving up their pregnancy status. … People are not going to know how to protect themselves because they can’t know what can be inferred from their surface data.”

In other words, the scale of those predictions completely undermines notice and consent.

Even though the law hasn’t caught up to this exponentially more challenging problem, companies themselves seem to be responding in the wake of Target and Facebook’s very public scandals. “What we are hearing is that we don’t want to put our customers at risk,” Hirsch explained. “They understand that this predictive technology gives them really awesome power and they can do a lot of good with it, but they can also hurt people with it.” The key actors here are corporate chief privacy officers, a role that has cropped up in recent years to mitigate some of these challenges.

Hirsch is spending significant time trying to build new governance strategies to allow companies to use predictive analytics in an ethical way, so that “we can achieve and enjoy its benefits without having to bear these costs from it.” He’s focused on four areas: privacy, manipulation, bias, and procedural unfairness. “We are going to set out principles on what is ethical and what is not,” he said.

Much of that focus has been on how to help regulators build policies that can manage predictive analytics. Since people can’t understand the extent to which inferences can be made from their data, “I think a much better regulatory approach is to have someone who does understand, ideally some sort of regulator, who can draw some lines.” Hirsch has been researching how the FTC’s Unfairness Authority may be a path forward for getting such policies into practice.

He analogized this to the Food and Drug Administration. “We have no ability to assess the risks of a given drug [so] we give it to an expert agency and allow them to assess it,” he said. “That’s the kind of regulation that we need.”

Hirsch overall has a balanced perspective on the risks and rewards here. He wants analytics to be “more socially acceptable” but at the same time, sees the need for careful scrutiny and oversight to ensure that consumers are protected. Ultimately, he sees that as incredibly beneficial to companies, who can take the value out of this tech without risking provoking consumer ire.

Who will steal your data more: China or America?

The Huawei logo is seen in the center of Warsaw, Poland

Jaap Arriens/NurPhoto via Getty Images

Talking about data ethics, Europe is in the middle of a superpower pincer. China’s telecom giant Huawei has made expansion on the continent a major priority, while the United States has been sending delegation after delegation to convince its Western allies to reject Chinese equipment. The dilemma was quite visible last week at MWC-Barcelona, where the two sides each tried to make their case.

It’s been years since the Snowden revelations showed that the United States was operating an enormous eavesdropping infrastructure targeting countries throughout the world, including across Europe. Huawei has reiterated its stance that it does not steal information from its equipment, and has repeated its demands that the Trump administration provide public proof of flaws in its security.

There is an abundance of moral relativism here, but I see this as increasingly a litmus test of the West on China. China has not hidden its ambitions to take a prime role in East Asia, nor has it hidden its intentions to build a massive surveillance network over its own people or to influence the media overseas.

Those tactics, though, are straight out of the American playbook, which lost its moral legitimacy over the past two decades from some combination of the Iraq War, Snowden, Wikileaks, and other public scandals that have undermined trust in the country overseas.

Security and privacy might have been a competitive advantage for American products over their Chinese counterparts, but that advantage has been weakened for many countries to near zero. We are increasingly going to see countries choose a mix of Chinese and American equipment in sensitive applications, if only to ensure that if one country is going to steal their data, it might as well be balanced.

Things that seem interesting that I haven’t read yet

Obsessions

  • Perhaps some more challenges around data usage and algorithmic accountability
  • We have a bit of a theme around emerging markets, macroeconomics, and the next set of users to join the internet.
  • More discussion of megaprojects, infrastructure, and “why can’t we build things”

Thanks

To every member of Extra Crunch: thank you. You allow us to get off the ad-laden media churn conveyor belt and spend quality time on amazing ideas, people, and companies. If I can ever be of assistance, hit reply, or send an email to danny@techcrunch.com.

This newsletter is written with the assistance of Arman Tabatabai from New York.


Daily Crunch: Facebook faces new privacy concerns

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Facebook won’t let you opt out of its phone number ‘look up’ setting

Users are complaining that the phone number Facebook hassled them to use to secure their account with two-factor authentication has also been associated with their user profile — which anyone can use to “look up” their profile.

Security expert and academic Zeynep Tufekci said in a tweet: “Using security to further weaken privacy is a lousy move — especially since phone numbers can be hijacked to weaken security.”

2. Huawei reportedly plans to sue U.S. government over ban

The news comes via a New York Times report citing two anonymous sources. The impending suit is pushback against longstanding U.S. bans that have barred the company’s equipment from infrastructure projects ahead of a nationwide push into 5G.

3. Bill Gates and Jeff Bezos-backed fund invests in a global geothermal energy project developer

Breakthrough Energy Ventures, the investment firm financed by billionaires like Jeff Bezos, Bill Gates and Jack Ma that invests in companies developing technologies to decarbonize society, is investing $12.5 million in a geothermal project development company called Baseload Capital.


4. Tristan O’Tierney, who helped develop Square’s original payment app, has passed away

Square co-founders Jack Dorsey and Jim McKelvey hired O’Tierney to develop Square’s original mobile payment app in early 2009, and he’s generally credited as a co-founder.

5. Samsung finally gets Bluetooth earbuds right

Brian Heater says the Galaxy Buds work like a charm.

6. JetBlue contest asks users to delete their Instagram pics to fly free for a year

Don’t do this (unless you really hate your photos).

7. This week’s TechCrunch podcasts

We’ve got a full-length episode of Equity discussing funding for women-led startups, plus a shorter segment about Lyft filing to go public. Meanwhile, the team at Original Content (including your humble newsletter editor) reviews Netflix’s “Umbrella Academy.”

Flawed visitor check-in systems let anyone steal guest logs and sneak into buildings

Security researchers at IBM have found, reported and disclosed 19 vulnerabilities in five popular visitor management systems, which they say can be used to steal data on visitors — or even sneak into sensitive and off-limit areas of office buildings.

You’ve probably seen one of these visitor check-in systems before: they’re often found in lobbies or reception areas of office buildings to check staff and visitors onto the work floor. Visitors check in with their name and who they’re meeting using the touch-screen display or tablet, and a name badge is either printed or issued.

But the IBM researchers say flaws in these systems provided “a false sense of security.”

The researchers examined five of the most popular systems: Lobby Track Desktop, built by Jolly Technologies, had seven vulnerabilities; eVisitorPass, recently rebranded as Threshold Security, had five vulnerabilities; EasyLobby Solo, built by HID Global, had four vulnerabilities; Envoy’s flagship Passport system had two vulnerabilities; and The Receptionist, an iPad app, had one vulnerability.

According to IBM, the vulnerabilities could only be exploited by someone physically at the check-in kiosk. The bugs ranged from allowing someone to download visitor logs — including names, driver’s license and Social Security data, and phone numbers — to, in some cases, escaping “kiosk” mode to access the underlying operating system, which the researchers say could be used to pivot to other applications and onto the network, if connected.

Worst of all, some systems used default admin credentials that would “allow complete control of the application,” such as the ability to edit the visitor database. Some systems “can even issue and provision RFID badges, giving an attacker a key to open doors,” the researchers wrote.

Daniel Crowley, research director at IBM X-Force Red, the company’s pen-testing and vulnerability hunting team, told TechCrunch that all of the companies responded to the team’s findings.

“Some responded much more quickly than others,” said Crowley. “The Lobby Track vulnerabilities were acknowledged by Jolly Technologies, but they stated that the issues can be addressed through configuration options. X-Force Red tested the Lobby Track software in its default configuration,” he added.

We contacted the companies and received — for the most part — dismal responses.

Kate Miller, a spokesperson for Envoy, confirmed it fixed the bugs, but said “customer and visitor data was never at risk.”

Andy Alsop, chief executive of The Receptionist, did not respond to a request for comment but instead automatically signed us up to a mailing list without our permission, which we swiftly unsubscribed from. When reached, Michael Ashford, director of marketing, did not comment.

David Jordan, a representative for Jolly, declined to comment. And neither Threshold Security nor HID Global responded to our requests for comment.

Facebook won’t let you opt out of its phone number ‘look up’ setting

Users are complaining that the phone number Facebook hassled them to use to secure their account with two-factor authentication has also been associated with their user profile — which anyone can use to “look up” your profile.

Worse, Facebook doesn’t give you an option to opt out.

Last year, Facebook was forced to admit that after months of pestering its users to switch on two-factor by registering their phone number, it was also using those phone numbers to target users with ads. But some users are finding out just now that Facebook’s default setting allows everyone — with or without an account — to look up a user profile based on the same phone number previously added to their account.

The recent hubbub began today after a tweet by Jeremy Burge blew up, criticizing Facebook’s collection and use of phone numbers, which he likened to “a unique ID that is used to link your identity across every platform on the internet.”

Although you can hide your phone number on your profile so nobody can see it, it’s still possible to “look up” user profiles in other ways, such as “when someone uploads your contact info to Facebook from their mobile phone,” according to a Facebook help article. That’s more restricted than letting anyone search for user profiles by phone number, a feature Facebook shut down last year after admitting “most” users had their information scraped.

Facebook lets users limit who can “look up” their profile using their phone number — set to “everyone” by default — to “friends of friends” or just the user’s “friends.”

But there’s no way to hide it completely.

Security expert and academic Zeynep Tufekci said in a tweet: “Using security to further weaken privacy is a lousy move — especially since phone numbers can be hijacked to weaken security,” referring to SIM swapping, where scammers impersonate cell customers to steal phone numbers and break into other accounts.

Tufekci argued that users can “no longer keep private the phone number that [they] provided only for security to Facebook.”

Facebook spokesperson Jay Nancarrow told TechCrunch that the settings “are not new,” adding that, “the setting applies to any phone numbers you added to your profile and isn’t specific to any feature.”

Gizmodo reported last year that when a user gave Facebook a phone number for two-factor, it “became targetable by an advertiser within a couple of weeks.” And if a user doesn’t like it, they can set up two-factor without a phone number — which hasn’t been required for the feature since May 2018.

Even if users haven’t set up two-factor, there are well documented cases of users having their phone numbers collected by Facebook, whether the user expressly permitted it or not. In 2017, one reporter for The Telegraph described her alarm at the “look up” feature, given she had “not given Facebook my number, was unaware that it had found it from other sources, and did not know it could be used to look me up.”

To the specific concerns by users, Facebook said: “We appreciate the feedback we’ve received about these settings and will take it into account.”

Concerned users should switch their “look up” settings to “Friends” to mitigate as much of the privacy risk as possible.

When asked specifically if Facebook will allow users to opt out of the setting, Facebook said it won’t comment on future plans. And, asked why it was set to “everyone” by default, Facebook said the feature makes it easier to find people you know but aren’t yet friends with.

Others criticized Facebook’s move to expose phone numbers to “look ups,” calling it “unconscionable.”

Alex Stamos, former chief security officer and now adjunct professor at Stanford University, also called out the practice in a tweet. “Facebook can’t credibly require two-factor for high-risk accounts without segmenting that from search and ads,” he said.

Since Stamos left Facebook in August, Facebook has not hired a replacement chief security officer.

Privacy complaints received by tech giants’ favorite EU watchdog up more than 2x since GDPR

A report by the lead data watchdog for a large number of tech giants operating in Europe shows a significant increase in privacy complaints and data breach notifications since the region’s updated privacy framework came into force last May.

The Irish Data Protection Commission (DPC)’s annual report, published today, covers the period from May 25 — the day the EU’s General Data Protection Regulation (GDPR) came into force — to December 31, 2018, and shows the DPC received more than double the number of complaints post-GDPR vs. the portion of 2018 before the new regime came in: 2,864 and 1,249 complaints respectively.

That makes a total of 4,113 complaints for full year 2018 (vs just 2,642 for 2017) — a year-on-year increase of around 56 per cent.

And within 2018 itself, complaints more than doubled after GDPR took effect — suggesting the regulation is working as intended, building momentum and support for individuals to exercise their fundamental rights.
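The percentages can be checked with simple arithmetic on the figures quoted from the report:

```python
# Complaint figures as quoted from the Irish DPC's 2018 annual report.
pre_gdpr, post_gdpr = 1249, 2864      # Jan 1 - May 24 vs. May 25 - Dec 31, 2018
total_2018, total_2017 = 4113, 2642

yoy_increase = (total_2018 - total_2017) / total_2017 * 100
post_vs_pre = post_gdpr / pre_gdpr

print(f"Year-on-year increase: {yoy_increase:.0f}%")   # ~56%
print(f"Post- vs pre-GDPR 2018: {post_vs_pre:.1f}x")   # ~2.3x, i.e. more than double
```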

“The phenomenon that is the [GDPR] has demonstrated one thing above all else: people’s interest in and appetite for understanding and controlling use of their personal data is anything but a reflection of apathy and fatalism,” writes Helen Dixon, Ireland’s commissioner for data protection.

She adds that the rise in the number of complaints and queries to DPAs across the EU since May 25 demonstrates “a new level of mobilisation to action on the part of individuals to tackle what they see as misuse or failure to adequately explain what is being done with their data”.

While Europe has had online privacy rules since 1995, a weak enforcement regime essentially allowed them to be ignored for decades — and allowed internet companies to grab and exploit web users’ data without full regard and respect for Europeans’ privacy rights.

But regulators hit the reset button last year. And Ireland’s data watchdog is an especially interesting agency to watch if you’re interested in assessing how GDPR is working, given how many tech giants have chosen to place their international data flows under the Irish DPC’s supervision.

More cross-border complaints

“The role places an important duty on the DPC to safeguard the data protection rights of hundreds of millions of individuals across the EU, a duty that the GDPR requires the DPC to fulfil in cooperation with other supervisory authorities,” the DPC writes in the report, discussing its role of supervisory authority for multiple tech multinationals and acknowledging both a “greatly expanded role under the GDPR” and a “significantly increased workload”.

A breakdown of GDPR vs Data Protection Act 1998 complaint types over the report period suggests complaints targeted at multinational entities have leapt up under the new DP regime.

For some complaint types, the old rules resulted in just 2 per cent of complaints being targeted at multinationals, vs more than a fifth (22 per cent) in the same categories under GDPR.

It’s the most marked difference between the old rules and the new — underlining the DPC’s expanded workload in acting as a hub (and often lead supervisory agency) for cross-border complaints under GDPR’s one-stop shop mechanism.

The category with the largest proportion of complaints under GDPR over the report period was access rights (30%) — with the DPC receiving a full 582 complaints related to people feeling they’re not getting their due data. Access rights was also the most complained-about category under the prior data rules over this period.

Other prominent complaint types continue to be unfair processing of data (285 GDPR complaints vs 178 under the DPA); disclosure (217 vs 138); and electronic direct marketing (111 vs 36).

EU policymakers’ intent with GDPR is to redress the imbalance of weakly enforced rights — including by creating new opportunities for enforcement via a regime of supersized fines. (GDPR allows for penalties as high as up to 4 per cent of annual turnover, and in January the French data watchdog slapped Google with a $57M GDPR penalty related to transparency and consent — albeit still far off that theoretical maximum.)

Importantly, the regulation also introduced a collective redress option which has been adopted by some EU Member States.

This allows for third party organizations such as consumer rights groups to lodge data protection complaints on individuals’ behalf. The provision has led to a number of strategic complaints being filed by organized experts since last May (including in the case of the aforementioned Google fine) — spinning up momentum for collective consumer action to counter rights erosion. Again that’s important in a complex area that remains difficult for consumers to navigate without expert help.

For upheld complaints the GDPR ‘nuclear option’ is not fines though; it’s the ability for data protection agencies to order data controllers to stop processing data.

That remains the most significant tool in the regulatory toolbox. And depending on the outcome of various ongoing strategic GDPR complaints it could prove hugely significant in reshaping what data experts believe are systematic privacy incursions by adtech platform giants.

And while well-resourced tech giants may be able to factor in even very meaty financial penalties, as just a cost of doing a very lucrative business, data-focused business models could be far more precarious if processors can suddenly be slapped with an order to limit or even cease processing data. (As indeed Facebook’s business just has in Germany, where antitrust regulators have been liaising with privacy watchdogs.)

Data breach notifications also up

GDPR also shines a major spotlight on security — requiring privacy by design and default and introducing a universal requirement for swiftly reporting data breaches across the bloc, again with very stiff penalties for non-compliance.

On the data breach front, the Irish DPC says it received a total of 3,687 data breach notifications between May 25 and December 31 last year — finding just four per cent (145 cases) did not meet the definition of a personal-data breach set out in GDPR. That means it recorded a total of 3,542 valid data protection breaches over the report period — which it says represents an increase of 27 per cent on 2017 breach report figures.

“As in other years, the highest category of data breaches notified under the GDPR were classified as Unauthorised Disclosures and accounted for just under 85% of the total data-breach notifications received between 25 May and 31 December 2018,” it notes, adding: “The majority occurred in the private sector (2,070).”

More than 4,000 data breach notifications were recorded by the watchdog for full year 2018, the report also states.

The DPC further reveals that it was notified of 38 personal data breaches involving 11 multinational technology companies — that is, tech giants — during the post-GDPR period of 2018.

“A substantial number of these notifications involved the unauthorised disclosure of, and unauthorised access to, personal data as a result of bugs in software supplied by data processors engaged by the organisations,” it writes, saying it opened several investigations as a result (such as following the Facebook Token breach in September 2018).

Open probes of tech giants

As of 31 December 2018, the DPC says it had 15 investigations open in relation to multinational tech companies’ compliance with GDPR.

Below is the full list of the DPC’s currently open investigations of multinationals — including the tech giant under scrutiny; the origin of the inquiry; and the issues being examined:

  • Facebook Ireland Limited — Complaint-based inquiry: “Right of Access and Data Portability. Examining whether Facebook has discharged its GDPR obligations in respect of the right of access to personal data in the Facebook ‘Hive’ database and portability of “observed” personal data”
  • Facebook Ireland Limited — Complaint-based inquiry: “Lawful basis for processing in relation to Facebook’s Terms of Service and Data Policy. Examining whether Facebook has discharged its GDPR obligations in respect of the lawful basis on which it relies to process personal data of individuals using the Facebook platform.”
  • Facebook Ireland Limited — Complaint-based inquiry: “Lawful basis for processing. Examining whether Facebook has discharged its GDPR obligations in respect of the lawful basis on which it relies to process personal data in the context of behavioural analysis and targeted advertising on its platform.”
  • Facebook Ireland Limited — Own-volition inquiry: “Facebook September 2018 token breach. Examining whether Facebook Ireland has discharged its GDPR obligations to implement organisational and technical measures to secure and safeguard the personal data of its users.”
  • Facebook Ireland Limited — Own-volition inquiry: “Facebook September 2018 token breach. Examining Facebook’s compliance with the GDPR’s breach notification obligations.”
  • Facebook Inc. — Own-volition inquiry: “Facebook September 2018 token breach. Examining whether Facebook Inc. has discharged its GDPR obligations to implement organizational and technical measures to secure and safeguard the personal data of its users.”
  • Facebook Ireland Limited — Own-volition inquiry: “Commenced in response to large number of breaches notified to the DPC during the period since 25 May 2018 (separate to the token breach). Examining whether Facebook has discharged its GDPR obligations to implement organisational and technical measures to secure and safeguard the personal data of its users.”
  • Instagram (Facebook Ireland Limited) — Complaint-based inquiry: “Lawful basis for processing in relation to Instagram’s Terms of Use and Data Policy. Examining whether Instagram has discharged its GDPR obligations in respect of the lawful basis on which it relies to process personal data of individuals using the Instagram platform.”
  • WhatsApp Ireland Limited — Complaint-based inquiry: “Lawful basis for processing in relation to WhatsApp’s Terms of Service and Privacy Policy. Examining whether WhatsApp has discharged its GDPR obligations in respect of the lawful basis on which it relies to process personal data of individuals using the WhatsApp platform.”
  • WhatsApp Ireland Limited — Own-volition inquiry: “Transparency. Examining whether WhatsApp has discharged its GDPR transparency obligations with regard to the provision of information and the transparency of that information to both users and non-users of WhatsApp’s services, including information provided to data subjects about the processing of information between WhatsApp and other Facebook companies.”
  • Twitter International Company — Complaint-based inquiry: “Right of Access. Examining whether Twitter has discharged its obligations in respect of the right of access to links accessed on Twitter.”
  • Twitter International Company — Own-volition inquiry: “Commenced in response to the large number of breaches notified to the DPC during the period since 25 May 2018. Examining whether Twitter has discharged its GDPR obligations to implement organisational and technical measures to secure and safeguard the personal data of its users.”
  • LinkedIn Ireland Unlimited Company — Complaint-based inquiry: “Lawful basis for processing. Examining whether LinkedIn has discharged its GDPR obligations in respect of the lawful basis on which it relies to process personal data in the context of behavioural analysis and targeted advertising on its platform.”
  • Apple Distribution International — Complaint-based inquiry: “Lawful basis for processing. Examining whether Apple has discharged its GDPR obligations in respect of the lawful basis on which it relies to process personal data in the context of behavioural analysis and targeted advertising on its platform.”
  • Apple Distribution International — Complaint-based inquiry: “Transparency. Examining whether Apple has discharged its GDPR transparency obligations in respect of the information contained in its privacy policy and online documents regarding the processing of personal data of users of its services.”

“The DPC’s role in supervising the data-processing operations of the numerous large data-rich multinational companies — including technology, internet and social media companies — with EU headquarters located in Ireland changed immeasurably on 25 May 2018,” the watchdog acknowledges.

“For many, including Apple, Facebook, Microsoft, Twitter, Dropbox, Airbnb, LinkedIn, Oath [disclosure: TechCrunch is owned by Verizon Media Group; aka Oath/AOL], WhatsApp, MTCH Technology and Yelp, the DPC acts as lead supervisory authority under the GDPR OSS [one-stop shop] facility.”

The DPC notes in the report that between May 25 and December 31 2018 it received 136 cross-border processing complaints through the regulation’s OSS mechanism (i.e. which had been lodged by individuals with other EU data protection authorities).

A breakdown of these (likely) tech giant focused GDPR complaints shows a strong focus on consent, right of erasure, right of access and the lawfulness of data processing:

Breakdown of cross-border complaint types received by the DPC under GDPR’s OSS mechanism

While the Irish DPC acts as the lead supervisor for many high profile GDPR complaints which relate to how tech giants are handling people’s data, it’s worth emphasizing that the OSS mechanism does not mean Ireland is sitting in sole judgement on Silicon Valley’s giants’ rights incursions in Europe.

The mechanism allows for other DPAs to be involved in these cross-border complaints.

And the European Data Protection Board, the body that works with all the EU Member States’ DPAs to help ensure consistent application of the regulation, can trigger a dispute resolution process if a lead agency considers it cannot implement a concerned agency objection. The aim is to work against forum shopping.

In a section on “EU cooperation”, the DPC further writes:

Our fellow EU regulators, alongside whom we sit on the European Data Protection Board (EDPB), follow the activities and results of the Irish DPC closely, given that a significant number of people in every EU member state are potentially impacted by processing activities of the internet companies located in Ireland. EDPB activity is intense, with monthly plenary meetings and a new system of online data sharing in relation to cross-border processing cases rolled out between the authorities. The DPC has led on the development of EDPB guidance on arrangements for Codes of Conduct under the GDPR and these should be approved and published by the EDPB in Q1 of 2019. The DPC looks forward to industry embracing Codes of Conduct and raising the bar in individual sectors in terms of standards of data protection and transparency. Codes of Conduct are important because they will more comprehensively reflect the context and reality of data-processing activities in a given sector and provide clarity to those who sign up to the standards that need to be attained in addition to external monitoring by an independent body. It is clarity of standards that will drive real results.

Over the reported period the watchdog also reveals that it issued 23 formal requests seeking detailed information on compliance with various aspects of the GDPR from tech giants, noting too that since May 25 it has engaged with platforms on “a broad range of issues” — citing the following examples to give a flavor of these concerns:

  • Google on the processing of location data
  • Facebook on issues such as the transfer of personal data from third-party apps to Facebook and Facebook’s collaboration with external researchers
  • Microsoft on the processing of telemetry data collected by its Office product
  • WhatsApp on matters relating to the sharing of personal data with other Facebook companies

“Supervision engagement with these companies on the matters outlined is ongoing,” the DPC adds of these issues.

Adtech sector “must comply” with GDPR 

Talking of ongoing action, a GDPR complaint related to the security of personal data that’s systematically processed to power behavioral advertising is another open complaint on the DPC’s desk.

The strategic complaint was filed by a number of individuals in multiple EU countries (including Ireland) last fall. Since then the individuals behind the complaints have continued to submit and publish evidence they argue bolsters their case against the behavioral ad targeting industry (principally Google and the IAB, which set the specification underpinning the real-time bidding (RTB) system).

The Irish DPC makes reference to this RTB complaint in the annual report, giving the adtech industry what amounts to a written warning that while the advertising ecosystem is “complex”, with multiple parties involved in “high-speed, voluminous transactions” related to bidding for ad space and serving ad content “the protection of personal data is a prerequisite to the processing of any personal data within this ecosystem and ultimately the sector must comply with the standards set down by the GDPR”.

The watchdog also reports that it has engaged with “several stakeholders, including publishers and data brokers on one side, and privacy advocates and affected individuals on the other”, vis-a-vis the RTB complaint, and says it will continue prioritizing its scrutiny of the sector in 2019 — “in cooperation with its counterparts at EU level so as to ensure a consistent approach across all EU member states”.

It goes on to say that some of its 15 open investigations into tech giants will both conclude this year and “contribute to answering some of the questions relating to this complex area”. So, tl;dr, watch this space.

Responding to the DPC’s comments on the RTB complaint, Dr Johnny Ryan, chief policy and industrial relations officer of private browser Brave — and also one of the complainants — told us they expect the DPC to act “urgently”.

“We have brought our complaint before the DPC and other European regulators because there is a dire need to fix adtech so that it works safely,” he told TechCrunch. “The DPC itself recognizes that online advertising is a priority. The IAB and Google online ‘ad auction’ system enables companies to broadcast what every single person online reads, watches, and listens to online to countless parties. There is no control over what happens to these data. The evidence that we have submitted to the DPC shows that this occurs hundreds of billions of times a day.”

“In view of the upcoming European elections, it is particularly troubling that the IAB and Google’s systems permit voters to be profiled in this way,” he added. “Clearly, this infringes the security and integrity principles of the GDPR, and we expect the DPC to act urgently.”

The IAB has previously rejected the complaints as “false”, arguing any security risk is “theoretical”; while Google has said it has policies in place to prohibit advertisers from targeting sensitive categories of data. But the RTB complaint itself pivots on GDPR’s security requirements which demand that personal data be processed in a manner that “ensures appropriate security”, including “protection against unauthorised or unlawful processing and against accidental loss”.

So the security of the RTB system is the core issue which the Irish DPC, along with agencies in the UK and Poland, will have to grapple with as a priority this year.

The complainants have also said they intend to file additional complaints in more markets across Europe, so more DPAs are likely to join the scrutiny of RTB, as concerned supervisory agencies, which could increase pressure on the Irish DPC to act.

Schrems II vs Facebook 

The watchdog’s report also includes an update on long-running litigation filed by European privacy campaigner Max Schrems concerning a data transfer mechanism known as standard contractual clauses (SCCs) — and originally only targeted at Facebook’s use of the mechanism.

The DPC decided to refer Schrems’ original challenge to the Irish courts — which have since widened the action by referring a series of legal questions up to the EU’s top court with (now) potential implications for the legality of the EU’s ‘flagship’ Privacy Shield data transfer mechanism.

Privacy Shield was negotiated following the demise of its predecessor, Safe Harbor, which was struck down in 2015, also via a Schrems legal challenge, and went on to launch in August 2016 despite ongoing concerns from data experts. It is now used by close to 4,500 companies to authorize transfers of EU users’ personal data to the US.

So while Schrems’ complaint about SCCs (sometimes also called “model contract clauses”) was targeted at Facebook’s use of them, the litigation could end up having major implications for very many more companies if Privacy Shield itself comes unstuck.

More recently Facebook has sought to block the Irish judges’ referral of legal questions to the Court of Justice of the EU (CJEU), winning leave to appeal last summer (though judges did not stay the referral in the meantime).

In its report the DPC notes that the substantive hearing of Facebook’s appeal took place over January 21, 22 and 23 before a five-judge Supreme Court panel.

“Oral arguments were made on behalf of Facebook, the DPC, the U.S. Government and Mr Schrems,” it writes. “Some of the central questions arising from the appeal include the following: can the Supreme Court revisit the facts found by the High Court relating to US law? (This arises from allegations by Facebook and the US Government that the High Court judgment, which underpins the reference made to the CJEU, contains various factual errors concerning US law).

“If the Supreme Court considers that it may do so, further questions will then arise for the Court as to whether there are in fact errors in the judgment and if so, whether and how these should be addressed.”

“At the time of going to print there is no indication as to when the Supreme Court judgment will be delivered,” it adds. “In the meantime, the High Court’s reference to the CJEU remains valid and is pending before the CJEU.”

LinkedIn forced to ‘pause’ its Mentioned in the News feature in Europe after complaints about ID mix-ups

LinkedIn has been forced to ‘pause’ a feature in Europe in which the platform emails members’ connections when they’ve been ‘mentioned in the news’.

The regulatory action follows a number of data protection complaints after LinkedIn’s algorithms incorrectly matched members to news articles, triggering a review of the feature and a subsequent suspension order.

The feature appears as a case study in the ‘Technology Multinationals Supervision’ section of an annual report published today by the Irish Data Protection Commission (DPC). The report does not explicitly name LinkedIn, but we’ve confirmed it is the professional social network in question.

The data watchdog’s report cites “two complaints about a feature on a professional networking platform” after LinkedIn incorrectly associated the members with media articles that were not actually about them.

“In one of the complaints, a media article that set out details of the private life and unsuccessful career of a person of the same name as the complainant was circulated to the complainant’s connections and followers by the data controller,” the DPC writes, noting the complainant initially complained to the company itself but did not receive a satisfactory response — hence taking up the matter with the regulator.

“The complainant stated that the article had been detrimental to their professional standing and had resulted in the loss of contracts for their business,” it adds.

“The second complaint involved the circulation of an article that the complainant believed could be detrimental to future career prospects, which the data controller had not vetted correctly.”

LinkedIn appears to have been matching members to news articles by simple name matching — with obvious potential for identity mix-ups between people with shared names.
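The pitfall of exact-name matching can be sketched in a few lines. This is a hypothetical illustration, not LinkedIn’s actual code: the function names and record fields are invented, and the second function simply shows how requiring one corroborating attribute (here, an employer field) avoids the mix-up described in the complaints.

```python
# Hypothetical sketch: why name-only matching misfires, and how one extra
# signal narrows the match. Field names and data are illustrative only.

def name_only_match(member, article_subjects):
    """Flag the member as 'mentioned' if any article subject shares their name."""
    return any(s["name"] == member["name"] for s in article_subjects)

def name_plus_context_match(member, article_subjects):
    """Require a corroborating attribute before treating it as the same person."""
    return any(
        s["name"] == member["name"] and s.get("employer") == member["employer"]
        for s in article_subjects
    )

member = {"name": "Alex Murphy", "employer": "Acme Corp"}
# An article about a *different* Alex Murphy:
subjects = [{"name": "Alex Murphy", "employer": "Delta Detectives"}]

print(name_only_match(member, subjects))          # False positive: flags a match
print(name_plus_context_match(member, subjects))  # Mix-up avoided
```

Names are among the least unique identifiers a platform holds, which is why the DPC framed the issue as one of accuracy and fairness rather than a one-off bug.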

“It was clear from the complaints that matching by name only was insufficient, giving rise to data protection concerns, primarily the lawfulness, fairness and accuracy of the personal data processing utilised by the ‘Mentions in the news’ feature,” the DPC writes.

“As a result of these complaints and the intervention of the DPC, the data controller undertook a review of the feature. The result of this review was to suspend the feature for EU-based members, pending improvements to safeguard its members’ data.”

We reached out to LinkedIn with questions and it pointed us to this blog post where it confirms: “We are pausing our Mentioned in the News feature for our EU members while we reevaluate its effectiveness.”

LinkedIn adds that it is reviewing the accuracy of the feature, writing:

As referenced in the Irish Data Protection Commission’s report, we received useful feedback from our members about the feature and as a result are evaluating the accuracy and functionality of Mentioned in the News for all members.

The company’s blog post also points users to a page where they can find out more about the ‘mentioned in the news’ feature and get information on how to manage their LinkedIn email notification settings.

The Irish DPC’s action is not the first privacy strike against LinkedIn in Europe.

Late last year, in an earlier annual report covering the pre-GDPR portion of 2018, the watchdog revealed it had investigated complaints about LinkedIn targeting non-users with adverts for its service.

The DPC found the company had obtained emails for 18 million people for whom it did not have consent to process their data. In that case LinkedIn agreed to cease processing the data entirely.

That complaint also led the DPC to audit LinkedIn. It then found a further privacy problem, discovering the company had been using its social graph algorithms to try to build suggested networks of compatible professional connections for non-members.

The regulator ordered LinkedIn to cease this “pre-compute processing” of non-members’ data and delete all personal data associated with it prior to GDPR coming into force.

LinkedIn said it had “voluntarily changed our practices as a result”.

Dow Jones’ watchlist of 2.4 million high-risk clients has leaked

A watchlist of risky individuals and corporate entities owned by Dow Jones has been exposed, after a company with access to the database left it on a server without a password.

Bob Diachenko, an independent security researcher, found the Amazon Web Services-hosted Elasticsearch database exposing more than 2.4 million records of individuals or business entities.

The data, since secured, is the financial giant’s Watchlist database, which companies use as part of their risk and compliance efforts. Other financial companies, like Thomson Reuters, have their own databases of high-risk clients, politically exposed persons and terrorists — but have also been exposed over the years through separate security lapses.

A 2010-dated brochure billed the Dow Jones Watchlist as allowing customers to “easily and accurately identify high-risk clients with detailed, up-to-date profiles” on any individual or company in the database. At the time, the database had 650,000 entries, the brochure said.

That includes current and former politicians, individuals or companies under sanctions or convicted of high-profile financial crimes such as fraud, and anyone with links to terrorism. Many of those on the list are labeled “special interest persons,” according to the records in the exposed database seen by TechCrunch.

Diachenko, who wrote up his findings, said the database was “indexed, tagged and searchable.”

From a 2010-dated brochure of Dow Jones’ Watchlist, which at the time had 650,000 names of individuals and entities. The exposed database had 2.4 million records. (Screenshot: TechCrunch)

Many financial institutions and government agencies use the database to approve or deny financing, or even to shutter bank accounts, the BBC previously reported. Others have reported that little or weak evidence can be enough to land someone on the watchlist.

The data is all collected from public sources, such as news articles and government filings. Many of the individual records were sourced from Dow Jones’ Factiva news archive, which ingests data from many news sources — including the Dow Jones-owned The Wall Street Journal.

But the very existence of a name, or the reason why a name exists in the database, is proprietary and closely guarded.

The records we saw vary wildly, but can include names, addresses, cities and locations, whether the person is deceased and, in some cases, photographs. Diachenko also found dates of birth and genders. Each profile had extensive notes collected from Factiva and other sources.

One name found at random was Badruddin Haqqani, a commander in the Haqqani guerilla insurgent network in Afghanistan affiliated with the Taliban. In 2012, the U.S. Treasury imposed sanctions on Haqqani and others for their involvement in financing terrorism. He was killed in a U.S. drone strike in Pakistan months later.

The database record on Haqqani, who was categorized under “sanctions list” and “terror,” included (condensed for clarity):

DOW JONES NOTES:
Killed in Pakistan's North Waziristan tribal area on 21-Aug-2012.

OFFICE OF FOREIGN ASSETS CONTROL (OFAC) NOTES:

Eye Color Brown; Hair Color Brown; Individual's Primary Language Pashto; Operational Commander of the Haqqani Network

EU NOTES:

Additional information from the narrative summary of reasons for listing provided by the Sanctions Committee:

Badruddin Haqqani is the operational commander for the Haqqani Network, a Taliban-affiliated group of militants that operates from North Waziristan Agency in the Federally Administered Tribal Areas of Pakistan. The Haqqani Network has been at the forefront of insurgent activity in Afghanistan, responsible for many high-profile attacks. The Haqqani Network's leadership consists of the three eldest sons of its founder Jalaluddin Haqqani, who joined Mullah Mohammed Omar's Taliban regime in the mid-1990s. Badruddin is the son of Jalaluddin and brother to Nasiruddin Haqqani and Sirajuddin Haqqani, as well as nephew of Khalil Ahmed Haqqani.

Badruddin helps lead Taliban associated insurgents and foreign fighters in attacks against targets in south- eastern Afghanistan. Badruddin sits on the Miram Shah shura of the Taliban, which has authority over Haqqani Network activities.

Badruddin is also believed to be in charge of kidnappings for the Haqqani Network. He has been responsible for the kidnapping of numerous Afghans and foreign nationals in the Afghanistan-Pakistan border region.

UN NOTES:

Other information: Operational commander of the Haqqani Network and member of the Taliban shura in Miram Shah. Has helped lead attacks against targets in southeastern Afghanistan. Son of Jalaluddin Haqqani (TI.H.40.01.). Brother of Sirajuddin Jallaloudine Haqqani (TI.H.144.07.) and Nasiruddin Haqqani (TI.H.146.10.). Nephew of Khalil Ahmed Haqqani (TI.H.150.11.). Reportedly deceased in late August 2012.

FEDERAL FINANCIAL MONITORING SERVICES NOTES:

Entities and individuals against whom there is evidence of involvement in terrorism.

Dow Jones spokesperson Sophie Bent said: “This dataset is part of our risk and compliance feed product, which is entirely derived from publicly available sources. At this time our review suggests this resulted from an authorized third party’s misconfiguration of an AWS server, and the data is no longer available.”

We asked Dow Jones specific questions, such as who the source of the data leak was and if the exposure would be reported to U.S. regulators and European data protection authorities, but the company would not comment on the record.

Two years ago, Dow Jones admitted a similar cloud storage misconfiguration exposed the names and contact information of 2.2 million customers, including subscribers of The Wall Street Journal. The company described the event as an “error.”

TikTok is launching a series of online safety videos in its app

On the heels of news that TikTok has reached 1 billion downloads, the company today is launching a new initiative designed to help inform users about online safety, TikTok’s various privacy settings and other controls they can use within its app, and more. Instead of burying this information in an in-app FAQ or help documentation, the company will release a series of video tutorials meant to be engaging and fun, so they better resemble the other content on TikTok itself.

The safety series, called “You’re in Control,” will star TikTok users and will make use of popular memes, in-app editing tricks and other effects, just like other TikTok videos do.

The videos will focus on a range of privacy, safety and well-being settings and other safety-related policies. This includes TikTok’s Community Guidelines and how in-app reporting works, plus settings for protecting your privacy, controlling comments, managing your screen time, and more.

They’re not exactly your traditional how-to videos, however.

Instead, the videos showcase what is often a more serious issue, like being overrun with unwanted messages, in a humorous fashion. For example, in the video about configuring your message controls, angry commenters are depicted as shouting passengers on an airplane while the user is depicted as an overwhelmed flight attendant.

“Too many DMs?” the video asks. The flight attendant snaps his fingers, which causes most of the passengers to disappear. The scene returns to peace and quiet. It’s a simple enough analogy for TikTok’s younger user base to understand.

This is then followed by a screen recording that shows you how to turn off messaging within the TikTok app’s settings.

Other videos have a similar style.

A barking, growling dog is used to demonstrate Restricted Mode, for instance. A noisy crowd peering over someone’s shoulder is the intro to the video about using comment controls.

Another video encourages the use of screen time controls, asking “can’t put your phone down?” and shows someone so wrapped up in their phone they aren’t watching where they’re walking.

But the video about the Community Guidelines is maybe the most cringe-y, as it feels a bit like your parents reminding you to “play nice.” However, it still manages to set a tone for what TikTok wants to promote – a community for “positive vibes” where everyone feels “safe and comfortable.”

At launch, there are seven of these short-form videos in the safety series, which will launch in the TikTok app in the U.S. and U.K. on Wednesday. In time, the company plans to add other tutorials and expand the series across its global markets, it says.

Of course, TikTok needs more than a series of videos to make its app the safe and welcoming community it desires. It also needs a combination of policies, settings, controls, technology, moderation, and more, the company says.

That said, a focus on user education is an important part of this larger goal, and it stands in stark contrast to Facebook, which for many of its earlier years made its privacy settings so complex and difficult to find and use that people gave up trying.

How well TikTok can execute on user privacy and safety as the app grows still remains to be seen. For now, it tends to be talked about as either a wholesome and fun video experience, or an online cesspool filled with hateful content and child predators. It’s an app on the internet, so both versions of this story are likely true.

There is no large user-generated content site, even those run by Facebook, Twitter, and YouTube, that has figured out how to properly police the hatefulness and evil contained in humanity. But TikTok, at least, takes care not to showcase that content in its main feed; you have to seek it out directly (or train its algorithm by never clicking on anything wholesome).

But, so far, TikTok has been better reviewed by child safety advocates than you might expect. For instance, Common Sense Media – a nonprofit that provides unbiased and trusted advice about all sorts of media, including apps – said that the app, used with parental supervision, can be “a kid-friendly experience.”

The launch of the video series comes at a time when TikTok’s growth is surging. The app recently surpassed a billion installs across the iOS App Store and Google Play, including Lite versions and regional variations, but excluding Android installs in China, according to data from Sensor Tower.

Roughly 25 percent of those installs are from India, the report said. And around 663 million of TikTok’s total installs occurred in 2018, which made the app the No. 4 most downloaded non-game for the year.

However, installs alone don’t tell the story of how many people actually use the app or how often. And a chunk of these could be the same user installing the app on multiple devices, or even bots used to push the app up the charts. In addition, parents often download the app their tween or teen is using for monitoring purposes, but don’t engage with the app or its content on a regular basis.


Cloudflare expands its government warrant canaries

When the government comes for your data, tech companies can’t always tell you. But thanks to a legal loophole, companies can say if they haven’t had a visit yet.

That loophole allows companies to silently warn customers when the government turns up to secretly raid their stash of customer data, without violating a gag order. Under U.S. freedom of speech laws, companies can publicly state that “the government has not been here” so long as there has been no demand for data, and can remove that statement when a warrant comes in, as a warning shot to anyone who pays attention.

These so-called “warrant canaries,” named for the poor canary down the mine, which dies when there’s gas the miners can’t detect, are a key transparency tool predominantly used by privacy-focused companies to keep their customers aware of the goings-on behind the scenes.
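Because the signal is a statement *disappearing*, canaries can be monitored mechanically by diffing a transparency page against a saved list of expected statements. The sketch below is hypothetical: the function name is invented and the canary strings are paraphrases of the kind of statements described here, not any company’s exact published wording.

```python
# Hypothetical warrant-canary monitor: compare the current text of a
# transparency page against a locally saved list of canary statements.
# A non-empty result means a statement has vanished, i.e. the alarm case.

EXPECTED_CANARIES = [
    "has never turned over our encryption or authentication keys",
    "has never installed any law enforcement software or equipment",
    "has never modified customer content at the request of law enforcement",
]

def missing_canaries(page_text, expected=EXPECTED_CANARIES):
    """Return the canary statements no longer present in the page text."""
    normalized = " ".join(page_text.split()).lower()  # collapse whitespace
    return [c for c in expected if c.lower() not in normalized]

# Made-up page text in which the third statement is absent, so it gets flagged:
page = """Transparency report: Cloudflare has never turned over our
encryption or authentication keys to anyone. Cloudflare has never installed
any law enforcement software or equipment anywhere on our network."""

print(missing_canaries(page))
```

In practice a monitor would fetch the live page on a schedule and alert on any non-empty result; the point is that readers, not the company, do the watching.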

Where companies have abandoned their canaries or caved to legal pressure, Cloudflare is bucking the trend.

The networking and content delivery giant said in a blog post this week that it’s expanding its transparency reports to include more canaries.

To date, the company:

  • has never turned over our SSL keys or our customers’ SSL keys to anyone;
  • has never installed any law enforcement software or equipment anywhere on our network;
  • has never terminated a customer or taken down content due to political pressure;
  • has never provided any law enforcement organization a feed of our customers’ content transiting our network.

Those key points are critical to the company’s business. A government demand for SSL keys or the installation of intercept equipment on its network would allow investigators unprecedented access to customers’ communications and data, and undermine the company’s security. A similar demand led Ladar Levison to shut down his email service Lavabit when the U.S. government sought his keys to obtain information on whistleblower Edward Snowden, who used the service.

Now Cloudflare’s warrant canaries will include:

  • Cloudflare has never modified customer content at the request of law enforcement or another third party.
  • Cloudflare has never modified the intended destination of DNS responses at the request of law enforcement or another third party.
  • Cloudflare has never weakened, compromised, or subverted any of its encryption at the request of law enforcement or another third party.

It’s also expanded and replaced its first canary to confirm that the company “has never turned over our encryption or authentication keys or our customers’ encryption or authentication keys to anyone.”

Cloudflare said that if it were ever asked to do any of the above, the company would “exhaust all legal remedies” to protect customer data, and remove the statements from its site.

The networking and content delivery network is one of a handful of major companies that have used warrant canaries over the years. Following reports that the National Security Agency was vacuuming up the call records from the major telecom giants in bulk, Apple included a statement in its most recent transparency reports noting that the company has to date “not received any orders for bulk data.” Reddit removed its warrant canary in 2015, indicating that it had received a national security order it wasn’t permitted to disclose.

Cloudflare’s expanded canaries were included in the company’s latest transparency report, out this week.

According to its latest figures covering the second half of 2018, Cloudflare responded to just seven of the 19 subpoenas it received, affecting 12 accounts and 309 domains. The company also responded to 44 of the 55 court orders, affecting 134 accounts and 19,265 domains.

The company said it received between 0-249 national security requests for the period, and that it didn’t process any wiretap or foreign government requests.

New flaws in 4G, 5G allow attackers to intercept calls and track phone locations

A group of academics have found three new security flaws in 4G and 5G, which they say can be used to intercept phone calls and track the locations of cell phone users.

The findings are said to be the first time vulnerabilities have affected both 4G and the incoming 5G standard, which promises faster speeds and better security, particularly against law enforcement use of cell site simulators, known as “stingrays.” But the researchers say that their new attacks can defeat newer protections that were believed to make it more difficult to snoop on phone users.

“Any person with a little knowledge of cellular paging protocols can carry out this attack,” Syed Rafiul Hussain, one of the co-authors of the paper, told TechCrunch in an email.

Hussain, along with Ninghui Li and Elisa Bertino at Purdue University, and Mitziu Echeverria and Omar Chowdhury at the University of Iowa are set to reveal their findings at the Network and Distributed System Security Symposium in San Diego on Tuesday.

“Any person with a little knowledge of cellular paging protocols can carry out this attack… such as phone call interception, location tracking, or targeted phishing attacks.” Syed Rafiul Hussain, Purdue University

The paper, seen by TechCrunch prior to the talk, details the attacks. The first is Torpedo, which exploits a weakness in the paging protocol that carriers use to notify a phone before a call or text message comes through. The researchers found that several phone calls placed and cancelled in a short period can trigger a paging message without alerting the target device to an incoming call, which an attacker can use to track a victim’s location. Knowing the victim’s paging occasion also lets an attacker hijack the paging channel and inject or deny paging messages, such as by spoofing Amber alerts or blocking messages altogether, the researchers say.

Torpedo opens the door to two other attacks: Piercer, which the researchers say allows an attacker to determine an international mobile subscriber identity (IMSI) on the 4G network; and the aptly named IMSI-Cracking attack, which can brute force an IMSI number in both 4G and 5G networks, where IMSI numbers are encrypted.

That puts even the newest 5G-capable devices at risk from stingrays, said Hussain, which law enforcement use to identify someone’s real-time location and log all the phones within its range. Some of the more advanced devices are believed to be able to intercept calls and text messages, he said.

According to Hussain, all four major U.S. operators — AT&T, Verizon (which owns TechCrunch), Sprint and T-Mobile — are affected by Torpedo, and the attacks can be carried out with radio equipment costing as little as $200. One U.S. network, which he would not name, was also vulnerable to the Piercer attack.

The Torpedo attack, or “TRacking via Paging mEssage DistributiOn.” (Image: supplied)

We contacted the big four cell giants, but none provided comment by the time of writing. If that changes, we’ll update.

Given that two of the attacks exploit flaws in the 4G and 5G standards, almost all the cell networks outside the U.S. are also vulnerable, said Hussain, including several networks in Europe and Asia.

Given the nature of the attacks, he said, the researchers are not releasing the proof-of-concept code to exploit the flaws.

It’s the latest blow to cellular network security, which has faced intense scrutiny, never more so than in the last year, for flaws that have allowed the interception of calls and text messages. Vulnerabilities in Signaling System 7, used by cell networks to route calls and messages across networks, are under active exploitation by hackers. While 4G was meant to be more secure, research shows that it’s just as vulnerable as its 3G predecessor. And 5G was meant to close off many of these interception capabilities, but European data security authorities have warned of similar flaws.

Hussain said the flaws were reported to the GSMA, an industry body that represents mobile operators. GSMA recognized the flaws, but a spokesperson was unable to provide comment when reached. It isn’t known when the flaws will be fixed.

Hussain said the Torpedo and IMSI-Cracking flaws would have to be fixed first by the GSMA, whereas a fix for Piercer depends solely on the carriers. Torpedo remains the priority as it is a precursor to the other flaws, said Hussain.

The paper comes almost exactly a year after Hussain and his colleagues revealed ten separate weaknesses in 4G LTE that allowed eavesdropping on phone calls and text messages, and the spoofing of emergency alerts.