Europe’s top court takes a broad view of privacy responsibilities around platforms

An interesting ruling by Europe’s top court could have some major implications for data mining tech giants like Facebook and Google, along with anyone who administers pages that allow platforms to collect and process their visitors’ personal data — such as a Facebook fan page or even potentially a site running Google Analytics.

Passing judgement on a series of legal questions referred to it, the CJEU has held that the administrator of a fan page on Facebook is jointly responsible with Facebook for the processing of the data of visitors to the page — aligning with the Advocate General’s opinion to the court, which we covered back in October.

In practical terms, the ruling means tech giants could face more challenges from European data protection authorities. Meanwhile, anyone piggybacking on or plugging into platform services in Europe shouldn’t imagine they can simply pass responsibility to the platforms for ensuring they are compliant with privacy rules.

The CJEU deems both parties to be responsible (aka, ‘data controllers’ in the legal jargon), though the court also emphasizes that “the existence of joint responsibility does not necessarily imply equal responsibility of the various operators involved in the processing of personal data”, adding: “On the contrary, those operators may be involved at different stages of that processing of personal data and to different degrees, so that the level of responsibility of each of them must be assessed with regard to all the relevant circumstances of the particular case.”

The original case dates back to 2011, when a German education and training company with a fan page on Facebook was ordered by a local data protection authority to deactivate the page because neither it nor Facebook had informed users their personal data was being collected. The education company challenged the DPA’s order and, after much legal back and forth, questions were referred to Europe’s top court for a preliminary ruling.

“The fact that an administrator of a fan page uses the platform provided by Facebook in order to benefit from the associated services cannot exempt it from compliance with its obligations concerning the protection of personal data,” the court writes today, handing down its judgement.

“It must be emphasised, moreover, that fan pages hosted on Facebook can also be visited by persons who are not Facebook users and so do not have a user account on that social network. In that case, the fan page administrator’s responsibility for the processing of the personal data of those persons appears to be even greater, as the mere consultation of the home page by visitors automatically starts the processing of their personal data.

“In those circumstances, the recognition of joint responsibility of the operator of the social network and the administrator of a fan page hosted on that network in relation to the processing of the personal data of visitors to that page contributes to ensuring more complete protection of the rights of persons visiting a fan page, in accordance with the requirements of Directive 95/46.”

Facebook unsurprisingly expressed disappointment at the CJEU’s decision when contacted for a response.

“We are disappointed by this ruling. Businesses of all sizes across Europe use internet services like Facebook to reach new customers and grow,” a spokesperson told us via emailed statement. “While there will be no immediate impact on the people and businesses who use Facebook services, we will work to help our partners understand its implications. We are compliant with applicable European law and as part of our preparations for GDPR, we have further improved our privacy policies, controls and tools to make them clearer.”

The company’s go-to legal strategy to defend against data protection challenges in Europe has been to claim it’s only bound by the jurisdiction of the Irish Data Protection Commissioner, given its international HQ is based in Ireland. So it’s essentially relied upon a cosy relationship with a local, pro-business DPA to shield it from complaints filed in other less friendly European jurisdictions.

But as we wrote last fall, that strategy looks to be on borrowed time, as courts in Member States are increasingly showing a willingness to assert jurisdiction over tech giants whose digital services freely cross EU borders and are entirely capable of impacting citizens’ rights everywhere.

“I do think it is becoming harder and harder for any tech company to evade the law,” Jef Ausloos, a researcher at the Centre for IT and IP Law in Belgium, tells us. “We see it in almost every CJEU ruling since GoogleSpain (delisting/rtbf) — the Court wants to ensure complete and effective protection.”

“From now onwards you can go through fan pages (that are in same jurisdiction and/or in a jurisdiction with strong DPA) by proxy to attack Facebook — regardless of one-stop-shop — so great for user-empowerment,” he adds.

“(Co-)responsibilising fan-pages will put massive pressure on Facebook but also Google -Analytics, for example, to enable better control to fan page-administrators and data subjects.”

While today’s CJEU ruling could pave the way for more enforcement of EU data protection rules at a Member State level, there are some caveats as the judgement relates to the bloc’s prior Data Protection Directive — which has now been replaced with an updated privacy framework, in the form of the General Data Protection Regulation (GDPR).

And Facebook is clearly attempting to promote a self-serving interpretation of GDPR that seeks to concentrate jurisdictional elements around a lead data protection authority — under the regulation’s so-called ‘one-stop shop’ principle. So once again it’s trying to lean towards only having to be answerable to the Irish DPA.

However that looks like wishful thinking. The GDPR’s OSS mechanism was not intended to limit the participation of other DPAs where complaints cross Member State borders — but rather to allow for co-ordination between multiple agencies.

And, well, Europe’s top court is making its view on the local competence of data watchdogs increasingly clear…

“[The CJEU ruling] continues the trend set in Google Spain that challenges can be brought across the Union,” agrees Michael Veale, a technology policy researcher at University College London. “However that aspect of the case is specifically about interpretation of the Data Protection Directive.

“The GDPR has a separate system to deal with cross-border processing, with mechanisms present such as the EDPB [European Data Protection Board], and voting systems for particular types of co-ordinated action, and the idea that a ‘lead supervisory authority’ can act but not control an entire process. Now we will see how fragmented that will end up as in practice.”

Playing down the potential impact of the ruling, Facebook — somewhat ironically — points to GDPR’s tightening of the rules around consent as a legal basis for processing personal data. That change puts more onus on data handlers to clearly and cleanly communicate choices to users — at least where consent is the legal basis they’re relying on to process people’s data.

So, in theory, that means any entities handling EU citizens’ personal data should already be thinking far more carefully about their responsibilities to users — certainly more than was the case back in 2011, when the penalties for flouting Europe’s privacy rules were weak enough to be all too easily ignored.

The GDPR’s biggest change to the EU’s privacy regime is not so much new rules as an increase in the maximum penalties for data protection violations, giving enforcement the teeth that have always been lacking and thereby concentrating minds on compliance.

The irony comes because Facebook is itself already facing legal challenges to the consent flows it has designed for GDPR — with early complaints filed against the eponymous Facebook platform and two other Facebook-owned services, Instagram and WhatsApp, alleging they subvert the rules by coercing consent from users. (Another early consent-related complaint has also been lodged against Google’s Android.)

In terms of damage limitation as a result of the CJEU ruling, Facebook says it will work with partners and regulators in Europe to limit the potential impact on its services and on those that use them, suggesting — for example — that it could provide guidance to Page owners on how they can comply with their obligations.

At the start of this year it also announced a series of data protection workshops in Europe, set to run throughout the year and aimed at small and medium businesses — with a stated focus on GDPR compliance.

So it’s already busy on that front — and only now likely to get busier.

But given the sheer volume of fan pages that exist on Facebook there’s no doubt the CJEU judgement greatly increases the company’s surface area for legal liabilities. (Though the ruling does not just apply to Facebook, of course.)

And the court’s backing for local DPAs’ jurisdiction sets the weather going into GDPR, looking like a vital check against any overbearing corporate attempts to reshape the new rules to fit their own ends — at the expense of users’ fundamental rights.

iOS 12 will let users register another person to their Face ID

From advancements in AR to Memojis to group FaceTime, there is plenty to be excited about with iOS 12. But one of the more practical updates to Apple’s mobile operating system, coming this fall, went unmentioned during the keynote at WWDC.

According to 9to5Mac, iOS 12 will allow for two different faces to be registered to Face ID.

Up until now, Face ID has only allowed a single appearance to be registered to the iPhone X. 9to5Mac first noticed the update when combing through the iOS 12 beta, where one can find new settings for Face ID that allow users to “Set Up an Alternative Appearance.”

Here’s what the description says:

In addition to continuously learning how you look, Face ID can recognize an alternative appearance.

While that’s about as unclear as a description might be, 9to5Mac tested and confirmed the update, with one caveat: users who register a second face will not be able to remove it without starting over from scratch with their own Face ID registration. In other words, if you choose to reset the alternate appearance, you’ll also have to clear out all existing data around your own face, too.

That small inconvenience aside, the ability to add a second face to Face ID makes total sense. Couples often pass their phones back and forth as a matter of practicality, and parents often let their children use their phones to play games and check out apps.

Plus, this may hint at Face ID on the next generation of iPads, which tend to be shared amongst multiple users more often than phones.

Apple got even tougher on ad trackers at WWDC

Apple unveiled a handful of pro-privacy enhancements for its Safari web browser at its annual developer event yesterday, building on an ad tracker blocker it announced at WWDC a year ago.

The feature — which Apple dubbed ‘Intelligent Tracking Prevention’ (ITP) — places restrictions on cookies based on how frequently a user interacts with the website that dropped them. After 30 days of a site not being visited, Safari purges its cookies entirely.
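Conceptually, the first version of that policy boils down to something like the sketch below (a minimal illustration with hypothetical names and data structures; Safari’s actual logic lives inside WebKit and isn’t exposed as a public API):

```python
from datetime import datetime, timedelta

THIRTY_DAYS = timedelta(days=30)

def purge_stale_cookies(cookie_jar, last_interaction, now=None):
    """Drop all cookies belonging to any site the user hasn't visited
    in the last 30 days.

    cookie_jar: dict mapping domain -> list of cookies
    last_interaction: dict mapping domain -> datetime of last user visit
    """
    now = now or datetime.utcnow()
    for domain in list(cookie_jar):
        visited = last_interaction.get(domain)
        if visited is None or now - visited > THIRTY_DAYS:
            del cookie_jar[domain]  # purge the site's cookies entirely
    return cookie_jar
```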

Since ITP debuted, a major data misuse scandal has engulfed Facebook, and consumer awareness of how social platforms and data brokers track people around the web and erode their privacy by building detailed profiles to target them with ads has likely never been higher.

Apple was ahead of the pack on this issue and is now nicely positioned to surf a rising wave of concern about how web infrastructure watches what users are doing by getting even tougher on trackers.

Cupertino’s business model also of course aligns with privacy, given the company’s main money spinner is device sales. And features intended to help safeguard users’ data remain one of the clearest and most compelling points of differentiation vs rival devices running Google’s Android OS, for example.

“Safari works really hard to protect your privacy and this year it’s working even harder,” said Craig Federighi, Apple’s SVP of software engineering during yesterday’s keynote.

He then took direct aim at social media giant Facebook — highlighting how social plugins such as Like buttons, and comment fields which use a Facebook login, form a core part of the tracking infrastructure that follows people as they browse across the web.

In April US lawmakers also closely questioned Facebook’s CEO Mark Zuckerberg about the information the company gleans on users via their offsite web browsing, gathered via its tracking cookies and pixels — receiving only evasive answers in return.

Facebook subsequently announced it will launch a Clear History feature, claiming this will let users purge their browsing history from Facebook. But it’s less clear whether the control will allow people to clear their data off of Facebook’s servers entirely.

The feature requires users to trust that Facebook is doing what it claims to be doing. And plenty of questions remain. So, from a consumer point of view, it’s much better to defeat or dilute tracking in the first place — which is what the clutch of features Apple announced yesterday are intended to do.

“It turns out these [like buttons and comment fields] can be used to track you whether you click on them or not. And so this year we are shutting that down,” said Federighi, drawing sustained applause and appreciative woos from the WWDC audience.

He demoed how Safari will show a pop-up asking users whether or not they want to allow the plugin to track their browsing — letting web users “decide to keep your information private”, as he put it.

Safari will also immediately partition cookies for domains that Apple has “determined to have tracking abilities” — removing the 24-hour window after a website interaction that Apple allowed in the first version of ITP.

It has also engineered a feature designed to detect when a domain is solely used as a “first party bounce tracker” — meaning it is never used as a third party content provider but tracks the user purely through navigational redirects — with Safari also purging website data in such instances.
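A toy version of that classification heuristic might look like the following (the data model is invented for illustration; WebKit’s real classifier is internal):

```python
def is_bounce_tracker(domain, observations):
    """Flag a domain as a 'first party bounce tracker': one that only ever
    shows up as a hop in navigational redirects and never serves
    third-party content.

    observations: list of dicts like
      {"domain": ..., "role": "redirect" or "third_party_content"}
    """
    roles = {o["role"] for o in observations if o["domain"] == domain}
    return roles == {"redirect"}  # seen only mid-redirect, nothing else

# A domain seen purely bouncing navigations would get its website data purged:
history = [
    {"domain": "track.example", "role": "redirect"},
    {"domain": "cdn.example", "role": "third_party_content"},
]
assert is_bounce_tracker("track.example", history)
assert not is_bounce_tracker("cdn.example", history)
```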

Another pro-privacy enhancement detailed by Federighi yesterday is intended to counter browser fingerprinting techniques that are also used to track users from site to site — and which can be a way of doing so even when/if tracking cookies are cleared.

“Data companies are clever and relentless,” he said. “It turns out that when you browse the web your device can be identified by a unique set of characteristics like its configuration, the fonts you have installed, and the plugins you might have installed on a device.

“With Mojave we’re making it much harder for trackers to create a unique fingerprint. We’re presenting websites with only a simplified system configuration. We show them only built-in fonts. And legacy plugins are no longer supported so those can’t contribute to a fingerprint. And as a result your Mac will look more like everyone else’s Mac and it will be dramatically more difficult for data companies to uniquely identify your device and track you.”
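To see why that uniformity defeats fingerprinting, consider a toy illustration (not Apple’s code; the characteristics and values are made up): hash together everything a site can observe and treat the digest as the device’s identity.

```python
import hashlib

BUILT_IN_FONTS = ["Courier", "Helvetica", "Times"]  # illustrative subset

def fingerprint(fonts, plugins, config):
    """Hash the observable characteristics into a device 'fingerprint'."""
    material = "|".join(sorted(fonts) + sorted(plugins) + sorted(config))
    return hashlib.sha256(material.encode()).hexdigest()[:16]

# Before: users expose their own fonts and plugins, so hashes are near-unique.
alice = fingerprint(["Helvetica", "Comic Sans", "Wingdings"], ["Flash"], ["retina"])
bob = fingerprint(["Helvetica", "Futura"], [], ["retina"])

# After: everyone reports only built-in fonts, no legacy plugins and a
# simplified configuration, so the hashes collide and identify no one.
alice_after = fingerprint(BUILT_IN_FONTS, [], ["standard"])
bob_after = fingerprint(BUILT_IN_FONTS, [], ["standard"])
assert alice != bob and alice_after == bob_after
```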

In a post detailing ITP 2.0 on its WebKit developer blog, Apple security engineer John Wilander writes that Apple researchers found that cross-site trackers “help each other identify the user”.

“This is basically one tracker telling another tracker that ‘I think it’s user ABC’, at which point the second tracker tells a third tracker ‘Hey, Tracker One thinks it’s user ABC and I think it’s user XYZ’. We call this tracker collusion, and ITP 2.0 detects this behavior through a collusion graph and classifies all involved parties as trackers,” he explains, warning developers they should therefore “avoid making unnecessary redirects to domains that are likely to be classified as having tracking ability” — or else risk being mistaken for a tracker and penalized by having website data purged.
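A minimal sketch of the collusion-graph idea, assuming a simple list of observed redirects between domains (the propagation rule here is a simplification, not WebKit’s actual algorithm):

```python
from collections import defaultdict

def classify_colluders(redirects, known_trackers):
    """Propagate 'tracker' classification across a redirect collusion graph.

    redirects: iterable of (source_domain, destination_domain) pairs
    known_trackers: set of domains already classified as trackers
    """
    graph = defaultdict(set)
    for src, dst in redirects:
        graph[src].add(dst)
        graph[dst].add(src)  # collusion is treated as mutual

    trackers, frontier = set(known_trackers), list(known_trackers)
    while frontier:  # walk the connected component around each known tracker
        node = frontier.pop()
        for neighbour in graph[node]:
            if neighbour not in trackers:
                trackers.add(neighbour)
                frontier.append(neighbour)
    return trackers

# One tracker redirecting through another gets both classified:
flagged = classify_colluders(
    [("tracker-one.example", "tracker-two.example")],
    {"tracker-one.example"},
)
assert "tracker-two.example" in flagged
```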

ITP 2.0 will also downgrade the referrer header of a webpage that a tracker can receive to “just the page’s origin for third party requests to domains that the system has classified as possible trackers and which have not received user interaction” (Apple specifies this is not just a visit to a site but must include an interaction such as a tap/click).

Apple gives the example of a user visiting ‘https://store.example/baby-products/strollers/deluxe-navy-blue.html’, and that page loading a resource from a tracker — which prior to ITP 2.0 would have received a request containing the full referrer (which reveals the exact product being viewed, and from which lots of personal information can be inferred about the user).

But under ITP 2.0, the referrer will be reduced to just “https://store.example/”. Which is a very clear privacy win.
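The referrer reduction itself is simple to illustrate (a sketch, with hypothetical sets standing in for ITP’s internal classification and interaction state):

```python
from urllib.parse import urlsplit

def downgrade_referrer(referrer, destination, classified, interacted):
    """Trim the referrer to its bare origin for requests to domains
    classified as possible trackers that have never received user
    interaction (a tap or click, not merely a visit)."""
    if destination in classified and destination not in interacted:
        parts = urlsplit(referrer)
        return f"{parts.scheme}://{parts.netloc}/"
    return referrer

full = "https://store.example/baby-products/strollers/deluxe-navy-blue.html"
assert downgrade_referrer(full, "tracker.example", {"tracker.example"}, set()) \
    == "https://store.example/"
```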

Another welcome privacy update for Mac users that Apple announced yesterday — albeit one that’s really just playing catch-up with Windows and iOS — is expanded privacy controls in Mojave around the camera and microphone, so they’re protected by default for any app you run. The user has to authorize access, much like on iOS.

Facebook data misuse firm snubs UK watchdog’s legal order

The company at the center of a major Facebook data misuse scandal has failed to respond to a legal order issued by the U.K.’s data protection watchdog to provide a U.S. voter with all the personal information it holds on him.

An enforcement notice was served on Cambridge Analytica affiliate SCL Elections last month, and the deadline passed today without the company providing a response.

The enforcement order followed a complaint by the U.S. academic, professor David Carroll, that the original Subject Access Request (SAR) he made under European law seeking to obtain his personal data had not been satisfactorily fulfilled.

The academic has spent more than a year trying to obtain the data Cambridge Analytica/SCL held on him after learning the company had built psychographic profiles of U.S. voters for the 2016 presidential election, when it was working for the Trump campaign.

Speaking in front of the EU parliament’s justice, civil liberties and home affairs (LIBE) committee today, Carroll said: “We have heard nothing [from SCL in response to the ICO’s enforcement order]. So they have not respected the regulator. They have not co-operated with the regulator. They are not respecting the law, in my opinion. So that’s very troubling — because they seem to be trying to use liquidation to evade their responsibility as far as we can tell.”

While he is not a U.K. citizen, Carroll discovered his personal data had been processed in the U.K. so he decided to bring a test case under U.K. law. The ICO supported his complaint — and last month ordered Cambridge Analytica/SCL Elections to hand over everything it holds on him, warning that failure to comply with the order is a criminal offense that can carry an unlimited fine.

At the same time — and pretty much at the height of a storm of publicity around the data misuse scandal — Cambridge Analytica and SCL Elections announced insolvency proceedings, blaming what they described as “unfairly negative media coverage.”

Its Twitter account has been silent ever since. Though company directors, senior management and investors were quickly spotted attaching themselves to yet another data company. So the bankruptcy proceedings look rather more like an exit strategy to try to escape the snowballing scandal and cover any associated data trails.

There are a lot of data trails though. Back in April Facebook admitted that data on as many as 87 million of its users had been passed to Cambridge Analytica without most of the people’s knowledge or consent.

“I expected to help set precedents of data sovereignty in this case. But I did not expect to be trying to also set rules of liquidation as a way to avoid responsibility for potential data crimes,” Carroll also told the LIBE committee. “So now that this is seeming to become a criminal matter we are now in uncharted waters.

“I’m seeking full disclosure… so that I can evaluate if my opinions were influenced for the presidential election. I suspect that they were, I suspect that I was exposed to malicious information that was trying to [influence my vote] — whether it did is a different question.”

He added that he intends to continue to pursue a claim for full disclosure via the courts, arguing that the only way to assess whether psychographic models can successfully be matched to online profiles for the purposes of manipulating political opinions — which is what Cambridge Analytica/SCL stands accused of misusing Facebook data for — is to see how the company structured and processed the information it sucked out of Facebook’s platform.

“If the predictions of my personality are in [the] 80-90% [range] then we can understand that their model has the potential to affect a population — even if it’s just a tiny slice of the population. Because in the US only about 70,000 voters in three states decided the election,” he added.

What comes after Cambridge Analytica?

The LIBE committee hearing in the European Union’s parliament is the first of a series of planned sessions focused on digging into the Cambridge Analytica Facebook scandal and “setting out a way forward,” as committee chair Claude Moraes put it.

Today’s hearing took evidence from former Facebook employee turned whistleblower Sandy Parakilas; investigative journalist Carole Cadwalladr; Cambridge Analytica whistleblower Chris Wylie; and the U.K.’s information commissioner, Elizabeth Denham, along with her deputy, James Dipple-Johnstone.

The Information Commissioner’s Office has been running a more-than-year-long investigation into political ad targeting on online platforms — which now of course encompasses the Cambridge Analytica scandal and much more besides.

Denham described it today as “unprecedented in scale” — and likely the largest investigation ever undertaken by a data protection agency in Europe.

The inquiry is looking at “exactly what data went where; from whom; and how that flowed through the system; how that data was combined with other data from other data brokers; what were the algorithms that were processed,” explained Dipple-Johnstone, who is leading the investigation for the ICO.

“We’re presently working through a huge volume — many hundreds of terabytes of data — to follow that audit trail and we’re committed to getting to the bottom of that,” he added. “We are looking at over 30 organizations as part of this investigation and the actions of dozens of key individuals. We’re investigating social media platforms, data brokers, analytics firms, political parties and campaign groups across all spectrums and academic institutions.

“We are looking at both regulatory and criminal breaches, and we are working with other regulators, EU data protection colleagues and law enforcement in the U.K. and abroad.”

He said the ICO’s report is now expected to be published at the end of this month.

Denham previously told a U.K. parliamentary committee she’s leaning toward recommending a code of conduct for the use of social media in political campaigns to avoid the risk of political uses of the technology getting ahead of the law — a point she reiterated today.

“Beyond data protection I expect my report will be relevant to other regulators overseeing electoral processes and also overseeing academic research,” she said, emphasizing that the recommendations will be relevant “well beyond the borders of the U.K.”

“What is clear is that work will need to be done to strengthen information-sharing and closer working across these areas,” she added.

Many MEPs asked the witnesses for their views on whether the EU’s new data protection framework, the GDPR, is sufficient to curb the kinds of data abuse and misuse that have been so publicly foregrounded by the Cambridge Analytica-Facebook scandal — or whether additional regulations are required.

On this Denham made a plea for GDPR to be “given some time to work.” “I think the GDPR is an important step, it’s one step but remember the GDPR is the law that’s written on paper — and what really matters now is the enforcement of the law,” she said.

“So it’s the activities that data protection authorities are willing to do. It’s the sanctions that we look at. It’s the users and the citizens who understand their rights enough to take action — because we don’t have thousands of inspectors that are going to go around and look at every system. But we do have millions of users and millions of citizens that can exercise their rights. So it’s the enforcement and the administration of the law. It’s going to take a village to change the scenario.

“You asked me if I thought this kind of activity which we’re speaking about today — involving Cambridge Analytica and Facebook — is happening on other platforms or if there’s other applications or if there’s misuse and misselling of personal data. I would say yes,” she said in response to another question from an MEP.

“Even in the political arena there are other political consultancies that are pairing up with data brokers and other data analytics companies. I think there is a lack of transparency for users across many platforms.”

Parakilas, a former Facebook platform operations manager — and the closest stand-in for the company in the room — fielded many of the questions from MEPs, including being asked for suggestions for a legislative framework that “wouldn’t put brakes on the development of healthy companies” and also would not be unduly burdensome on smaller companies.

He urged EU lawmakers to think about ways to incentivize a commercial ecosystem that works to encourage rather than undermine data protection and privacy, as well as ensuring regulators are properly resourced to enforce the law.

“I think the GDPR is a really important first step,” he added. “What I would say beyond that is there’s going to have to be a lot of thinking that is done about the next generation of technologies — and so while I think GDPR does an admirable job of addressing some of the issues with current technologies, the stuff that’s coming, frankly, when you think about the bad cases, is terrifying.

“Things like deepfakes. The ability to create on-demand content that’s completely fabricated but looks real… Things like artificial intelligence which can predict user actions before those actions are actually done. And in fact Facebook is just one company that’s working on this — but the fact that they have a business model where they could potentially sell the ability to influence future actions using these predictions. There’s a lot of thinking that needs to be done about the frameworks for these new technologies. So I would just encourage you to engage as soon as possible on those new technologies.”

Parakilas also discussed fresh revelations, published by The New York Times at the weekend, about how Facebook’s platform disseminates user data.

The newspaper’s report details how, until April, Facebook’s API was passing user and friend data to at least 60 device makers without gaining people’s consent — despite a consent decree the company struck with the Federal Trade Commission in 2011, which Parakilas suggested “appears to prohibit that kind of behavior.”

He also pointed out the device maker data-sharing “appears to contradict Facebook’s own testimony to Congress and potentially other testimony and public statements they’ve made” — given the company’s repeat claims, since the Cambridge Analytica scandal broke, that it “locked down” data-sharing on its platform in 2015.

Yet data was still flowing out to multiple device maker partners — apparently without users’ knowledge or consent.

“I think this is a very, very important developing story. And I would encourage everyone in this body to follow it closely,” he said.

Two more LIBE hearings are planned around the Cambridge Analytica scandal — one on June 25 and one on July 2 — with the latter slated to include a Facebook representative.

Mark Zuckerberg himself attended a meeting with the EU parliament’s Council of Presidents on May 22, though the format of the meeting was widely criticized for allowing the Facebook founder to cherry-pick questions he wanted to answer — and dodge those he didn’t.

MEPs pushed for Facebook to follow up with answers to their many outstanding questions — and two sets of Facebook responses have now been published by the EU parliament.

In its follow-up responses the company claims, for example, that it does not create shadow profiles on non-users — saying it merely collects information on site visitors in the same way that “any website or app” might.

On the issue of compensation for EU users affected by the Cambridge Analytica scandal — something MEPs also pressed Zuckerberg on — Facebook claims it has not seen evidence that the app developer who harvested people’s data from its platform on behalf of Cambridge Analytica/SCL sold any EU users’ data to the company.

The developer, Dr. Aleksandr Kogan, had been contracted by SCL Elections for U.S.-related election work. Although his apps collected data on Facebook users from all over the world — including some 2.7 million EU citizens.

“We will conduct a forensic audit of Cambridge Analytica, which we hope to complete as soon as we are authorized by the UK’s Information Commissioner,” Facebook also writes on that.

Facebook’s latest privacy blunder has already attracted congressional ire

The news that Facebook, until just recently, offered partners a form of the friend-scraping capability it claimed to have discontinued back in 2014 has, within hours, brought rebuke and a call to action from the House of Representatives.

“It’s deeply concerning that Facebook continues to withhold critical details about the information it has and shares with others. This is just the latest example of Facebook only coming forward when forced to do so by a media outlet,” reads a statement from Rep. Frank Pallone (D-NJ).

Indeed, the question of whether and how a user’s friends’ data was being shared with third parties was brought up during Zuckerberg’s testimony. It is, after all, likely that this is the vector by which millions of users’ data was exfiltrated by agents both malicious and benign.

In the same line of thinking as “don’t talk to the cops,” the CEO was almost certainly instructed not to volunteer any disadvantageous information unless directly asked. Therefore, it should surprise no one that he failed to mention that there existed until quite recently a similar program allowing third parties to collect data on unsuspecting friends.

It’s telling of Facebook’s current predicament that before they can adequately answer some questions, even more arise.

“Our Committee is also still waiting for a lot of answers from Facebook to questions Mr. Zuckerberg could not or would not answer at our hearing,” Pallone said.

He also called for the FTC to get involved: “The Federal Trade Commission must conduct a full review to determine if the consent decree was violated.” I’ve asked if the Representative will be appealing to the FTC directly, and/or whether any existing investigation (the FTC is quiet about these) will be affected.

Pallone is just one among hundreds of senators and representatives, but he is one of the crew responsible for the pending Congressional Review Act rollback of the FCC’s new, weaker net neutrality rules. So it’s not a surprise to see him weigh in quickly on another tech issue. Here’s hoping it helps keep Facebook accountable.

Facebook says it “disagrees” with the New York Times’ criticisms of its device-integrated APIs

Facebook has responded to a New York Times story that raises privacy concerns about the company’s device-integrated APIs, saying that it “disagree[s] with the issues they’ve raised about these APIs.”

Headlined “Facebook Gave Device Makers Deep Access to Data on Users and Friends,” the New York Times article criticizes the privacy protections of device-integrated APIs, which were launched by Facebook a decade ago. Before app stores became common, the APIs enabled Facebook to strike data-sharing partnerships with at least 60 device makers, including Apple, Amazon, BlackBerry, Microsoft and Samsung, that allowed them to offer Facebook features, such as messaging, address books and the like button, to their users.

But they may have given access to more data than assumed, says the article. New York Times reporters Gabriel J.X. Dance, Nicholas Confessore and Michael LaForgia write that “the partnerships, whose scope has not been previously reported, raise concerns about the company’s privacy protections,” as well as its compliance with a consent decree it struck with the Federal Trade Commission in 2011. The FTC is currently investigating Facebook’s privacy practices in light of the Cambridge Analytica data misuse scandal.

“Facebook allowed the device companies access to the data of users’ friends without their explicit consent, even after declaring that it would no longer share such information with outsiders,” the New York Times story says. “Some device makers could retrieve personal information even from users’ friends who believed they had barred any sharing, The New York Times found.”

Facebook said in April it would begin winding down access to its device-integrated APIs, but the New York Times says that many of those partnerships are still in effect.

Facebook is already under intense scrutiny by lawmakers and regulators, including the FTC, because of the Cambridge Analytica revelation, which raised serious concerns about the public APIs used by third-party developers and the company’s data-sharing policies.

“In the furor that followed, Facebook’s leaders said that the kind of access exploited by Cambridge in 2014 was cut off by the next year, when Facebook prohibited developers from collecting information from users’ friends,” the New York Times says. “But the company officials did not disclose that Facebook had exempted the makers of cellphones, tablets and other hardware from such restrictions.”

Facebook told the New York Times that data sharing through device-integrated APIs adhered to its privacy policies and the 2011 FTC agreement. The company also told the newspaper that it knew of no cases where a partner had misused data. Facebook acknowledged that some partners did store users’ data, including data from their Facebook friends, on their own servers, but said that those practices abided by strict agreements.

In a post on Facebook’s blog, vice president of product partnerships Ime Archibong reiterates the company’s stance that the device-integrated APIs were controlled tightly.

“Partners could not integrate the user’s Facebook features with their devices without the user’s permission. And our partnership and engineering teams approved the Facebook experiences these companies built,” he wrote. “Contrary to claims by the New York Times, friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends. We are not aware of any abuse by these companies.”

But the New York Times report claims that Facebook’s partners were able to retrieve user data on relationship status, religion, political leanings and upcoming events, and were also able to get data about their users’ Facebook friends, even if they did not have permission.

“Tests by The Times showed that the partners requested and received data in the same way other third parties did,” it says. “Facebook’s view that the device makers are not outsiders lets the partners go even further, The Times found: They can obtain data about a user’s Facebook friends, even those who have denied Facebook permission to share information with any third parties.”

Not just another decentralized web whitepaper?

Given all the hype and noise swirling around crypto and decentralized network projects, which runs the full gamut from scams and stupidity, to very clever and inspired ideas, the release of yet another whitepaper does not immediately set off an attention klaxon.

But this whitepaper — which details a new protocol for achieving consensus within a decentralized network — is worth paying more attention to than most.

MaidSafe, the team behind it, are also the polar opposite of fly-by-night crypto opportunists. They’ve been working on decentralized networking since long before the space became the hot, hyped thing it is now.

Their overarching mission is to engineer an entirely decentralized Internet which bakes in privacy, security and freedom of expression by design — the ‘Safe’ in their planned ‘Safe Network’ stands for ‘Secure access for everyone’ — meaning it’s encrypted, autonomous, self-organizing, self-healing. And the new consensus protocol is just another piece towards fulfilling that grand vision.

What’s consensus in decentralized networking terms? “Within decentralized networks you must have a way of the network agreeing on a state — such as can somebody access a file or confirming a coin transaction, for example — and the reason you need this is because you don’t have a central server to confirm all this to you,” explains MaidSafe’s COO Nick Lambert, discussing what the protocol is intended to achieve.

“So you need all these decentralized nodes all reaching agreement somehow on a state within the network. Consensus occurs by each of these nodes on the network voting and letting the network as a whole know what it thinks of a transaction.

“It’s almost like consensus could be considered the heart of the networks. It’s required for almost every event in the network.”

We wrote about MaidSafe’s alternative, server-less Internet in 2014. But they actually began work on the project in stealth all the way back in 2006. So they’re over a decade into the R&D at this point.

The network is p2p because it’s being designed so that data is locally encrypted, broken up into pieces, and then stored, distributed and replicated across the network, relying on users’ own compute resources to take the strain. No servers necessary.
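As a rough illustration of that chunk, encrypt and replicate idea, here is a toy sketch (MaidSafe’s real scheme, which it calls self-encryption, derives keys differently, and the XOR stand-in below offers no actual security):

```python
import hashlib

CHUNK_SIZE = 1024 * 1024  # 1 MiB, an arbitrary illustrative size
REPLICAS = 4              # hypothetical replication factor

def store_file(data, nodes):
    """Split data into chunks, 'encrypt' each one, then replicate every
    chunk across several nodes chosen from its content hash."""
    chunk_ids = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        # Stand-in for real encryption: a keystream derived from the chunk.
        key = hashlib.sha256(chunk).digest()
        encrypted = bytes(b ^ key[j % len(key)] for j, b in enumerate(chunk))
        chunk_id = hashlib.sha256(encrypted).hexdigest()
        for r in range(REPLICAS):  # deterministic, hash-derived placement
            node = nodes[(int(chunk_id, 16) + r) % len(nodes)]
            node["store"][chunk_id] = encrypted
        chunk_ids.append(chunk_id)
    return chunk_ids  # the map needed to fetch the file back later

nodes = [{"store": {}} for _ in range(8)]
chunk_map = store_file(b"hello safe network" * 1000, nodes)
```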

The prototype Safe Network is currently in an alpha testing stage (they opened for alpha in 2016). Several more alpha test stages are planned, with a beta release still a distant, undated prospect at this stage. But rearchitecting the entire Internet was clearly never going to be a day’s work.

MaidSafe also ran a multimillion dollar crowdsale in 2014 — for a proxy token of the coin that will eventually be baked into the network — and did so long before ICOs became a crypto-related bandwagon that all sorts of entities were jumping onto. The SafeCoin cryptocurrency is intended to operate as the incentive mechanism for developers to build apps for the Safe Network and for users to contribute compute resources, and thus bring MaidSafe’s distributed dream alive.

Their timing on the token sale front, coupled with prudent hodling of some of the Bitcoins they’ve raised, means they’re essentially in a position of not having to worry about raising more funds to build the network, according to Lambert.

A rough, back-of-an-envelope calculation on MaidSafe’s original crowdsale suggests, given they raised $2M in Bitcoin in April 2014 when the price for 1BTC was up to around $500, the Bitcoins they obtained then could be worth between ~$30M-$40M by today’s Bitcoin prices — though that would be assuming they held on to most of them. Bitcoin’s price also peaked far higher last year too.
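The arithmetic behind that range, assuming they did hold on to most of the coins:

```python
# Back-of-the-envelope: $2M raised at ~$500 per BTC implies roughly 4,000 BTC.
raised_usd, price_2014 = 2_000_000, 500
btc_held = raised_usd / price_2014        # ~4,000 BTC

# Valued at mid-2018 prices of roughly $7,500, and nearer $10,000 at
# other points this year:
print(btc_held * 7_500)     # ~$30M
print(btc_held * 10_000)    # ~$40M
```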

As well as the token sale they also did an equity raise in 2016, via the fintech investment platform bnktothefuture, pulling in around $1.7M from that — in a mixture of cash and “some Bitcoin”.

“It’s gone both ways,” says Lambert, discussing the team’s luck with Bitcoin. “The crowdsale we were on the losing end of Bitcoin price decreasing. We did a raise from bnktothefuture in autumn of 2016… and fortunately we held on to quite a lot of the Bitcoin. So we rode the Bitcoin price up. So I feel like the universe paid us back a little bit for that. So it feels like we’re level now.”

“Fundraising is exceedingly time consuming right through the organization, and it does take a lot of time away from what you want to be focusing on, and so to be in a position where you’re not desperate for funding is a really nice one to be in,” he adds. “It allows us to focus on the technology and releasing the network.”

The team’s headcount is now up to around 33, with founding members based at the HQ in Ayr, Scotland, and other engineers working remotely or distributed (including in a new dev office they opened in India at the start of this year), even though MaidSafe is still not taking in any revenue.

This April they also made the decision to switch from a dual licensing approach for their software — previously offering both an open source license and a commercial license (which let people close source their code for a fee) — to going only open source, to encourage more developer engagement and contributions to the project, as Lambert tells it.

“We always see the SafeNetwork a bit like a public utility,” he says. “In terms of once we’ve got this thing up and launched we don’t want to control it or own it because if we do nobody will want to use it — it needs to be seen as everyone contributing. So we felt it’s a much more encouraging sign for developers who want to contribute if they see everything is fully open sourced and cannot be closed source.”

MaidSafe’s story so far is reason enough to take note of their whitepaper.

But the consensus issue the paper addresses is also a key challenge for decentralized networks so any proposed solution is potentially a big deal — if indeed it pans out as promised.


Protocol for Asynchronous, Reliable, Secure and Efficient Consensus

MaidSafe reckons they’ve come up with a way of achieving consensus on decentralized networks that’s scalable, robust and efficient. Hence the name of the protocol — ‘Parsec’ — being short for: ‘Protocol for Asynchronous, Reliable, Secure and Efficient Consensus’.

They will be open sourcing the protocol under a GPL v3 license — with a rough timeframe of “months” for that release, according to Lambert.

He says they’ve been working on Parsec for the last 18 months to two years — but also drawing on earlier research the team carried out into areas such as conflict-free replicated data types, synchronous and asynchronous consensus, and topics such as threshold signatures and common coin.

More specifically, the research underpinning Parsec is based on the following five papers:

1. Baird L., “The Swirlds Hashgraph Consensus Algorithm: Fair, Fast, Byzantine Fault Tolerance,” Swirlds Tech Report SWIRLDS-TR-2016-01 (2016)
2. Mostefaoui A., Hamouma M., Raynal M., “Signature-Free Asynchronous Byzantine Consensus with t < n/3 and O(n²) Messages,” ACM PODC (2014)
3. Micali S., “Byzantine Agreement, Made Trivial” (2018)
4. Miller A., Xia Y., Croman K., Shi E., Song D., “The Honey Badger of BFT Protocols,” CCS (2016)
5. Team Rocket, “Snowflake to Avalanche: A Novel Metastable Consensus Protocol Family for Cryptocurrencies” (2018)

One tweet responding to the protocol’s unveiling just over a week ago wonders whether it’s too good to be true. Time will tell — but the potential is certainly enticing.

Bitcoin’s use of a drastically energy-inefficient ‘proof of work’ method to achieve consensus and write each transaction to its blockchain very clearly doesn’t scale. It’s slow, cumbersome and wasteful. And how to get blockchain-based networks to support the billions of transactions per second that might be needed to sustain the various envisaged applications remains an essential work in progress — with projects investigating various ideas and approaches to try to overcome the limitation.

MaidSafe’s network is not blockchain-based. It’s engineered to function with asynchronous voting of nodes, rather than synchronous voting, which should avoid the bottleneck problems associated with blockchain. But it’s still decentralized. So it needs a consensus mechanism to enable operations and transactions to be carried out autonomously and robustly. That’s where Parsec is intended to slot in.

The protocol does not use proof of work. And it is able, so the whitepaper claims, to achieve consensus even if a third of the network is comprised of malicious nodes — i.e. nodes which are attempting to disrupt network operations or otherwise attack the network.

Another claimed advantage is that decisions made via the protocol are both mathematically guaranteed and irreversible.

“What Parsec does is it can reach consensus even with malicious nodes. And up to a third of the nodes being malicious is what the maths proofs suggest,” says Lambert. “This ability to provide mathematical guarantees that all parts of the network will come to the same agreement at a point in time, even with some fault in the network or bad actors — that’s what Byzantine Fault Tolerance is.”
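The whitepaper has the formal treatment, but the one-third bound follows the standard arithmetic for Byzantine fault tolerant systems, where n voters can tolerate f faults only if n >= 3f + 1 (a generic illustration, not Parsec’s actual message protocol):

```python
def bft_thresholds(n):
    """For n voting nodes, return (max tolerable malicious nodes, quorum size).

    Classic Byzantine fault tolerance requires n >= 3f + 1: strictly fewer
    than a third of voters may be faulty, and agreement needs a
    supermajority of more than two-thirds of the votes.
    """
    f = (n - 1) // 3   # the largest f satisfying n >= 3f + 1
    quorum = n - f     # equals 2f + 1 when n = 3f + 1
    return f, quorum

# With 3,001 voting nodes, up to 1,000 may be malicious; 2,001 must agree.
assert bft_thresholds(3001) == (1000, 2001)
```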

In theory a blockchain using proof of work could be hacked if any one entity controlled 51% of the network’s hashing power (although in reality the amount of energy that would be required makes such an attack pretty much impractical).

So on the surface MaidSafe’s decentralized network — which ‘only’ needs 33% of its nodes to be compromised for its consensus decisions to be attacked — sounds rather less robust. But Lambert says it’s more nuanced than the numbers suggest. And in fact the malicious third would also need to be nodes that have the authority to vote. “So it is a third but it’s a third of well reputed nodes,” as he puts it.

So there’s an element of proof of stake involved too, bound up with additional planned characteristics of the Safe Network — related to dynamic membership and sharding (Lambert says MaidSafe has additional whitepapers on both those elements coming soon).

“Those two papers, particularly the one around dynamic membership, will explain why having a third of malicious nodes is actually harder than just having 33% of malicious nodes. Because the nodes that can vote have to have a reputation as well. So it’s not just purely you can flood the Safe Network with lots and lots of malicious nodes and override it only using a third of the nodes. What we’re saying is the nodes that can vote and actually have a say must have a good reputation in the network,” he says.

“The other thing is proof of stake… Everyone is desperate to move away from proof of work because of its environmental impact. So proof of stake — I liken it to the Scottish landowners, where people with a lot of power have more say. In the cryptocurrency field, proof of stake might be if you have, let’s say, 10 coins and I have one coin your vote might be worth 10x as much authority as what my one coin would be. So any of these mechanisms that they come up with it has that weighting to it… So the people with the most vested interests in the network are also given the more votes.”

Sharding refers to closed groups that allow for consensus votes to be reached by a subset of nodes on a decentralized network. By splitting the network into small sections for consensus voting purposes the idea is you avoid the inefficiencies of having to poll all the nodes on the network — yet can still retain robustness, at least so long as subgroups are carefully structured and secured.

“If you do that correctly you can make it more secure and you can make things much more efficient and faster,” says Lambert. “Because rather than polling, let’s say 6,000 nodes, you might be polling eight nodes. So you can get that information back quickly.

“Obviously you need to be careful about how you do that because with much less nodes you can potentially game the network so you need to be careful how you secure those smaller closed groups or shards. So that will be quite a big thing because pretty much every crypto project is looking at sharding to make, certainly, blockchains more efficient. And so the fact that we’ll have something coming out in that, after we have the dynamic membership stuff coming out, is going to be quite exciting to see the reaction to that as well.”
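One way such a closed group, or shard, might be selected is by address proximity, sketched below (XOR-distance grouping is a common technique in structured overlay networks and is used here purely as an illustration, not as MaidSafe’s exact routing rule):

```python
import hashlib

GROUP_SIZE = 8  # the "eight nodes" order of magnitude Lambert mentions

def close_group(target, node_ids, size=GROUP_SIZE):
    """Pick the shard responsible for `target`: the `size` nodes whose IDs
    are closest to the target's address in XOR distance."""
    addr = int(hashlib.sha256(target.encode()).hexdigest(), 16)
    return sorted(node_ids, key=lambda nid: int(nid, 16) ^ addr)[:size]

# Instead of polling all 6,000 nodes, poll only the responsible close group:
all_nodes = [hashlib.sha256(str(i).encode()).hexdigest() for i in range(6000)]
voters = close_group("some-chunk-address", all_nodes)
assert len(voters) == GROUP_SIZE
```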

Voting authority on the Safe Network might be based on a node’s longevity, quality and historical activity — so a sort of ‘reputation’ score (or ledger) that can yield voting rights over time.

“If you’re like that then you will have a vote in these closed groups. And so a third of those votes — and that then becomes quite hard to game because somebody who’s then trying to be malicious would need to have their nodes act as good corporate citizens for a time period. And then all of a sudden become malicious, by which time they’ve probably got a vested stake in the network. So it wouldn’t be possible for someone to just come and flood the network with new nodes and then be malicious because it would not impact upon the network,” Lambert suggests.
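That gating of voting rights could be sketched like so (the thresholds are invented for illustration; the Safe Network’s actual parameters aren’t public):

```python
from datetime import datetime, timedelta

MIN_AGE = timedelta(days=30)  # hypothetical minimum node age
MIN_SCORE = 0.9               # hypothetical good-behaviour score

def can_vote(node, now):
    """A node earns voting rights only after behaving well for long enough,
    so flooding the network with brand-new malicious nodes buys no votes."""
    aged = now - node["joined"] >= MIN_AGE
    reputable = node["behaviour_score"] >= MIN_SCORE
    return aged and reputable

now = datetime(2018, 6, 15)
newcomer = {"joined": now - timedelta(days=1), "behaviour_score": 1.0}
veteran = {"joined": now - timedelta(days=400), "behaviour_score": 0.95}
assert not can_vote(newcomer, now) and can_vote(veteran, now)
```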

The computing power that would be required to attack the Safe Network once it’s public and at scale would also be “really, really significant”, he adds. “Once it gets to scale it would be really hard to co-ordinate anything against it because you’re always having to be several hundred percent bigger than the network and then have a co-ordinated attack on it itself. And all of that work might get you to impact the decision within one closed group. So it’s not even network wide… And that decision could be on who accesses one piece of encrypted shard of data for example… Even the thing you might be able to steal is only an encrypted shard of something — it’s not even the whole thing.”

Other distributed ledger projects are similarly working on asynchronous Byzantine Fault Tolerant (ABFT) consensus models, including those using directed acyclic graphs (DAGs) — another nascent decentralization technology that’s been suggested as an alternative to blockchain.

And indeed ABFT techniques predate Bitcoin, though MaidSafe says these kinds of models have only more recently become viable thanks to research and the relative maturing of decentralized computing and data types, itself a consequence of increased interest and investment in the space.

However in the case of Hashgraph — the DAG project which has probably attracted the most attention so far — it’s closed source, not open. So that’s one major difference with MaidSafe’s approach. 

Another difference that Lambert points to is that Parsec has been built to work in a dynamic, permissionless network environment (essential for the intended use-case, as the Safe Network is intended as a public network). Whereas he claims Hashgraph has only demonstrated its algorithms working on a permissioned (and therefore private) network “where all the nodes are known”.

He also suggests there’s a question mark over whether Hashgraph’s algorithm can achieve consensus when there are malicious nodes operating on the network. Which — if true — would limit what it can be used for.

“The Hashgraph algorithm is only proven to reach agreement if there’s no adversaries within the network,” Lambert claims. “So if everything’s running well then happy days, but if there’s any maliciousness or any failure within that network then — certainly on the basis of what’s been published — it would suggest that that algorithm was not going to hold up to that.”

“I think being able to do all of these things asynchronously with all of the mathematical guarantees is very difficult,” he continues, returning to the core consensus challenge. “So at the moment we see that we have come out with something that is unique, that covers a lot of these bases, and is a very good use for our use-case. And I think will be useful for others — so I think we like to think that we’ve made a paradigm shift or a vast improvement over the state of the art.”


Paradigm shift vs marginal innovation

Despite the team’s conviction that, with Parsec, they’ve come up with something very notable, early feedback includes some very vocal Twitter doubters.

For example there’s a lengthy back-and-forth between several MaidSafe engineers and Ethereum researcher Vlad Zamfir — who dubs the Parsec protocol “overhyped” and a “marginal innovation if that”… so, er, ouch.

Lambert is, if not entirely sanguine, then solidly phlegmatic in the face of a bit of initial Twitter blowback — saying he reckons it will take more time for more detailed responses to come, i.e. allowing for people to properly digest the whitepaper.

“In the world of async BFT algorithms, any advance is huge,” MaidSafe CEO David Irvine also tells us when we ask for a response to Zamfir’s critique. “How huge is subjective, but any advance has to be great for the world. We hope others will advance Parsec like we have built on others (as we clearly state and thank them for their work).  So even if it was a marginal development (which it certainly is not) then I would take that.”

“All in all, though, nothing was said that took away from the fact Parsec moves the industry forward,” he adds. “I felt the comments were a bit juvenile at times and a bit defensive (probably due to us not agreeing with POS in our Medium post) but in terms of the only part commented on (the coin flip) we as a team feel that part could be much more concrete in terms of defining exactly how small such random (finite) delays could be. We know they do not stop the network and a delaying node would be killed, but for completeness, it would be nice to be that detailed.”

A developer source of our own in the crypto/blockchain space — who’s not connected to the MaidSafe or Ethereum projects — also points out that Parsec “getting objective review will take some time given that so many potential reviewers have vested interest in their own project/coin”.

It’s certainly fair to say the space excels at public spats and disagreements. Researchers pouring effort into one project can be less than kind to rivals’ efforts. (And, well, given all the crypto Lambos at stake it’s not hard to see why there can be no love lost — and, ironically, zero trust — between competing champions of trustless tech.)

Another fundamental truth of these projects is they’re all busily experimenting right now, with lots of ideas in play to try and fix core issues like scalability, efficiency and robustness — often having different ideas over implementation even if rival projects are circling and/or converging on similar approaches and techniques.

“Certainly other projects are looking at sharding,” says Lambert. “So I know that Ethereum are looking at sharding. And I think Bitcoin are looking at that as well, but I think everyone probably has quite different ideas about how to implement it. And of course we’re not using a blockchain which makes that another different use-case where Ethereum and Bitcoin obviously are. But everyone has — as with anything — these different approaches and different ideas.”

“Every network will have its own different ways of doing [consensus],” he adds when asked whether he believes Parsec could be adopted by other projects wrestling with the consensus challenge. “So it’s not like some could lift [Parsec] out and just put it in. Ethereum is blockchain-based — I think they’re looking at something around proof of stake, but maybe they could take some ideas or concepts from the work that we’re open sourcing for their specific case.

“If you get other blockchain-less networks like IOTA, Byteball, I think POA is another one as well. These other projects it might be easier for them to implement something like Parsec with them because they’re not using blockchain. So maybe less of that adaption required.”

Whether other projects will deem Parsec worthy of their attention remains to be seen at this point with so much still to play for. Some may prefer to expend effort trying to rubbish a rival approach, whose open source tech could, if it stands up to scrutiny and operational performance, reduce the commercial value of proprietary and patented mechanisms also intended to grease the wheels of decentralized networks — for a fee.

And of course MaidSafe’s developed-in-stealth consensus protocol may also turn out to be a relatively minor development. But finding a non-vested expert to give an impartial assessment of complex network routing algorithms conjoined to such a self-interested and, frankly, anarchical industry is another characteristic challenge of the space.

Irvine’s view is that DAG based projects which are using a centralized component will have to move on or adopt what he dubs “state of art” asynchronous consensus algorithms — as MaidSafe believes Parsec is — aka, algorithms which are “more widely accepted and proven”.

“So these projects should contribute to the research, but more importantly, they will have to adopt better algorithms than they use,” he suggests. “So they can play an important part, upgrades! How to upgrade a running DAG-based network? How to hard fork a graph? etc. We know how to hard fork blockchains, but upgrading DAG-based networks may not be so simple when they are used as ledgers.

“Projects like Hashgraph, Algorand etc will probably use an ABFT algorithm like this as their whole network with a little work for a currency; IOTA, NANO, Byteball etc should. That is entirely possible with advances like Parsec. However adding dynamic membership, sharding, a data layer then a currency is a much larger proposition, which is why Parsec has been in stealth mode while it is being developed.

“We hope that by being open about the algorithm, and making the code open source when complete, we will help all the other projects working on similar problems.”

Of course MaidSafe’s team might be misguided in terms of the breakthrough they think they’ve made with Parsec. But it’s pretty hard to stand up the idea they’re being intentionally misleading.

Because, well, what would be the point of that? While the exact depth of MaidSafe’s funding reserves isn’t clear, Lambert doesn’t sound like a startup guy with money worries. And the team’s staying power cannot be in doubt — over a decade into the R&D needed to underpin their alt network.

It’s true that being around for so long does have some downsides, though. Especially, perhaps, given how hyped the decentralized space has now become. “Because we’ve been working on it for so long, and it’s been such a big project, you can see some negative feedback about that,” as Lambert admits.

And with such intense attention now on the space, injecting energy which in turn accelerates ideas and activity, there’s perhaps extra pressure on a veteran player like MaidSafe to be seen making a meaningful contribution — ergo, it might be tempting for the team to believe the consensus protocol they’ve engineered really is a big deal.

To stand up and be counted amid all the noise, as it were. And to draw attention to their own project — which needs lots of external developers to buy into the vision if it’s to succeed, yet, here in 2018, it’s just one decentralization project among so many. 

 

The Safe Network roadmap

Consensus aside, MaidSafe’s biggest challenge is still turning the sizable amount of funding and resources the team’s ideas have attracted to date into a bona fide alternative network that anyone really can use. And there’s a very long road to travel still on that front, clearly.

The Safe Network is currently in its alpha 2 test incarnation, which has been up and running since September last year and consists of around a hundred nodes that MaidSafe maintains itself.

The core decentralization proposition, of anyone being able to supply storage to the network by lending their own spare capacity, is not yet live — and won’t come fully until alpha 4.

“People are starting to create different apps against that network. So we’ve seen Jams — a decentralized music player… There are a couple of storage style apps… There is encrypted email running as well, and also that is running on Android,” says Lambert. “And we have a forked version of the Beaker browser — that’s the browser that we use right now. So you can create websites on the Safe Network, which has its own protocol, and if you want to go and view those sites you need a Safe browser to do that, so we’ve also been working on our own browser from scratch that we’ll be releasing later this year… So there’s a number of apps that are running against that alpha 2 network.

“What alpha 3 will bring is it will run in parallel with alpha 2 but it will effectively be a decentralized routing network. What that means is it will be one for more technical people to run, and it will enable data to be passed around a network where anyone can contribute their resources to it but it will not facilitate data storage. So it’ll be a command line app, which is probably why it’ll suit technical people more because there’ll be no user interface for it, and they will contribute their resources to enable messages to be passed around the network. So secure messaging would be a use-case for that.

“And then alpha 4 is effectively bringing together alpha 2 and alpha 3. So it adds a storage layer on top of the alpha 3 network — and at that point it gives you the fully decentralized network where users are contributing their resources from home and they will be able to store data, send messages and things of that nature. Potentially during alpha 4, or a later alpha, we’ll introduce test SafeCoin. Which is the final piece of the initial puzzle to provide incentives for users to provide resources and for developers to make apps. So that’s probably what the immediate roadmap looks like.”
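To make that layering concrete, here is a toy sketch in Python of the distinction Lambert describes: an alpha 3 node that only contributes resources to pass messages, versus an alpha 4 node that adds a storage layer on top. All names here are hypothetical illustrations; the real Safe Network codebase (written largely in Rust) handles encryption, consensus and node churn, none of which is modeled here.

```python
# Toy illustration of the alpha 3 vs. alpha 4 layering described above.
# Hypothetical names throughout; not MaidSafe's actual design.

class RoutingNode:
    """Alpha 3: contributes resources to relay messages; stores nothing."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.peers = {}  # node_id -> RoutingNode

    def connect(self, peer):
        self.peers[peer.node_id] = peer

    def route(self, dest_id, message):
        # Naive forwarding: deliver only if we know the destination directly.
        # A real network would relay hop-by-hop toward dest_id.
        if dest_id in self.peers:
            self.peers[dest_id].receive(message)
        else:
            raise LookupError(f"no route to {dest_id}")

    def receive(self, message):
        print(f"{self.node_id} got: {message}")


class StorageNode(RoutingNode):
    """Alpha 4: the same routing node, plus a data storage layer."""
    def __init__(self, node_id):
        super().__init__(node_id)
        self.chunks = {}  # chunk_id -> bytes

    def store(self, chunk_id, data):
        self.chunks[chunk_id] = data

    def fetch(self, chunk_id):
        return self.chunks[chunk_id]


# Secure messaging needs only the routing layer (alpha 3);
# storing data requires the storage layer on top (alpha 4).
a, b = StorageNode("a"), StorageNode("b")
a.connect(b)
a.route("b", "hello")           # alpha 3 behaviour
b.store("chunk-1", b"payload")  # alpha 4 behaviour
```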

On the timeline front, Lambert won’t be coaxed into fixing deadlines to any of these planned alphas. They long ago learnt not to try to predict the pace of progress, he says with a laugh. Though he doesn’t doubt that progress is being made.

“These big infrastructure projects are typically only government funded because the payback is too slow for venture capitalists,” he adds. “So in the past you had things like Arpanet, the precursor to the Internet — that was obviously a US government funded project — and so we’ve taken on a project which has, not grown arms and legs, but certainly there’s more to it than what was initially thought about.

“So we are almost privately funding this infrastructure. Which is quite a big scope, and I would say that’s why it’s taking a bit of time. But we definitely do seem to be making lots of progress.”

Ticketfly’s website is offline after a hacker got into its homepage and database

Following what it calls a “cyber incident,” the event ticket distributor Ticketfly took its homepage offline on Thursday morning. The company left this message on its website, which remains nonfunctional hours later:

“Following a series of recent issues with Ticketfly properties, we’ve determined that Ticketfly has been the target of a cyber incident. Out of an abundance of caution, we have taken all Ticketfly systems temporarily offline as we continue to look into the issue. We are working to bring our systems back online as soon as possible. Please check back later.

“For information on specific events please check the social media accounts of the presenting venues/promoters to learn more about availability/status of upcoming shows. In many cases, shows are still happening and tickets may be available at the door.”

Before Ticketfly regained control of its site, a hacker calling themselves IsHaKdZ hijacked it to display apparent database files along with a Guy Fawkes mask and an email contact.

According to correspondence with Motherboard, the hacker apparently demanded a single bitcoin (worth $7,502 at the time of writing) to divulge the vulnerability that left Ticketfly open to attack. Motherboard reports that it was able to verify the validity of at least six sets of user data listed in the hacked files, which included names, addresses, email addresses and phone numbers of Ticketfly customers as well as some employees. We’ll update this story as we learn more.

Government investigation finds federal agencies failing at cybersecurity basics

The Office of Management and Budget reports that the federal government is a shambles — cybersecurity-wise, anyway. Finding little situational awareness, few standard processes for reporting or managing attacks, and almost no agencies adequately performing even basic encryption, the OMB concluded that “the current situation is untenable.”

All told, nearly three quarters of federal agencies have cybersecurity programs that qualified as either “at risk” (significant gaps in security) or “high risk” (fundamental processes not in place).

The report, which you can read here, lists four major findings, each with its own pitiful statistics and recommendations that occasionally amount to a complete about-face or overhaul of existing policies.

1. “Agencies do not understand and do not have the resources to combat the current threat environment.”

The simple truth, and perhaps the origin of all these problems, is that the federal government is a slow-moving beast that can’t keep up with the nimble threat of state-sponsored hackers and the rapid pace of technology. Perhaps the simplest indicator: of the 30,899 (!) known successful compromises of federal systems in FY 2016, 11,802 never even had their threat vector identified.

So for 38 percent of successful attacks, they don’t have a clue who did it or how!
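For the record, that 38 percent figure checks out against the report’s own numbers. A trivial sanity check:

```python
# Share of successful compromises with no identified threat vector,
# per the OMB report's FY 2016 figures.
total_compromises = 30_899  # known successful compromises
unidentified = 11_802       # threat vector never identified

share = unidentified / total_compromises
print(f"{share:.1%}")  # 38.2%, i.e. the report's "38 percent"
```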

This lack of situational awareness means that even if they have budgets in the billions, these agencies don’t have the capability to deploy them effectively.

While cyber spending increases year-over-year, OMB found that agencies are not effectively using available information, such as threat intelligence, incident data, and network traffic flow data, to determine the extent that assets are at risk, or inform how they prioritize resource allocations.

To this end, the OMB will be working with agencies on a threat-based budget model: looking at which threats could actually affect an agency, what is in place to prevent them, and what specifically needs to be improved.

2. “Agencies do not have standardized cybersecurity processes and IT capabilities.”

There’s immense variety in the tasks and capabilities of our many federal agencies, but you would think that some basics would have been established along the lines of best practices for reporting, standard security measures to lock down secure systems, and so on. Nope!

For example, one agency lists no fewer than 62 separately managed email services in its environment, making it virtually impossible to track and inspect inbound and outbound communications across the agency.

Barely half of the agencies the OMB looked at (49 percent) said they have the ability to detect and whitelist software running on their systems; the other 51 percent can’t. Now, while it may only be needed on a case-by-case basis for IT to manage users’ apps and watch for troubling processes, well, the capability should at least be there!
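And the capability isn’t exotic. Here is a minimal sketch of the idea in Python, using the third-party psutil package; the allowlist hash below is a placeholder (it’s the SHA-256 of an empty file), and a real deployment would enforce, not just print:

```python
import hashlib
import psutil  # third-party: pip install psutil

# Placeholder allowlist: SHA-256 hashes of approved executables.
ALLOWED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

# Flag any running process whose binary isn't on the allowlist.
for proc in psutil.process_iter(["pid", "name", "exe"]):
    exe = proc.info["exe"]
    if not exe:
        continue  # kernel threads, processes we can't inspect, etc.
    try:
        if sha256_of(exe) not in ALLOWED_HASHES:
            print(f"not whitelisted: {proc.info['name']} (pid {proc.info['pid']})")
    except OSError:
        pass  # binary unreadable or already gone
```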

When something happens, things are little better: only 59 percent of agencies have some kind of standard process for communicating cyber-threats to their users. So, for example, if one of those 62 email systems has been compromised, the agency as likely as not has no good way to notify everyone about it.

And only 30 percent have “predictable, enterprise-wide incident response processes in place,” meaning once the threat has been detected, only one in three has some kind of standard procedure for who to tell and what to tell them.

Establishing standard processes for cybersecurity and general harmony in computing resources is something the OMB has been working on for a long time. Too bad the position of cyber coordinator just got eliminated.

3. “Agencies lack visibility into what is occurring on their networks, and especially lack the ability to detect data exfiltration.”

Monitoring your organization’s data and traffic, both internal and external, is a critical part of any cybersecurity plan. Time and again federal agencies have proven susceptible to all kinds of exfiltration schemes, from USB keys to phishing for login details.

Turns out that only 27 percent of the agencies even “have the ability to detect and investigate attempts to access large volumes of data.” The other 73 percent can’t.

Simply put, agencies cannot detect when large amounts of information leave their networks, which is particularly alarming in the wake of some of the high-profile incidents across government and industry in recent years.

Hard to secure your data if you can’t see where it’s going. After the “high-profile incidents” to which the OMB report alludes, one would think that detection and lockdown of data repositories would be one of the first efforts these agencies would make.
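At its crudest, detecting bulk exfiltration is just egress accounting: add up what each host sends out and flag the outliers. A toy sketch of that idea follows; the flow records and the threshold are invented for illustration, and real tooling would work off netflow or packet-capture data rather than tuples:

```python
from collections import defaultdict

# Toy egress monitor: flag hosts pushing an unusually large volume of
# data out of the network. Threshold is an invented example value.
EGRESS_THRESHOLD_BYTES = 5 * 1024**3  # 5 GB per host per window

def flag_bulk_egress(flows):
    """flows: iterable of (src_host, dst_host, bytes_out) tuples."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        totals[src] += nbytes
    return [host for host, total in totals.items()
            if total > EGRESS_THRESHOLD_BYTES]

flows = [
    ("10.0.0.5", "203.0.113.9", 6 * 1024**3),    # suspiciously large
    ("10.0.0.7", "198.51.100.2", 200 * 1024**2), # routine traffic
]
print(flag_bulk_egress(flows))  # ['10.0.0.5']
```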

Perhaps it’s the total lack of insight into how and why these things occur. Only 17 percent of agencies analyzed incident response data after the fact, so maybe they just filed the incidents away, never to be looked at again.

The OMB has a smart way to start addressing this: one agency that has its act together will be designated a “SOC [Security Operations Center] Center of Excellence.” (Yes, “Center” is there twice.) This SOC will offer secure storage and access as a service to other agencies while the latter improve or establish their own facilities.

4. “Agencies lack standardized and enterprise-wide processes for managing cybersecurity risks.”

There’s a bit of overlap with 2 here, but redundancy is the name of the game when it comes to the U.S. government. This one is a bit more focused on the leadership itself.

While most agencies noted… that their leadership was actively engaged in cybersecurity risk management, many did not, or could not, elaborate in detail on leadership engagement above the CIO level.

Federal agencies possess neither robust risk management programs nor consistent methods for notifying leadership of cybersecurity risks across the agency.

In other words, cyber is being left to the cyber-guys, with little guidance or clout offered by the higher-ups at the agencies. That’s important because, as the OMB notes, many decisions or requests can only be made by those higher-ups. For example, budgetary concerns.

Despite “repeated calls from industry leaders, GAO [the Government Accountability Office], and privacy advocates” to utilize encryption wherever possible, less than 16 percent of agencies achieved their targets for encrypting data at rest. 16 percent! Encrypting at rest isn’t even that hard!
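It really isn’t. As one hedged illustration of how low the bar is, here are a few lines using Python’s widely used cryptography package; key management, the genuinely hard part, is deliberately glossed over:

```python
# Minimal encryption-at-rest sketch with the `cryptography` package
# (pip install cryptography). In practice the key would live in an
# HSM or key-management service, not alongside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store this securely!
f = Fernet(key)

ciphertext = f.encrypt(b"citizen records, tax data, etc.")
# ... write `ciphertext` to disk instead of the plaintext ...

plaintext = f.decrypt(ciphertext)
assert plaintext == b"citizen records, tax data, etc."
```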

Turns out this is an example of under-investment by the powers that be. Non-defense agencies budgeted a combined total of under $51 million for encrypting data in FY 2017, which is extremely little even before you consider that half of that came from just two agencies. How are even motivated IT departments supposed to migrate to encrypted storage when they have no money to hire the experts or get the equipment necessary to do so?

“Agencies have demonstrated that this is a low priority…it is easy to see government’s priorities must be realigned,” the OMB remarked.

While the conclusion of the report isn’t as gloomy as the body, it’s clear that the OMB’s researchers are deeply disappointed by what they found. This is hardly a new issue: the current President has designated it a key one, as did his predecessors, but movement has been slow and halting, punctuated by disastrous breaches and embarrassing leaks.

The report declines to name and shame the offending agencies, perhaps because their failings and successes were diverse and no one deserved worse treatment than another, but it seems highly likely that in less public channels those agencies are not being spared. Hopefully this damning report will put spurs to the efforts that have been limping along for the last decade.

Facebook didn’t see Cambridge Analytica breach coming because it was focused ‘on the old threat’

In light of the massive data scandal involving Cambridge Analytica around the 2016 U.S. presidential election, a lot of people wondered how something like that could’ve happened. Well, the company didn’t see it coming, Facebook COO Sheryl Sandberg said at the Code conference this evening.

“If you go back to 2016 and you think about what people were worried about in terms of nations, states or election security, it was largely spam and phishing hacking,” Sandberg said. “That’s what people were worried about.”

She referenced the Sony email hack and how Facebook didn’t have a lot of the problems other companies were having at the time. Unfortunately, while Facebook was focused on not screwing up in that area, “we didn’t see coming a different kind of more insidious threat,” Sandberg said.

Sandberg added, “We realized we didn’t see the new threat coming. We were focused on the old threat and now we understand that this is the kind of threat we have.”

Moving forward, Sandberg said, Facebook now understands the threat and is better able to meet such threats heading into future elections. On stage, Sandberg also said Facebook was not only late to discover Cambridge Analytica’s unauthorized access to its data, but that it still doesn’t know exactly what data Cambridge Analytica accessed. Facebook was in the midst of conducting its own audit when the U.K. government decided to conduct one of its own, putting Facebook’s on hold.

“They didn’t have any data that we could’ve identified as ours,” Sandberg said. “To this day, we still don’t know what data Cambridge Analytica had.”