Not just another decentralized web whitepaper?

Given all the hype and noise swirling around crypto and decentralized network projects — a space that runs the full gamut from scams and stupidity to very clever and inspired ideas — the release of yet another whitepaper does not immediately set off an attention klaxon.

But this whitepaper — which details a new protocol for achieving consensus within a decentralized network — is worth paying more attention to than most.

MaidSafe, the team behind it, are also the polar opposite of fly-by-night crypto opportunists. They’ve been working on decentralized networking since long before the space became the hot, hyped thing it is now.

Their overarching mission is to engineer an entirely decentralized Internet which bakes in privacy, security and freedom of expression by design — the ‘Safe’ in their planned ‘Safe Network’ stands for ‘Secure access for everyone’ — meaning it’s encrypted, autonomous, self-organizing and self-healing. The new consensus protocol is another step towards fulfilling that grand vision.

What’s consensus in decentralized networking terms? “Within decentralized networks you must have a way of the network agreeing on a state — such as can somebody access a file or confirming a coin transaction, for example — and the reason you need this is because you don’t have a central server to confirm all this to you,” explains MaidSafe’s COO Nick Lambert, discussing what the protocol is intended to achieve.

“So you need all these decentralized nodes all reaching agreement somehow on a state within the network. Consensus occurs by each of these nodes on the network voting and letting the network as a whole know what it thinks of a transaction.

“It’s almost like consensus could be considered the heart of the networks. It’s required for almost every event in the network.”
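
To make that concrete, here is a toy sketch (in Python; illustrative only, and not MaidSafe’s actual algorithm or code) of the basic mechanic Lambert describes: each node casts a vote on an event, and the network accepts it once a supermajority agrees. The node names and two-thirds threshold are placeholder assumptions.

```python
def tally(votes: dict, threshold: float = 2 / 3) -> bool:
    """Accept the event only if strictly more than `threshold` of nodes voted yes."""
    yes = sum(1 for v in votes.values() if v)
    return yes > threshold * len(votes)

# Five nodes vote on whether a coin transaction is valid.
votes = {"node_a": True, "node_b": True, "node_c": True,
         "node_d": True, "node_e": False}
print(tally(votes))  # True: 4 of 5 nodes agree, so the network accepts the event
```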

We wrote about MaidSafe’s alternative, server-less Internet in 2014. But they actually began work on the project in stealth all the way back in 2006. So they’re over a decade into the R&D at this point.

The network is p2p because it’s being designed so that data is locally encrypted, broken up into pieces and then distributed and replicated across the network, relying on the users’ own compute resources to stand in and take the strain. No servers necessary.
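
As a very rough sketch of that storage model (simplified and hypothetical; the real Safe Network uses its own self-encryption and routing schemes, and the function below is purely illustrative), the shape of the idea is: encrypt locally, split into chunks, and replicate each chunk across several peers.

```python
import hashlib
import secrets

def store(data: bytes, peers: list, chunk_size: int = 16, replicas: int = 3) -> dict:
    """Encrypt locally, chunk, and map each chunk to several peers. Toy code."""
    key = secrets.token_bytes(len(data))                  # stand-in for real encryption
    ciphertext = bytes(a ^ b for a, b in zip(data, key))  # one-time-pad style XOR
    placements = {}
    for i in range(0, len(ciphertext), chunk_size):
        chunk = ciphertext[i:i + chunk_size]
        chunk_id = hashlib.sha256(chunk).hexdigest()      # content-addressed chunk ID
        start = int(chunk_id, 16) % len(peers)            # deterministic placement
        placements[chunk_id] = [peers[(start + r) % len(peers)] for r in range(replicas)]
    return placements  # no central server ever holds the data or the key

print(store(b"hello decentralized world", [f"peer{i}" for i in range(8)]))
```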

The prototype Safe Network is currently in an alpha testing stage (they opened for alpha in 2016). Several more alpha test stages are planned, with a beta release still a distant, undated prospect at this stage. But rearchitecting the entire Internet was clearly never going to be a day’s work.

MaidSafe also ran a multimillion dollar crowdsale in 2014 — for a proxy token of the coin that will eventually be baked into the network — and did so long before ICOs became a crypto-related bandwagon that all sorts of entities were jumping onto. The SafeCoin cryptocurrency is intended to operate as the incentive mechanism for developers to build apps for the Safe Network and for users to contribute compute resource, thus bringing MaidSafe’s distributed dream alive.

Their timing on the token sale front, coupled with prudent hodling of some of the Bitcoin they raised, means they’re essentially in a position of not having to worry about raising more funds to build the network, according to Lambert.

A rough, back-of-an-envelope calculation on MaidSafe’s original crowdsale suggests that, given they raised $2M in Bitcoin in April 2014 when the price of 1BTC was around $500, the Bitcoin they obtained then could be worth between ~$30M and ~$40M at today’s Bitcoin prices — though that assumes they held on to most of it. Bitcoin’s price also peaked far higher last year.
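
For transparency, here is the arithmetic behind that estimate, using the article’s own assumed figures (roughly $2M raised at about $500 per BTC in 2014, and a 2018 price of about $7,500):

```python
raised_usd = 2_000_000   # crowdsale proceeds, April 2014
price_2014 = 500         # approximate USD price of 1 BTC at the time
price_2018 = 7_500       # approximate USD price of 1 BTC at the time of writing

btc_held = raised_usd / price_2014   # ~4,000 BTC
value_now = btc_held * price_2018    # ~$30M at today's prices
print(f"{btc_held:,.0f} BTC -> ${value_now:,.0f}")  # 4,000 BTC -> $30,000,000
```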

As well as the token sale they also did an equity raise in 2016, via the fintech investment platform bnktothefuture, pulling in around $1.7M from that — in a mixture of cash and “some Bitcoin”.

“It’s gone both ways,” says Lambert, discussing the team’s luck with Bitcoin. “The crowdsale we were on the losing end of Bitcoin price decreasing. We did a raise from bnktothefuture in autumn of 2016… and fortunately we held on to quite a lot of the Bitcoin. So we rode the Bitcoin price up. So I feel like the universe paid us back a little bit for that. So it feels like we’re level now.”

“Fundraising is exceedingly time consuming right through the organization, and it does take a lot of time away from what you want to be focusing on, and so to be in a position where you’re not desperate for funding is a really nice one to be in,” he adds. “It allows us to focus on the technology and releasing the network.”

The team’s headcount is now up to around 33, with founding members based at the HQ in Ayr, Scotland, and other engineers working remotely or distributed (including in a new dev office they opened in India at the start of this year), even though MaidSafe is still not taking in any revenue.

This April they also made the decision to switch from a dual licensing approach for their software — previously offering both an open source license and a commercial license (which let people close source their code for a fee) — to going only open source, to encourage more developer engagement and contributions to the project, as Lambert tells it.

“We always see the SafeNetwork a bit like a public utility,” he says. “In terms of once we’ve got this thing up and launched we don’t want to control it or own it because if we do nobody will want to use it — it needs to be seen as everyone contributing. So we felt it’s a much more encouraging sign for developers who want to contribute if they see everything is fully open sourced and cannot be closed source.”

MaidSafe’s story so far is reason enough to take note of their whitepaper.

But the consensus issue the paper addresses is also a key challenge for decentralized networks, so any proposed solution is potentially a big deal — if indeed it pans out as promised.


Protocol for Asynchronous, Reliable, Secure and Efficient Consensus

MaidSafe reckons they’ve come up with a way of achieving consensus on decentralized networks that’s scalable, robust and efficient. Hence the name of the protocol — ‘Parsec’ — being short for: ‘Protocol for Asynchronous, Reliable, Secure and Efficient Consensus’.

They will be open sourcing the protocol under a GPL v3 license — with a rough timeframe of “months” for that release, according to Lambert.

He says they’ve been working on Parsec for the last 18 months to two years — but also drawing on earlier research the team carried out into areas such as conflict-free replicated data types, synchronous and asynchronous consensus, and topics such as threshold signatures and common coin.

More specifically, the research underpinning Parsec is based on the following five papers:

1. Baird L., The Swirlds Hashgraph Consensus Algorithm: Fair, Fast, Byzantine Fault Tolerance, Swirlds Tech Report SWIRLDS-TR-2016-01 (2016)
2. Mostefaoui A., Hamouna M., Raynal M., Signature-Free Asynchronous Byzantine Consensus with t < n/3 and O(n²) Messages, ACM PODC (2014)
3. Micali S., Byzantine Agreement, Made Trivial (2018)
4. Miller A., Xia Y., Croman K., Shi E., Song D., The Honey Badger of BFT Protocols, CCS (2016)
5. Team Rocket, Snowflake to Avalanche: A Novel Metastable Consensus Protocol Family for Cryptocurrencies (2018)

One tweet responding to the protocol’s unveiling just over a week ago wonders whether it’s too good to be true. Time will tell — but the potential is certainly enticing.

Bitcoin’s use of a drastically energy-inefficient ‘proof of work’ method to achieve consensus and write each transaction to its blockchain very clearly doesn’t scale. It’s slow, cumbersome and wasteful. And how to get blockchain-based networks to support the billions of transactions per second that might be needed to sustain the various envisaged applications remains an essential work in progress — with projects investigating various ideas and approaches to try to overcome the limitation.

MaidSafe’s network is not blockchain-based. It’s engineered to function with asynchronous voting of nodes, rather than synchronous voting, which should avoid the bottleneck problems associated with blockchain. But it’s still decentralized. So it needs a consensus mechanism to enable operations and transactions to be carried out autonomously and robustly. That’s where Parsec is intended to slot in.

The protocol does not use proof of work. And it is able, so the whitepaper claims, to achieve consensus even if up to a third of the network’s nodes are malicious — i.e. nodes which are attempting to disrupt network operations or otherwise attack the network.

Another claimed advantage is that decisions made via the protocol are both mathematically guaranteed and irreversible.

“What Parsec does is it can reach consensus even with malicious nodes. And up to a third of the nodes being malicious is what the maths proofs suggest,” says Lambert. “This ability to provide mathematical guarantees that all parts of the network will come to the same agreement at a point in time, even with some fault in the network or bad actors — that’s what Byzantine Fault Tolerance is.”
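
That ‘up to a third’ is the classic Byzantine fault tolerance bound: a network of n nodes can tolerate at most f malicious nodes where n >= 3f + 1, i.e. strictly fewer than a third. A quick illustration of the bound:

```python
def max_malicious(n: int) -> int:
    """Largest f satisfying n >= 3f + 1."""
    return (n - 1) // 3

for n in (4, 10, 100):
    f = max_malicious(n)
    print(f"n={n}: tolerates up to {f} malicious nodes ({f / n:.0%})")
# n=4 tolerates 1 (25%); n=10 tolerates 3 (30%); n=100 tolerates 33 (33%)
```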

In theory a blockchain using proof of work could be hacked if any one entity controlled 51% of the network’s computing power (although in reality the amount of energy that would be required makes this pretty much impractical).

So on the surface MaidSafe’s decentralized network — which ‘only’ needs 33% of its nodes to be compromised for its consensus decisions to be attacked — sounds rather less robust. But Lambert says it’s more nuanced than the numbers suggest. And in fact the malicious third would also need to be nodes that have the authority to vote. “So it is a third but it’s a third of well reputed nodes,” as he puts it.

So there’s an element of proof of stake involved too, bound up with additional planned characteristics of the Safe Network — related to dynamic membership and sharding (Lambert says MaidSafe has additional whitepapers on both those elements coming soon).

“Those two papers, particularly the one around dynamic membership, will explain why having a third of malicious nodes is actually harder than just having 33% of malicious nodes. Because the nodes that can vote have to have a reputation as well. So it’s not just purely you can flood the Safe Network with lots and lots of malicious nodes and override it only using a third of the nodes. What we’re saying is the nodes that can vote and actually have a say must have a good reputation in the network,” he says.

“The other thing is proof of stake… Everyone is desperate to move away from proof of work because of its environmental impact. So proof of stake — I liken it to the Scottish landowners, where people with a lot of power have more say. In the cryptocurrency field, proof of stake might be if you have, let’s say, 10 coins and I have one coin your vote might be worth 10x as much authority as what my one coin would be. So any of these mechanisms that they come up with it has that weighting to it… So the people with the most vested interests in the network are also given the more votes.”
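
A minimal sketch of the stake weighting Lambert is describing, with illustrative numbers only (this is the generic proof-of-stake idea, not any specific network’s rule):

```python
def stake_weighted_vote(stakes: dict, votes: dict) -> bool:
    """Pass if yes-votes carry more than half of the total staked weight."""
    total = sum(stakes.values())
    yes = sum(stakes[node] for node, vote in votes.items() if vote)
    return yes * 2 > total

stakes = {"whale": 10, "minnow": 1}   # ten coins vs one coin
print(stake_weighted_vote(stakes, {"whale": True, "minnow": False}))  # True
```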

Sharding refers to closed groups that allow for consensus votes to be reached by a subset of nodes on a decentralized network. By splitting the network into small sections for consensus voting purposes the idea is you avoid the inefficiencies of having to poll all the nodes on the network — yet can still retain robustness, at least so long as subgroups are carefully structured and secured.

“If you do that correctly you can make it more secure and you can make things much more efficient and faster,” says Lambert. “Because rather than polling, let’s say 6,000 nodes, you might be polling eight nodes. So you can get that information back quickly.

“Obviously you need to be careful about how you do that because with much less nodes you can potentially game the network so you need to be careful how you secure those smaller closed groups or shards. So that will be quite a big thing because pretty much every crypto project is looking at sharding to make, certainly, blockchains more efficient. And so the fact that we’ll have something coming out in that, after we have the dynamic membership stuff coming out, is going to be quite exciting to see the reaction to that as well.”
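
To make the polling arithmetic concrete, here is a toy sharding sketch (hypothetical; MaidSafe’s actual scheme awaits its forthcoming papers) in which node IDs are hashed into small groups, and a consensus question is put only to the group responsible for the data in question:

```python
import hashlib

def shard_of(identifier: str, num_shards: int) -> int:
    """Deterministically map a node or data ID to a shard."""
    return int(hashlib.sha256(identifier.encode()).hexdigest(), 16) % num_shards

nodes = [f"node{i}" for i in range(6000)]
num_shards = 750                         # ~8 nodes per shard on average
target = shard_of("some-data-address", num_shards)
voters = [n for n in nodes if shard_of(n, num_shards) == target]
print(len(voters), "nodes polled instead of", len(nodes))
```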

Voting authority on the Safe Network might be based on a node’s longevity, quality and historical activity — so a sort of ‘reputation’ score (or ledger) that can yield voting rights over time.

“If you’re like that then you will have a vote in these closed groups. And so a third of those votes — and that then becomes quite hard to game because somebody who’s then trying to be malicious would need to have their nodes act as good corporate citizens for a time period. And then all of a sudden become malicious, by which time they’ve probably got a vested stake in the network. So it wouldn’t be possible for someone to just come and flood the network with new nodes and then be malicious because it would not impact upon the network,” Lambert suggests.
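
A rough sketch of that reputation gating, with purely illustrative field names and thresholds (the Safe Network’s actual criteria are unpublished at this point):

```python
from dataclasses import dataclass

@dataclass
class Node:
    age_days: int      # how long the node has participated in the network
    good_events: int   # record of correctly handled network events

def can_vote(node: Node, min_age: int = 30, min_events: int = 1000) -> bool:
    """Only long-lived, well-behaved nodes earn voting rights."""
    return node.age_days >= min_age and node.good_events >= min_events

newcomer = Node(age_days=1, good_events=5)
veteran = Node(age_days=400, good_events=50_000)
print(can_vote(newcomer), can_vote(veteran))  # False True: flooding the
# network with fresh nodes buys an attacker no votes
```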

The computing power that would be required to attack the Safe Network once it’s public and at scale would also be “really, really significant”, he adds. “Once it gets to scale it would be really hard to co-ordinate anything against it because you’re always having to be several hundred percent bigger than the network and then have a co-ordinated attack on it itself. And all of that work might get you to impact the decision within one closed group. So it’s not even network wide… And that decision could be on who accesses one piece of encrypted shard of data for example… Even the thing you might be able to steal is only an encrypted shard of something — it’s not even the whole thing.”

Other distributed ledger projects are similarly working on Asynchronous Byzantine Fault Tolerant (ABFT) consensus models, including those using directed acyclic graphs (DAGs) — another nascent decentralization technology that’s been suggested as an alternative to blockchain.

And indeed ABFT techniques predate Bitcoin, though MaidSafe says these kinds of models have only more recently become viable thanks to research and the relative maturing of decentralized computing and data types, itself a consequence of increased interest and investment in the space.

However in the case of Hashgraph — the DAG project which has probably attracted the most attention so far — it’s closed source, not open. So that’s one major difference with MaidSafe’s approach. 

Another difference that Lambert points to is that Parsec has been built to work in a dynamic, permissionless network environment (essential for the intended use-case, as the Safe Network is intended as a public network). Whereas he claims Hashgraph has only demonstrated its algorithms working on a permissioned (and therefore private) network “where all the nodes are known”.

He also suggests there’s a question mark over whether Hashgraph’s algorithm can achieve consensus when there are malicious nodes operating on the network. Which — if true — would limit what it can be used for.

“The Hashgraph algorithm is only proven to reach agreement if there’s no adversaries within the network,” Lambert claims. “So if everything’s running well then happy days, but if there’s any maliciousness or any failure within that network then — certainly on the basis of what’s been published — it would suggest that that algorithm was not going to hold up to that.”

“I think being able to do all of these things asynchronously with all of the mathematical guarantees is very difficult,” he continues, returning to the core consensus challenge. “So at the moment we see that we have come out with something that is unique, that covers a lot of these bases, and is a very good use for our use-case. And I think will be useful for others — so I think we like to think that we’ve made a paradigm shift or a vast improvement over the state of the art.”


Paradigm shift vs marginal innovation

Despite the team’s conviction that, with Parsec, they’ve come up with something very notable, early feedback includes some very vocal Twitter doubters.

For example there’s a lengthy back-and-forth between several MaidSafe engineers and Ethereum researcher Vlad Zamfir — who dubs the Parsec protocol “overhyped” and a “marginal innovation if that”… so, er, ouch.

Lambert is, if not entirely sanguine, then solidly phlegmatic in the face of a bit of initial Twitter blowback — saying he reckons it will take more time for more detailed responses to come, i.e. allowing for people to properly digest the whitepaper.

“In the world of async BFT algorithms, any advance is huge,” MaidSafe CEO David Irvine also tells us when we ask for a response to Zamfir’s critique. “How huge is subjective, but any advance has to be great for the world. We hope others will advance Parsec like we have built on others (as we clearly state and thank them for their work).  So even if it was a marginal development (which it certainly is not) then I would take that.”

“All in all, though, nothing was said that took away from the fact Parsec moves the industry forward,” he adds. “I felt the comments were a bit juvenile at times and a bit defensive (probably due to us not agreeing with POS in our Medium post) but in terms of the only part commented on (the coin flip) we as a team feel that part could be much more concrete in terms of defining exactly how small such random (finite) delays could be. We know they do not stop the network and a delaying node would be killed, but for completeness, it would be nice to be that detailed.”

A developer source of our own in the crypto/blockchain space — who’s not connected to the MaidSafe or Ethereum projects — also points out that Parsec “getting objective review will take some time given that so many potential reviewers have vested interest in their own project/coin”.

It’s certainly fair to say the space excels at public spats and disagreements. Researchers pouring effort into one project can be less than kind to rivals’ efforts. (And, well, given all the crypto Lambos at stake it’s not hard to see why there can be no love lost — and, ironically, zero trust — between competing champions of trustless tech.)

Another fundamental truth of these projects is they’re all busily experimenting right now, with lots of ideas in play to try and fix core issues like scalability, efficiency and robustness — often having different ideas over implementation even if rival projects are circling and/or converging on similar approaches and techniques.

“Certainly other projects are looking at sharding,” says Lambert. “So I know that Ethereum are looking at sharding. And I think Bitcoin are looking at that as well, but I think everyone probably has quite different ideas about how to implement it. And of course we’re not using a blockchain which makes that another different use-case where Ethereum and Bitcoin obviously are. But everyone has — as with anything — these different approaches and different ideas.”

“Every network will have its own different ways of doing [consensus],” he adds when asked whether he believes Parsec could be adopted by other projects wrestling with the consensus challenge. “So it’s not like some could lift [Parsec] out and just put it in. Ethereum is blockchain-based — I think they’re looking at something around proof of stake, but maybe they could take some ideas or concepts from the work that we’re open sourcing for their specific case.

“If you get other blockchain-less networks like IOTA, Byteball, I think POA is another one as well. These other projects it might be easier for them to implement something like Parsec with them because they’re not using blockchain. So maybe less of that adaption required.”

Whether other projects will deem Parsec worthy of their attention remains to be seen at this point with so much still to play for. Some may prefer to expend effort trying to rubbish a rival approach, whose open source tech could, if it stands up to scrutiny and operational performance, reduce the commercial value of proprietary and patented mechanisms also intended to grease the wheels of decentralized networks — for a fee.

And of course MaidSafe’s developed-in-stealth consensus protocol may also turn out to be a relatively minor development. But finding a non-vested expert to give an impartial assessment of complex network routing algorithms in such a self-interested and, frankly, anarchical industry is another characteristic challenge of the space.

Irvine’s view is that DAG based projects which are using a centralized component will have to move on or adopt what he dubs “state of art” asynchronous consensus algorithms — as MaidSafe believes Parsec is — aka, algorithms which are “more widely accepted and proven”.

“So these projects should contribute to the research, but more importantly, they will have to adopt better algorithms than they use,” he suggests. “So they can play an important part, upgrades! How to upgrade a running DAG based network? How to hard fork a graph? etc. We know how to hard fork blockchains, but upgrading DAG based networks may not be so simple when they are used as ledgers.

“Projects like Hashgraph, Algorand etc will probably use an ABFT algorithm like this as their whole network with a little work for a currency; IOTA, NANO, Byteball etc should. That is entirely possible with advances like Parsec. However adding dynamic membership, sharding, a data layer then a currency is a much larger proposition, which is why Parsec has been in stealth mode while it is being developed.

“We hope that by being open about the algorithm, and making the code open source when complete, we will help all the other projects working on similar problems.”

Of course MaidSafe’s team might be misguided in terms of the breakthrough they think they’ve made with Parsec. But it’s pretty hard to stand up the idea they’re being intentionally misleading.

Because, well, what would be the point of that? While the exact depth of MaidSafe’s funding reserves isn’t clear, Lambert doesn’t sound like a startup guy with money worries. And the team’s staying power cannot be in doubt — over a decade into the R&D needed to underpin their alt network.

It’s true that being around for so long does have some downsides, though. Especially, perhaps, given how hyped the decentralized space has now become. “Because we’ve been working on it for so long, and it’s been such a big project, you can see some negative feedback about that,” as Lambert admits.

And with such intense attention now on the space, injecting energy which in turn accelerates ideas and activity, there’s perhaps extra pressure on a veteran player like MaidSafe to be seen making a meaningful contribution — ergo, it might be tempting for the team to believe the consensus protocol they’ve engineered really is a big deal.

To stand up and be counted amid all the noise, as it were. And to draw attention to their own project — which needs lots of external developers to buy into the vision if it’s to succeed, yet, here in 2018, it’s just one decentralization project among so many. 


The Safe Network roadmap

Consensus aside, MaidSafe’s biggest challenge is still turning the sizable amount of funding and resources the team’s ideas have attracted to date into a bona fide alternative network that anyone really can use. And there’s a very long road to travel still on that front, clearly.

The Safe Network is currently in its alpha 2 test incarnation (up and running since September last year) — consisting of around a hundred nodes that MaidSafe maintains itself.

The core decentralization proposition of anyone being able to supply storage resource to the network via lending their own spare capacity is not yet live — and won’t come fully until alpha 4.

“People are starting to create different apps against that network. So we’ve seen Jams — a decentralized music player… There are a couple of storage style apps… There is encrypted email running as well, and also that is running on Android,” says Lambert. “And we have a forked version of the Beaker browser — that’s the browser that we use right now. So you can create websites on the Safe Network, which has its own protocol, and if you want to go and view those sites you need a Safe browser to do that — so we’ve also been working on our own browser from scratch that we’ll be releasing later this year… So there’s a number of apps that are running against that alpha 2 network.

“What alpha 3 will bring is it will run in parallel with alpha 2 but it will effectively be a decentralized routing network. What that means is it will be one for more technical people to run, and it will enable data to be passed around a network where anyone can contribute their resources to it but it will not facilitate data storage. So it’ll be a command line app, which is probably why it’ll suit technical people more because there’ll be no user interface for it, and they will contribute their resources to enable messages to be passed around the network. So secure messaging would be a use-case for that.

“And then alpha 4 is effectively bringing together alpha 2 and alpha 3. So it adds a storage layer on top of the alpha 3 network — and at that point it gives you the fully decentralized network where users are contributing their resources from home and they will be able to store data, send messages and things of that nature. Potentially during alpha 4, or a later alpha, we’ll introduce test SafeCoin. Which is the final piece of the initial puzzle to provide incentives for users to provide resources and for developers to make apps. So that’s probably what the immediate roadmap looks like.”

On the timeline front Lambert won’t be coaxed into fixing any deadlines to all these planned alphas. They long ago learnt not to try to predict the pace of progress, he says with a laugh. Though he does not question that progress is being made.

“These big infrastructure projects are typically only government funded because the payback is too slow for venture capitalists,” he adds. “So in the past you had things like Arpanet, the precursor to the Internet — that was obviously a US government funded project — and so we’ve taken on a project which has, not grown arms and legs, but certainly there’s more to it than what was initially thought about.

“So we are almost privately funding this infrastructure. Which is quite a big scope, and I would say that’s why it’s taking a bit of time. But we definitely do seem to be making lots of progress.”

Ticketfly’s website is offline after a hacker got into its homepage and database

Following what it calls a “cyber incident,” the event ticket distributor Ticketfly took its homepage offline on Thursday morning. The company left this message on its website, which remains nonfunctional hours later:

“Following a series of recent issues with Ticketfly properties, we’ve determined that Ticketfly has been the target of a cyber incident. Out of an abundance of caution, we have taken all Ticketfly systems temporarily offline as we continue to look into the issue. We are working to bring our systems back online as soon as possible. Please check back later.

For information on specific events please check the social media accounts of the presenting venues/promoters to learn more about availability/status of upcoming shows. In many cases, shows are still happening and tickets may be available at the door.”

Before Ticketfly regained control of its site, a hacker calling themselves IsHaKdZ hijacked it to display apparent database files along with a Guy Fawkes mask and an email contact.

According to correspondence with Motherboard, the hacker apparently demanded a single bitcoin (worth $7,502 at the time of writing) to divulge the vulnerability that left Ticketfly open to attack. Motherboard reports that it was able to verify the validity of at least six sets of user data listed in the hacked files, which included names, addresses, email addresses and phone numbers of Ticketfly customers as well as some employees. We’ll update this story as we learn more.

Government investigation finds federal agencies failing at cybersecurity basics

The Office of Management and Budget reports that the federal government is a shambles — cybersecurity-wise, anyway. Finding little situational awareness, few standard processes for reporting or managing attacks, and almost no agencies adequately performing even basic encryption, the OMB concluded that “the current situation is untenable.”

All told, nearly three quarters of federal agencies have cybersecurity programs that qualified as either “at risk” (significant gaps in security) or “high risk” (fundamental processes not in place).

The report, which you can read here, lists four major findings, each with its own pitiful statistics and recommendations that occasionally amount to a complete about-face or overhaul of existing policies.

1. “Agencies do not understand and do not have the resources to combat the current threat environment.”

The simple truth and perhaps origin of all these problems is that the federal government is a slow-moving beast that can’t keep up with the nimble threat of state-sponsored hackers and the rapid pace of technology. The simplest indicator of this problem is perhaps this: of the 30,899 (!) known successful compromises of federal systems in FY 2016, 11,802 of them never even had their threat vector identified.

38 percent of attacks had no identified method or attacker.
So for 38 percent of successful attacks, they don’t have a clue who did it or how!

This lack of situational awareness means that even if they have budgets in the billions, these agencies don’t have the capability to deploy them effectively.

While cyber spending increases year-over-year, OMB found that agencies are not effectively using available information, such as threat intelligence, incident data, and network traffic flow data to determine the extent to which assets are at risk, or to inform how they prioritize resource allocations.

To this end, the OMB will be working with agencies on a threat-based budget model: looking at what threats could actually affect an agency, what is in place to prevent them, and what specifically needs to be improved.

2. “Agencies do not have standardized cybersecurity processes and IT capabilities.”

There’s immense variety in the tasks and capabilities of our many federal agencies, but you would think that some basics would have been established along the lines of best practices for reporting, standard security measures to lock down secure systems, and so on. Nope!

For example, one agency lists no fewer than 62 separately managed email services in its environment, making it virtually impossible to track and inspect inbound and outbound communications across the agency.

51 percent of agencies can’t detect or whitelist software running on their systems
Only half of the agencies the OMB looked at said they have the ability to detect and whitelist software running on their systems. Now, while it may only be needed on a case-by-case basis for IT to manage users’ apps and watch for troubling processes, well, the capability should at least be there!

When something happens, things are little better: 59 percent of agencies have some kind of standard process for communicating cyber-threats to their users. So, for example, if one of their 62 email systems has been compromised, the agency as likely as not has no good way to notify everyone about it.

And only 30 percent have “predictable, enterprise-wide incident response processes in place,” meaning once the threat has been detected, only one in three has some kind of standard procedure for who to tell and what to tell them.

Establishing standard processes for cybersecurity and general harmony in computing resources is something the OMB has been working on for a long time. Too bad the position of cyber coordinator just got eliminated.

3. “Agencies lack visibility into what is occurring on their networks, and especially lack the ability to detect data exfiltration.”

Monitoring your organization’s data and traffic, both internal and external, is a critical part of any cybersecurity plan. Time and again federal agencies have proven susceptible to all kinds of exfiltration schemes, from USB keys to phishing for login details.

73 percent can’t detect attempts to access large volumes of data.
Turns out that only 27 percent of the agencies even “have the ability to detect and investigate attempts to access large volumes of data.”

Simply put, agencies cannot detect when large amounts of information leave their networks, which is particularly alarming in the wake of some of the high-profile incidents across government and industry in recent years.

Hard to secure your data if you can’t see where it’s going. After the “high-profile incidents” to which the OMB report alludes, one would think that detection and lockdown of data repositories would be one of the first efforts these agencies would make.

Perhaps it’s the total lack of insight into how and why these things occur. Only 17 percent of agencies analyzed incident response data after the fact, so maybe they just filed the incidents away, never to be looked at again.

The OMB has a smart way to start addressing this: one agency that has its act together will be designated a “SOC [Security Operations Center] Center of Excellence.” (Yes, “Center” is there twice.) This SOC will offer secure storage and access as a service to other agencies while the latter improve or establish their own facilities.

4. “Agencies lack standardized and enterprise-wide processes for managing cybersecurity risks.”

There’s a bit of overlap with 2 here, but redundancy is the name of the game when it comes to the U.S. government. This one is a bit more focused on the leadership itself.

While most agencies noted… that their leadership was actively engaged in cybersecurity risk management, many did not, or could not, elaborate in detail on leadership engagement above the CIO level.

Federal agencies possess neither robust risk management programs nor consistent methods for notifying leadership of cybersecurity risks across the agency.

84 percent of agencies failed to meet goals for encrypting data at rest.
In other words, cyber is being left to the cyber-guys, with little guidance or clout offered by the higher-ups at the agencies. That’s important because, as the OMB notes, many decisions or requests can only be made by those higher-ups. For example, budgetary concerns.

Despite “repeated calls from industry leaders, GAO [the Government Accountability Office], and privacy advocates” to utilize encryption wherever possible, less than 16 percent of agencies achieved their targets for encrypting data at rest. 16 percent! Encrypting at rest isn’t even that hard!

Turns out this is an example of under-investment by the powers that be. Non-defense agencies budgeted a combined total of under $51 million for encrypting data in FY 2017, which is extremely little even before you consider that half of that came from just two agencies. How are even motivated IT departments supposed to migrate to encrypted storage when they have no money to hire the experts or get the equipment necessary to do so?

“Agencies have demonstrated that this is a low priority…it is easy to see government’s priorities must be realigned,” the OMB remarked.

While the conclusion of the report isn’t as gloomy as the body, it’s clear that the OMB’s researchers are deeply disappointed by what they found. This is hardly a new issue, despite the current President’s designation of it as a key issue — the previous Presidents did as well, but movement has been slow and halting, punctuated by disastrous breaches and embarrassing leaks.

The report declines to name and shame the offending agencies, perhaps because their failings and successes were diverse and no one deserved worse treatment than another, but it seems highly likely that in less public channels those agencies are not being spared. Hopefully this damning report will put spurs to the efforts that have been limping along for the last decade.

Facebook didn’t see Cambridge Analytica breach coming because it was focused ‘on the old threat’

In light of the massive data scandal involving Cambridge Analytica around the 2016 U.S. presidential election, a lot of people wondered how something like that could’ve happened. Well, Facebook didn’t see it coming, Facebook COO Sheryl Sandberg said at the Code conference this evening.

“If you go back to 2016 and you think about what people were worried about in terms of nations, states or election security, it was largely spam and phishing hacking,” Sandberg said. “That’s what people were worried about.”

She referenced the Sony email hack and how Facebook didn’t have a lot of the problems other companies were having at the time. Unfortunately, while Facebook was focused on not screwing up in that area, “we didn’t see coming a different kind of more insidious threat,” Sandberg said.

Sandberg added, “We realized we didn’t see the new threat coming. We were focused on the old threat and now we understand that this is the kind of threat we have.”

Moving forward, Sandberg said, Facebook now understands the threat and is better able to meet such threats leading into future elections. On stage, Sandberg also said Facebook was not only late to discovering Cambridge Analytica’s unauthorized access to its data, but that Facebook still doesn’t know exactly what data Cambridge Analytica accessed. Facebook was in the midst of conducting its own audit when the U.K. government decided to conduct one of its own, therefore putting Facebook’s on hold.

“They didn’t have any data that we could’ve identified as ours,” Sandberg said. “To this day, we still don’t know what data Cambridge Analytica had.”

Canadian Yahoo hacker gets a five-year prison sentence

After pleading guilty in November, the Canadian hacker at least partially to blame for the massive Yahoo hack that exposed up to 3 billion accounts will face five years in prison. According to the Justice Department, the hacker, 23-year-old Karim Baratov, worked under the guidance of two agents from the FSB, Russia’s spy agency, to compromise the accounts.

Those officers, Dmitry Dokuchaev and Igor Sushchin, reside in Russia, as does Latvian hacker Alexsey Belan who also was implicated in the Yahoo hack. Given their location, those three are unlikely to face consequences for their involvement, but Baratov’s Canadian citizenship made him vulnerable to prosecution.

“Baratov’s role in the charged conspiracy was to hack webmail accounts of individuals of interest to his coconspirator who was working for the FSB and send those accounts’ passwords to Dokuchaev in exchange for money,” the Justice Department described in its summary of Baratov’s sentencing.

Acting U.S. Attorney for the Northern District of California Alex G. Tse issued a stern warning to other would-be hackers doing a foreign government’s dirty work:

The sentence imposed reflects the seriousness of hacking for hire. Hackers such as Baratov ply their trade without regard for the criminal objectives of the people who hire and pay them. These hackers are not minor players; they are a critical tool used by criminals to obtain and exploit personal information illegally. In sentencing Baratov to five years in prison, the Court sent a clear message to hackers that participating in cyber attacks sponsored by nation states will result in significant consequences.

In addition to his prison sentence, Baratov was ordered to pay out all of his remaining assets up to $2,250,000 in the form of a fine. As part of his plea, Baratov also admitted to hacking as many as 11,000 email accounts between 2010 and his arrest in 2017.

Baratov’s crimes include aggravated identity theft and conspiracy to violate the Computer Fraud and Abuse Act.

Students confront the unethical side of tech in ‘Designing for Evil’ course

Whether it’s surveilling or deceiving users, mishandling or selling their data, or engendering unhealthy habits or thoughts, tech these days is not short on unethical behavior. But it isn’t enough to just say “that’s creepy.” Fortunately, a course at the University of Washington is equipping its students with the philosophical insights to better identify — and fix — tech’s pernicious lack of ethics.

“Designing for Evil” just concluded its first quarter at UW’s Information School, where prospective creators of apps and services like those we all rely on daily learn the tools of the trade. But thanks to Alexis Hiniker, who teaches the class, they are also learning the critical skill of inquiring into the moral and ethical implications of those apps and services.

What, for example, is a good way of going about making a dating app that is inclusive and promotes healthy relationships? How can an AI imitating a human avoid unnecessary deception? How can something as invasive as China’s proposed citizen scoring system be made as user-friendly as possible?

I talked to all the student teams at a poster session held on UW’s campus, and also chatted with Hiniker, who designed the course and seemed pleased at how it turned out.

The premise is that the students are given a crash course in ethical philosophy that acquaints them with influential ideas such as utilitarianism and deontology.

“It’s designed to be as accessible to lay people as possible,” Hiniker told me. “These aren’t philosophy students — this is a design class. But I wanted to see what I could get away with.”

The primary text is Harvard philosophy professor Michael Sandel’s popular book Justice, which Hiniker felt combined the various philosophies into a readable, integrated format. After ingesting this, the students grouped up and picked an app or technology that they would evaluate using the principles described, and then prescribe ethical remedies.

As it turned out, finding ethical problems in tech was the easy part — and fixes for them ranged from the trivial to the impossible. Their insights were interesting, but I got the feeling from many of them that there was a sort of disappointment at the fact that so much of what tech offers, or how it offers it, is inescapably and fundamentally unethical.

I found the apps and technologies the students evaluated fell into one of three categories.

Not fundamentally unethical (but could use an ethical tune-up)

WebMD is of course a very useful site, but it was plain to the students that it lacked inclusivity: its symptom checker is stacked against non-English-speakers and those who might not know the names of symptoms. The team suggested a more visual symptom reporter, with a basic body map and non-written symptom and pain indicators.

Hello Barbie, the doll that chats back to kids, is certainly a minefield of potential legal and ethical violations, but there’s no reason it can’t be done right. With parental consent and careful engineering it will be in line with privacy laws, but the team said that it still failed some tests of keeping the dialogue with kids healthy and parents informed. The scripts for interaction, they said, should be public — which is obvious in retrospect — and audio should be analyzed on device rather than in the cloud. Lastly, a set of warning words or phrases indicating unhealthy behaviors could warn parents of things like self-harm while keeping the rest of the conversation secret.

WeChat Discover allows users to find others around them and see recent photos they’ve taken — it’s opt-in, which is good, but it can be filtered by gender, promoting a hookup culture that the team said is frowned on in China. It also obscures many user controls behind multiple layers of menus, which may cause people to share location when they don’t intend to. Some basic UI fixes were proposed by the students, and a few ideas on how to combat the possibility of unwanted advances from strangers.

Netflix isn’t evil, but its tendency to promote binge-watching has robbed its users of many an hour. This team felt that some basic user-set limits like two episodes per day, or delaying the next episode by a certain amount of time, could interrupt the habit and encourage people to take back control of their time.

Fundamentally unethical (fixes are still worth making)

FakeApp is a way to face-swap in video, producing convincing fakes in which a politician or friend appears to be saying something they didn’t. It’s fundamentally deceptive, of course, in a broad sense, but really only if the clips are passed off as genuine. Visible and invisible watermarks, as well as controlled cropping of source videos, were this team’s suggestions, though ultimately the technology won’t yield to these voluntary mitigations. So really, an informed populace is the only answer. Good luck with that!

China’s “social credit” system is not actually, the students argued, absolutely unethical — that judgment involves a certain amount of cultural bias. But I’m comfortable putting it here because of the massive ethical questions it has sidestepped and dismissed on the road to deployment. Their highly practical suggestions, however, were focused on making the system more accountable and transparent. Contest reports of behavior, see what types of things have contributed to your own score, see how it has changed over time, and so on.

Tinder’s unethical nature, according to the team, was based on the fact that it was ostensibly about forming human connections but is very plainly designed to be a meat market. Forcing people to think of themselves as physical objects first and foremost in pursuit of romance is not healthy, they argued, and causes people to devalue themselves. As a countermeasure, they suggested having responses to questions or prompts be the first thing you see about a person. You’d have to swipe based on that before seeing any pictures. I suggested having some dealbreaker questions you’d have to agree on, as well. It’s not a bad idea, though open to gaming (like the rest of online dating).

Fundamentally unethical (fixes are essentially impossible)

The League, on the other hand, was a dating app that proved intractable to ethical guidelines. Not only was it a meat market, but it was a meat market where people paid to be among the self-selected “elite” and could filter by ethnicity and other troubling categories. Their suggestions of removing the fee and these filters, among other things, essentially destroyed the product. Unfortunately, The League is an unethical product for unethical people. No amount of tweaking will change that.

Duplex was taken on by a smart team that nevertheless clearly only started their project after Google I/O. Unfortunately, they found that the fundamental deception intrinsic in an AI posing as a human is ethically impermissible. It could, of course, identify itself — but that would spoil the entire value proposition. But they also asked a question I didn’t think to ask myself in my own coverage: why isn’t this AI exhausting all other options before calling a human? It could visit the site, send a text, use other apps, and so on. AIs in general should default to interacting with websites and apps first, then to other AIs, then and only then to people — at which time it should say it’s an AI.


To me the most valuable part of all these inquiries was learning what hopefully becomes a habit: to look at the fundamental ethical soundness of a business or technology and be able to articulate it.

That may be the difference in a meeting between saying something vague and easily blown off, like “I don’t think that’s a good idea,” and describing a specific harm and the reason why that harm is important — and perhaps how it can be avoided.

As for Hiniker, she has some ideas for improving the course should it be approved for a repeat next year. A broader set of texts, for one: “More diverse writers, more diverse voices,” she said. And ideally it could even be expanded to a multi-quarter course so that the students get more than a light dusting of ethics.

With any luck the kids in this course (and any in the future) will be able to help make those choices, leading to fewer Leagues and Duplexes and more COPPA-compliant smart toys and dating apps that don’t sabotage self-esteem.

Brexit blow for UK’s hopes of helping set AI rules in Europe

The UK’s hopes of retaining an influential role for its data protection agency in shaping European Union regulations post-Brexit — including helping to set any new Europe-wide rules around artificial intelligence — look well and truly dashed.

In a speech at the weekend in front of the International Federation for European Law, the EU’s chief Brexit negotiator, Michel Barnier, shot down the notion of anything other than a so-called ‘adequacy decision’ being on the table for the UK after it exits the bloc.

If granted, an adequacy decision is an EU mechanism for enabling citizens’ personal data to more easily flow from the bloc to third countries — as the UK will be after Brexit.

Such decisions are only granted by the European Commission after a review of a third country’s privacy standards that’s intended to determine that they offer protections essentially equivalent to EU rules.

But the mechanism does not allow for the third country to be involved, in any shape or form, in discussions around forming and shaping the EU’s rules themselves. So, in the UK’s case, the country would be going from having a seat at the rule-making table to being shut out of the process entirely — at a time when the EU is really setting the global agenda on digital regulations.

“The United Kingdom decided to leave our harmonised system of decision-making and enforcement. It must respect the fact that the European Union will continue to work on the basis of this system, which has allowed us to build a single market, and which allows us to deepen our single market in response to new challenges,” said Barnier in Lisbon on Saturday.

“And, as indicated in the European Council guidelines, the UK must understand that the only possibility for the EU to protect personal data is through an adequacy decision. It is one thing to be inside the Union, and another to be outside.”

“Brexit is not, and never will be, in the interest of EU businesses,” he added. “And it will especially run counter to the interests of our businesses if we abandon our decision-making autonomy. This autonomy allows us to set standards for the whole of the EU, but also to see these standards being replicated around the world. This is the normative power of the Union, or what is often called ‘the Brussels effect’.

“And we cannot, and will not, share this decision-making autonomy with a third country, including a former Member State who does not want to be part of the same legal ecosystem as us.”

Earlier this month the UK’s Information Commissioner, Elizabeth Denham, told MPs on the UK parliament’s committee for exiting the European Union that a bespoke data agreement that gave the ICO a continued role after Brexit would be a far superior option to an adequacy agreement — pointing out that the UK stands to lose influence at a time when the EU is setting global privacy standards via the General Data Protection Regulation (GDPR), which came into full force last Friday.

“At this time when the GDPR is in its infancy, participating in shaping and interpreting the law I think is really important. And the group of regulators that sit around the table at the EU are the most influential blocs of regulators — and if we’re outside of that group and we’re an observer we’re not going to have the kind of effect that we need to have with big tech companies. Because that’s all going to be decided by that group of regulators,” she warned.

“The European Data Protection Board will set the weather when it comes to standards for artificial intelligence, for technologies, for regulating big tech. So we will be a less influential regulator, we will continue to regulate the law and protect UK citizens as we do now, but we won’t be at the leading edge of interpreting the GDPR — and we won’t be bringing British values to that table if we’re not at the table.”

She also pointed out that without a bespoke arrangement to accommodate the ICO her office would also be shut out of participating in the GDPR’s one-stop shop, which allows EU data protection agencies to work together and co-ordinate regulatory actions, and which she said “would bring huge advantages to both sides and also to British businesses”.

Huge advantages that the UK stands to lose as a result of Brexit.

With the ICO being excluded from participating in GDPR’s one-stop shop mechanism, it also means UK businesses will have to choose an alternative data protection agency within the EU to act as their lead regulator after Brexit — putting yet another burden on startups as they will need to build new relationships with a regulator in the EU.

The Irish Data Protection Commission seems the likely candidate for UK companies to look to after Brexit, when the ICO is on the side lines of GDPR, given shared language and proximity. (And Ireland’s DPC has been ramping up its headcount in anticipation of handling more investigations as a result of the new regulation.)

But UK businesses would clearly prefer to be able to continue working with their domestic regulator. Unfortunately, though, Brexit closes the door on that option.

We’ve reached out to the ICO for comment and will update this story with any response.

The UK government has committed to aligning the country with GDPR regardless of Brexit — as it seeks to avoid the economic threat of EU-UK data flows being cut off if it’s not judged to be providing adequate data protection.

Looking ahead, that also essentially means the UK will need to keep its regulatory regime aligned with the EU’s in perpetuity — or risk being deemed inadequate, with, once again, the risk of data flows being cut off (or at the very least businesses scrambling to put in place alternative legal arrangements to authorize their data flows, and saddled with the expense of doing so, as happened when Safe Harbor was struck down in 2015).

So, thanks to Brexit, it will be the rest of Europe setting the agenda on regulating AI — with the UK bound to follow.

To truly protect citizens, lawmakers need to restructure their regulatory oversight of big tech

If members of the European Parliament thought they could bring Mark Zuckerberg to heel with his recent appearance, they underestimated the enormous gulf between 21st century companies and their last-century regulators.

Zuckerberg himself reiterated that regulation is necessary, provided it is the “right regulation.”

But anyone who thinks that our existing regulatory tools can rein in our digital behemoths is engaging in magical thinking. Getting to “right regulation” will require us to think very differently.

The challenge goes far beyond Facebook and other social media: the use and abuse of data is going to be the defining feature of just about every company on the planet as we enter the age of machine learning and autonomous systems.

So far, Europe has taken a much more aggressive regulatory approach than anything the US was contemplating before Zuckerberg’s testimony, or has contemplated since.

The EU’s General Data Protection Regulation (GDPR) is now in force, extending data privacy rights to all European citizens regardless of whether their data is processed by companies within the EU or beyond.

But I’m not holding my breath that the GDPR will get us very far on the massive regulatory challenge we face. It is just more of the same when it comes to regulation in the modern economy: a lot of ambiguous costly-to-interpret words and procedures on paper that are outmatched by rapidly evolving digital global technologies.

Crucially, the GDPR still relies heavily on the outmoded technology of user choice and consent, the main result of which has seen almost everyone in Europe (and beyond) inundated with emails asking them to reconfirm permission to keep their data. But this is an illusion of choice, just as it is when we are ostensibly given the option to decide whether to agree to terms set by large corporations in standardized take-it-or-leave-it click-to-agree documents.  

There’s also the problem of actually tracking whether companies are complying. It is likely that the regulation of online activity requires yet more technology, such as blockchain and AI-powered monitoring systems, to track data usage and implement smart contract terms.

As the EU has already discovered with the right to be forgotten, however, governments lack the technological resources needed to enforce these rights. Search engines are required to serve as their own judge and jury in the first instance; Google at last count was handling some 500 delisting requests a day.

The fundamental challenge we face, here and throughout the modern economy, is not: “what should the rules for Facebook be?” but rather, “how can we innovate new ways to regulate effectively in the global digital age?”

The answer is that we need to find ways to harness the same ingenuity and drive that built Facebook to build the regulatory systems of the digital age. One way to do this is with what I call “super-regulation,” which involves developing a market for licensed private regulators that serve two masters: they must achieve regulatory targets set by governments, but they also face a market incentive to compete for business by innovating more cost-effective ways of doing so.

Imagine, for example, if instead of drafting a detailed 261-page law like the EU did, a government instead settled on the principles of data protection, based on core values, such as privacy and user control.

Private entities, both for-profit and non-profit, could apply to a government oversight agency for a license to provide data regulatory services to companies like Facebook, showing that their regulatory approach is effective in achieving these legislative principles.

These private regulators might use technology, big-data analysis, and machine learning to do that. They might also figure out how to communicate simple options to people, in the same way that the developers of our smartphones figured that out. They might develop effective schemes to audit and test whether their systems are working—on pain of losing their license to regulate.

There could be many such regulators among which both consumers and Facebook could choose: some could even specialize in offering packages of data management attributes that would appeal to certain demographics – from the people who want to be invisible online, to those who want their every move documented on social media.

The key here is competition: for-profit and non-profit private regulators compete to attract money and brains to the problem of how to regulate complex systems like data creation and processing.

Zuckerberg thinks there’s some kind of “right” regulation possible for the digital world. I believe him; I just don’t think governments alone can invent it. Ideally, some next generation college kid would be staying up late trying to invent it in his or her dorm room.

The challenge we face is not how to get governments to write better laws; it’s how to get them to create the right conditions for the continued innovation necessary for new and effective regulatory systems.

Vermont passes first law to crack down on data brokers

While Facebook and Cambridge Analytica are hogging the spotlight, data brokers that collect your information from hundreds of sources and sell it wholesale are laughing all the way to the bank. But they’re not laughing in Vermont, where a first-of-its-kind law hems in these dangerous data mongers and gives the state’s citizens much-needed protections.

Data brokers in Vermont will now have to register as such with the state; they must take standard security measures and notify authorities of security breaches (no, they weren’t before); and using their data for criminal purposes like fraud is now its own actionable offense.

If you’re not familiar with data brokers, well, that’s the idea. These companies don’t really have a consumer-facing side, instead opting to collect information on people from as many sources as possible, buying and selling it amongst themselves like the commodity it has become.

This data exists in a regulatory near-vacuum. As long as they step carefully, data brokers can maintain what amounts to a shadow profile on consumers. I talked with Pam Dixon, director of the World Privacy Forum, about this practice.

“If you use an actual credit score, it’s regulated under the Fair Credit Reporting Act,” she told me. “But if you take a thousand points like shopping habits, zip code, housing status, you can create a new credit score; you can use that and it’s not discrimination.”

And while medical data like blood tests are protected from snooping, it’s not against the law for a company to make an educated guess at your condition from the medicine you pay for at the local pharmacy. Now you’re on a secret list of “inferred” diabetics, and that data gets sold to, for example, Facebook, which combines it with its own metrics and allows advertisers to target it.

Oh yes, Facebook does that. Or it did, for years, only ending the practice under the present scrutiny. “When you looked at Facebook’s targeting there were like 90 targets — race, income, housing status — that was all Acxiom data,” Dixon told me; Acxiom is one of the largest brokers.

Data brokers have been quietly supplying everyone with your personal information for a long time. And advertising is the least of its applications: this data is used for informing shadow credit scores, restricting services and offers to certain classes of people, setting terms of loans, and more.
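Dixon’s point about proxy scores and “inferred” conditions is easy to demonstrate. Below is a toy sketch (the weights, signals and purchase mappings are all invented for illustration, not any real broker’s model) of how a de facto credit score and a health inference can be assembled entirely from unregulated data points:

```python
# Toy illustration only: weights, signals and mappings are invented.
# None of these inputs is covered by the Fair Credit Reporting Act,
# yet together they function as a credit score and a health record.

PROXY_WEIGHTS = {
    "owns_home": 120,
    "affluent_zip_code": 80,
    "discount_store_shopper": -60,
}

INFERRED_CONDITIONS = {
    "glucose test strips": "inferred diabetic",
    "nicotine patches": "inferred smoker",
}

def proxy_score(attributes: dict) -> int:
    """Weighted sum of behavioural signals standing in for a credit score."""
    return 500 + sum(weight for key, weight in PROXY_WEIGHTS.items()
                     if attributes.get(key))

def inferred_tags(purchases: list) -> list:
    """Guess medical conditions from pharmacy purchases, not medical records."""
    return [INFERRED_CONDITIONS[item] for item in purchases
            if item in INFERRED_CONDITIONS]

consumer = {"owns_home": False, "affluent_zip_code": False,
            "discount_store_shopper": True}
print(proxy_score(consumer))                   # 440: a de facto score
print(inferred_tags(["glucose test strips"]))  # ['inferred diabetic']
```

The point is not the arithmetic but the loophole: swap one regulated input for a thousand unregulated proxies and the same judgment escapes oversight.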

Vermont’s new law, which took effect late last week, is the nation’s first to address the data broker problem directly.

“It’s been a huge oversight,” said Dixon. “Until Vermont passed this law there was no regulation for data brokers. It’s that serious. We’ve been looking for something like this to be put in place for like 20 years.”

Europe, meanwhile, has leapfrogged American regulators with the monumental GDPR, which just entered into effect.

The issue, she said, has always been defining a data broker. It’s harder than you might think, considering how secretive and influential these companies are. When every company collects data on their customers and occasionally monetizes it, who’s to say where an ordinary business ends and data brokering begins?

They fought previous laws, and they fought this one. But Dixon, who along with the companies themselves was part of the state’s hearings to create the law, said Vermont avoided this pitfall.

“The way the bill is written is extremely well thought through. They didn’t worry as much about the definition, but focused on the activity,” she explained. And indeed the directness and clarity of the law are a pleasant surprise:

While data brokers offer many benefits, there are also risks associated with the widespread aggregation and sale of data about consumers, including risks related to consumers’ ability to know and control information held and sold about them and risks arising from the unauthorized or harmful acquisition and use of consumer information.

Consumers may not be aware that data brokers exist, who the companies are, or what information they collect, and may not be aware of available recourse.

This straightforward description of a subtle and widespread problem greatly enabled by technology is a rarity in a world dominated by legislators and judges who regularly demonstrate ignorance on high-tech topics. (You can read the full law here.)

As Dixon pointed out, lots of companies will find themselves encompassed by the law’s broad definition:

“Data broker” means a business, or unit or units of a business, separately or together, that knowingly collects and sells or licenses to third parties the brokered personal information of a consumer with whom the business does not have a direct relationship.

In other words, anyone who collects data secondhand and resells it. There are a few exceptions for things like consumer-focused information services (411, for example), but it seems unlikely that any of the real brokers will escape the designation.

With the requirement to register, along with a few other disclosures brokers will be required to make, consumers will know which brokers they can opt out of and how. And if they find themselves the victim of a crime that used broker data — a home loan rate secretly raised because of race, for instance, or a job offer rescinded because of a surreptitiously discovered medical condition — they have legal recourse.

Security and access controls at these companies will have to meet minimum standards. And data breach rules mean prompt notification if personal data is leaked in spite of them.

It’s a good first step and one that should prove extremely beneficial to Vermonters; if it proves as successful as Dixon expects, other states may soon imitate it.

US news sites are ghosting European readers on GDPR deadline

A cluster of U.S. news websites has gone dark for readers in Europe as the EU’s new privacy laws went into effect on Friday. The ruleset, known as the General Data Protection Regulation (GDPR), outlines a robust set of requirements that internet companies collecting any personal data on consumers must follow. The consequences are considerable enough that the American media company Tronc decided to block all European readers from its sites rather than risk the ramifications of its apparent noncompliance.

Tronc-owned sites affected by the EU blackout include the Los Angeles Times, The Chicago Tribune, The New York Daily News, The Orlando Sentinel and The Baltimore Sun. Some newspapers owned by Lee Enterprises also blocked European readers, including the St. Louis Post-Dispatch and The Arizona Daily Star.
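For the technically curious, the blunt mechanism behind a blackout like this is simple: resolve the visitor’s IP address to a country and refuse to serve the page. Here is a minimal sketch (the lookup is stubbed out; real sites would use a geolocation database or a CDN-supplied country header, and the country list reflects the EU/EEA as of 2018, when the UK was still covered):

```python
# Minimal geo-fence sketch. The IP lookup is a stub: real deployments
# use a geolocation database (e.g. MaxMind GeoIP) or a CDN header.

# EU member states plus the wider EEA (Iceland, Liechtenstein, Norway),
# as of 2018, when the UK ("GB") was still subject to GDPR.
EEA_COUNTRIES = {
    "AT", "BE", "BG", "CY", "CZ", "DE", "DK", "EE", "ES", "FI", "FR",
    "GB", "GR", "HR", "HU", "IE", "IS", "IT", "LI", "LT", "LU", "LV",
    "MT", "NL", "NO", "PL", "PT", "RO", "SE", "SI", "SK",
}

def country_for_ip(ip: str) -> str:
    """Stub: resolve an IP address to an ISO 3166 country code."""
    raise NotImplementedError("plug in a real geolocation lookup here")

def handle_request(ip: str) -> tuple:
    """Return (HTTP status, body); EEA visitors get a block page instead."""
    country = country_for_ip(ip)
    if country in EEA_COUNTRIES:
        # 451 Unavailable For Legal Reasons is one blunt option;
        # a plain 200 with a block page works just as well.
        return 451, "This content is currently unavailable in your region."
    return 200, "<full article page>"
```

Serving a stripped-down, tracker-free page, as NPR did, uses the same country check but swaps the block page for a plaintext rendering.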

While Tronc deemed its European readership disposable, at least in the short term, most major national U.S. outlets took a different approach, serving a cleaned-up version of their websites or asking users for opt-in consent to use their data. NPR even pointed delighted users toward a plaintext version of its site.

While many of the regional papers that blinked offline for EU users predominantly serve U.S. markets, some are prominent enough to attract an international readership, prompting European users left out in the cold to openly criticize the approach.

Those criticisms are well-deserved. The privacy regulations that GDPR sets in place were first adopted in April 2016, meaning that companies had two years to form a compliance plan before the rules actually went into force.