DEA says AT&T still provides access to billions of phone records

A program that allows drug agents to obtain a pool of billions of call records from AT&T is “still active,” according to a watchdog report.

The report, published Thursday by the Justice Department inspector general, confirmed the program — named in the report only as Project C — continues to provide access to “billions” of domestic and international call records to Drug Enforcement Administration agents. The program allows agents to pull information about the callers and when and where a call was made from the telecoms provider operating the program without requiring a court order.

Details of Project C — known internally as Hemisphere — were first revealed in 2013, six years after the program began in 2007. According to the watchdog report, the telecoms provider — said to be AT&T — “maintains and analyzes” its own collection of bulk phone record data and allows access to several law enforcement agencies.

Hemisphere is said to collect more phone records than the National Security Agency’s once hotly contested phone records collection program, first disclosed by whistleblower Edward Snowden. The NSA’s program collected some 500 million call records in 2017 but was effectively shuttered last year.

The AT&T-run program collects information on every call that passes through one of its systems, including calls placed by customers of other cell carriers and network providers. The database reportedly contains call records dating back three decades, including the location of callers. It also provides IMSI and IMEI numbers to identify callers.
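
For a sense of what such a record contains, here is a hypothetical sketch of a call detail record built only from the data types named in the report (call parties, time, duration, caller location, IMSI and IMEI); it is an illustration, not AT&T’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallDetailRecord:
    """Hypothetical call detail record, limited to the kinds of fields
    the watchdog report describes. Not AT&T's real schema."""
    caller_number: str     # originating phone number
    callee_number: str     # receiving phone number
    start_time: datetime   # when the call was placed
    duration_seconds: int  # how long the call lasted
    cell_site_id: str      # tower that handled the call (approximate caller location)
    imsi: str              # subscriber identity tied to the SIM
    imei: str              # hardware identity of the handset

# Fabricated example values, for illustration only
record = CallDetailRecord(
    caller_number="+1-555-0100",
    callee_number="+1-555-0199",
    start_time=datetime(2019, 3, 28, 14, 5),
    duration_seconds=312,
    cell_site_id="DAL-0427",
    imsi="310410123456789",
    imei="359881030000000",
)
```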

But the Justice Department’s watchdog said the program posed legal problems because agents may need a higher authority to access the data than an agency-issued subpoena, which doesn’t require judicial oversight.

Bill Blier, deputy inspector general for the Justice Department, said the DEA “failed to conduct a comprehensive legal analysis prior to using its administrative subpoena authority.”

The watchdog recommended the DEA issue a legal opinion on the permissible uses of the program.

Law enforcement agencies were accused of using the database for “parallel construction,” a practice in which police discover potentially criminal acts through the database, then reverse-engineer their findings to prevent public disclosure of the database. But the watchdog found “nothing inherently inappropriate” about using parallel construction to re-create information for use in a court filing.

DEA spokesperson Katherine Pfaff told TechCrunch that the agency “agrees” with the watchdog’s recommendations and “has already begun enhancing these processes.” The DEA declined to name the telecoms provider.

AT&T did not immediately comment.

Cloudflare expands its government warrant canaries

When the government comes for your data, tech companies can’t always tell you. But thanks to a legal loophole, companies can say if they haven’t had a visit yet.

That loophole allows companies to silently warn customers when the government turns up to secretly raid their stash of customer data, without violating a gag order. Under U.S. freedom of speech laws, companies can publicly say that “the government has not been here” when there has been no demand for data, and they are allowed to remove that statement when a warrant comes in, a warning shot to anyone who pays attention.

These so-called “warrant canaries” — named for the canary in the coal mine, which dies from gas that humans can’t detect — are a key transparency tool that predominantly privacy-focused companies use to keep their customers aware of the goings-on behind the scenes.
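
Because a canary warns by disappearing, anyone can monitor one mechanically: fetch the transparency page and alert if an expected statement is gone. A minimal sketch, assuming a hypothetical page URL and canary wording (real companies phrase and host theirs differently):

```python
import urllib.request

# Hypothetical transparency page and canary statements; the exact wording
# and location vary from company to company.
TRANSPARENCY_URL = "https://example.com/transparency"
CANARY_STATEMENTS = [
    "has never turned over our encryption or authentication keys",
    "has never installed any law enforcement software or equipment",
]

def check_canaries(url=TRANSPARENCY_URL):
    """Return the canary statements missing from the page.

    A missing statement is the signal: the canary warns by disappearing,
    not by announcing that a warrant arrived.
    """
    with urllib.request.urlopen(url) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    return [s for s in CANARY_STATEMENTS if s not in page]

if __name__ == "__main__":
    missing = check_canaries()
    if missing:
        print("Warning: canary statements no longer published:", missing)
    else:
        print("All expected canary statements are still present.")
```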

Where companies have abandoned their canaries or caved to legal pressure, Cloudflare is bucking the trend.

The networking and content delivery network giant said in a blog post this week that it’s expanding its transparency report to include more canaries.

To date, the company:

  • has never turned over our SSL keys or our customers’ SSL keys to anyone;
  • has never installed any law enforcement software or equipment anywhere on our network;
  • has never terminated a customer or taken down content due to political pressure;
  • has never provided any law enforcement organization a feed of our customers’ content transiting our network.

Those key points are critical to the company’s business. A government demand for SSL keys or for intercept equipment installed on its network would give investigators unprecedented access to a customer’s communications and data, and undermine the company’s security. A similar demand led Ladar Levison to shut down his email service Lavabit when the government sought the keys to obtain information on whistleblower Edward Snowden, who used the service.

Now Cloudflare’s warrant canaries will include:

  • Cloudflare has never modified customer content at the request of law enforcement or another third party.
  • Cloudflare has never modified the intended destination of DNS responses at the request of law enforcement or another third party.
  • Cloudflare has never weakened, compromised, or subverted any of its encryption at the request of law enforcement or another third party.

It’s also expanded and replaced its first canary to confirm that the company “has never turned over our encryption or authentication keys or our customers’ encryption or authentication keys to anyone.”

Cloudflare said that if it were ever asked to do any of the above, the company would “exhaust all legal remedies” to protect customer data, and remove the statements from its site.

The networking and content delivery network is one of a handful of major companies that have used warrant canaries over the years. Following reports that the National Security Agency was vacuuming up the call records from the major telecom giants in bulk, Apple included a statement in its most recent transparency reports noting that the company has to date “not received any orders for bulk data.” Reddit removed its warrant canary in 2015, indicating that it had received a national security order it wasn’t permitted to disclose.

Cloudflare’s expanded canaries were included in the company’s latest transparency report, out this week.

According to its latest figures, covering the second half of 2018, Cloudflare responded to just seven of the 19 subpoenas it received, affecting 12 accounts and 309 domains. The company also responded to 44 of the 55 court orders it received, affecting 134 accounts and 19,265 domains.

The company said it received between zero and 249 national security requests during the period, and that it didn’t process any wiretap or foreign government requests for the duration.

Huawei: “The US security accusation of our 5G has no evidence. Nothing.”

Huawei’s rotating chairman Guo Ping kicked off a keynote speech this morning at the world’s biggest mobile industry tradeshow with a wry joke. “There has never been more interest in Huawei,” he told delegates at Mobile World Congress. “We must be doing something right!”

The Chinese company is seeking to dispel suspicion around the security of its 5G network equipment, suspicion that has been amplified by U.S. president Trump, who has been urging U.S. allies not to buy kit or services from Huawei.

Last week Trump also tweet-shamed U.S. companies — saying they needed to step up their efforts to roll out 5G networks or “get left behind”.

In an MWC keynote speech yesterday the European Commission’s digital commissioner Mariya Gabriel also signalled the executive is prepared to step in and regulate to ensure a “common approach” on the issue of network security — to avoid the risk of EU member states taking individual actions that could delay 5G rollouts across Europe.

Huawei appeared to welcome the prospect today.

“Government and the mobile operators should work together to agree what this assurance testing and certification rating for Europe will be,” said Guo, suggesting that’s Huawei’s hope for any Commission action on 5G security.

“Let experts decide whether networks are safe or not,” he added, implying Trump is the opposite of an expert. “Huawei has a strong track record in security for three decades. Serving three billion people around the world. The U.S. security accusation of our 5G has no evidence. Nothing.”

Geopolitical tensions about network security have translated into the biggest headache for Huawei which has positioned itself as a key vendor for 5G kit right as carriers are preparing to upgrade their existing cellular networks to the next-gen flavor.

Guo claimed today that Huawei is “the first company who can deploy 5G networks at scale”, giving a pitch for what he described as “powerful, simple and intelligent” next-gen network kit while clearly enjoying the opportunity of being able to agree with U.S. president Trump in public — that “the U.S. needs powerful, faster and smarter 5G”.

But any competitive lead in next-gen network tech also puts the company in prime position for political blowback linked to espionage concerns related to the Chinese state’s access to data held or accessed by commercial companies.

Huawei’s strategy to counter this threat has been to come out fighting for its commercial business — and it had plenty more of that spirit on show this morning. As well as a bunch of in-jokes. Most notably a reference to NSA whistleblower Edward Snowden which drew a knowing ripple of laughter from the audience.

“We understand innovation is nothing without security,” said Guo, segueing from making a sales pitch for Huawei’s 5G network solutions straight into the giant geopolitical security question looming over the conference.

“Prism, prism on the wall who is the most trustworthy of them all?” he said. “It’s a very important question. And if you don’t ask them that you can go ask Edward Snowden.”

You can’t use “a crystal ball to manage cybersecurity”, Guo went on, dubbing it “a challenge we all share” and arguing that every player in the mobile industry has responsibility to defuse the network security issue — from kit vendors to carriers and standards bodies, as well as regulators.

“With 5G we have made a lot of progress over 4G and we can proudly say that 5G is safer than 4G. As a vendor we don’t operate carriers network, and we don’t all carry data. Our responsibility — what we promise — is that we don’t do anything bad,” he said. “We don’t do bad things.”

“Let me says this as clear as possible,” he went on, putting up another slide that literally underlined the point. “Huawei has not and will never plant backdoors. And we will never allow anyone to do so in our equipment.

“We take this responsibility very seriously.”

Guo’s pitch on network trust and security was to argue that where 5G networks are concerned security is a collective industry responsibility — which in turn means every player in the chain plays a monitoring role that allows for networks to be collectively trusted.

“Carriers are responsible for secure operations of their own networks. 5G networks are private networks. The boundary between different networks are clear. Carriers can prevent outside attacks with firewalls and security gateways. For internal threats carriers can manage, monitor and audit all vendors and partners to make sure their network elements are secure,” he said, going on to urge the industry to work together on standards which he described as “our shared responsibility”.

“To build safer networks we need to standardize cybersecurity requirements and these standards must be verifiable for all vendors and all carriers,” he said, adding that Huawei “fully supports” the work of industry standards and certification bodies the GSMA and 3GPP who he also claimed have “strong capabilities to verify 5G’s security”.

Huawei’s strategy to defuse geopolitical risk by appealing to the industry as a whole to get behind tackling the network trust issue is a smart one, given that the uncertainty generated by Trump’s attacks is hardly being welcomed by players in the mobile business.

Huawei’s headache might lead to the mobile industry as a whole catching a cold — and no one at MWC wants that.

Later in the keynote Guo also pointed to the awkward “irony” of the U.S. Cloud Act — given the legislation allows US entities to “access data across borders”.

U.S. overreach on accessing the personal data of foreign citizens continues to cause major legal headaches in Europe as a result of the clash between its national security interest and EU citizens’ fundamental privacy rights. So his point there won’t have been lost on an MWC audience packed with European delegates attending the annual tradeshow in Barcelona.

“So for best technology and greater security choose Huawei. Please choose Huawei,” Guo finished, ending his keynote with a line that could very well make it as a new marketing slogan writ large on one of the myriad tech-packed booths here at Fira Gran Via.

Europe’s highest human rights court to hear challenge to UK’s bulk surveillance regime

The Grand Chamber of the European Court of Human Rights (ECHR) has agreed to hear a legal challenge to the use of bulk data collection surveillance powers by UK intelligence agencies.

Last September a lower chamber of the ECHR ruled that UK surveillance practices violated human rights law but did not find bulk collection itself to be in violation of the convention.

The civil and digital groups and charities behind the challenge, which include Liberty, Privacy International and Amnesty International, are hoping for a definitive judgement against bulk collection from Europe’s highest human rights court.

The legal challenge dates back around five years, and stems from the 2013 disclosures of government surveillance programs revealed by NSA whistleblower Edward Snowden.

The ECHR’s lower court heard an amalgam of complaints from three cases. And in a landmark judgement last fall it found the UK’s bulk interception regime had violated Article 8 of the European Convention on Human Rights (a right to respect for private and family life/communications); and Article 10 (the right to freedom of expression and information).

The court found there was insufficient oversight of the Internet infrastructure and communications selected for interception and searching; and also insufficient safeguards for journalistic material.

The court also ruled against the government’s regime for obtaining data from communications service providers, finding it violated both articles.

But the judges declined to find the state surveillance regime unlawful on the grounds that it constituted “general and indiscriminate” retention of data.

This is important because the legal framework around surveillance in the UK has already been superseded by the Investigatory Powers Act, passed in 2016, which enshrines a number of bulk powers in law, alongside what the government bills as an adequate oversight framework. (Though it has since been forced by domestic courts to rework certain aspects of the legislation judged to be disproportionate.)

The groups behind the human rights challenge argue the lower court’s judgment “did not go far enough with regard to the unlawfulness of bulk interception powers and the fundamental shortcomings in inter-state intelligence sharing based on communications intercepts”.

Hence they are now pushing for an overarching judgement from the Grand Chamber, which — if it goes their way — could force the UK to radically rethink its approach to intelligence capabilities and put a check on the creeping encroachment of state surveillance.

Commenting in a statement, Caroline Wilson Palow, general counsel at Privacy International, said: “The UK Government continues to intercept enormous volumes of internet traffic flowing across its borders. And it continues to have access to similarly vast troves of information intercepted by the US Government. We call on the Court to reject these mass surveillance practices and find that they are fundamentally incompatible with the rights to privacy and freedom of expression enshrined in the European Convention on Human Rights.”

“The surveillance regime that the UK Government has built seriously undermines our freedom. Spying on vast numbers of people without suspicion of wrongdoing violates everyone’s rights to privacy and free expression, and can never be lawful,” added Megan Goulding, lawyer for Liberty, in another statement. “We welcome the opportunity from the Court to prove that indiscriminate state snooping is incompatible with our rights.  We need a rights-respecting and targeted surveillance system — not one where everyone is treated as a suspect as they go about their everyday lives.”

Also commenting in a statement, Lucy Claridge, director of strategic litigation at Amnesty International, said: “Industrial scale mass surveillance makes it incredibly difficult for organisations such as Amnesty International to carry out their vital human rights work. It’s critical that they are able to seek and receive information of public interest from their confidential sources, free from government intrusion.”

There’s little prospect of an imminent check on the UK’s current bulk-based surveillance modus operandi via this legal route, with what could be a wait of several years before the Grand Chamber even hears the case. 

Added to that, at that unknown future time it’s still anyone’s guess whether the UK — which is in the process of trying to determine how it will exit the European Union — will still be a party to the European Convention on Human Rights.

While the ECHR is attached to the Council of Europe, rather than the EU itself, some elements of the Conservative Party have been pushing to pull the UK out of the convention too, which throws a potential future spanner in the works of this rights-based challenge.

Tor pulls in record donations as it lessens reliance on US government grants

Tor, the open source initiative which provides a more secure way to access the internet, is continuing to diversify its funding away from its long-standing reliance on U.S. government grants.

The Tor Project — the organization behind the service, whose name stands for ‘The Onion Router’ — announced this week that it brought in a record $460,000 from individual donors in 2018. In addition, recently released financial information shows it raised a record $4.13 million from all sources in 2017 thanks to a growth in non-U.S. government donors.

The individual donation push represents an increase on the $400,000 it raised in 2017. A large part of that is down to Tor ally Mozilla, which once again pledged to match donations in the closing months of the year, while an anonymous individual matched all new backers who pledged up to $20,000.

Overall, the organization said that it attracted donations from 115 countries in 2018, which reflects its importance outside of the U.S.

The record donation haul comes weeks after the Tor Project quietly revealed its latest financials — for 2017 — which show it has lessened its dependence on U.S. government sources. That’s been a key goal for some time, particularly after allegations that the FBI paid Carnegie Mellon researchers to help crack Tor, which served as a major motivation for the introduction of fundraising drives in 2015.

Back in 2015, U.S. government sources accounted for 80-90 percent of its financial backing, but that fell to just over 50 percent in 2017. The addition of a Swedish government agency, which provided $600,000, helped on that front, as did corporate donations from Mozilla ($520,000) and DuckDuckGo ($25,000), more than $400,000 from a range of private foundations, and, of course, those donations from individuals.

Tor is best known for being used by NSA whistleblower Edward Snowden but, with governments across the world cracking down on the internet, it is a resource that’s increasingly necessary if we are to guard the world’s right to a free internet.

Tor has certainly been busy making its technology more accessible over the last year.

It launched its first official mobile browser for Android in September, and the same month it released Tor Browser 8.0, its most usable browser yet, which is based on Firefox’s 2017 Quantum structure. It has also worked closely with Mozilla to bring Tor into Firefox itself, as it has already done with Brave, a browser firm led by former Mozilla CEO Brendan Eich.

Beyond the browser and the Tor network itself, which is designed to minimize the potential for network surveillance, the organization also develops a range of other projects. More than two million people are estimated to use Tor, according to data from the organization.
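
For a sense of how applications reach that network: clients talk to a locally running Tor process over SOCKS (port 9050 by default for the daemon; Tor Browser uses 9150). The sketch below routes an HTTP request through Tor, assuming a local Tor client is already running and that the requests library was installed with SOCKS support (pip install "requests[socks]"):

```python
import requests

# Assumes a local Tor client listening on the default SOCKS port 9050.
# The socks5h scheme resolves DNS through Tor as well, avoiding DNS leaks.
TOR_PROXY = "socks5h://127.0.0.1:9050"

session = requests.Session()
session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# The Tor Project's check service reports whether the request arrived
# from a Tor exit node, e.g. {"IsTor": true, "IP": "<exit node address>"}
resp = session.get("https://check.torproject.org/api/ip", timeout=30)
print(resp.json())
```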

Europe issues a deadline for US’ Privacy Shield compliance

The European Commission has finally given the U.S. a deadline related to the much criticized data transfer mechanism known as the EU-US Privacy Shield.

But it’s only asking for the U.S. to nominate a permanent ombudsperson — to handle any EU citizens’ complaints — by February 28, 2019.

If a permanent ombudsperson is not appointed by then the Commission says it will “consider taking appropriate measures, in accordance with the General Data Protection Regulation”.

So not an out-and-out threat to suspend the mechanism — which is what critics and MEPs have been calling for.

But still a fixed deadline at last.

“We now expect our American partners to nominate the Ombudsperson on a permanent basis, so we can make sure that our EU-US relations in data protection are fully trustworthy,” said Andrus Ansip, Commission VP for the Digital Single Market, in a statement.

“All elements of the Shield must be working at full speed, including the Ombudsperson,” added Věra Jourová, the commissioner for justice and consumers.

It’s the first sign the Commission is losing patience with its U.S. counterparts.

Although there’s no doubt the EC remains fully committed to the survival of the business-friendly mechanism which it spent years negotiating after the prior arrangement, Safe Harbor, was struck down by Europe’s top court following NSA whistleblower Edward Snowden’s disclosures of US government surveillance programs.

Its problem is it has to contend with Trump administration priorities — which naturally don’t align with privacy protection for non-US citizens.

While the EU-US Privacy Shield is more than two years old at this point, president Trump has failed to nominate a permanent ombudsperson to a key oversight role.

The civil servant filling the role in an acting capacity (Judith Garber, principal deputy assistant secretary for the Bureau of Oceans and International Environmental and Scientific Affairs) was also nominated as U.S. ambassador to Cyprus this summer, suggesting a hard limit to her already divided attention on EU citizens’ data privacy.

Despite this problematic wrinkle, the EU’s executive today professed itself otherwise satisfied that the mechanism is ensuring “an adequate level of protection for personal data”, announcing the conclusion of its second annual Privacy Shield review.

The data transfer mechanism is now used by more than 4,000 companies to simplify flows of EU citizens’ personal data to the US.

And the Commission clearly wants to avoid a repeat of the scramble that kicked off when, three years ago, Safe Harbor was struck down and businesses had to find alternative legal means for authorizing essential data flows.

But at the same time Privacy Shield has been under growing pressure. This summer the EU parliament called for the mechanism to be suspended until the U.S. comes into compliance.

The parliament’s Libe committee also said better monitoring of data transfers was clearly required in light of the Facebook-Cambridge Analytica data misuse scandal. (Both companies had signed up to Privacy Shield.)

The mechanism has also been looped into a separate legal challenge to another data transfer tool after the Irish High Court referred a series of questions to the European Court of Justice — setting the stage for another high stakes legal drama if fundamental European privacy rights are again deemed incompatible with U.S. national security practices.

A decision on that referral remains for the future. But in the meantime the Commission looks to be doing everything it can to claim it’s ‘business as usual’ for EU-US data flows.

In a press release today, it lauds steps taken by the U.S. authorities to implement recommendations it made in last year’s Privacy Shield review — saying they have “improved the functioning of the framework”.

Albeit the detail of these slated ‘improvements’ shows how very low the starting bar was set — with the Commission listing, for example:

  • the strengthening by the Department of Commerce of the certification process and of its proactive oversight over the framework — including setting up mechanisms such as a system of spot checks (it says that 100 companies have been checked; and 21 had “issues that have now been solved” — suggesting a fifth of claimed compliance was, er, not actually compliance)
  • additional “compliance review procedures” such as analysis of Privacy Shield participants’ websites “to ensure that links to privacy policies are correct”; so previously we must assume no one in the U.S. was bothering to check
  • the Department of Commerce put in place a system to identify false claims which the Commission now claims “prevents companies from claiming their compliance with the Privacy Shield, when they have not been certified”; so again, prior to this system being set up certifications weren’t necessarily worth the pixels they were painted in

The Commission also claims the Federal Trade Commission has shown “a more proactive approach” to enforcement by monitoring the principles of the Privacy Shield — noting that, for example, it has issued subpoenas to request information from participating companies.

Another change it commends — related to the sticky issue of access to personal data by U.S. public authorities for national security purposes (which is what did for Safe Harbor) — is the appointment of new members of the Privacy and Civil Liberties Oversight Board (PCLOB) — to restore the Board’s quorum.

The denuded PCLOB has been a long running bone of contention for Privacy Shield critics.

“The Board’s report on the implementation of Presidential Policy-Directive No. 28 (PPD-28, which provides for privacy protections for non-Americans) has been made publicly available,” the Commission writes, referring to a key Obama era directive that it has previously said the Shield depends upon. “It confirms that these privacy protections for non-Americans are implemented across the U.S. intelligence community.”

It says it also took into account relevant developments in the U.S. legal system in the area of privacy during the review, noting that: “The Department of Commerce launched a consultation on a federal approach to data privacy to which the Commission contributed and the US Federal Trade Commission is reflecting on its current powers in this area.”

“In the context of the Facebook/Cambridge Analytica scandal, the Commission noted the Federal Trade Commission’s confirmation that its investigation of this case is ongoing,” it adds, kicking the can down the road on that particular data scandal.

Meanwhile, as you’d expect, business groups have welcomed another green light for data to keep being passed.

In a statement responding to the conclusion of the review, the Computer & Communications Industry Association said: “We commend the European Commission for its thorough review. Privacy Shield is a robust framework, with strong data protections, that allows for the daily transfers of commercial data between the world’s two biggest trading partners.”

AT&T collaborates on NSA spying through a web of secretive buildings in the U.S.

A new report from the Intercept sheds light on the NSA’s close relationship with communications provider AT&T.

The Intercept identified eight facilities across the U.S. that function as hubs for AT&T’s efforts to collaborate with the intelligence agency. The site first identified one potential hub of this kind in 2017 in lower Manhattan.

The report reveals that eight AT&T data facilities in the U.S. are regarded as high-value sites by the NSA because they give the agency direct “backbone” access to raw data that passes through them, including emails, web browsing, social media and any other form of unencrypted online activity. The NSA uses the web of eight AT&T hubs for a surveillance operation code-named FAIRVIEW, a program previously reported by the New York Times. The program, first established in 1985, “involves tapping into international telecommunications cables, routers, and switches” and coordinates directly only with AT&T, not the other major U.S. mobile carriers.

AT&T’s deep involvement with the NSA monitoring program operated under the codename SAGUARO. Messaging, email and other web traffic accessed through the program was made searchable through XKEYSCORE, one of the NSA’s more infamous search-powered surveillance tools.

The Intercept explains how those sites give the NSA access to data beyond just AT&T subscribers:

“The data exchange between AT&T and other networks initially takes place outside AT&T’s control, sources said, at third-party data centers that are owned and operated by companies such as California’s Equinix. But the data is then routed – in whole or in part – through the eight AT&T buildings, where the NSA taps into it. By monitoring what it calls the ‘peering circuits’ at the eight sites, the spy agency can collect ‘not only AT&T’s data, they get all the data that’s interchanged between AT&T’s network and other companies,’ according to Mark Klein, a former AT&T technician who worked with the company for 22 years.”

The NSA describes these locations as “peering link router complex” sites while AT&T calls them “Service Node Routing Complexes” (SNRCs). The eight complexes are spread across the nation’s major cities with locations in Chicago, Dallas, Atlanta, Los Angeles, New York City, San Francisco, Seattle, and Washington, D.C. The Intercept report identifies these facilities:

“Among the pinpointed buildings, there is a nuclear blast-resistant, windowless facility in New York City’s Hell’s Kitchen neighborhood; in Washington, D.C., a fortress-like, concrete structure less than half a mile south of the U.S. Capitol; in Chicago, an earthquake-resistant skyscraper in the West Loop Gate area; in Atlanta, a 429-foot art deco structure in the heart of the city’s downtown district; and in Dallas, a cube-like building with narrow windows and large vents on its exterior, located in the Old East district.

… in downtown Los Angeles, a striking concrete tower near the Walt Disney Concert Hall and the Staples Center, two blocks from the most important internet exchange in the region; in Seattle, a 15-story building with blacked-out windows and reinforced concrete foundations, near the city’s waterfront; and in San Francisco’s South of Market neighborhood, a building where it was previously claimed that the NSA was monitoring internet traffic from a secure room on the sixth floor.”

While these facilities could allow for the monitoring of domestic U.S. traffic, they also process vast quantities of international traffic as it moves across the globe — a fact that likely explains why the NSA would view these AT&T nodes as such high value sites. The original documents, part of the leaked files provided by Edward Snowden, are available in the original report.

After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud

As we enter the 20th year of Salesforce, there’s an interesting opportunity to reflect back on the change that Marc Benioff created with the launch of Salesforce.com and the software-as-a-service (SaaS) model for enterprise software.

This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.

However, for the cornerstone market in IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is even evident in the most recent high-profile “SaaS” acquisition of GitHub by Microsoft, with over 50 percent of GitHub’s revenue coming from the sale of its on-prem offering, GitHub Enterprise.

Data privacy and security is also becoming a major issue, with Benioff himself even pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?

The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, after which the customer was obligated to pay an additional 20 percent each year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and elongated. (A back-of-the-envelope comparison of the two models appears in the sketch after this list.)
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a TB of disk storage was $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 in order to run Siebel’s CRM product that might serve 200 end-users.
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application they purchased.
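
To make the pricing contrast in the first point concrete, here is a back-of-the-envelope comparison in the spirit of the numbers cited above (an upfront license plus roughly 20 percent annual support, versus a per-seat subscription). The specific figures are illustrative assumptions, not actual Siebel or Salesforce prices.

```python
def perpetual_license_cost(license_fee, support_rate, years):
    """Old model: pay the license upfront, then annual support fees."""
    return license_fee + license_fee * support_rate * years

def subscription_cost(seats, price_per_seat_month, years):
    """SaaS model: a recurring per-seat fee that scales with usage."""
    return seats * price_per_seat_month * 12 * years

# Illustrative assumptions only: a $2M license with 20% annual support,
# versus 200 seats at $100 per seat per month, both over five years.
print(perpetual_license_cost(2_000_000, 0.20, 5))  # 4000000.0
print(subscription_cost(200, 100, 5))              # 1200000
```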

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-to-late ’00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been many more SaaS applications than ever would have been possible if everybody had to follow the model Salesforce did several years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook software for SaaS in the name of economics, simplicity and much faster user growth.

It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Years ago already, stalwart products like Microsoft Office and the Adobe Suite successfully made the switch from the upfront model to thriving subscription businesses. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) have long since become ubiquitous. The consumerization of IT was a real trend, and it has driven the habits from our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The cost of compute and storage have been driven down so dramatically that there are limited cost savings in shared resources. Today, a gigabyte of RAM is about $5 and a terabyte of disk storage is about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, at the same time that Salesforce was founded, Google was running on its first data center—with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packaged as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes, as the sketch after this list illustrates.
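
As a concrete illustration of that last point, a containerized service can be declared and handed to a Kubernetes cluster in a handful of lines. This is a minimal sketch using the official Kubernetes Python client against whatever cluster the local kubeconfig points at, with a generic nginx image standing in for a packaged enterprise application.

```python
from kubernetes import client, config

# Assumes kubectl-style credentials in ~/.kube/config pointing at a reachable
# cluster; nginx stands in for a containerized enterprise application.
config.load_kube_config()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# A single API call creates the replicated, self-healing deployment.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```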

What Benioff didn’t foresee

Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and management can be traced directly to the success of SaaS itself, and cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data. It is also far easier for enterprises to get the necessary assurance for the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution has resulted in software vendors developing their applications to be cloud-first, but they’re now building those applications using technologies (such as containers) that can help replicate the deployment of those applications onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC far more easily than was previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private, cloud-native-instance-first approach to their application offerings. Meanwhile, SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning over to Kubernetes-driven, cloud-native architectures that provide this optionality in the future.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.

The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic toward where their applications are deployed and who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while maintaining what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.