Microsoft pitched its facial recognition tech to the DEA, new emails show

Microsoft tried to sell its facial recognition technology to the Drug Enforcement Administration as far back as 2017, according to newly released emails.

The American Civil Liberties Union obtained the emails through a public records lawsuit it filed in October, challenging the secrecy surrounding the DEA’s facial recognition program. The ACLU shared the emails with TechCrunch.

The emails, dated between September 2017 and December 2018, show that Microsoft privately hosted DEA agents at its Reston, Va. office to demonstrate its facial recognition system, and that the DEA later piloted the technology.

It was during this time that Microsoft president Brad Smith was publicly calling for government regulation covering the use of facial recognition.

But the emails also show that the DEA expressed concern about purchasing the technology, fearing the kind of criticism then being directed at the FBI, whose use of facial recognition had caught the attention of government watchdogs.

Critics have long said this face-matching technology violates Americans’ right to privacy, and that it shows a disproportionate bias against people of color. But despite the rise of facial recognition use by police and in public spaces, Congress has struggled to keep pace and introduce legislation to oversee the as-yet unregulated space.

But things changed in the wake of the nationwide and global protests over the death of George Floyd, which prompted a renewed focus on law enforcement and racial injustice.

An email from a Microsoft account executive inviting DEA agents to its Reston, Va. office to demo its facial recognition technology. (Source: ACLU/supplied)

Microsoft last week became the third company to say it will no longer sell its facial recognition technology to police until more federal regulation is put in place, following in the footsteps of Amazon, which put a one-year moratorium on selling its technology to police. IBM went further, saying it will wind down its facial recognition business entirely.

But Microsoft, like Amazon, did not say if it would no longer sell to federal departments and agencies like the DEA.

“It is bad enough that Microsoft tried to sell a dangerous technology to a law enforcement agency tasked with spearheading the racist drug war, but it gets worse,” said Nathan Freed Wessler, a senior staff attorney at the ACLU. “Even after belatedly promising not to sell face surveillance tech to police last week, Microsoft has refused to say whether it would sell the technology to federal agencies like the DEA,” said Wessler.

“This is troubling given the U.S. Drug Enforcement Administration’s record, but it’s even more disturbing now that Attorney General Bill Barr has reportedly expanded this very agency’s surveillance authorities, which could be abused to spy on people protesting police brutality,” he said.

Lawmakers have since called for a halt to the DEA’s covert surveillance of protesters, powers that were granted by the Justice Department earlier in June as protests spread across the U.S. and around the world.

When reached, DEA spokesperson Michael Miller declined to answer our questions. A spokesperson for Microsoft did not respond to a request for comment.

EU states agree a tech spec for national coronavirus apps to work across borders

European Union countries and the Commission have agreed on a technical framework to enable regional coronavirus contacts tracing apps to work across national borders.

A number of European countries have launched contacts tracing apps at this point, with the aim of leveraging smartphone technologies in the fight against COVID-19, but none of these apps can yet work across national borders.

Last month, EU Member States agreed to a set of interoperability guidelines for tracing apps. Now they’ve settled on a technical spec for achieving cross-border working of apps. The approach has been detailed in a specification document published today by the eHealth Network.

The Commission has called the agreement on a tech spec an important step in the fight against COVID-19, while emphasizing tracing apps are only a supplement to manual contacts tracing methods.

Commenting in a statement, European commissioner for the Internal Market, Thierry Breton, said: “As we approach the travel season, it is important to ensure that Europeans can use the app from their own country wherever they are travelling in the EU. Contact tracing apps can be useful to limit the spread of coronavirus, especially as part of national strategies to lift confinement measures.”

The system will involve a Federation Gateway Service, run by the Commission, that will receive and pass on “relevant information” from national contact tracing apps and servers — in order to minimise the amount of data exchanged and reduce users’ data consumption, per a Commission press release.

From the tech spec:

The pattern preferred by the European eHealth Network is a single European Federation Gateway Service. Each national backend uploads the keys of newly infected citizens (‘diagnosis keys’) every couple of hours and downloads the diagnosis keys from the other countries participating in this scheme. That’s it. Data conversion and filtering is done in the national backends.
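The pattern the spec describes can be sketched as a simple in-memory model. This is purely illustrative, with hypothetical names; the real gateway's API is not public and the class below is not the eHealth Network's implementation:

```python
# Hypothetical sketch of the federation gateway pattern described above:
# each national backend periodically uploads its new diagnosis keys and
# downloads every other participating country's keys. Data conversion and
# filtering happen in the national backends, not in the gateway.

class FederationGateway:
    def __init__(self):
        self.keys_by_country = {}  # country code -> list of diagnosis keys

    def upload(self, country, diagnosis_keys):
        self.keys_by_country.setdefault(country, []).extend(diagnosis_keys)

    def download(self, requesting_country):
        # A backend receives every other participating country's keys,
        # never its own back again.
        return {c: keys for c, keys in self.keys_by_country.items()
                if c != requesting_country}

gateway = FederationGateway()
gateway.upload("DE", ["key-de-1", "key-de-2"])
gateway.upload("IT", ["key-it-1"])
print(gateway.download("DE"))  # only the Italian keys come back
```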

“The proximity information shared between apps will be exchanged in an encrypted way that prevents the identification of an individual person, in line with the strict EU guidelines on data protection for apps; no geolocation data will be used,” the Commission added.

The key caveat attached to the agreed interoperability system is that it currently only works to link up decentralized contacts tracing apps — such as the Corona-Warn-App launched today by Germany — or the national apps recently released in Italy, Latvia and Switzerland.

Centralized coronavirus contacts tracing apps — which do not store and process proximity data locally on the device but upload it to a central server for processing, such as France’s StopCovid app; the UK’s NHS COVID-19 app; or the currently suspended Norwegian Smittestopp app — will not immediately be able to plug into the interoperability architecture, as we explained in our report last month.

Apple and Google’s joint API for coronavirus exposure notifications also only supports decentralized tracing apps.

“This document presents the basic elements for interoperability for ‘COVID+ Keys driven solutions’ [i.e. decentralized tracing systems],” notes the eHealth Network. “It aims to keep data volumes to the minimum necessary for interoperability to ensure cost efficiency and trust between the participating Member States. This document is therefore addressed only to Member States implementing this type of protocol.”

The Commission has been calling for a common approach to the use of tech and data to fight COVID-19 for months. However national governments have not fallen uniformly into line — with, still, a mixture of decentralized and centralized approaches for tracing apps in play (although the former now comprise “the great majority of national approved apps”, per the Commission).

It’s also playing the diplomat — saying it “continues to support the work of Member States on extending interoperability also to centralised tracing apps”.

It has not, however, provided any detail on how that might be achieved in a way that satisfies both app architecture camps, given the privacy risks and security trade-offs of crossing opposing technical streams.

This means that citizens in European countries whose governments have chosen a centralized approach for coronavirus contacts tracing may find, on traveling elsewhere in the region, they will need to download another country’s national app to be able to receive and send coronavirus exposure notifications.

Even decentralized national apps aren’t able to exchange relevant data yet, though. The interoperability architecture’s gateway interface still needs to be deployed — and national apps launched and/or updated before all the relevant pieces can start talking. So there’s a way to go before any digital contacts tracing is working smoothly across European borders.

Meanwhile, some EU countries have already started to reopen their borders to other European countries — ahead of a wider reopening planned for the summer.

This week, for example, a few thousand German holidaymakers were allowed to travel to Spain’s Balearic Islands as part of a trial aimed at restarting tourism. So EU citizens are already flowing across borders before national apps are in a position to securely exchange data on exposure risk.

Basecamp launches Hey, a hosted email service for neat freaks

Project management software maker Basecamp has launched a feature-packed hosted email service, called Hey — which they tout as taking aim at the traditional chaos and clutter of the email inbox.

Hey includes a built-in screener that asks users to confirm whether or not they want to receive email from a new address. Inbound emails a Hey user has consented to are then triaged into different trays: a central “imbox” (“im” standing for important) contains only the comms the user marks as important; newsletters live in a News Feed-style tray called The Feed, where they’re automatically displayed partially opened for easy casual reading; and email receipts get stacked up in a for-reference “Paper Trail” view.
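As a rough illustration of that screening-and-triage flow, the routing could look something like the sketch below. The logic and names are hypothetical; Hey's actual rules are not public:

```python
# Hypothetical sketch of Hey-style triage: a screener gate for unknown
# senders, then routing approved mail into the "Imbox", "The Feed" or
# "Paper Trail" trays described above. Purely illustrative.

def triage(email, approved_senders):
    if email["sender"] not in approved_senders:
        return "Screener"          # first-time sender: ask the user first
    if email.get("kind") == "newsletter":
        return "The Feed"
    if email.get("kind") == "receipt":
        return "Paper Trail"
    return "Imbox"                 # everything else the user deems important

approved = {"friend@example.com", "shop@example.com", "news@example.com"}
print(triage({"sender": "friend@example.com"}, approved))                      # Imbox
print(triage({"sender": "news@example.com", "kind": "newsletter"}, approved))  # The Feed
print(triage({"sender": "shop@example.com", "kind": "receipt"}, approved))     # Paper Trail
print(triage({"sender": "stranger@example.com"}, approved))                    # Screener
```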

Other notable features include baked-in tracking-pixel blocking (Hey acts like a VPN, sharing its own IP address with trackers rather than letting email senders learn yours when you open a mail with embedded trackers); an attachment library that lets you view every attachment you’ve ever received in one searchable place; and a “Reply Later” feature that lets you tag emails you want to follow up on, teeing them up in a stack — clicking a “Focus & Reply” button then displays all stacked emails on a single page so you can take a one-hit run at replying to everything you teed up to revisit.
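The tracking-pixel defence boils down to fetching remote images through an intermediary, so the sender's server sees the service's IP address rather than the reader's. A minimal sketch of the URL-rewriting side, with a made-up proxy endpoint (Hey's real mechanism is not documented publicly):

```python
import re
from urllib.parse import quote

# Hypothetical sketch: rewrite every remote image URL in an HTML email body
# to load through a proxy endpoint, so an embedded tracking pixel sees the
# proxy's IP address instead of the reader's. The endpoint is illustrative.
PROXY = "https://mail-proxy.example.com/image?url="

def rewrite_images(html):
    return re.sub(
        r'src="(https?://[^"]+)"',
        lambda m: 'src="' + PROXY + quote(m.group(1), safe="") + '"',
        html,
    )

body = '<img src="https://tracker.example.net/pixel.gif?id=42" width="1">'
print(rewrite_images(body))
```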

The software is the literal opposite of an MVP — with all sorts of organizational workflow style hacks baked in at launch, such as the ability to merge different email threads; rename email subjects; set up notifications for individual contacts; take clippings from within emails to save to a reference library; and add your own sticky notes to emails as a way to keep further tabs on stuff you want to revisit or remember.

Some other salient points of note: Hey is not free (there’s a 14-day free trial, but pricing thereafter is a flat $99 per year, billed in one go, for 100GB of storage; certain vanity email addresses may cost you more); Hey is not end-to-end encrypted (Basecamp makes an upfront promise that it’s not data mining your inbox, but it does hold the keys to access your info); and Hey does not support IMAP or POP, so Basecamp is giving the middle finger to standard email protocols — instead you’re tethered to using only Hey’s apps (hence apps for web, Mac, Windows, Linux, iPhone, iPad and Android right now); nor can you import email from another webmail service.

Asked by a Twitter user about the lack of support for IMAP, Basecamp CTO David Heinemeier Hansson confirmed it will never be supported, writing that: “Our changes to email requires the vertical integration we’ve done.”

While there are no custom domains at launch, Heinemeier Hansson noted they are coming “later this year”. Also on the slate for the same timeframe: Hey for Business.

Right now, Basecamp is limiting sign ups to the free trial of Hey via a wait list plus invite system.

As of yesterday, it said there were more than 50,000 people on the wait list — warning it might take “a couple of weeks” before they’re ready to accept direct sign-ups.

In the meanwhile, for anyone keen on a closer look at Basecamp’s reorganized spin on email, founder and CEO Jason Fried has recorded a video walking through Hey’s features.

Norway pulls its coronavirus contacts tracing app after privacy watchdog’s warning

One of the first national coronavirus contacts tracing apps to be launched in Europe is being suspended in Norway after the country’s data protection authority raised concerns that the software, called ‘Smittestopp’, poses a disproportionate threat to user privacy — including by continuously uploading people’s location.

Following a warning from the watchdog Friday, the Norwegian Institute of Public Health (FHI) said today it will stop uploading data from tomorrow — ahead of a June 23 deadline when the DPA had asked for use of the app to be suspended so that changes could be made. It added that it disagrees with the watchdog’s assessment but will nonetheless delete user data “as soon as possible”.

As of June 3, the app had been downloaded 1.6M times, and had around 600,000 active users, according to the FHI — which is just over 10% of Norway’s population; or around 14% of the population aged over 16 years.

“We do not agree with the Data Protection Agency’s assessment, but now we have to delete all data and pause work as a result of the notification,” said FHI director Camilla Stoltenberg in a statement [translated via Google Translate]. “With this, we weaken an important part of our preparedness for increased spread of infection, because we lose time in developing and testing the app. At the same time, we have a reduced ability to fight the spread of infection that is ongoing.

“The pandemic is not over. We have no immunity in the population, no vaccine, and no effective treatment. Without the Smittestopp app, we will be less equipped to prevent new outbreaks that may occur locally or nationally.”

Europe’s data protection framework allows for personal data to be processed for a pressing public health purpose — and Norway’s DPA had earlier agreed an app could be a suitable tool to combat the coronavirus emergency. The agency was not actively consulted during the app’s development, however, and had expressed reservations, saying it would closely monitor developments.

The developments that led the watchdog to intervene are a low contagion rate in the country and a low download rate for the app — meaning it now takes the view that Smittestopp is no longer a proportionate intervention.

“We believe that FHI has not demonstrated that it is strictly necessary to use location data for infection detection,” said Bjørn Erik Thon, director of Norway’s DPA, in a statement posted on its website today.

Unlike many of the national coronavirus apps in Europe — which use only Bluetooth signals to estimate user proximity as a means of calculating exposure risk to COVID-19 — Norway’s app also tracks real-time GPS location data.

The country took the decision to track GPS before the European Data Protection Board — which is made up of representatives of DPAs across the EU — had put out guidelines, specifying that contact tracing apps “do not require tracking the location of individual users”; and suggesting the use of “proximity data” instead.

Additionally, Norway opted for a centralized app architecture, meaning user data is uploaded to a central server controlled by the health authority, instead of being stored locally on device — as is the case with decentralized coronavirus contacts tracing apps, such as the app being developed by Germany and one launched recently in Italy. (Apple and Google’s exposure notification API also exclusively supports decentralized app architectures.)

The FHI had been using what it describes as “anonymised” user data from the app to track movement patterns around the country — saying the data would be used to monitor whether restrictions intended to limit the spread of the virus (such as social distancing) were working as intended.

The DPA said today that it’s also unhappy users of the app have no ability to choose to grant permission only for coronavirus contacts tracing — but must also agree to their personal information being used for research purposes, contravening the EU data protection principle of purpose limitation.

Another objection it has is around how the app data was being anonymized and aggregated by the FHI — location data being notoriously difficult to robustly anonymize.

“It is FHI’s choice that they stop all data collection and storage right away. Now I hope they use the time until June 23 well, both to document the usefulness of the app and to make other necessary changes so that they can resume use,” said Thon. “The reason for the notification is the [DPA]’s assessment that Smittestopp can no longer be regarded as a proportionate encroachment on users’ basic privacy rights.”

“Smittestopp is a very privacy-intensive measure, even in an exceptional situation where society is trying to fight a pandemic. We believe that the utility is not present the way it is today, and that is how the technical solution is designed and working now,” he also said.

Commenting on the developments, Luca Tosoni, a research fellow at the University of Oslo’s Norwegian Research Center for Computers and Law, suggested the Norway DPA’s decision could lead to similar bans on contacts tracing apps elsewhere in Europe — should contagion levels drop to a similarly low level. (And rates of COVID-19 continue declining across the region, at this stage.)

“To my knowledge, this is the first instance in which a European DPA has imposed a ban on a contact-tracing app already in use in light of national developments regarding contagion levels,” he told us. “It is thus possible that other European DPAs will impose similar bans in the future and demand that contact-tracing apps be changed as soon as contagion levels substantially decrease also in other parts of Europe. Norway has currently one of the lowest contagion levels in Europe.”

“The ban was not only related to the app’s use of GPS data. The latter was probably the most important feature of the app that the Norwegian DPA has criticised, but not the only one to be seen as problematic,” Tosoni added. “Another element that was criticised by the Norwegian DPA was that the app’s users are currently unable to consent only to the use of their data for infection tracking purposes without consenting to their data being used also for research purposes.

“The DPA also questioned the accuracy of the app in light of the current low level of contagion in Norway, and criticised the absence of an appropriate solution for aggregating and anonymising the data collected.”

Tosoni said the watchdog is expected to reassess the app in the next few weeks, including assessing any changes proposed by the developer, but he takes the view that the DPA is unlikely to deem a switch to Bluetooth-only tracing sufficient to make the app’s use of personal data proportionate.

Even so, the FHI said today it hopes users will suspend the app (by disabling its access to GPS and Bluetooth in settings), rather than deleting it entirely — so the software could be more easily reactivated in future should it be deemed necessary and legal.

Amazon won’t say if its facial recognition moratorium applies to the feds

In a surprise blog post, Amazon said it will put the brakes on providing its facial recognition technology to police for one year.

The moratorium comes two days after IBM said in a letter it was leaving the facial recognition market altogether. Arvind Krishna, IBM’s chief executive, cited a “pursuit of justice and racial equity” in light of the recent protests sparked by the killing of George Floyd by a white police officer in Minneapolis last month.

Amazon’s statement — just 102 words in length — did not say why it was putting the moratorium in place, but noted that Congress “appears ready” to work on stronger regulations governing the use of facial recognition — again without providing any details. That is likely a reference to the Justice in Policing Act, a bill that, if passed, would restrict how police can use facial recognition technology.

“We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested,” said Amazon in the unbylined blog post.

But the statement did not say if the moratorium would apply to the federal government, the source of most of the criticism against Amazon’s facial recognition technology. Amazon also did not say in the statement what action it would take after the yearlong moratorium expires.

Amazon is known to have pitched its facial recognition technology, Rekognition, to federal agencies, like Immigration and Customs Enforcement. Last year, Amazon’s cloud chief Andy Jassy said in an interview the company would provide Rekognition to “any” government department.

When reached, Amazon spokesperson Kristin Brown declined to comment further or answer any of our questions.

There are dozens of companies providing facial recognition technology to police, but Amazon is by far the biggest. Amazon has come under the most scrutiny after its Rekognition face-scanning technology showed bias against people of color.

In 2018, the ACLU found that Rekognition falsely matched 28 members of Congress as criminals in a mugshot database. Amazon criticized the results, claiming the ACLU had lowered the facial recognition system’s confidence threshold. But a year later, the ACLU of Massachusetts found that Rekognition had falsely matched 27 New England professional athletes against a mugshot database. Both tests disproportionately mismatched Black people, the ACLU found.

Almost exactly a year ago, investors brought a proposal to Amazon’s annual shareholder meeting that would have banned Amazon from selling its facial recognition technology to the government or law enforcement. Amazon defeated the vote by a wide margin.

Decrypted: DEA spying on protesters, DDoS attacks, Signal downloads spike

This week saw protests spread across the world sparked by the murder of George Floyd, an unarmed Black man, killed by a white police officer in Minneapolis last month.

The U.S. hasn’t seen protests like this in a generation, with millions taking to the streets each day to lend their voice and support. But they were met with heavily armored police, drones watching from above, and “covert” surveillance by the federal government.

That’s exactly why cybersecurity and privacy are more important than ever, not least to protect law-abiding protesters demonstrating against police brutality and institutionalized, systemic racism. It’s also prompted those working in cybersecurity — many of whom are former law enforcement themselves — to check their own privilege, confront the racism within their ranks and lend their knowledge to their fellow citizens.


THE BIG PICTURE

DEA allowed ‘covert surveillance’ of protesters

The Justice Department has granted the Drug Enforcement Administration, typically tasked with enforcing federal drug-related laws, the authority to conduct “covert surveillance” on protesters across the U.S., effectively turning the civilian law enforcement division into a domestic intelligence agency.

The DEA is one of the most tech-savvy government agencies in the federal government, with access to “stingray” cell site simulators to track and locate phones, a secret program that allows the agency access to billions of domestic phone records, and facial recognition technology.

Lawmakers decried the Justice Department’s move to allow the DEA to spy on protesters, calling on the government to “immediately rescind” the order and describing it as “antithetical” to Americans’ right to peaceful assembly.

WhatsApp resolves issue that exposed some users’ phone numbers in Google search results

WhatsApp has resolved an issue that caused phone numbers of some of its users to appear in Google search results.

The fix comes days after a researcher revealed that the phone number of WhatsApp users who created a simplified link to allow others to chat with them or join a group appeared in search results.

In a statement, a WhatsApp spokesperson said the feature, called Click to Chat, is designed to help users, especially small and micro businesses around the world, connect with their customers.

“While we appreciate this researcher’s report and value the time that he took to share it with us, it did not qualify for a bounty since it merely contained a search engine index of URLs that WhatsApp users chose to make public. All WhatsApp users, including businesses, can block unwanted messages with the tap of a button,” the spokesperson added.

The Click to Chat feature allows users to create a short URL — wa.me/<phoneNumber> — that they can share with their friends or customers to facilitate quick conversation without having to first save their phone number to their contacts list.

India-based researcher Athul Jayaram, who revealed the issue, called it a privacy lapse. He claimed that as many as 300,000 phone numbers appeared in Google search results when someone searched for “site:wa.me”.

Jayaram said the phone numbers appeared in search results because WhatsApp did not direct Google and other search engines to ignore indexing these links — a feature that search engines provide to any web administrator.
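For reference, the mechanism Jayaram describes is a web standard: a site can publish a robots.txt file telling crawlers which paths not to crawl. A hypothetical example is below; WhatsApp's actual directives are not reproduced here.

```
# Hypothetical robots.txt served at the root of a domain.
# "Disallow: /" asks well-behaved crawlers not to crawl any path.
User-agent: *
Disallow: /
```

Note that Disallow only stops crawling by compliant bots; to keep already-discovered URLs out of search results, sites typically also use a noindex directive (via a meta tag or an X-Robots-Tag response header) or file a removal request with the search engine.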

He confirmed on Tuesday that WhatsApp had made some change to inform web crawlers to not index certain links.

But Jayaram isn’t the first person to report that WhatsApp phone numbers were visible in Google search results. WaBetaInfo, a website that tracks changes in WhatsApp, reported this behaviour in February this year.

And as Jayaram points out, many WhatsApp users he contacted whose numbers appeared in Google search results were surprised to learn that this sensitive information was accessible on the public internet.

IBM ends all facial recognition work as CEO calls out bias and inequality

IBM CEO Arvind Krishna announced today that the company would no longer sell facial recognition services, calling for a “national dialogue” on whether it should be used at all. He also voiced support for a new bill aiming to reduce police violence and increase accountability.

In a letter reported by CNBC, written in support of the Justice in Policing Act introduced today, Krishna explains the company’s exit from the controversial business of facial identification as a service.

IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.

This careful approach to developing and deploying the technology is not a new one: IBM last year emphasized it with a new database of face data that was more diverse than anything available at the time. After all, like any program, these systems are only as good as the information you feed into them.

However, facial recognition does not seem to have been making the company much money, if any. To be fair the technology is really in its infancy and there are few applications where an enterprise vendor like IBM makes sense. Amazon’s controversial Rekognition service, while it has been tested by quite a few law enforcement entities, is not well thought of in the field. It would not benefit IBM much to attempt to compete with a product that is similarly just barely good enough to use.

Krishna’s letter also says that “vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.” This is something of a parting shot at those in the field, Amazon in particular, that have been called out for the poor quality of their facial recognition systems but have not ceased to market them.

The bill that Krishna writes in support of has dozens of co-sponsors in the House and Senate, and addresses a wide variety of issues faced by police departments and those they police. Among other things, it expands requirements for body cameras but limits the use of facial recognition in connection with them. It would provide grants for the hardware, but only if they are used under protocols publicly developed and listed.

The ACLU, in a statement issued regarding the bill, seemed to concur with its approach: “We need to invest in technologies that can help eliminate the digital divide, not technologies that create a surveillance infrastructure that exacerbates policing abuses and structural racism.”

Signal now has built-in face blurring for photos

Apps like Signal are proving invaluable in these days of unrest, and anything we can do to simplify and secure the way we share sensitive information is welcome. To that end Signal has added the ability to blur faces in photos sent via the app, making it easy to protect someone’s identity without leaving any trace on other, less secure apps.

After noting Signal’s support of the protests occurring all over the world right now against police brutality, the company’s founder Moxie Marlinspike writes in a blog post that “We’ve also been working to figure out additional ways we can support everyone in the street right now. One immediate thing seems clear: 2020 is a pretty good year to cover your face.”

Fortunately there are perfectly good tools out there both to find faces in photographs and to blur imagery (presumably irreversibly, given Signal’s past attention to detail in these matters, though the company has not returned a request for comment). Put them together and boom: a new feature that lets you blur all the faces in a photo with a single tap.

This is helpful for the many users of Signal who use it to send sensitive information, including photos where someone might rather not be identifiable. Normally one would blur the face in another photo editor app, which is simple enough but not necessarily secure. Some editing apps, for instance, host computation-intensive processes on cloud infrastructure and may retain a copy of a photo being edited there — and who knows what their privacy or law enforcement policy may be?

If it’s sensitive at all, it’s better to keep everything on your phone and in apps you trust. And Signal is among the few apps trusted by the justifiably paranoid.

All face detection and blurring takes place on your phone, Marlinspike wrote. But he warned that the face detection isn’t 100 percent reliable, so be ready to manually draw or expand blur regions in case someone isn’t detected.
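Conceptually, blurring a detected (or manually drawn) region is simple: replace each pixel in the rectangle with an average of its neighbours. A toy sketch on a grayscale image represented as a 2D list, purely to illustrate the idea (Signal's actual on-device pipeline is not public, and a real implementation would use a stronger, irreversible blur):

```python
# Toy sketch of region blurring: apply a 3x3 box blur to a rectangular
# region of a grayscale image (a list of rows of 0-255 values).

def blur_region(img, top, left, bottom, right):
    out = [row[:] for row in img]  # copy so pixels outside the region stay intact
    for y in range(top, bottom):
        for x in range(left, right):
            # Average the 3x3 neighbourhood, clamped at the image edges.
            neighbours = [
                img[ny][nx]
                for ny in range(max(0, y - 1), min(len(img), y + 2))
                for nx in range(max(0, x - 1), min(len(img[0]), x + 2))
            ]
            out[y][x] = sum(neighbours) // len(neighbours)
    return out

img = [[0, 0, 0, 0],
       [0, 255, 255, 0],
       [0, 255, 255, 0],
       [0, 0, 0, 0]]
blurred = blur_region(img, 1, 1, 3, 3)
print(blurred)  # the bright 2x2 block is smeared toward its dark surroundings
```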

The new feature should appear in the latest versions of the app as soon as those are approved by Google and Apple.

Lastly, Marlinspike wrote that the company plans on “distributing versatile face coverings to the community free of charge”; the accompanying picture shows a neck gaiter like those sold for warmth and face protection. Something to look forward to, then.

Zoom faces criticism for denying free users e2e encryption

What price privacy? Zoom is facing a fresh security storm after CEO Eric Yuan confirmed that a plan to reboot its battered security cred by (actually) implementing end-to-end encryption does not in fact extend to providing this level of security to non-paying users.

This Zoom ‘premium on privacy’ is necessary so it can provide law enforcement with access to call content, per Bloomberg, which reported on security-related remarks made by Yuan during an earnings call yesterday, when the company reported big gains thanks to the coronavirus pandemic accelerating uptake of remote working tools.

“Free users for sure we don’t want to give [e2e encryption] because we also want to work together with FBI, with local law enforcement in case some people use Zoom for a bad purpose,” Yuan said on the call.

Security experts swiftly took to Twitter to condemn Zoom’s ‘pay us or no e2e’ policy.

EFF associate research director Gennie Gebhart also criticized Zoom’s decision to withhold e2e encryption from free users in a Twitter thread late last month, following a feedback call with the company, accusing it of spinning what she characterized as a pure upsell into a safety consideration.

It’s a nuance-free cop-out to blanket-argue that ‘bad things happen on free accounts’, she suggested.

Fast forward to today, and a tweet about the report of Yuan’s comments by Bloomberg technology reporter Nico Grant triggered an intervention by none other than Alex Stamos, the former Facebook and Yahoo! security executive who signed on as a consultant on Zoom’s security strategy back in April, days after the company had been served with a class action lawsuit from shareholders for overstating security claims.

Stamos — who was CSO at Yahoo! during a period when the NSA was using a backdoor to scan user email and also headed up security at Facebook at a time when Russia implemented a massive disinformation campaign targeting the 2016 US presidential election — weighed in via Twitter to claim there’s a “difficult balancing act between different kinds of harms” which he said justifies Zoom’s decision to deny e2e encryption for all users.

Curiously, Stamos was also CSO at Facebook when the tech giant completed the rollout of e2e encryption on WhatsApp, providing this level of security to the then billion-plus users of its free-to-use mobile messaging and video chat app.

Which might suggest Stamos’ conception of online “harms” has evolved considerably since 2016; after all, he has since landed at Stanford as an adjunct professor (where he researches “safe tech”). Although, in the same year (2016), he defended his employer’s decision not to make e2e encryption the default on Facebook Messenger. The unifying thread in Stamos’ positions, then, appears to be paid defence of corporate decision-making, applied with a gloss of ‘security expertise’.

His latest Twitter intervention runs true to type, with the security consultant now defending Zoom management’s decision not to extend e2e encryption to free users of the product.

But his tweeted defence of AES encryption as a valid alternative to e2e encryption has attracted pointed criticism from the crypto community, which reads it as an attack on established standards.

Nadim Kobeissi, a Paris-based applied cryptography researcher — who told us that his protocol modelling and analysis software was used by the Zoom team during development of its proposed e2e encrypted system for (paid product) meetings — called out Stamos for “insisting that AES encryption, which can be bypassed by Zoom Inc. at will, qualifies as real encryption”.

That’s “what’s truly misleading here”, Kobeissi tweeted.

In a phone call with TechCrunch, Kobeissi fleshed out his critique. He is concerned, more broadly, that a current and (he said) much-needed “Internet zeitgeist” focus on online safety is being hijacked by certain vested interests to push their own agenda, in a way that could roll back major online security gains, such as the expansion of e2e encryption to free messaging apps like WhatsApp and Signal, and lead to a general deterioration of security ideals and standards.

Kobeissi pointed out that AES encryption — which Stamos defended — does not prevent server intercepts and snooping on calls. Nor does it offer a way for Zoom users to detect such an attack, with the crypto expert emphasizing it’s “fundamentally different from snooping-resistant encryption”.
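The distinction Kobeissi is drawing comes down to who holds the key. The toy below uses a repeating-XOR "cipher" purely as a stand-in for a real cipher (it is not secure, and it is not Zoom's design); the point it illustrates is key custody. In the transport model the server generates and holds the meeting key, so it can read everything; in the e2e model the key lives only on the endpoints and the server relays ciphertext it cannot open:

```python
# Toy key-custody illustration. XOR with a repeating key stands in for a
# real cipher (NOT secure); what matters is WHO holds the key.
from itertools import cycle

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, cycle(key)))

toy_decrypt = toy_encrypt  # XOR is its own inverse

msg = b"meeting contents"

# Transport-style encryption: the server holds the key, so it can
# decrypt traffic at will (and hand content to third parties).
server_key = b"server-held-key"
on_the_wire = toy_encrypt(server_key, msg)
server_view = toy_decrypt(server_key, on_the_wire)  # server reads it all

# End-to-end encryption: the key exists only on the endpoints; the
# server forwards ciphertext it has no key for.
client_key = b"endpoint-only-key"
on_the_wire_e2e = toy_encrypt(client_key, msg)
garbage = toy_decrypt(server_key, on_the_wire_e2e)  # server gets noise
```

Neither model, note, gives users a way to detect server snooping in the transport case, which is exactly the gap Kobeissi highlights.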

Hence he characterized Stamos’ defence of AES as “misleading and manipulative” — saying it blurs a clearly established dividing line between e2e encryption and non-e2e.

“There are two problems [with the Zoom situation]: 1) There’s no e2e encryption for free users; and 2) there’s intentional deception,” Kobeissi told TechCrunch.

He also questioned why Stamos has not publicly pushed for Zoom to find ways to safely implement e2e encryption for free users — pointing, by way of example, to the franking ‘abuse report’ mechanism that Facebook recently applied to e2e encrypted “Secret Conversations” on Messenger.

“Why not improve on Facebook Messenger franking?” he suggested, calling for Zoom to use its acquisition of Keybase’s security team to invest in research that would raise security standards for all users.

Such a mechanism could “absolutely” be applied to video and voice calls, he argued.
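The franking idea Kobeissi points to can be sketched with a keyed-hash commitment. This is a simplified, illustrative reading of the Facebook message-franking design, not Facebook's or Zoom's actual code: the sender commits to the plaintext with a per-message key carried inside the e2e envelope, the server MACs the opaque commitment on delivery, and a recipient who reports abuse reveals the plaintext and per-message key so the server can verify the report without ever gaining general read access:

```python
# Hedged sketch of message franking: verifiable abuse reports on top of
# e2e encryption. Function names are illustrative, not a real API.
import hmac, hashlib, os

def commit(franking_key: bytes, message: bytes) -> bytes:
    """Sender commits to the plaintext; the franking key travels inside
    the e2e-encrypted payload, invisible to the server."""
    return hmac.new(franking_key, message, hashlib.sha256).digest()

def server_stamp(server_key: bytes, commitment: bytes) -> bytes:
    """Server MACs the (opaque) commitment on delivery, binding it to
    this delivery without seeing any plaintext."""
    return hmac.new(server_key, commitment, hashlib.sha256).digest()

def verify_report(server_key, message, franking_key, commitment, stamp):
    """On an abuse report the recipient reveals (message, franking_key);
    the server checks the commitment and its own delivery stamp."""
    return (hmac.compare_digest(commit(franking_key, message), commitment)
            and hmac.compare_digest(server_stamp(server_key, commitment), stamp))

# Sender side (inside the e2e envelope):
k_f = os.urandom(32)
msg = b"abusive message"
c = commit(k_f, msg)
# Server side, at delivery (sees only the commitment):
k_s = os.urandom(32)
s = server_stamp(k_s, c)
# A genuine report verifies; a forged one with a swapped message fails.
report_ok = verify_report(k_s, msg, k_f, c, s)
forged = verify_report(k_s, b"innocent message", k_f, c, s)
```

Extending something like this to streamed video and voice, as Kobeissi argues is possible, is a research problem rather than an off-the-shelf feature, which is presumably why he frames it as a job for the Keybase team.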

“I think [Stamos] has a deleterious effect on the kind of truth that ends up being communicated about these services,” Kobeissi added in further critical remarks about the former Facebook CSO — who he said comes across as akin to a “fixer” who gets called in “to render a company as acceptable as possible to the security community while letting it do what it wants”.

We’ve reached out to Zoom and Stamos for comment.