Signal says 1,900 users’ phone numbers exposed by Twilio breach

End-to-end encrypted messaging app Signal says attackers accessed the phone numbers and SMS verification codes for almost 2,000 users as part of the breach at communications giant Twilio last week.

Twilio, which provides phone number verification services to Signal, said on August 8 that malicious actors accessed the data of 125 customers after successfully phishing multiple employees. Twilio did not say who those customers were, but they likely include large organizations, given that Signal confirmed on Monday it was among the victims.

Signal said in a blog post Monday that it would notify about 1,900 users whose phone numbers or SMS verification codes were stolen when attackers gained access to Twilio’s customer support console.

“For about 1,900 users, an attacker could have attempted to re-register their number to another device or learned that their number was registered to Signal,” the messaging giant said. “Among the 1,900 phone numbers, the attacker explicitly searched for three numbers, and we’ve received a report from one of those three users that their account was re-registered.”

While this didn’t give the attacker access to message history, which Signal doesn’t store, or to contact lists and profile information, which are protected by the user’s security PIN, Signal said “in the case that an attacker was able to re-register an account, they could send and receive Signal messages from that phone number.”

For those affected, the company says it will unregister Signal on all devices that the user is currently using — or that an attacker registered them to — and will require users to re-register Signal with their phone number on their preferred device. Signal also advises users to switch on registration lock, a feature that prevents an account from being re-registered on another device without the user’s security PIN.

Although the Twilio breach impacts a fraction of Signal’s 40 million-plus users, users have long bemoaned how Signal — considered one of the most secure messaging apps — requires users to register a phone number to create an account. Other end-to-end encrypted apps, such as Wire, allow users to sign up with a username. While Signal has slowly moved to end its reliance on phone numbers, such as with the introduction of Signal PINs in 2020, this incident will likely reignite calls for it to move faster.

Telegram tops 700 million users, launches premium tier

Telegram has amassed over 700 million monthly active users and is rolling out a premium tier with additional features as the instant messaging platform pushes to monetize a portion of its large user base. The firm did not disclose how much it is charging for the premium tier, but the monthly subscription appears to be priced in the range of $5 to $6.

The premium tier adds a range of additional and improved features to the messaging app, which topped 500 million monthly active users in January 2021. Telegram Premium enables users to send files as large as 4GB (up from 2GB) and supports faster downloads, for instance, Telegram said.

Paying customers will also be able to follow up to 1,000 channels, up from 500 offered to free users, and create up to 20 chat folders with as many as 200 chats each. Telegram Premium users will also be able to add up to four accounts in the app and pin up to 10 chats.

The move is the Dubai-headquartered firm’s attempt to keep its development “driven primarily by its users, not advertisers,” it said. It’s also the first time an instant messaging app with hundreds of millions of users has rolled out a premium tier. Signal, WhatsApp, Facebook Messenger, Apple’s Messages and Google’s Messages, some of Telegram’s top rivals, don’t offer one.

Some analysts had earlier hoped that Telegram would be able to monetize the platform through its blockchain token project. But after several delays and regulatory troubles, Telegram said in 2020 that it had abandoned the project and offered to return $1.2 billion it had raised from investors.

In March 2021, Telegram raised over $1 billion from a number of investors including Mubadala and Abu Dhabi Catalyst Partners by selling 5-year pre-IPO convertible bonds.

“Today is an important day in the history of Telegram – marking not only a new milestone, but also the beginning of Telegram’s sustainable monetization,” the firm said in a blog post Sunday.

Telegram founder and chief executive Pavel Durov said earlier this month the move to launch a premium tier was intended to respond to user demand for additional storage/bandwidth.

“After giving it some thought, we realized that the only way to let our most demanding fans get more while keeping our existing features free is to make those raised limits a paid option,” he said.

In India, the premium version is priced at $6 for iPhone users. Alex Barredo, a Spain-based technology commentator, reported seeing €5.49 ($5.77) as the monthly cost. A Telegram spokesperson did not immediately respond to a request for comment.

Premium users will also have the ability to convert voice messages into texts, gain access to exclusive stickers and reactions and use animated pictures as their profile photos. Paying customers will also be able to avoid seeing ads on the app. (In some markets, sponsored messages are shown in large, public one-to-many channels.)

Durov has pledged to keep a number of core features in the app free to users and also continue to build new features for the non-paying audience.

On Sunday, the firm said it is rolling out a feature, called join requests, to enable all users to join a public group without the need for an invite link. Another new feature aimed at free users will make it possible for verified groups and channels to show their badge at the top of the chat. The new update also supports rendering of animations at 120 frames per second for new iPads and iPhones.

“This update includes over 100 fixes and optimizations to the mobile and desktop apps – eliminating bugs, improving speed, and expanding minor features,” Telegram said.

Ukrainians turn to encrypted messengers, offline maps and Twitter amid Russian invasion

Ukrainians have turned to offline mapping and encrypted communication apps in the wake of the Russian invasion of their country, which is displacing millions who have left their homes to either fight back or flee to neighboring countries. According to data from app store intelligence firm Apptopia, over the past several days Ukrainians have been downloading various communication apps, offline maps, and others where they can keep up with the latest news and information, like Twitter and streaming radio apps.

Currently, the top five apps in the country’s iOS App Store include the private messenger Signal, messaging app Telegram, Twitter, and offline messengers Zello and Bridgefy. Elsewhere in the top 10 are WhatsApp; Maps.Me, an offline maps app that’s now ranking a half dozen spots higher than Google Maps (which has just pulled its live traffic info after it was dubbed a security risk); and SpaceX’s Starlink app — which jumped up 39 spots after Elon Musk announced the satellite internet service was now active in the country. (Of course, to what extent the service is actually viable in the places it’s needed may be reflected in the app’s rank going forward.)

Among the top messaging apps, some saw greater adoption than others.

From the start of the Russian invasion on Feb. 24, 2022, through Sunday, Feb. 27, 2022, Telegram topped the charts with 54,200 new installs across the App Store and Google Play combined — a 25% increase from the same time period in January. Meanwhile, the offline messaging app Bridgefy saw the largest percentage increase in new installs, growing a whopping 4,730.8% month-over-month — from just 591 downloads during the same period last month to 28,550 new installs over the past few days.

Walkie-talkie app Zello grew downloads 99.3%, from 12,540 during Jan. 24-27 to 24,990 during Feb. 24-27. Signal’s gains were a more modest 20.6% in percentage terms, but it already had fairly strong adoption, with 39,780 installs during the same period last month and 47,990 during the past several days.

Of course, not all messaging apps are created equal when it comes to security.

Signal is the most secure of these apps, offering end-to-end encryption by default, with no data collected beyond an account creation date. Telegram, meanwhile, doesn’t offer end-to-end encryption by default, but it allows users to manually enable an encrypted “secret chats” feature. However, Telegram has been criticized by Signal’s founder Moxie Marlinspike for not being as secure as it claims — criticism that other security researchers and cryptographers have backed up over the years.

Recently, Marlinspike took to Twitter to again remind Ukrainians that Telegram was not secure, tweeting that Telegram is by default a cloud database with a plaintext copy of every message everyone has ever sent or received.

Walkie-talkie app Zello is allegedly end-to-end encrypted in one-to-one and group conversations, but it’s worth noting the company faced a security breach in 2020. The popular protest app Bridgefy, which relies on Bluetooth and mesh network routing, has also faced a host of security issues in recent years. The app spiked in usage during protests in Hong Kong, India, Iran, Lebanon, Zimbabwe, and the U.S., but in 2020 it was found to have serious vulnerabilities that led to it being dubbed a “privacy disaster.”

Though the company later rolled out support for end-to-end encryption, cryptographers who analyzed the app in 2021 found those fixes to be insufficient.

Outside of messengers, other apps that have surged in Ukraine in recent days include those for streaming radio and Twitter.

Typically, Twitter ranks No. 1 in the News category on iOS, but its Overall rank tends to be around No. 90-130. As of Sunday, however, it’s No. 4 on Ukraine’s iOS App Store Top Overall chart, up 2 ranks from the day prior, and it’s No. 28 on Google Play, up 15 ranks. On that day, Twitter saw around 7,000 more downloads on iOS and Android, a lifetime high in terms of daily downloads.

Image Credits: Apptopia: Ukraine App Store 2/27/22

Streaming radio apps, Radios Ukraine and Simple Radio, have moved higher on the App Store as well, now sitting at No. 19 and No. 21, respectively. (Facebook is in between at No. 20.)

Google Play’s Top Charts in the country look a little different.

Signal, Bridgefy, Telegram, and Zello are also in the top five here, but so is Briar, an Android-only peer-to-peer messenger offering end-to-end encryption. Offline maps app Maps.Me is No. 11, and another two-way walkie-talkie, called simply Two Way, is No. 15, followed by WhatsApp.

Image Credits: Apptopia: Ukraine Google Play Store 2/27/22

On both app stores, a number of games sit in the top charts as well — likely downloaded to entertain the kids while families hide in makeshift bomb shelters or travel long distances to safety.

Elsewhere in the world, the Russia-Ukraine war is driving other apps to the top charts, Apptopia also noted.

In the U.S., news apps CNN and Fox News both spiked in recent days, while The Washington Post saw a record number of daily installs (15,000) ahead of the invasion on Feb. 19, as tensions between the countries were rising.

Image Credits: Apptopia: US users download news apps

And now that Russia has locked down access to news and social media, demand for VPN apps has grown. The top five VPN apps saw a sizable jump in daily downloads after Russia began restricting access to social media apps like Facebook and Twitter in the country, Apptopia found, as users downloaded tens of thousands more VPN apps per day than usual. Russians should note, however, that VPNs aren’t necessarily good for security: they simply route internet traffic through someone other than your ISP, so your security really depends on how much you trust your VPN provider.

Image Credits: Apptopia: Russian VPN apps spike

Additional reporting: Zack Whittaker

On Meta’s ‘regulatory headwinds’ and adtech’s privacy reckoning

What does Meta/Facebook’s favorite new phrase to bandy around in awkward earnings calls — as it warns of “regulatory headwinds” cutting into its future growth — actually mean when you unpack it?

It’s starting to look like this breezy wording means the law is finally catching up with murky adtech practices which have been operating under the radar for years — tracking and profiling web users without their knowledge or consent, and using that surveillance-gleaned intel to manipulate and exploit at scale regardless of individual objections or the privacy people have a legal right to expect.

This week a major decision in Europe found that a flagship ad industry tool which — since April 2018 — has claimed to be gathering people’s “consent” for tracking to run behavioral advertising has not in fact been doing so lawfully.

The IAB Europe was given two months to come up with a reform plan for its erroneously named Transparency and Consent Framework (TCF) — and a hard deadline of six months to clean up the associated parade of bogus pop-ups and consent mismanagement that forces, manipulates or simply steals (“legitimate interest”) web users’ permission to microtarget them with ads.

The implications of the decision against the IAB and its TCF are that major ad industry reforms must come — and fast.

This is not just a little sail realignment as Facebook’s investor-soothing phrase suggests. And investors are perhaps cottoning on to the scale of the challenges facing the adtech giant’s business — given the 20% drop in its share price as it reported Q4 earnings this week.

Facebook’s ad business is certainly heavily exposed to any regulatory hurricane of enforcement against permission-less Internet tracking, since it doesn’t offer its own users any opt-out from behavioral targeting.

When asked about this the tech giant typically points to its “data policies” — where it instructs users it will track them and use their data for personalized ads but doesn’t actually ask for their permission. (It also claims any user data it sucks into its platform from third parties for ad targeting has been lawfully gathered by those partners in one long chain of immaculate adtech compliance!)

Facebook also typically points to some very limited “controls” it provides users over the type of personalized ads they will be exposed to via its ad tools — instead of actually giving people genuine control over what’s done with their information, which would, y’know, actually enable them to protect their privacy.

The problem is Meta can’t offer people a choice over what it does with their data because people’s data is the fuel that its ad targeting empire runs on.

Indeed, in Europe — where people do have a legal right to privacy — the adtech giant claims users of its social media services are actually in a contract with it to receive advertising! An argument that the majority of the EU’s data protection agencies look minded to laugh right out of the room, per documents revealed last year by local privacy advocacy group noyb which has been filing complaints about Facebook’s practices for years. So watch that space for thunderous regulatory “headwinds”.

(noyb’s founder, Max Schrems, is also the driving force behind another Meta earnings call caveat, vis-a-vis the little matter of “the viability of transatlantic data transfers and their potential impact on our European operations“, as its CFO Dave Wehner put it. That knotty issue may actually require Meta to federate its entire service if, as expected, an order comes to stop transferring EU users’ data over the pond, with all the operational cost and complexity that would entail… So that’s quite another stormy breeze on the horizon.)

While regulatory enforcement in Europe against adtech has been a very slow burn, there is now movement that could create momentum for a cleansing reboot.

For one thing, given the interconnectedness of the tracking industry, a decision against a strategic component like the TCF (or indeed adtech kingpin Facebook) has implications for scores of data players and publishers who are plugged into this ecosystem. So knock-on effects will rattle down (and up) the entire adtech ‘value chain’. Which could create the sort of tipping point of mass disruption and flux that enables a whole system to flip to a new alignment. 

European legislators frustrated at the lack of enforcement are also piling further pressure on by backing limits on behavioral advertising being explicitly written into new digital rules that are fast coming down the pipe — making the case for contextual ad targeting to replace tracking. So the demands for privacy are getting louder, not going away.

Of course Meta/Facebook is not alone in being especially prone to regulatory headwinds; the other half of the adtech duopoly — Alphabet/Google — is also heavily exposed here.

As Bloomberg reported this week, digital advertising accounts for 98% of Meta’s revenue, and a still very chunky 81% of Alphabet’s — meaning the pair are especially sensitive to any regulatory reset to how ad data flows.

Bloomberg suggested the two giants may yet have a few more years’ grace before regulatory enforcement and increased competition could bite into their non-diversified ad businesses in a way that flips the fortunes of these data-fuelled growth engines.

But one factor that has the potential to accelerate that timeline is increased transparency.

Follow the data…

Even the most complex data trail leaves a trace. Adtech’s approach to staying under the radar has also, historically, been more one of hiding its people-tracking ops in plain sight all over the mainstream web vs robustly encrypting everything it does. (Likely as a result of how tracking grew on top of and sprawled all over web infrastructure at a time when regulators were even less interested in figuring out what was going on.)

Turns out, pulling on these threads can draw out a very revealing picture — as a comprehensive piece of research into digital profiling in the gambling industry, carried out by the research institute Cracked Labs and published just last week, shows.

The report was commissioned by the UK-based gambling reform advocacy group Clean Up Gambling, and quickly got picked up by the Daily Mail — in a report headlined: “Suicidal gambling addict groomed by Sky Bet to keep him hooked, investigation reveals”.

What the Cracked Labs report shows — in unprecedented detail — is the scale and speed of the tracking that sits behind the obviously non-compliant cookie banner presented to users of the gambling sites whose data flows it analyzed: a banner offering the usual adtech fig-leaf mockery of (‘Accept-only’) compliance.

The report also explodes the notion that individuals being subject to this kind of pervasive, background surveillance could practically exercise their data rights.

Firstly, the effort asymmetry involved in sending subject access requests (SARs) to such a long string of third parties is just ridiculous. But, more basically, the lack of transparency inherent to this kind of tracking means it’s unclear who your information has been passed to (or who has otherwise obtained it) — so how can you ask what’s being done with your data if you don’t even know who’s doing it?

If that is a system ‘functioning’ then it’s clear evidence of systemic dysfunction — aka the systemic lawlessness that the UK’s own data protection regulator warned the adtech industry about in a report of its own all the way back in 2019.

The individual impact of adtech’s “data-driven” marketing, meanwhile, is writ large in a quote in the Daily Mail’s report — from one of the “high value” gamblers the study worked with, who accuses the gambling service in question of turning him into an addict — and tells the newspaper: “It got to a point where if I didn’t stop, it was going to kill me. I had suicidal ideation. I feel violated. I should have been protected.”

“It was going to kill me” is about as stark an articulation of data-driven harm as you could ask for.

Here’s a brief overview of the scale of tracking Cracked Labs’ analysis unearthed, clipped from the executive summary:

“The investigation shows that gambling platforms do not operate in a silo. Rather, gambling platforms operate in conjunction with a wider network of third parties. The investigation shows that even limited browsing of 37 visits to gambling websites led to 2,154 data transmissions to 83 domains controlled by 44 different companies that range from well-known platforms like Facebook and Google to lesser known surveillance technology companies like Signal and Iovation, enabling these actors to embed imperceptible monitoring software during a user’s browsing experience. The investigation further shows that a number of these third-party companies receive behavioural data from gambling platforms in realtime, including information on how often individuals gambled, how much they were spending, and their value to the company if they returned to gambling after lapsing.”
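
The report doesn’t publish its test harness, but the underlying method — capturing a browsing session’s network requests and grouping them by the third-party domain they go to — is straightforward to reproduce. Below is a minimal TypeScript sketch that tallies third-party requests from a HAR file exported from a browser’s developer tools; the file name is an assumption, and the first-party list simply uses the three Sky Betting sites the report names. This illustrates the counting approach, not Cracked Labs’ actual tooling:

```ts
// tally_third_parties.ts — hypothetical sketch, not the report's actual test harness.
// Run with ts-node after exporting a HAR file from the browser's dev tools.
import { readFileSync } from "fs";

interface HarEntry { request: { url: string } }
interface Har { log: { entries: HarEntry[] } }

// First-party domains to exclude (the three sites named in the report).
const FIRST_PARTY = ["skycasino.com", "skybet.com", "skyvegas.com"];

// "session.har" is an assumed file name for the exported network log.
const har: Har = JSON.parse(readFileSync("session.har", "utf8"));

const counts = new Map<string, number>();
for (const { request } of har.log.entries) {
  const host = new URL(request.url).hostname;
  // Treat the sites themselves (and their subdomains) as first party; count the rest.
  const isFirstParty = FIRST_PARTY.some((d) => host === d || host.endsWith("." + d));
  if (isFirstParty) continue;
  counts.set(host, (counts.get(host) ?? 0) + 1);
}

// Print third-party hosts by request volume, most contacted first.
Array.from(counts.entries())
  .sort((a, b) => b[1] - a[1])
  .forEach(([host, n]) => console.log(`${String(n).padStart(5)}  ${host}`));
```

Run over a real session export, a script like this produces exactly the kind of league table quoted in the report’s footnote: a per-host count of data transmissions, sorted by volume.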

A detailed picture of consentless ad tracking in a context with very clear and well understood links to harm (gambling) should be exceedingly hard for regulators to ignore.

But any enforcement of consent and privacy must and will be universal, as the law around personal data is clear.

Which in turn means that nothing short of a systemic adtech reboot will do. Root and branch reform.

Asked for its response to the Cracked Labs research, a spokeswoman for the UK’s Information Commissioner’s Office (ICO) told TechCrunch: “In relation to the report from the Clean Up Gambling campaign, I can confirm we are aware of it and we will consider its findings in light of our ongoing work in this area.”

We also asked the ICO why it has failed to take any enforcement action against the adtech industry’s systemic abuse of personal data in real-time bidding ad auctions — following the complaint it received in September 2018, and the issues raised in its own report in 2019.

The watchdog said that after it resumed its “work” in this area — following a pause during the coronavirus pandemic — it has issued “assessment notices” to six organisations. (It did not name these entities.)

“We are currently assessing the outcomes of our audit work. We have also been reviewing the use of cookies and similar technologies of a number of organisations,” the spokeswoman also said, adding: “Our work in this area is vast and complex. We are committed to publishing our final findings once our enquiries are concluded.”

But the ICO’s spokeswoman also pointed to a recent opinion issued by the former information commissioner before she left office last year, in which she urged the industry to reform — warning adtech of the need to purge current practices by moving away from tracking and profiling, cleaning up bogus consent claims and focusing on engineering privacy and data protection into whatever form of targeting it flips to next.

So the reform message at least is strong and clear, even if the UK regulator hasn’t found enough puff to crack out any enforcement yet.

Asked for its response to Cracked Labs’ findings, Flutter — the Dublin-based company that owns Sky Betting & Gaming, the operator of the gambling sites whose data flows the research study tracked and analyzed — sought to deflect blame onto the numerous third parties whose tracking technologies are embedded in its websites (and only referenced generically, not by name, in its ‘Accept & close’ cookie notice).

So that potentially means onto companies like Facebook and Google.

“Protecting our customers’ personal data is of paramount importance to Sky Betting & Gaming, and we expect the same levels of care and vigilance from all of our partners and suppliers,” a Sky Betting & Gaming spokesperson said.

“The Cracked Labs report references data from both Sky Betting & Gaming and the third parties that we work with. In most cases, we are not — and would never be — privy to the data collected by these parties in order to provide their services,” they added. “Sky Betting & Gaming takes its safer gambling responsibilities very seriously and, while we run marketing campaigns based on our customers’ expressed preferences and behaviours, we would never seek to intentionally advertise to anyone who may potentially be at risk of gambling harm.”

Regulatory inaction in the face of cynical industry buck-passing — whereby a first-party platform may seek to deny responsibility for tracking carried out by its partners, while third parties that also got data may claim it’s the publisher’s responsibility to obtain permission — can mire complaints and legal challenges to adtech’s current methods in frustrating circularity.

But this tedious dance should also be running out of floor. A number of rulings by Europe’s top court in recent years have sharpened guidance on exactly these sorts of legal liability issues, for example.

Moreover, as we get a better picture of how the adtech ecosystem ‘functions’ — thanks to forensic research work like this to track and map the tracking industry’s consentless data flows — pressure on regulators to tackle such obvious abuse will only amplify, as it becomes increasingly easy to link abusive targeting to tangible harms: whether to vulnerable individuals with ‘sensitive’ interests like gambling; or more broadly — say, in relation to tracking that’s being used as a lever for illegal discrimination (racial, sexual, age-based, etc.), or the democratic threats posed by population-scale targeted disinformation, which we’ve seen deployed to try to skew and game elections for years now.

Google and Facebook respond

TechCrunch contacted a number of the third parties the report lists as repeatedly receiving behavioral data on the activities of one of the Sky Betting site users — including Facebook, Google and Microsoft — to ask them about the legal basis and purposes for the processing.

Facebook and Google are of course huge players in the online advertising market but Microsoft appears to have ambitions to expand its advertising business. And recently it acquired another of the adtech entities that’s also listed as receiving user data in the report — namely Xandr (formerly AppNexus) — which increases its exposure to these particular gambling-related data flows.

(NB: the full list of companies receiving data on Sky Betting users also includes TechCrunch’s parent entity Verizon Media/Yahoo, along with tens of other companies, but we directed questions to the entities the report named as receiving “detailed behavioral data” and which were found receiving data the highest number of times*, which Cracked Labs suggests points to “extensive behavioural profiling”; although it also caveats its observation with the important point that: “A single request to a host operated by a third-party company that transmits wide-ranging information can also enable problematic data practices”; so just because data was sent fewer times doesn’t necessarily mean it is less significant.)

Of the third parties we contacted, at the time of writing only Google had provided an on-the-record comment.

Microsoft declined to comment.

Facebook provided some background information — pointing to its data and ad policies and referring to the partial user controls it offers around ads. It also confirmed that its ad policies do permit gambling as a targetable interest with what it described as “appropriate” permissions.

Meta/Facebook announced some changes to its ad platform last November — when it expanded what it refers to as its “Ad topic controls” to cover some “sensitive” topics — and it confirmed that gambling is included among the topics about which people can choose to see fewer ads.

But note that’s fewer gambling ads, not no gambling ads.

So, in short, Facebook admitted it uses behavioral data inferred from gambling sites for ad targeting — and confirmed that it doesn’t give users any way to completely stop that kind of targeting — nor, indeed, the ability to opt out from tracking-based advertising altogether.

While its legal basis for this tracking is — we must infer — its claim that users are in a contract with it to receive advertising.

Which will probably be news to a lot of users of Meta’s “family of apps”. But it’s certainly an interesting detail to ponder alongside the flat growth it just reported in Q4.

Google’s response did not address any of our questions in any detail, either.

Instead it sent a statement, attributed to a spokesperson, in which it claims it does not use gambling data for profiling — and further asserts it has “strict policies” in place that prevent advertisers from using this data.

Here’s what Google told us:

“Google does not build advertising profiles from sensitive data like gambling, and has strict policies preventing advertisers from using such data to serve personalised ads. Additionally, tags for our ad services are never allowed to transmit personally identifiable information to Google.”

Google’s statement does not specify the legal basis it is relying upon for processing sensitive gambling data in the first place. Nor — if it really isn’t using this data for profiling or ad targeting — why it’s receiving it at all.

We pressed Google on these points but the company did not respond to follow-up questions.

Its statement also contains misdirection that’s typical of the adtech industry — when it writes that its tracking technologies “are never allowed to transmit personally identifiable information”.

Setting aside the obvious legalistic caveat — Google doesn’t actually state that it never gets PII; it just says its tags are “never allowed to transmit” PII; ergo it’s not ruling out the possibility of a buggy implementation leaking PII to it — the tech giant’s use of the American legal term “personally identifiable information” is entirely irrelevant in a European legal context.

The law that actually applies here concerns the processing of personal data — and personal data under EU/UK law is very broadly defined, covering not just obvious identifiers (like name or email address) but all sorts of data that can be connected to and used to identify a natural person, from IP address and advertising IDs to a person’s location or their device data and plenty more besides.

In order to process any such personal data Google needs a valid legal basis. And since Google did not respond to our questions about this it’s not clear what legal basis it relies upon for processing the Sky Betting user’s behavioral data.

“When data subject 2 asked Sky Betting & Gaming what personal data they process about them, they did not disclose information about personal data processing activities by Google. And yet, this is what we found in the technical tests,” says research report author Wolfie Christl, when asked for his response to Google’s statement.

“We observed Google receiving extensive personal data associated with gambling activities during visits to skycasino.com, including the time and exact amount of cash deposits.

“We did not find or claim that Google received ‘personally identifiable’ data, this is a distraction,” he adds. “But Google received personal data as defined in the GDPR, because it processed unique pseudonymous identifiers referring to data subject 2. In addition, Google even received the customer ID that Sky Betting & Gaming assigned to data subject 2 during user registration.

“Because Sky Betting & Gaming did not disclose information about personal data processing by Google, we cannot know how Google, SBG or others may have used personal data Google received during visits to skycasino.com.”

“Without technical tests in the browser, we wouldn’t even know that Google received personal data,” he added.

Christl is critical of Sky Betting for failing to disclose Google’s personal data processing or the purposes it processed data for.

But he also queries why Google received this data at all and what it did with it — zeroing in on another potential obfuscation in its statement.

“Google claims that it does not ‘build advertising profiles from sensitive data like gambling’. Did it build advertising profiles from personal data received during visits to skycasino.com or not? If not, did Google use personal data received from Sky Betting & Gaming for other kinds of profiling?”

Christl’s report includes a screengrab showing the cookie banner Sky Betting uses to force consent on its sites — a short statement at the bottom of the website, in barely legible small print, which bundles information on multiple uses of cookies (including for partner advertising) next to a single, brilliantly illuminated button to “accept and close” — meaning users have no way to deny tracking (short of not gambling/using the website at all).

Under EU/UK law, if consent is being relied upon as a legal basis to process personal data it must be informed, specific and freely given to be lawfully obtained. Or, put another way, you must actually offer users a genuine choice to accept or deny — and do so for each use of non-essential (i.e. tracking) cookies.

Moreover if the personal data in question is sensitive personal data — and behavioral data linked to gambling could certainly be that, given gambling addiction is a recognized health condition, and health data is classed as “special category personal data” under the law — there is a higher standard of explicit consent required, meaning a user would need to affirm every use of this type of highly sensitive information.
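
To make that legal standard concrete, here is a minimal TypeScript sketch of what genuine, per-purpose consent gating could look like on the client side — the element IDs, storage key and script URL are all hypothetical, and this illustrates the principle rather than any vendor’s actual implementation:

```ts
// consent_gate.ts — illustrative sketch only; names and URLs are invented.
type Purpose = "analytics" | "advertising";
type Consent = Partial<Record<Purpose, boolean>>;

function applyConsent(consent: Consent): void {
  // Remember the user's explicit choice for subsequent visits.
  localStorage.setItem("consent", JSON.stringify(consent));
  // Only an affirmative, purpose-specific "yes" loads a tracking partner's tag.
  if (consent.advertising === true) {
    const tag = document.createElement("script");
    tag.src = "https://ads.example.com/tag.js"; // hypothetical partner script
    document.head.appendChild(tag);
  }
}

// "Reject all" must be exactly as easy as "accept all": one click each,
// and nothing non-essential runs until the user has actually chosen.
document.getElementById("accept-all")?.addEventListener("click", () =>
  applyConsent({ analytics: true, advertising: true })
);
document.getElementById("reject-all")?.addEventListener("click", () =>
  applyConsent({ analytics: false, advertising: false })
);
```

The salient design point is the default: with no stored choice, no tracking script is ever injected — the inverse of a banner whose only button is “accept and close”.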

Yet, as the report shows, what actually happened in the case of the users whose visits to these gambling sites were analyzed was that their personal data was tracked and transmitted to at least 44 third party companies hundreds of times over the course of just 37 visits to the websites.

They did not report being asked explicitly for their consent as this tracking was going on. Yet their data kept flowing.

It’s clear that the adtech industry’s response to the tightening of European data protection law since 2018 has been the opposite of reform. It opted for compliance theatre — designing and deploying cynical cookie pop-ups that offer no genuine choice, or at best create confusion and friction around opt-outs to drum up consent fatigue — pushing consumers to give in and ‘agree’ to hand over their data so it can keep tracking and profiling.

Legally that should not have been possible of course. If the law was being properly enforced this cynical consent pantomime would have been kicked into touch long ago — so the starkest failure here is regulatory inaction against systemic law breaking.

That failure has left vulnerable web users to be preyed upon by dark pattern design, rampant tracking and profiling, automation and big data analytics and “data-driven” marketers who are plugging into an ecosystem that’s been designed and engineered to quantify individuals’ “value” to all sorts of advertisers — regardless of individuals’ rights and freedoms not to be subject to this kind of manipulation and laws that were intended to protect their privacy by default.

By making Subject Access Requests (SARs), the two data subjects in the report were able to uncover some examples of attributes being attached to profiles of Sky Betting site users — apparently based on inferences made by third parties off of the behavioral data gathered on them — which included things like an overall customer “value” score and product specific “value bands”, and a “winback margin” (aka a “predictive model for how much a customer would be worth if they returned over next 12 months”).
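
The report could only observe the outputs of such scoring, not the models themselves. But to make concrete what attributes like a “value band” or a “winback margin” imply mechanically, here’s a deliberately crude, entirely hypothetical TypeScript sketch — invented thresholds and formulas, not the operator’s or any vendor’s actual model:

```ts
// value_scoring.ts — hypothetical illustration of inference from behavioral data.
interface GamblingActivity {
  deposits: number[];       // cash deposits observed, in GBP
  daysSinceLastBet: number; // how long the customer has "lapsed"
}

// Assign a crude "value band" from total deposits (thresholds invented).
function valueBand(a: GamblingActivity): "low" | "medium" | "high" {
  const total = a.deposits.reduce((sum, d) => sum + d, 0);
  return total > 5000 ? "high" : total > 500 ? "medium" : "low";
}

// A toy "winback margin": projected value if a lapsed customer returns over
// the next 12 months, discounted by how long they have been away.
function winbackMargin(a: GamblingActivity): number {
  const avg = a.deposits.reduce((sum, d) => sum + d, 0) / Math.max(a.deposits.length, 1);
  const lapseDiscount = Math.max(0, 1 - a.daysSinceLastBet / 365);
  return avg * 12 * lapseDiscount;
}
```

Even this toy version shows why such attributes are troubling in a gambling context: the heaviest depositors — precisely the people most likely to be at risk — score as the most valuable targets to win back.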

This level of granular, behavioral background surveillance enables advertising and gaming platforms to show gamblers personalized marketing messages and other custom incentives tightly designed to encourage them to return to play — to maximize engagement and boost profits.

But at what cost to the individuals involved? Both literally, financially, and to their health and wellbeing — and to their fundamental rights and freedoms?

As the report notes, gambling can be addictive — and can lead to a gambling disorder. But the real-time monitoring of addictive behaviours and gaming “predilections” — which the report’s technical analysis lays out in granular detail — looks very much like a system that’s been designed to automate the identification and exploitation of people’s vulnerabilities.

How this can happen in a region with laws intended to prevent this kind of systematic abuse through data misuse is an epic scandal.

While the risks around gambling are clear, the same system of tracking and profiling is of course being systematically applied to websites of all sorts and stripes — whether they contain health information, political news, advice for new parents and so on — where all sorts of other manipulation and exploitation risks can come into play. So what’s going on on a couple of gambling sites is just the tip of the data-mining iceberg.

While regulatory enforcement should have put a stop to abusive targeting in the EU years ago, there is finally movement on this front — with the Belgian DPA’s decision against the IAB Europe’s TCF this week.

However, where the UK might go on this front is rather more murky — as the government has been consulting on wide-ranging post-Brexit changes to domestic data protection law, and specifically on the issue of consent to data processing, which could end up lowering the level of protection for people’s data and legitimizing the whole rotten system.

Asked about the ICO’s continued inaction on adtech, Ravi Naik — a legal director of the data rights agency AWO, which supported the Cracked Labs research, and who has also been personally involved in long-running litigation against adtech in the UK — said: “The report and our case work does raise questions about the ICO’s inaction to date. The gambling industry shows the propensity for real world harms from data.”

“The ICO should act proactively to protect individual rights,” he added.

A key part of the reason for Europe’s slow enforcement against adtech is undoubtedly the lack of transparency and obfuscating complexity the industry has used to cloak how it operates so people cannot understand what is being done with their data.

If you can’t see it, how can you object to it? And if there are relatively few voices calling out a problem, regulators (and indeed lawmakers) are less likely to direct their very limited resource at stuff that may seem to be humming along like business as usual — perhaps especially if these practices scale across a whole sector, from small players to tech giants.

But the obfuscating darkness of adtech’s earlier years is long gone — and the disinfecting sunlight is starting to flood in.

Last December the European Commission explicitly warned adtech giants over the use of cynical legal tricks to evade GDPR compliance — at the same time as putting the bloc’s regulators on notice to crack on with enforcement or face having their decentralized powers to order reform taken away.

So, by hook or by crook, those purifying privacy headwinds gonna blow.

*Per the report: “Among the third-party companies who received the greatest number of network requests while visiting skycasino.com, skybet.com, and skyvegas.com, are Adobe (499), Signal (401), Facebook (358), Google (240), Qubit (129), MediaMath (77), Microsoft (71), Ve Interactive (48), Iovation (28) and Xandr (22).”

Moxie Marlinspike is leaving Signal; here’s where we suspect he’s headed and why

Moxie Marlinspike, the founder of the hugely popular encrypted communications app Signal, announced today in a blog post that he is stepping down in a move that he says has been in the works for several months.

While unexpected, the move isn’t a shock to industry observers who have been watching the rise of MobileCoin, a cryptocurrency startup that counts Marlinspike as its earliest technical advisor.

Last spring, eight-year-old Signal, which has more than 40 million monthly users, began testing out an integration with MobileCoin, which says it’s focused on enabling privacy-protecting payments made through “near instantaneous transactions” over one’s phone. But as Wired reported last week, a “much broader phase of that experiment has quietly been underway since mid-November. That’s when Signal made the same feature accessible to all of its users without fanfare, offering the ability to send digital payments far more private than a credit card transaction—or a Bitcoin transfer—to many millions of phones.”

MobileCoin founder Joshua Goldbard told the outlet that the rollout has spurred massive adoption of the cryptocurrency, telling Wired that “there are over a hundred million devices on planet Earth right now that have the ability to turn on MobileCoin and send an end-to-end encrypted payment in five seconds or less.”

Notably, one needs to load one’s wallet with the cryptocurrency first, and as Wired notes, it is listed for sale on only a few smaller cryptocurrency exchanges, including FTX, and none yet offer it to U.S. consumers. (FTX founder and CEO Sam Bankman-Fried is one of many investors in MobileCoin through his quantitative trading firm and cryptocurrency liquidity provider, Alameda Research.)

Even Americans will have access to the currency shortly, however, Goldbard told Wired, pointing to recently signed agreements — including with the cryptocurrency payment processor Zero Hash — that should allow U.S. residents to buy MobileCoin within the first few months of this year.

In the meantime, going worldwide has been good for MobileCoin, which last summer raised $66 million in Series B funding at a $1.066 billion valuation and, according to sources close to the company, is in the process of raising a Series C round at a valuation that one source describes as “in the high single-digit billions” of dollars.

MobileCoin’s growth has also raised questions about Signal and about Marlinspike, who has seemingly tried to maintain some distance between himself and MobileCoin. One apparent reason why centers on Signal employees, some of whom told reporter Casey Newton last year that they viewed Signal’s exploration of cryptocurrency as risky and an invitation for bad actors to use the platform.

A potentially bigger, but related, concern of critics is that integrating a privacy coin could create legal headaches for Signal. As Matt Green, a cryptographer at Johns Hopkins University, told Wired last week, “I’m very nervous they’re going to get themselves into a problematic situation by flirting with this kind of payment infrastructure when there’s so much legislation and regulation around it.”

It’s a valid concern. As we’ve noted previously, cryptocurrencies and messaging apps haven’t historically mixed well, owing to nervous regulators. Kik Messenger, the mobile messaging app founded by a group of University of Waterloo students in 2009, created a digital currency called Kin for its users to spend inside the platform. The project ultimately led to a years-long battle with the Securities & Exchange Commission that nearly decimated the company. Telegram, a much bigger messaging app than Signal — it claims more than 500 million monthly active users — similarly abandoned plans to offer its own decentralized cryptocurrency to anyone with a smartphone after years of battling with the SEC.

Even Facebook hasn’t been able to gain much traction with its own cryptocurrency project, with the longtime leader of that effort, David Marcus, announcing back in November that he was leaving the company.

If Marcus shows up at MobileCoin, we wouldn’t be surprised, but we’d be even less surprised to see Marlinspike get more involved in some capacity.

Neither Goldbard nor Marlinspike have responded to our requests today for more information, but asked if Marlinspike might consider taking over MobileCoin as CEO (the company doesn’t have one), a source close to him says he is “not doing that” and right now just “taking a break.”

We’ll see. We aren’t promising to eat our shoes if we’re wrong. (It happens!)

For now, Marlinspike writes in his post, he will remain on Signal’s board while Signal’s search for a new CEO continues. Its executive chair, WhatsApp cofounder Brian Acton, will serve as Signal’s interim CEO in the meantime.

Product-led growth and signal substitution syndrome: Bringing it all together

A few years back, my former colleagues and I at SiriusDecisions introduced what we called the Intent Data Framework (IDF). About a year ago, we updated the model to include non-behavioral signals and called it the Buyer Signals Framework (BSF).

Already, it’s clear we left something out of the IDF and even BSF: product-led growth.

Signal substitution syndrome

Both versions of the framework were attempts to address a misunderstanding that was, and still is, so rampant in B2B that I have a name for it — signal substitution syndrome. The nature of this syndrome is simple: In B2B, both marketing and sales practitioners tend to see each new source of information about their potential buyers — each signal type — as a substitute for the last one that didn’t work.

If people are using the product, the need is not prospective or theoretical, it is actual.

The history of B2B could be written in the successive failure of these signals to be what we all hoped for — whether it was people showing up at trade show booths, people filling out bingo cards from the back of magazines, the people and bots filling out website forms, webinar registrations, syndicated content leads, third-party intent signals, review site users, etc.

The misunderstanding that underwrites signal substitution syndrome is that any of these signals should be considered sufficient — or even halfway decent — signals of buyer intent unto themselves. To be sure, by happenstance, some leads have occasionally turned into business in a way that can be seen and understood.


But if there’s one thing that my time as an analyst taught me, it’s that leads are a depressingly high failure rate (95%-99%) signal. Intent data by itself is worse. However, they are both better than whatever we had before. In fact, none of these signals are, by themselves, actually expressions of intent. Expressions of interest? Sure. Intent, not so fast.

How product-led growth fits in

Along comes product-led growth (PLG) with the idea that we’ll offer a free or very low-cost version of our solutions and use adoption of them as the new signals that will lead to enterprise deal generation. Of course, not every product is amenable to a PLG motion. It’s pretty hard to imagine Oracle PLG-ing their manufacturing cloud, for example.

Facebook whistleblower Frances Haugen raises trust and security questions over its e2e encryption

Frances Haugen, one of (now) multiple Facebook whistleblowers who have come forward in recent years with damning testimony related to product safety, gave testimony in front of the UK parliament today — where, in one key moment, she was invited to clarify her views on end-to-end encryption following a report in the British newspaper the Telegraph yesterday.

The report couched Facebook’s plan to extend its use of e2e encryption as “controversial” — aligning the newspaper’s choice of editorial spin with long-running UK government pressure on tech giants not to expand their use of strong encryption in order that platforms can be ordered to decrypt and hand over message content data on request.

In its interview with Haugen, the Telegraph sought to link her very public concerns about Facebook’s overall lack of accountability to this UK government anti-e2ee agenda — claiming she had suggested Facebook’s use of e2e encryption could disrupt efforts to protect Uighur dissidents from Chinese state efforts to inject their devices with malware.

The reported remarks were quickly seized upon by certain corners of the Internet (and at least one other ex-Facebook staffer who actually worked on adding e2e encryption to Messenger and is now self-styling as a ‘whistleblower’) — with concerns flying that her comments could be used to undermine e2e encryption generally and, therefore, the safety of scores of Internet users. 

Sounding unimpressed with the Telegraph’s spin, Haugen told UK lawmakers that her views on e2e encryption had been “misrepresented” — saying she fully supports “e2e open source encryption software”; and, indeed, that she uses it herself on a daily basis.

What she said she had actually been querying was whether Facebook’s claim to be implementing e2e encryption can be trusted, given the tech giant does not allow for full external inspection of its code as is the case with fully open source e2ee alternatives.

This is another reason why public oversight of the tech giant is essential, Haugen told the joint committee of the UK parliament which is scrutinizing (controversial) draft online safety legislation.

“I want to be very, very clear. I was mischaracterised in the Telegraph yesterday on my opinions around end-to-end encryption,” she said. “I am a strong supporter of access to open source end to end encryption software.

“I support access to end-to-end encryption and I use open source end-to-end encryption every day. My social support network is currently on an open source end-to-end encryption service.”

“Part of why I am such an advocate for open source software in this case is that if you’re an activist, if you’re someone who has a sensitive need, a journalist, a whistleblower — my primary form of social software is an open source, end-to-end encryption chat platform,” she also said, without naming exactly which platform she uses for her own e2ee messaging (Signal seems likely — a not-for-profit rival to Facebook-owned WhatsApp which has benefited from millions of dollars of investment from WhatsApp co-founder Brian Acton, another former Facebook staffer turned critic; so maybe ‘meta’ would in fact be a perfect new brand name for Facebook).

“But part of why that open source part is so important is you can see the code, anyone can go and look at it — and for the top open source end-to-end encryption platform those are some of the only ways you’re allowed to do chat in say the defence department in the US.

“Facebook’s plan for end-to-end encryption — I think — is concerning because we have no idea what they’re going to do. We don’t know what it means, we don’t know if people’s privacy is actually protected. It’s super nuanced and it’s also a different context. On the open source end-to-end encryption product that I like to use there is no directory where you can find 14 year olds, there is no directory where you can go and find the Uighur community in Bangkok. On Facebook it is trivially easy to access vulnerable populations and there are national state actors that are doing this.

“So I want to be clear, I am not against end-to-end encryption in Messenger but I do believe the public has a right to know what does that even mean? Are they really going to produce end-to-end encryption? Because if they say they’re doing end-to-end encryption and they don’t really do that people’s lives are in danger. And I personally don’t trust Facebook currently to tell the truth… I am concerned about them misconstruing the product that they’ve built — and they need regulatory oversight for that.”

In additional remarks to the committee she further summarized her position by saying: “I am concerned on one side that the constellation of factors related to Facebook makes it even more necessary for public oversight of how they do encryption there — that’s things like access to the directory, those amplification settings. But the second one is just about security. If people think they’re using an end-to-end encryption product and Facebook’s interpretation of that is different than what, say, an open source product would do — because with an open source product we can all look at it and make sure that what it says on the label is in the can.

“But if Facebook claims they’ve built an end-to-end encryption thing and there’s really vulnerabilities people’s lives are on the line — and that’s what I’m concerned about. We need public oversight of anything Facebook does around end-to-end encryption because they are making people feel safe when they might be in danger.”

Haugen, a former Facebook staffer from the civic integrity team, is the source for a tsunami of recent stories about Facebook’s business after she leaked thousands of internal documents and research reports to the media, initially providing information to the Wall Street Journal, which published a slew of stories last month, including about the toxicity of Instagram for teens (aka the ‘Facebook Files‘), and subsequently releasing the data to a number of media outlets which have followed up with reports today on what they’re calling the Facebook Papers.   

The tl;dr of all these stories is that Facebook prioritizes the growth of its business over product safety — leading to a slew of harms that can affect individuals, other businesses and the public/society more generally, whether as a result of inadequate AI systems that cannot properly identify and remove hate speech (leading to situations where its platform can whip up ethnic violence); engagement-based ranking systems that routinely amplify extreme, radicalizing content without proper mind to the risks (such as conspiracy-theory-touting echo chambers forming around vulnerable individuals, isolating them from wider society); or overestimation of its ad reach, leading to advertisers being systematically overcharged.

During her testimony today, Haugen suggested Facebook’s AIs were unlikely to even be able to properly distinguish dialectal distinctions and nuances of meaning between UK English and US English — let alone the scores of languages in countries where it directs far less resource.

Parliamentarians probed her on myriad harms during around 2.5 hours of testimony — and some of her answers repeated earlier testimony she gave to lawmakers in the US.

Many of the UK committee’s questions asked for her view on what might be effective regulatory measures to close the accountability gap — both at Facebook and on social media more generally — as MPs sought to identify productive avenues for amending the draft online safety legislation.

“The danger with Facebook is not individuals saying bad things, it is about the systems of amplification that disproportionately give people saying extreme polarising things the largest megaphone in the room,” argued Haugen.

Her list of suggestions for fixing a system of what she couched as broken incentives under Facebook’s current leadership included mandatory risk assessments — which she warned need to cover both product safety and organisational structure since she said much of the blame for Facebook’s problems lies with its “flat” organizational structure and a leadership team that rewards (and thus incentivizes) growth above all else, leaving no one internally who’s accountable for improving safety metrics.

Such risk assessments would need to be carefully overseen by regulators to avoid Facebook using its customary tactic in the face of critical scrutiny of just marking its own homework — or “dancing with data” as she put it.

Risk assessments should also involve the regulator “gathering from the community and saying are there other things that we should be concerned about”, she said, not just letting tech giants like Facebook define blinkered parameters for uselessly partial oversight — suggesting “a tandem approach like that that requires companies to articulate their solutions”.

“I think that’s a flexible approach; I think that might work for quite a long time. But it has to be mandatory and there have to be certain quality bars — because if Facebook can phone it in, I guarantee you they’ll phone it in,” she also told the committee.

Another recommendation Haugen had was for mandatory moderation of Facebook Groups when they exceed a certain number of users.

Whereas — left unmoderated — she said groups can be easily misappropriated and/or misused (using techniques like ‘virality-hacking’) to act as an “amplification point” for spreading discord or disseminating disinformation, including by foreign information operations.

“I strongly recommend that above a certain sized group they should be required to provide their own moderators and moderate every post,” she said. “This would naturally — in a content-agnostic way — regulate the impact of those large groups. Because if that group is actually valuable enough they will have no trouble recruiting volunteers.”

Haugen also suggested that Facebook should be forced to make a firehose of information available to external researchers (as Twitter, for example, already does) — in a privacy-safe way — which would allow outside academics and experts to drive accountability from the outside by investigating potential issues and identifying concerns freed from Facebook’s internal growth-focused lens.

Another of her recommendations was for regulators to demand segmented analysis from Facebook — so that oversight bodies get full transparency into populations that disproportionately experience harms on its platform.

“The median experience on Facebook is a pretty good experience — the real danger is that 20% of the population has a horrible experience or an experience that is dangerous,” she suggested.

She went on to argue that many of Facebook’s problems result from the subset of users who, she said, get “hyper exposed” to toxicity or abuse as a consequence of an engagement-driven design and a growth-focused mindset that rejects even small tweaks to inject friction and reduce virality, tweaks she suggested would cost Facebook only “small slivers” of growth in the short term while yielding a much more pleasant, and probably more profitable, product over the longer term.

“As we look at the harms of Facebook we need to think about these things as system problems — like the idea that these systems are designed products, these are intentional choices and that it’s often difficult to see the forest for the trees. That Facebook is a system of incentives, it’s full of good, kind, conscientious people who are working with bad incentives. And that there are lack of incentives inside the company to raise issues about flaws in the system and there’s lots of rewards for amplifying and making things grow more,” she told the committee.

“So I think there is a big challenge of Facebook’s management philosophy is that they can just pick good metrics and then let people run free. And so they have found themselves in a trap where in a world like that how do you propose changing the metric? It’s very very hard because 1,000 people might have directed their labor for six months trying to move that metric and changing the metric will disrupt all of that work.

“I don’t think any of it was intentional — I don’t think they set out to go down this path. And that’s why we need regulation — mandatory regulation, mandatory actions — to help pull them away from that spiral that they’re caught in.”

Legislation that seeks to rein in online harms by regulating platform giants like Facebook must not focus only on individual harms; it needs to respond to societal harms too, she emphasized.

“I think it is a grave danger to democracy and societies around the world to omit societal harm. A core part of why I came forward was I looked at the consequences of choices Facebook was making and I looked at things like the global south and I believe situations like Ethiopia are just part of the opening chapters of a novel that’s going to be horrific to read. We have to care about societal harm — not just for the global south but for our own societies.

“When an oil spill happens it doesn’t make it harder for us to regulate oil companies. But right now Facebook is closing the door on us being able to act — we have a slight window of time to regain people-control over AI; we have to take advantage of this moment.”

Facebook has been contacted for comment.


WhatsApp now lets users encrypt their chat backups in the cloud

WhatsApp is beginning to roll out a new feature that will provide its two billion users the option to encrypt their chat history backup in iCloud or Google Drive, patching a major loophole that has been exploited by governments to obtain and review private communication between individuals.

WhatsApp has long end-to-end encrypted chats between users on its app. But users have had no means to protect the backup of those chats stored in the cloud. (For iPhone users, the chat history is stored in iCloud, and Android users rely on Google Drive.)

It has been widely reported that law enforcement agencies across the globe have been able to access the private communications between suspect individuals on WhatsApp by exploiting this loophole.

WhatsApp, which processes over 100 billion messages a day, is closing that weak link, and tells TechCrunch that it’s providing this new feature to users in every market where the app is operational. The feature is optional, the company said. (It’s not uncommon for companies to withhold privacy features for legal and regulatory reasons. Apple’s new encrypted browsing feature isn’t available to users in certain authoritarian regimes, such as China, Belarus, Egypt, Kazakhstan, Saudi Arabia, Turkmenistan, Uganda and the Philippines.)

Mark Zuckerberg, founder and chief executive of Facebook, noted that WhatsApp is the first global messaging service at this scale to offer end-to-end encrypted messaging and backups. “Proud of the team for continuing to lead on security for your private conversations,” he said in a post on his Facebook page.

WhatsApp began testing the feature with a small group of users last month. The company devised a system to enable WhatsApp users on Android and iOS to lock their chat backups with encryption keys. WhatsApp says it will offer users two ways to encrypt their cloud backups.

Users on WhatsApp will see an option to generate a 64-digit encryption key to protect their chat backups in the cloud. Users can store the encryption key offline or in a password manager of their choice, or they can create a password that backs up their encryption key in a cloud-based “backup key vault” that WhatsApp has developed. The cloud-stored encryption key can’t be used without the user’s password, which isn’t known to WhatsApp.
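WhatsApp hasn’t published the implementation details here, but the two options map onto a familiar cryptographic pattern: either the user holds a random key directly, or a password-derived key “wraps” the real backup key so the copy stored server-side is useless without the password. The Python sketch below is purely illustrative, not WhatsApp’s actual code; every function name is hypothetical, and it omits the server-side rate-limiting of password guesses a real key vault would need.

```python
# Illustrative sketch only -- NOT WhatsApp's implementation.
# All function names are hypothetical.
import secrets
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def generate_backup_key() -> bytes:
    """Option 1: a random 64-digit key the user stores offline themselves."""
    digits = "".join(secrets.choice("0123456789") for _ in range(64))
    # Hash the digit string down to a 256-bit AES key.
    digest = hashes.Hash(hashes.SHA256())
    digest.update(digits.encode())
    return digest.finalize()

def encrypt_backup(backup_key: bytes, chat_history: bytes) -> bytes:
    """Encrypt the backup blob before it is uploaded to iCloud/Google Drive."""
    nonce = secrets.token_bytes(12)
    return nonce + AESGCM(backup_key).encrypt(nonce, chat_history, None)

def wrap_key_with_password(backup_key: bytes, password: str, salt: bytes) -> bytes:
    """Option 2: derive a wrapping key from the user's password and encrypt
    the backup key with it, so only this wrapped blob sits in the server-side
    "backup key vault". Without the password, the stored blob is useless,
    which is why WhatsApp itself can't read the backup."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    wrapping_key = kdf.derive(password.encode())
    nonce = secrets.token_bytes(12)
    return nonce + AESGCM(wrapping_key).encrypt(nonce, backup_key, None)
```

Restoring a backup simply reverses the process: unwrap the stored key with the password (or accept the 64-digit key directly), then decrypt the downloaded blob on the device.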

“While end-to-end encrypted messages you send and receive are stored on your device, many people also want a way to back up their chats in case they lose their phone,” the company wrote in a blog post.

As we wrote last month, the move to introduce this additional layer of privacy is significant and one that can have far-reaching implications.

Thoughts, governments?

End-to-end encryption remains a thorny topic of discussion as governments across the globe continue to lobby for backdoors. Apple was pressured to not add encryption to iCloud Backups after the FBI complained, according to Reuters, and while Google has offered users the ability to encrypt their data stored in Google Drive, the company reportedly didn’t tell governments before it rolled out the feature.

India, WhatsApp’s biggest market by users, has introduced a new law that requires the company to devise a way to make “traceability” of questionable messages possible. WhatsApp has sued the Indian government over this new mandate, and said such a requirement effectively mandates “a new form of mass surveillance.”

The UK government — which isn’t exactly a fan of encryption — recently asked messaging apps to not use end-to-end encryption for kids’ accounts. Elsewhere in the world, Australia passed controversial laws three years ago that are designed to force tech companies to provide police and security agencies access to encrypted chats.

WhatsApp declined to discuss whether it had consulted lawmakers or government agencies about the new feature.

Privacy-focused organizations, including the Electronic Frontier Foundation, have lauded WhatsApp’s move.

“This privacy win from Facebook-owned WhatsApp is striking in its contrast to Apple, which has been under fire recently for its plans for on-device scanning of photos that minors send on Messages, as well as of every photo that any Apple user uploads to iCloud. While Apple has paused to consider more feedback on its plans, there’s still no sign that they will include fixing one of its longstanding privacy pitfalls: no effective encryption across iCloud backups,” the organization wrote.

“WhatsApp is raising the bar, and Apple and others should follow suit.”

Telegram says it added 70M users during day of Facebook and WhatsApp outage

Facebook’s hours-long outage on Monday may have hurt the company, its founder, shareholders, and many businesses that rely on the social juggernaut’s services. But for its instant messaging rivals, it was a very good day.

Telegram founder and chief executive Pavel Durov said on Tuesday that his instant messaging app added a staggering 70 million users yesterday in what he described as a “record increase in user registration and activity” for the service.

“I am proud of how our team handled the unprecedented growth because Telegram continued to work flawlessly for the vast majority of our users,” wrote Durov on his Telegram channel. But the day wasn’t so flawless.

“That said, some users in the Americas may have experienced slower speed than usual as millions of users from these continents rushed to sign up for Telegram at the same time,” he added.

Telegram, which recently topped 1 billion downloads, had 500 million monthly active users as of early this year.

Signal, which competes with both Telegram and WhatsApp, also added new users. It said yesterday in a tweet that “millions of new users” had joined the app.

This isn’t the first time Telegram and Signal have gained at the expense of their chief rival. The two added millions of users earlier this year as well when WhatsApp was struggling to explain exactly what its new privacy policy entailed.

“The smallest of events helped trigger the largest of outcomes,” said Brian Acton, the executive chairman of Signal’s holding company, of WhatsApp’s debacle earlier this year, in an interview with TechCrunch.

The headline was updated for clarity.

Signal, the encrypted messaging app, is currently down for many users

Signal is down for many users right now. Its status website says the encrypted messaging app is “experiencing technical difficulties” and many people are getting an in-app error message that says the same thing. The company says it is “working hard to restore service as quickly as possible.” TechCrunch has contacted Signal for comment.

Signal’s in-app error message

According to Downdetector.com, users started reporting outages around 11:05 PM Eastern Standard Time this evening, and it appears to be affecting people around the world.

In January, Signal experienced a surge in downloads on the App Store and Google Play after WhatsApp changed its data-sharing policy.

Over the past few months, Signal has continued to build out its feature set, adding a default timer for disappearing messages that automatically applies the settings to all new conversations.