It’s not rocket science: why Elon Musk’s Twitter takeover could be bad for privacy

Elon Musk has put an end to weeks of speculation with the announcement that Twitter has accepted his offer to buy the platform for $54.20 per share, valuing the social media platform at about $44 billion.

While Musk’s drawn-out pursuit of Twitter has come to an end, for him at least, the next chapter for Twitter and its hundreds of millions of users is just beginning.

The deal drew immediate fears that Musk, a self-styled “free speech absolutist,” could turn back the dials on content moderation, potentially unraveling years of work that curbed the unfettered spread of hate speech and misinformation. But experts have been just as quick to warn of the potential privacy implications of the $44 billion buyout to take Twitter private, at a time that even employees are unclear about the company’s future.

Per Musk’s short 78-word statement, one of his proposed plans for Twitter that is raising eyebrows in the industry is open-sourcing the platform’s algorithmic code to make it publicly available. Musk claims this change — which Twitter has been mulling for some time — will help to boost trust in the platform, which has for years faced an onslaught of false news and security incidents, including one that saw hackers hijack high-profile Twitter profiles — including Musk’s — to promote a cryptocurrency scam.

But cybersecurity experts fear that Musk’s open source vision for Twitter could make the platform more susceptible to attackers.

“The decision to open source this code likely means that it will be adopted by other social platforms, advertisers, and others who are looking to hone their user targeting,” Jamie Moles, senior technical manager at security firm ExtraHop, told TechCrunch. “Of course, as with any widely adopted open source code, there are significant security implications. As we’ve seen with Log4Shell and Spring4Shell, vulnerabilities in widely used open source applications are exponentially more valuable. Making its code open source may increase transparency for Twitter users, but it may also make Twitter a much bigger target for attackers.”

If done properly, Moles added, Musk’s plan to wage war on so-called spam bots, which have been used to spread malware and propagate political ideologies, could generate “new techniques that improve the detection and identification of spam emails, spam posts, and other malicious intrusion attempts. It may well be a boon to security practitioners everywhere.”

Professor Eerke Boiten, head of the school of computer science and informatics at De Montfort University in the U.K., warned that open sourcing Twitter’s algorithm could lead to malicious actors “gaming” the algorithm, which could see people treated differently based on their personal characteristics.

“Think, for example, of external manipulation of the targeted advertising aspects of Twitter, which is an area of concern for privacy even before it gets gamed,” said Boiten. “It would then also accelerate the arms race of new ways of gaming and finding countermeasures.”

Musk’s short statement left much to the imagination. He did not say what his plans were for “authenticating all humans.” Some read it as a plan to extend Twitter’s existing user verification system, or to introduce a real-name policy that would require users to provide documented evidence of their legal name. The digital rights group the Electronic Frontier Foundation voiced concerns about the impact real-name policies have on the human rights value of pseudonymous speech, and warned that Musk may not have considered the ramifications that a lack of anonymity can have on certain groups of people.

“Pseudonymity and anonymity are essential to protecting users who may have opinions, identities, or interests that do not align with those in power,” the EFF said in a blog post. “For example, policies that require real names on Facebook have been used to push out Native Americans; people using traditional Irish, Indonesian, and Scottish names; Catholic clergy; transgender people; drag queens; and sex workers. Political dissidents may be in grave danger if those in power are able to discover their true identities.”

The EFF also voiced concern about the continued lack of end-to-end encryption for Twitter direct messages: “Fears that a new owner of the platform would be able to read those messages are not unfounded,” the EFF added.

Boiten, too, believes a crackdown on pseudonymity would be the most concerning aspect of Musk’s takeover. “Anonymity is in many contexts a prerequisite for privacy. Once Twitter is known to have authenticated its users, oppressive governments can demand the authenticating information from them, endangering a lot of current subversive use in such countries,” he said. “I wonder how many anonymous Twitter accounts are currently run by Tesla employees — Elon Musk plays scrupulously by his own rules — so potential Tesla whistleblowers or unionizers wouldn’t be safe to get themselves authenticated on Twitter.”

In a tweet on Tuesday, Sen. Mark Warner, chair of the Senate Intelligence Committee, said Twitter has been “more forward-leaning than many of its competitors in its effort to tackle false, deceptive and manipulated content,” and though he said the company has “significant room for improvement,” Warner said he hopes that Musk will “work in good faith to keep these necessary reforms in place and prevent a backslide that is harmful to democracy.”

For now, Musk’s takeover bid for Twitter remains subject to shareholder and regulatory approval.

Web3’s early promise for artists tainted by rampant stolen works and likenesses

Jillian C. York didn’t want to be a non-fungible token.

A Berlin-based author and activist, York is also the Director for International Freedom of Expression at the Electronic Frontier Foundation. For some reason – York doesn’t agree with her inclusion there – her name also appears on a list of so-called cypherpunks on Wikipedia. Cypherpunks advocate for security, encryption, privacy – three things York supports but has never made her main focus.

“Of course, I can’t edit myself off that list and I don’t identify as a cypherpunk, despite the fact that I’ve advocated for cryptography,” she said. Because she respects Wikipedia’s editing rules, York was technically forced into a group she didn’t want to join.

On Christmas Eve 2021, however, York and a number of security advocates and cypherpunks on that list appeared as NFTs on the token market OpenSea. The tokens included artist renditions of each of the cypherpunks and York’s card featured her signature buzzcut peeking out from what looked like a background of circuits and fingerprints. She was now part of another group she didn’t want to join: those whose art or work had been stolen to make NFTs. She was outraged. First, the photo the creators used was copyright-protected and not actually her property.

Second, they spelled her name wrong.

The card, which was based on a photograph taken by a professional photographer, featured the name Jillion York. Furthermore, alongside York and her colleagues, the NFT collection featured outcasts in the security space like Richard Stallman and Jacob Appelbaum. York and several other people depicted in the cards wanted nothing to do with them.

“I don’t approve of this whatsoever and would like it removed,” tweeted York on December 26. Many other supporters and victims popped up with similar comments. A back-and-forth with OpenSea and the NFT creator, a company called ItsBlockchain, eventually led to the removal of all of the NFTs.

Many saw the irony in having to visit a central location to destroy a decentralized asset.

“Pretty absurd, and distressing, that in the new realm of Web3 digital property rights people can have their identities tokenized, without their consent, and sold as tradable commodities for the profit of others,” wrote Jacob Silverman, an editor for the New Republic.

York’s ordeal was over almost as soon as it began. The creator of the NFTs, Hitesh Malviya, contacted York and others and agreed to take down the images. In a few days, they were gone, replaced by a Medium post in which Malviya wrote that his team wanted to “educate the young community in crypto about Cypher Punks and how significant they were to this date to the evolution of blockchain technology.”

“Unfortunately, many Cypher Punks were against this idea and didn’t want to participate in any way,” he wrote. “So we apologize to each and every Cypher Punk for not taking consent and creating your NFTs.”

Malviya was testy when I asked him about the NFTs and why he thought he could use private photos and information – essentially someone’s art – for this money-making venture.

“We were not aware of the likeness laws in NFTs as the market is not regulated,” he said in a direct message. “And we spent three months of resources and time to create an educational series and this NFT collection. We learnt our lessons. I hope you got your answers. No more comments.”

York’s situation and the resulting tumult of commentary are part of a growing and confusing corner of Web3: when everything is permissionless, when do you need to get permission to use someone’s face, art, or data? And, more importantly, what’s to stop bad actors from turning everything, from your t-shirt design to even your naked body, into an NFT?

Unfortunately, York’s situation is not new, and cases like hers are creating an entirely new industry and toolchain aimed at protecting artists from get-rich-quick NFT minters.

Another wholesale NFT heist happened in April 2021, when the work of artist Qing Han, aka Qinni, was stolen and reposted on the same platform, OpenSea. Qinni, beloved by fans for her artistic takes on health and chronic illness, died of cancer in February 2020. After her death, her brother and fellow artist, Ze Han, maintained her social media accounts and posted her work.

A year later, thieves posted Qinni’s work anonymously. After fan outcry, the art was taken down from various NFT sites, including OpenSea, and, as of this writing, all of it has been ostensibly removed from the blockchain. Her brother refuses to take part in NFTs after the theft.

“A reminder to report any of Qinni’s artwork being sold without authorization,” wrote Ze Han on Twitter. “There are no legitimate avenues where Qinni’s art is being sold (this may change in the future).”

This case forced many creators to become educated in NFTs. Developers created a number of tools that help creators, many of whom have no interest in cryptocurrency at all, find their stolen art, while Twitter feeds popped up to highlight the thefts.

One major player in the online art-sharing community, DeviantArt, is familiar with wholesale art theft.

“We host over half a billion pieces of art on the platform,” said Liat Karpel Gurwicz, DeviantArt’s CMO. “Over the years we’ve dealt with theft and it’s nothing new. It’s something that we’ve always dealt with being an online art community, even prior to there being actual regulation around it.”

Most recently, the company created a bot that searches for user art on the blockchain. The bot compares art listed on popular NFT sites like OpenSea with images uploaded by registered users. Using machine learning, the bot finds art that looks similar to art already posted on DA’s servers. It streamlines the takedown process as well, showing artists how to contact OpenSea and other providers.
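DeviantArt hasn’t published the internals of its matching system, but conceptually the comparison step resembles perceptual hashing: reduce each image to a compact fingerprint and flag marketplace listings whose fingerprints sit within a small distance of a registered artist’s work. Here is a minimal sketch of that idea using the open-source Pillow and imagehash libraries; the threshold and file paths are illustrative, not DeviantArt’s.

```python
# Minimal sketch of near-duplicate image detection with perceptual hashes.
# This is one common approach, not DeviantArt's published implementation.
from PIL import Image
import imagehash

HAMMING_THRESHOLD = 8  # hashes within this distance are treated as near-identical

def build_index(paths):
    """Hash every image in the catalog of registered users' art."""
    return {path: imagehash.phash(Image.open(path)) for path in paths}

def find_matches(candidate_path, index, threshold=HAMMING_THRESHOLD):
    """Compare a listing scraped from an NFT marketplace against the catalog."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    return [
        (path, candidate_hash - known_hash)   # '-' gives the Hamming distance
        for path, known_hash in index.items()
        if candidate_hash - known_hash <= threshold
    ]

# Example (hypothetical paths): flag catalog images a new listing resembles.
# index = build_index(["art/original_01.png", "art/original_02.png"])
# print(find_matches("scraped/marketplace_listing_123.png", index))
```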

DeviantArt COO Moti Levy said that the system doesn’t yet discriminate between art posted by legitimate owners and hijackers.

“If we find something that is a near-identical match, we will update our users,” he said. “In some cases, it might be their NFT. We don’t know who minted it.”

The company is finding success with the tool. DeviantArt Protect has already found 80,000 possible infringement cases, with a 300% increase in notices sent between November and mid-December 2021. The company has also added anti-bot tools that keep NFT creators from scooping up whole collections of art as NFTs.

Ironically, the decentralized markets selling NFTs are starting to centralize around one or two providers. One of the most popular, OpenSea, has a full takedown team dedicated to situations like York’s or Qinni’s.

The company has taken off, reaching a stratospheric $13 billion valuation after a $300 million round in early January. The company is far and away the biggest player in the NFT market with an estimated 1.26 million active users and over 80 million NFTs. According to DappRadar, the platform took in $3.27 billion in transactions in the last thirty days and managed 2.33 million transactions. Its nearest competitor, Rarible, saw $14.92 million in transactions in the same period.

OpenSea has been open about its place in the ecosystem and claims that it is managing takedown requests by artists as quickly as it can.

“​​It is against our policy to sell NFTs that violate the publicity rights of others,” said an OpenSea spokesperson. “We regularly enforce this in multiple ways, including delisting and banning accounts when we are notified that usage of a likeness is not authorized.”

Interestingly, the company also seems to be cracking down on deep fakes or, as OpenSea calls it, non-consensual intimate imagery (NCII), a problem that hasn’t surfaced widely yet but could become pernicious for influencers and media stars.

“We have a zero-tolerance policy for NCII,” they said. “NFTs using NCII or similar images (including images doctored to look like someone that they are not) are prohibited, and we move quickly to ban accounts that post this material. We are actively expanding our efforts across customer support, trust and safety, and site integrity so we can move faster to protect and empower our community and creators.”

OpenSea’s efforts haven’t satisfied many artists, plenty of whom were already skeptical of NFTs before they saw their own and their colleagues’ work hijacked on the platform. Many users are still finding their art on OpenSea and, when they publicly complain, they are inundated with support scammers who purport to be official representatives of platforms like OpenSea.

Because of this mess, Levy at DeviantArt said the company is exploring NFTs but refuses to offer them yet. In fact, he thinks his users don’t want them.

“In the long term, we think that Web3 is interesting and has potential, but for us, it would have to be done in a better way and in a way that protected artists and empowered them, not in a way that puts them in danger.”

A bill to ban geofence and keyword search warrants in New York gains traction

A New York bill that would ban state law enforcement from obtaining residents’ private user data from tech giants through the use of controversial search warrants will get another chance, two years after it was first introduced.

The Reverse Location Search Prohibition Act was reintroduced to the New York Assembly and Senate last year by a group of Democratic lawmakers after the bill previously failed to pass. Last week, the bill was referred to committee, the first major hurdle before it can be considered for a floor vote.

The bill, if passed, would be the first state law in the U.S. to end the use of geofence warrants and keyword search warrants, which rely on asking technology companies to turn over data about users who were near the scene of a crime or searched for particular keywords at a specific point in time.

For geofence warrants — also known as “reverse location” warrants — law enforcement asks a judge to order Google, which collects and stores billions of location data points from its users’ phones and apps, to turn over records on whose phones were in a certain geographical radius at the time of a crime to help identify possible suspects. Geofence warrants are a uniquely Google problem; law enforcement knows to tap Google’s databases of location data, which the search giant uses to drive its ads business, last year netting the company close to $150 billion in revenue.
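Google doesn’t publish the mechanics of how it answers these warrants, but conceptually a geofence request boils down to filtering a store of location points by a radius and a time window. Below is a rough sketch of that filtering step; the data model and field names are hypothetical, not Google’s.

```python
# Rough, conceptual sketch of what a geofence query reduces to: filter stored
# location points to those inside a radius around a scene during a time window.
# The data model and field names here are hypothetical, not Google's.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class LocationPoint:
    device_id: str
    lat: float
    lon: float
    timestamp: datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def devices_in_geofence(points, center_lat, center_lon, radius_m, start, end):
    """Return the distinct devices seen inside the fence during the window."""
    return {
        p.device_id
        for p in points
        if start <= p.timestamp <= end
        and haversine_m(p.lat, p.lon, center_lat, center_lon) <= radius_m
    }
```

The sketch also shows why critics call these warrants overbroad: every device that happened to report a point inside the fence during the window comes back, whether or not its owner had anything to do with the crime.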

It’s a similar process for Google searches; law enforcement asks a judge for a warrant to demand that Google turns over who searched for certain keywords during a particular window of time. Although less publicly known, keyword search warrants are in growing use and aren’t limited to just Google; Microsoft and Yahoo (which owns TechCrunch) have also been tapped for user data using this kind of legal process.

The use of these warrants has been called “fishing expeditions” by internet rights groups like the Electronic Frontier Foundation, which recently lent its support to the New York bill, along with the ACLU. Critics say these kinds of warrants are unconstitutionally broad and invasive because they invariably collect data on nearby innocent people with no connection to the crime.

TechCrunch reported last year that Minneapolis police used a geofence warrant to identify protesters accused of sparking violence in the wake of the police killing of George Floyd in 2020. Similar encounters reported by NBC News and The Guardian revealed how entirely innocent people have been tacitly accused of criminality simply for being close to the scene of the crime.

According to data published by Google, geofence warrants make up about one-quarter of all U.S. legal demands it receives. Since Google became widely known among law enforcement as a source for connecting location data and search terms to real-world suspects, Google processed more than 11,500 geofence warrants in 2020, up from less than a thousand in 2018 when the practice was still in its relative infancy.

New York state accounted for about 2-3 percent of all geofence warrants, amounting to hundreds of warrants in total.

Zellnor Myrie, a New York state senator who represents central Brooklyn and sponsored the senate bill, told TechCrunch: “In dense, urban communities like the ones I represent in Brooklyn, hundreds or thousands of innocent people who merely live or walk near a crime scene could be ensnared by a geofence warrant that would turn over their private location data. And keyword search warrants would identify users who have searched for a specific term, name or location. Our bill would ban these types of warrants and protect New Yorkers’ privacy.”

EFF sues spyware maker DarkMatter for illegally hacking Saudi activist

The Electronic Frontier Foundation (EFF) has filed a lawsuit against spyware maker DarkMatter, along with three former members of U.S. intelligence or military agencies, for allegedly hacking the iPhone of a prominent Saudi human rights activist. 

The lawsuit was filed on behalf of Loujain al-Hathloul, who claims she was among the victims of an illegal hacking campaign orchestrated by DarkMatter and three former U.S. intelligence officers hired by the UAE following the Arab Spring protests.

The former NSA operatives — named in the lawsuit as ExpressVPN CIO Daniel Gerike, Marc Baier and Ryan Adams — were part of the Project Raven hacking program, an effort by the UAE to spy on human rights activists, politicians, journalists and dissidents opposed to the government during the Arab Spring protests.

Back in September, the three former spies agreed to pay a cumulative $1.7 million after admitting to violations of the Computer Fraud and Abuse Act (CFAA) and prohibitions on selling sensitive military technology under a non-prosecution agreement with the U.S. Justice Department. They are also permanently banned from any jobs involving computer network exploitation, working for certain UAE organizations, exporting defense articles or providing defense services.

Al-Hathloul — best known for her efforts in calling for greater women’s rights in Saudi Arabia — claims the ex-spies exploited a vulnerability in iMessage to illegally hack into her iPhone in order to secretly monitor her communications and location. This, she claims, led to her “arbitrary arrest by the UAE’s security services and rendition to Saudi Arabia, where she was detained, imprisoned and tortured.”

The lawsuit alleges Gerike, Baier and Adams purchased malicious code from a U.S. company and intentionally directed the code to Apple servers in the U.S. to reach and place malicious software on al-Hathloul’s iPhone in violation of the CFAA. It also alleges that they aided and abetted crimes against humanity, because the hacking of al-Hathloul’s phone was part of the UAE’s widespread and systematic attack against human rights defenders and activists.

The EFF, which filed the lawsuit alongside law firms Foley Hoag LLP and Boise Matthews LLP, says this is a “clear-cut” case of device hacking, whereby “DarkMatter operatives broke into al-Hathloul’s iPhone without her knowledge to insert malware, with horrific consequences.”

“Project Raven went beyond even the behaviour that we have seen from NSO Group, which has been caught repeatedly having sold software to authoritarian governments who use their tools to spy on journalists, activists and dissidents,” said Eva Galperin, cybersecurity director at EFF. “DarkMatter didn’t merely provide the tools; they oversaw the surveillance program themselves.”

In a statement, al-Hathloul said:

No government or individual should tolerate the misuse of spy malware to deter human rights or endanger the voice of the human conscience. This is why I have chosen to stand up for our collective right to remain safe online and limit government-backed cyber abuses of power.

I continue to realize my privilege to possibly act upon my beliefs. I hope this case inspires others to confront all sorts of cybercrimes while creating a safer space for all of us to grow, share and learn from one another without the threat of power abuses.

Sharing mobility data without compromising privacy

In recent years, rows of electric scooters and bikes lining sidewalks have become a common sight in cities around the United States.

The size of the e-scooter market alone is expected to surpass $40 billion by 2025, and Americans have taken more than 342 million trips on shared bikes and e-scooters since 2010.

Micromobility services generate a massive amount of mobility data, including potentially sensitive precise location data about users. Data from mobility services can provide valuable and timely insights to guide transportation and infrastructure policy, but the sharing of sensitive mobility data — between companies or with government agencies — can only be justified if issues of privacy and public trust are first addressed.

Innovative mobility options are providing cities with opportunities to solve the last-mile transportation problem, and the data from these services has a range of productive uses.

It can help city planners design transportation improvements, such as protected bike lanes, to keep users safe. Access to mobility data gives community advocates and government officials the ability to know in nearly real time how many mobility devices are in a certain area so cities can enforce limits to ensure neighborhoods are not overcrowded or underserved. This data can also streamline communications between companies and city governments, making it easier for mobility services to quickly adapt to events and emergencies in cities.

However, there are valid privacy concerns over the granularity and quantity of data that digitally enabled mobility services are able to collect and request to share with governments.

For example, a recent lawsuit filed against the Los Angeles Department of Transportation and the City of Los Angeles alleges that the city’s collection of e-scooter trip data through the Mobility Data Specification violates the Fourth Amendment to the U.S. Constitution and the California Electronic Communications Privacy Act. A lower court dismissed the lawsuit and the Electronic Frontier Foundation and the ACLU of Northern and Southern California recently asked a federal appeals court to revive it.

Additionally, a bill recently introduced in the California Legislature would require specific conditions to be met before mobility data is shared with public agencies or contractors. Under this bill, data could only be shared to assist transportation planning or protect the safety of users. The bill also requires that any trip data must be more than 24 hours old before being shared.

Near-real-time location data is often required to fulfill valid safety and regulatory enforcement purposes, but this data is very sensitive because it could reveal intimate aspects of an individual’s life. Patterns in location data could indicate personal habits, interpersonal relationships or religious practices.

While it is possible in some cases to “de-identify” location data tied to a specific individual or device, it is incredibly difficult to make any data set of precise location history truly anonymous. Even highly aggregated location data about the movement patterns of large groups of people can unintentionally reveal sensitive information.

In 2017, a “global heat map” of user movement in the Strava fitness app inadvertently revealed the location of deployed military personnel in classified locations. Location data, even when de-identified or aggregated, should be subject to checks and controls to ensure the data remains protected and private.
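One of the simplest such checks is suppressing any aggregate derived from too few people, so that a sparsely populated “heat map” cell, like a lone running route around a remote base, never gets published. Here is a minimal sketch of that idea; the minimum-user threshold is illustrative, not a regulatory standard.

```python
# Minimal sketch of one basic control on aggregated location data: drop any
# map cell whose count comes from fewer than K distinct users before
# publishing, so sparse cells can't single people out. K is illustrative.
from collections import defaultdict

K_MIN_USERS = 10

def aggregate_trips(trip_points):
    """trip_points: iterable of (user_id, grid_cell) pairs from trip records."""
    users_per_cell = defaultdict(set)
    for user_id, cell in trip_points:
        users_per_cell[cell].add(user_id)
    # Publish a count only for cells that meet the minimum-user threshold.
    return {
        cell: len(users)
        for cell, users in users_per_cell.items()
        if len(users) >= K_MIN_USERS
    }
```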

Local governments and mobility companies are taking these issues of user privacy seriously. Over the past few months, the Future of Privacy Forum has worked with SAE’s Mobility Data Collaborative and public and private stakeholders to create a transportation-tailored privacy assessment tool that focuses on considerations for organizations that want to share mobility data in a privacy-sensitive manner.

The Mobility Data Sharing Assessment (MDSA) provides organizations in both the public and private sectors with operational guidance to conduct thoughtful, in-depth legal and privacy reviews of their data-sharing processes. Organizations that use this tool for sharing mobility data will be able to embed privacy and equity considerations into the design of mobility data-sharing agreements.

The goal of the MDSA is to enable responsible data sharing that protects individual privacy, respects community interests and equities, and encourages transparency to the public. By equipping organizations with an open-source, interoperable, customizable and voluntary framework that includes guidance, the barriers to sharing mobility data will be reduced.

This is the first version of the MDSA tool; it focuses specifically on ground-based mobility devices and location data. Some mobility vehicles like e-scooters now come equipped with on-board cameras, so in the future, the MDSA may be expanded to add guidance about images and video collected by mobility devices.

The MDSA tool is open source and customizable, so organizations sharing this type of mobility data can edit it to consider the risks and benefits of sharing sensor or camera data that includes images.

Micromobility services can play a key role in improving access to jobs, food and health care. However, there are multiple factors for companies and government agencies to consider before sharing mobility data with other organizations, including the precision, immediacy and type of data shared. Organizations must assess these factors in a thoughtful, structured manner that considers any potential sources of bias.

That’s the key to using mobility data to maximize the benefits of services in the short term and build their infrastructure in the long term, allowing people to move about cities more safely and quickly.

Apple’s dangerous path

Hello friends, and welcome back to Week in Review.

Last week, we dove into the truly bizarre machinations of the NFT market. This week, we’re talking about something that’s a little bit more impactful on the current state of the web — Apple’s NeuralHash kerfuffle.

If you’re reading this on the TechCrunch site, you can get this in your inbox from the newsletter page, and follow my tweets @lucasmtny


the big thing

In the past month, Apple did something it generally has done an exceptional job avoiding — the company made what seemed to be an entirely unforced error.

In early August — seemingly out of nowhere** — the company announced that by the end of the year it would roll out a technology called NeuralHash that actively scans the libraries of all iCloud Photos users, seeking out image hashes that match known images of child sexual abuse material (CSAM). For obvious reasons, the on-device scanning could not be opted out of.

The announcement was not coordinated with other major consumer tech giants; Apple pushed forward alone.

Researchers and advocacy groups had almost universally negative feedback for the effort, raising concerns that it could create new abuse channels for actors like governments to detect on-device information they regard as objectionable. As my colleague Zack noted in a recent story, “The Electronic Frontier Foundation said this week it had amassed more than 25,000 signatures from consumers. On top of that, close to 100 policy and rights groups, including the American Civil Liberties Union, also called on Apple to abandon plans to roll out the technology.”

(The announcement also reportedly generated some controversy inside of Apple.)

The issue — of course — wasn’t that Apple was looking to find ways to prevent the proliferation of CSAM while making as few device security concessions as possible. The issue was that Apple was unilaterally making a massive choice that would affect billions of customers (while likely pushing competitors toward similar solutions), and was doing so without external public input about possible ramifications or necessary safeguards.

Long story short, over the past month researchers discovered Apple’s NeuralHash wasn’t as airtight as hoped, and the company announced Friday that it was delaying the rollout “to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

Having spent several years in the tech media, I will say that the only reason to release news on a Friday morning ahead of a long weekend is to ensure that the announcement is read and seen by as few people as possible, and it’s clear why they’d want that. It’s a major embarrassment for Apple, and as with any delayed rollout like this, it’s a sign that their internal teams weren’t adequately prepared and lacked the ideological diversity to gauge the scope of the issue they were tackling. This isn’t really a dig at Apple’s team building this so much as it’s a dig at Apple trying to solve a problem like this inside the Apple Park vacuum while adhering to its annual iOS release schedule.


Apple is increasingly looking to make privacy a key selling point for the iOS ecosystem, and as a result of this productization, has pushed development of privacy-centric features towards the same secrecy its surface-level design changes command. In June, Apple announced iCloud+ and raised some eyebrows when they shared that certain new privacy-centric features would only be available to iPhone users who paid for additional subscription services.

You obviously can’t tap public opinion for every product update, but perhaps wide-ranging and trail-blazing security and privacy features should be treated a bit differently than the average product update. Apple’s lack of engagement with research and advocacy groups on NeuralHash was pretty egregious and certainly raises some questions about whether the company fully respects how the choices they make for iOS affect the broader internet.

Delaying the feature’s rollout is a good thing, but let’s all hope they take that time to reflect more broadly as well.

** Though the announcement was a surprise to many, Apple’s development of this feature wasn’t coming completely out of nowhere. Those at the top of Apple likely felt that the winds of global tech regulation might be shifting towards outright bans of some methods of encryption in some of its biggest markets.

Back in October of 2020, then United States AG Bill Barr joined representatives from the UK, New Zealand, Australia, Canada, India and Japan in signing a letter raising major concerns about how implementations of encryption tech posed “significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children.” The letter effectively called on tech industry companies to get creative in how they tackled this problem.


other things

Here are the TechCrunch news stories that especially caught my eye this week:

LinkedIn kills Stories
You may be shocked to hear that LinkedIn even had a Stories-like product on their platform, but if you did already know that they were testing Stories, you likely won’t be so surprised to hear that the test didn’t pan out too well. The company announced this week that they’ll be suspending the feature at the end of the month. RIP.

FAA grounds Virgin Galactic over questions about Branson flight
While all appeared to go swimmingly for Richard Branson’s trip to space last month, the FAA has some questions regarding why the flight seemed to unexpectedly veer so far off the cleared route. The FAA is preventing the company from further launches until they find out what the deal is.

Apple buys a classical music streaming service
While Spotify makes news every month or two for spending a massive amount acquiring a popular podcast, Apple seems to have eyes on a different market for Apple Music, announcing this week that they’re bringing the classical music streaming service Primephonic onto the Apple Music team.

TikTok parent company buys a VR startup
It isn’t a huge secret that ByteDance and Facebook have been trying to copy each other’s success at times, but many probably weren’t expecting TikTok’s parent company to wander into the virtual reality game. The Chinese company bought the startup Pico, which makes consumer VR headsets for China and enterprise VR products for North American customers.

Twitter tests an anti-abuse ‘Safety Mode’
The same features that make Twitter an incredibly cool product for some users can also make the experience awful for others, a realization that Twitter has seemingly been very slow to make. Its latest solution is more individual user controls, which Twitter is testing out with a new “Safety Mode” that pairs algorithmic intelligence with new user inputs.


extra things

Some of my favorite reads from our Extra Crunch subscription service this week:

Our favorite startups from YC’s Demo Day, Part 1 
“Y Combinator kicked off its fourth-ever virtual Demo Day today, revealing the first half of its nearly 400-company batch. The presentation, YC’s biggest yet, offers a snapshot into where innovation is heading, from not-so-simple seaweed to a Clearco for creators….”

…Part 2
“…Yesterday, the TechCrunch team covered the first half of this batch, as well as the startups with one-minute pitches that stood out to us. We even podcasted about it! Today, we’re doing it all over again. Here’s our full list of all startups that presented on the record today, and below, you’ll find our votes for the best Y Combinator pitches of Day Two. The ones that, as people who sift through a few hundred pitches a day, made us go ‘oh wait, what’s this?’”

All the reasons why you should launch a credit card
“… if your company somehow hasn’t yet found its way to launch a debit or credit card, we have good news: It’s easier than ever to do so and there’s actual money to be made. Just know that if you do, you’ve got plenty of competition and that actual customer usage will probably depend on how sticky your service is and how valuable the rewards are that you offer to your most active users….”


Thanks for reading, and again, if you’re reading this on the TechCrunch site, you can get this in your inbox from the newsletter page, and follow my tweets @lucasmtny

Lucas Matney

Apple delays plans to roll out CSAM detection in iOS 15

Apple has delayed plans to roll out its child sexual abuse material (CSAM) detection technology that it chaotically announced last month, citing feedback from customers and policy groups.

That feedback, if you recall, has been largely negative. The Electronic Frontier Foundation said this week it had amassed more than 25,000 signatures from consumers. On top of that, close to 100 policy and rights groups, including the American Civil Liberties Union, also called on Apple to abandon plans to roll out the technology.

In a statement on Friday morning, Apple told TechCrunch:

“Last month we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material. Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

Apple’s so-called NeuralHash technology is designed to identify known CSAM on a user’s device without Apple having to possess the image or know its contents. Rather than scanning photos after they reach Apple’s servers, NeuralHash scans for known CSAM on the user’s device, which Apple claims is more privacy-friendly than the blanket scanning that cloud providers currently use.
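Apple’s full design layers cryptography (private set intersection and a match threshold) on top of the hash comparison, but the core idea reduces to checking each photo’s perceptual hash against a database of known hashes. Here is a simplified, illustrative sketch; the `perceptual_hash` function stands in for NeuralHash and is assumed, not Apple’s API.

```python
# Simplified, illustrative sketch of the core matching idea behind on-device
# CSAM detection: hash every photo and check it against a set of hashes of
# known abuse imagery. Apple's real system uses its own NeuralHash function
# plus cryptographic protections (private set intersection, threshold secret
# sharing); 'perceptual_hash' below is a stand-in, not Apple's API.

KNOWN_HASHES: set = set()   # hashes supplied by child-safety organizations
MATCH_THRESHOLD = 30        # Apple has described a threshold of roughly 30 matches

def count_matches(photos, perceptual_hash):
    """Count how many photos in the library hash-match the known database."""
    return sum(1 for photo in photos if perceptual_hash(photo) in KNOWN_HASHES)

def should_flag(photos, perceptual_hash):
    # Nothing is surfaced for human review until the account crosses the
    # threshold, which is meant to blunt the impact of stray hash collisions.
    return count_matches(photos, perceptual_hash) >= MATCH_THRESHOLD
```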

But security experts and privacy advocates have expressed concern that the system could be abused by highly resourced actors, like governments, to implicate innocent victims or to manipulate the system to detect other materials that authoritarian nation states find objectionable.

Within a few weeks of announcing the technology, researchers said they were able to create “hash collisions” using NeuralHash, effectively tricking the system into thinking two entirely different images were the same.

iOS 15 is expected out in the next few weeks.


Crypto community slams ‘disastrous’ new amendment to Biden’s big infrastructure bill

Biden’s major bipartisan infrastructure plan struck a rare chord of cooperation between Republicans and Democrats, but changes it proposes to cryptocurrency regulation are tripping up the bill.

The administration intends to pay for $28 billion of its planned infrastructure spending by tightening tax compliance within the historically under-regulated arena of digital currency. That’s why cryptocurrency is popping up in a bill that’s mostly about rebuilding bridges and roads.

The legislation’s vocal critics argue that the bill’s effort to do so is slapdash, particularly a bit that would declare anyone “responsible for and regularly providing any service effectuating transfers of digital assets” to be a broker, subject to tax reporting requirements.

While that definition might be more straightforward in a traditional corner of finance, it could force cryptocurrency developers, companies and even anyone mining digital currencies to somehow collect and report information on users, something that by design isn’t even possible in a decentralized financial system.

Now, a new amendment to the critical spending package is threatening to make matters even worse.

Unintended consequences

In a joint letter about the bill’s text, Square, Coinbase, Ribbit Capital and other stakeholders warned of “financial surveillance” and unintended impacts for cryptocurrency miners and developers. The Electronic Frontier Foundation and Fight for the Future, two privacy-minded digital rights organizations, also slammed the bill.

Following the outcry from the cryptocurrency community, a pair of influential senators proposed an amendment to clarify the new reporting rules. Finance Committee Chairman Ron Wyden (D-OR) pushed back against the bill, proposing an amendment with fellow finance committee member Pat Toomey (R-PA) that would modify the bill’s language.

The amendment would establish that the new reporting “does not apply to individuals developing block chain technology and wallets,” removing some of the bill’s ambiguity on the issue.

“By clarifying the definition of broker, our amendment will ensure non-financial intermediaries like miners, network validators, and other service providers—many of whom don’t even have the personal-identifying information needed to file a 1099 with the IRS—are not subject to the reporting requirements specified in the bipartisan infrastructure package,” Toomey said.

Wyoming Senator Cynthia Lummis also threw her support behind the Toomey and Wyden amendment, as did Colorado Governor Jared Polis.

“Picking winners and losers”

The drama doesn’t stop there. With negotiations around the bill ongoing — the text could be finalized over the weekend — a pair of senators proposed a competing amendment that isn’t winning any fans in the crypto community.

That amendment, from Sen. Rob Portman (R-OH) and Sen. Mark Warner (D-VA), would exempt traditional cryptocurrency miners who participate in energy-intensive “proof of work” systems from new financial reporting requirements, while keeping those rules in place for those using a “proof of stake” system. Portman worked with the Treasury Department to author the cryptocurrency portion of the original infrastructure bill.

Rather than requiring an investment in computing hardware (and energy bills) capable of solving increasingly complex math problems, proof of stake systems rely on participants taking a financial stake in a given project, locking away some of the cryptocurrency to generate new coins.
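For a sense of what those math problems look like, proof of work amounts to a brute-force search for a hash value below a difficulty target. Here is a toy sketch of that loop, illustrative only and not any real network’s consensus code.

```python
# Toy illustration of proof of work: grind through nonces until the block's
# hash falls below a difficulty target. Real networks (e.g. Bitcoin) add far
# more structure, but this is the computation miners race to perform.
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20):
    target = 2 ** (256 - difficulty_bits)   # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest            # the nonce is the "proof" of work
        nonce += 1

# Proof of stake skips this loop entirely: validators are chosen in proportion
# to the coins they lock up, which is why it needs no energy-hungry hardware.
# nonce, digest = mine(b"example block header")
```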

Proof of stake is emerging as an attractive, climate-friendlier alternative that could reduce the need for heavy computing and huge amounts of energy required for proof of work mining. That makes it all the more puzzling that the latest amendment would specifically let proof of work mining off the hook.

Some popular digital currencies like Cardano are already built on proof of stake. Ethereum, the second biggest cryptocurrency, is in the process of migrating from a proof of work system to proof of stake to help scale its system and reduce fees. Bitcoin is the most notable digital currency that relies on proof of work.

The Warner-Portman amendment is being touted as a “compromise” but it’s not really halfway between the Wyden-Toomey amendment and the existing bill — it just introduces new problems that many crypto advocates view as a fresh existential threat to their work.

Prominent members of the crypto community including Square founder and Bitcoin booster Jack Dorsey have thrown their support behind the Wyden-Lummis-Toomey amendment while slamming the second proposal as misguided and damaging.

The executive director of Coin Center, a crypto think tank, called the Warner-Portman amendment “disastrous.” Coinbase CEO Brian Armstrong echoed that language. “At the 11th hour @MarkWarner has proposed an amendment that would decide which foundational technologies are OK and which are not in crypto,” he tweeted. “… We could find ourselves with the Senate deciding which types of crypto will survive government regulation.”

Unfortunately for the crypto community — and the promise of the proof of stake model — the White House is apparently throwing its weight behind the Warner-Portman amendment, though that could change as eleventh hour negotiations continue.

US lawmakers want to restrict police use of ‘Stingray’ cell tower simulators

According to BuzzFeed News, Democratic Senator Ron Wyden and Representative Ted Lieu will introduce legislation later today that seeks to restrict police use of international mobile subscriber identity (IMSI) catchers. More commonly known as Stingrays, IMSI catchers and cell-site simulators are frequently used by police to collect information on suspects and intercept calls, SMS messages and other forms of communication. Law enforcement agencies in the US currently do not require a warrant to use the technology. The Cell-Site Simulator Act of 2021 seeks to change that.

IMSI catchers mimic cell towers to trick mobile phones into connecting with them. Once connected, they can collect data a device sends out, including its location and subscriber identity key. Cell-site simulators pose a two-fold problem.

The first is that they’re blunt surveillance instruments. When used in a populated area, IMSI catchers can collect data from bystanders. The second is that they can pose a safety risk to the public. While IMSI catchers act like a cell tower, they don’t function as one and can’t transfer calls to a public wireless network; they can therefore prevent a phone from connecting to 9-1-1. Despite the dangers they pose, their use is widespread. In 2018, the American Civil Liberties Union found at least 75 agencies in 27 states and the District of Columbia owned IMSI catchers.

In trying to address those concerns, the proposed legislation would make it so that law enforcement agencies would need to make a case before a judge on why they should be allowed to use the technology. They would also need to explain why other surveillance methods wouldn’t be as effective. Moreover, it seeks to ensure those agencies delete any data they collect from those not listed on a warrant.

Although the bill reportedly doesn’t lay out a time limit on IMSI catcher use, it does push agencies to use the devices for the least amount of time possible. It also details exceptions where police could use the technology without a warrant. For instance, it would leave the door open for law enforcement to use the devices in contexts like bomb threats where an IMSI catcher can prevent a remote detonation.

“Our bipartisan bill ends the secrecy and uncertainty around Stingrays and other cell-site simulators and replaces it with clear, transparent rules for when the government can use these invasive surveillance devices,” Senator Ron Wyden told BuzzFeed News.

The bill has support from some Republicans. Senator Steve Daines of Montana and Representative Tom McClintock of California are co-sponsoring the proposed legislation. Organizations like the Electronic Frontier Foundation and the Electronic Privacy Information Center have also endorsed the bill.

This article was originally published on Engadget.

 

Ring won’t say how many users had footage obtained by police

Ring gets a lot of criticism, not just for its massive surveillance network of home video doorbells and its problematic privacy and security practices, but also for giving that doorbell footage to law enforcement. While Ring is making moves towards transparency, the company refuses to disclose how many users had their data given to police.

The video doorbell maker, acquired by Amazon in 2018, has partnerships with at least 1,800 U.S. police departments (and growing) that can request camera footage from Ring doorbells. Prior to a change this week, any police department that Ring partnered with could privately request doorbell camera footage from Ring customers for an active investigation. Ring will now let its police partners publicly request video footage from users through its Neighbors app.

The change ostensibly gives Ring users more control when police can access their doorbell footage, but ignores privacy concerns that police can access users’ footage without a warrant.

Civil liberties advocates and lawmakers have long warned that police can obtain camera footage from Ring users through a legal back door because Ring’s sprawling network of doorbell cameras are owned by private users. Police can still serve Ring with a legal demand, such as a subpoena for basic user information, or a search warrant or court order for video content, assuming there is evidence of a crime.

Ring received over 1,800 legal demands during 2020, more than double the year earlier, according to a transparency report that Ring quietly published in January. Ring does not disclose sales figures but says it has “millions” of customers. But the report leaves out context that most transparency reports include: how many users or accounts had footage given to police when Ring was served with a legal demand?

When reached, Ring declined to say how many users had footage obtained by police.

The number of users or accounts subject to searches is not inherently secret; its absence is an obscure side effect of how companies decide — if at all — to disclose when the government demands user data. Though they are not obligated to, most tech companies publish transparency reports once or twice a year to show how often user data is obtained by the government.

Transparency reports were a way for companies subject to data requests to push back against damning allegations of intrusive bulk government surveillance by showing that only a fraction of a company’s users are subject to government demands.

But context is everything. Facebook, Apple, Microsoft, Google and Twitter all reveal how many legal demands they receive, but they also specify how many users or accounts had data turned over. In some cases, the number of users or accounts affected can be two to three times or more the number of demands received.

Ring’s parent, Amazon, is a rare exception among the big tech giants in that it does not break out the specific number of users whose information was turned over to law enforcement.

“Ring is ostensibly a security camera company that makes devices you can put on your own homes, but it is increasingly also a tool of the state to conduct criminal investigations and surveillance,” Matthew Guariglia, policy analyst at the Electronic Frontier Foundation, told TechCrunch.

Guariglia added that Ring could release not only the number of users subject to legal demands, but also how many users have previously responded to police requests through the app.

Ring users can opt out of receiving requests from police, but this option would not stop law enforcement from obtaining a legal order from a judge for your data. Users can also switch on end-to-end encryption to prevent anyone other than the user, including Ring, from accessing their videos.