A new senate bill would create a US data protection agency

Europe’s data protection laws are some of the strictest in the world, and have long been a thorn in the side of the data-guzzling Silicon Valley tech giants since they colonized vast swathes of the internet.

Two decades later, one Democratic senator wants to bring many of those concepts to the United States.

Sen. Kirsten Gillibrand (D-NY) has published a bill which, if passed, would create a U.S. federal data protection agency designed to protect the privacy of Americans, with the authority to enforce data practices across the country. The bill, which Gillibrand calls the Data Protection Act, would address a “growing data privacy crisis” in the U.S., the senator said.

The U.S. is one of only a few countries without a data protection law, putting it in the same company as Venezuela, Libya, Sudan and Syria. Gillibrand said the U.S. is “vastly behind” other countries on data protection.

Gillibrand said a new data protection agency would “create and meaningfully enforce” data protection and privacy rights federally.

“The data privacy space remains a complete and total Wild West, and that is a huge problem,” the senator said.

The bill comes at a time when tech companies are facing increased attention from state and federal regulators over their data and privacy practices. Last year saw Facebook settle a $5 billion privacy case with the Federal Trade Commission, a settlement critics decried for failing to bring civil charges or levy any meaningful consequences. Months later, Google settled a child privacy case for $170 million — about a day’s worth of the search giant’s revenue.

Gillibrand pointedly called out Google and Facebook for “making a whole lot of money” from their empires of data, she wrote in a Medium post. Americans “deserve to be in control of [their] own data,” she wrote.

At its heart, the bill would — if signed into law — allow the newly created agency to hear and adjudicate complaints from consumers and declare certain privacy-invading tactics unfair and deceptive. As the government’s “referee,” the agency would take point on federal data protection and privacy matters, such as launching investigations against companies accused of wrongdoing. Gillibrand’s bill specifically takes issue with “take-it-or-leave-it” provisions, notably websites that compel a user to “agree” to allowing cookies with no way to opt out. (TechCrunch’s parent company Verizon Media enforces a ‘consent required’ policy for European users under GDPR, though most Americans never see the prompt.)

Through its enforcement arm, the would-be federal agency would also have the power to bring civil action against companies and fine them for egregious breaches of the law up to $1 million a day, subject to a court’s approval.

The bill would transfer some authorities from the Federal Trade Commission to the new data protection agency.

Gillibrand’s bill lands just a month after California’s consumer privacy law took effect, more than a year after it was signed into law. The law extended much of Europe’s revised privacy laws, known as GDPR, to the state. But Gillibrand’s bill would not affect state laws like California’s, her office confirmed in an email.

Privacy groups and experts have already offered positive reviews.

Caitriona Fitzgerald, policy director at the Electronic Privacy Information Center, said the bill is a “bold, ambitious proposal.” Other groups, including Color of Change and Consumer Action, praised the effort to establish a federal data protection watchdog.

Michelle Richardson, director of the Privacy and Data Project at the Center for Democracy and Technology, reviewed a summary of the bill.

“The summary seems to leave a lot of discretion to executive branch regulators,” said Richardson. “Many of these policy decisions should be made by Congress and written clearly into statute.” She warned it could take years to know if the new regime has any meaningful impact on corporate behaviors.

Gillibrand’s bill stands alone — the senator is the only sponsor on the bill. But given the appetite of some lawmakers on both sides of the aisle to crash the Silicon Valley data party, it’s likely to pick up bipartisan support in no time.

Whether it makes it to the president’s desk without a fight from the tech giants remains to be seen.

Facebook Dating launch blocked in Europe after it fails to show privacy workings

Facebook has been left red-faced after being forced to call off the launch date of its dating service in Europe because it failed to give its lead EU data regulator enough advance warning — including failing to demonstrate it had performed a legally required assessment of privacy risks.

Late yesterday Ireland’s Independent.ie newspaper reported that the Irish Data Protection Commission (DPC) had sent agents to Facebook’s Dublin office seeking documentation that Facebook had failed to provide — using inspection and document seizure powers set out in Section 130 of the country’s Data Protection Act.

In a statement on its website the DPC said Facebook first contacted it about the rollout of the dating feature in the EU on February 3.

“We were very concerned that this was the first that we’d heard from Facebook Ireland about this new feature, considering that it was their intention to roll it out tomorrow, 13 February,” the regulator writes. “Our concerns were further compounded by the fact that no information/documentation was provided to us on 3 February in relation to the Data Protection Impact Assessment [DPIA] or the decision-making processes that were undertaken by Facebook Ireland.”

Facebook announced its plan to get into the dating game all the way back in May 2018, trailing its Tinder-encroaching idea to bake a dating feature for non-friends into its social network at its F8 developer conference.

It went on to test launch the product in Colombia a few months later. And since then it’s been gradually adding more countries in South America and Asia. It also launched in the US last fall — soon after it was fined $5BN by the FTC for historical privacy lapses.

At the time of its US launch Facebook said dating would arrive in Europe by early 2020. It just didn’t think to keep its lead EU privacy regulator in the loop — despite the DPC having multiple (ongoing) investigations into other Facebook-owned products at this stage.

Which is either extremely careless or, well, an intentional fuck you to privacy oversight of its data-mining activities. (Among multiple probes being carried out under Europe’s General Data Protection Regulation, the DPC is looking into Facebook’s claimed legal basis for processing people’s data under the Facebook T&Cs, for example.)

The DPC’s statement confirms that its agents visited Facebook’s Dublin office on February 10 to carry out an inspection — in order to “expedite the procurement of the relevant documentation”.

Which is a nice way of the DPC saying Facebook spent a whole week still not sending it the required information.

“Facebook Ireland informed us last night that they have postponed the roll-out of this feature,” the DPC’s statement goes on.

Which is a nice way of saying Facebook fucked up and is being made to put a product rollout it’s been planning for at least half a year on ice.

The DPC’s head of communications, Graham Doyle, confirmed the enforcement action, telling us: “We’re currently reviewing all the documentation that we gathered as part of the inspection on Monday and we have posed further questions to Facebook and are awaiting the reply.”

“Contained in the documentation we gathered on Monday was a DPIA,” he added.

This raises the question of why Facebook didn’t send the DPIA to the DPC on February 3 — unless of course the document did not actually exist on that date…

We’ve reached out to Facebook for comment — and to ask when it carried out the DPIA. Update: A Facebook spokesperson has now sent this statement:

It’s really important that we get the launch of Facebook Dating right so we are taking a bit more time to make sure the product is ready for the European market. We worked carefully to create strong privacy safeguards, and complete the data processing impact assessment ahead of the proposed launch in Europe, which we shared with the IDPC when it was requested.

We’ve asked the company why, if it’s “really important” to get the launch “right,” it did not provide the DPC with the required documentation in advance — instead of the regulator having to send agents to Facebook’s offices to get it themselves. We’ll update this report with any response.

We’ve also asked the DPC to confirm its next steps. The regulator could ask Facebook to make changes to how the product functions in Europe if it’s not satisfied it complies with EU laws. So a delay may mean many things.

Under GDPR there’s a requirement for data controllers to bake privacy by design and default into products which are handling people’s information. (And a dating product clearly would be.)

A DPIA — a process whereby planned processing of personal data is assessed to consider the impact on the rights and freedoms of individuals — is a requirement under the GDPR when, for example, individual profiling is taking place or there’s processing of sensitive data on a large scale.

Again, the launch of a dating product on a platform such as Facebook — which has hundreds of millions of regional users — would be a clear-cut case for such an assessment to be carried out ahead of any launch.

Radar, a location data startup, says its “big bet” is on putting privacy first

Pick any app on your phone, and there’s a greater than average chance that it’s tracking your location right now.

Sometimes they don’t even tell you. Your location can be continually collected and uploaded, then monetized by advertisers and other data tracking firms. These companies also sell the data to the government — no warrants needed. And even if you’re app-less, your phone company knows where you are at any given time, and for the longest time sold that data to anyone who wanted it.

Location data is some of the most personal information we have — yet few think much about it. Our location reveals where we go, when, and often why. It can be used to learn our favorite places and our routines, and also who we talk to. And yet it’s spilling out of our phones every second of every day to private companies, subject to little regulation or oversight, building up precise maps of our lives. Headlines about the practice have sparked anger and pushed lawmakers into taking action. And consumers are becoming increasingly aware of their tracked activity thanks to phone makers, like Apple, alerting users to background location tracking. Foursquare, one of the biggest location data companies, even called on Congress to do more to regulate the sale of location data.

But location data is not going anywhere. It’s a convenience that’s just too convenient, and it’s an industry going from strength to strength. The location data market was valued at $10 billion last year and is set to more than double in size by 2027.

Still, there is appetite for change. Radar, a location data startup based in New York, promised in a recent blog post that it will “not sell any data we collect, and we do not share location data across customers.”

It’s a promise that Radar chief executive Nick Patrick said he’s willing to bet the company on.

“We want to be that location layer that unlocks the next generation of experiences but we also want to do it in a privacy conscious way,” Patrick told TechCrunch. “That’s our big bet.”

Developers integrate Radar into their apps. Those app makers can create location geofences around their businesses, like any Walmart or Burger King. When a user enters that location, the app knows to serve relevant notifications or alerts, making it functionally just like any other location data provider.
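Mechanically, a geofence trigger of this sort boils down to a distance check against a registered point of interest. The sketch below is a rough, hypothetical illustration in Python of that logic; it is not Radar’s SDK, and the store, coordinates and callback are invented for the example.

```python
import math
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Geofence:
    name: str          # e.g. a specific store location (hypothetical)
    lat: float
    lon: float
    radius_m: float    # trigger radius in meters

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def on_location_update(lat: float, lon: float,
                       fences: List[Geofence],
                       notify: Callable[[str], None]) -> None:
    """Called whenever the OS reports a new device location.

    Fires a notification for every geofence the device is inside.
    """
    for fence in fences:
        if haversine_m(lat, lon, fence.lat, fence.lon) <= fence.radius_m:
            notify(f"Welcome to {fence.name}!")

# Example: a single fence around a made-up store location
fences = [Geofence("Example Store #42", lat=40.7412, lon=-73.9896, radius_m=100)]
on_location_update(40.7413, -73.9897, fences, notify=print)
```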

But that’s where Patrick says Radar deviates.

“We want to be the most privacy-first player,” Patrick said. Radar bills itself as a location data software-as-a-service company, rather than an ad tech company like its immediate rivals. That may sound like a marketing point — it is — but it’s also an important distinction, Patrick says, because it changes how the company makes its money. Instead of monetizing the collected data, Radar prices its platform based on the number of monthly active users that use the apps with Radar inside.

“We’re not going to package that up into an audience segment and sell it on an ad exchange,” he said. “We’re not going to pull all of the data together from all the different devices that we’re installed on and do foot traffic analytics or attribution.”

But that trust doesn’t come easy, nor should it. Some of the most popular apps have lost the trust of their users through invasive privacy practices, like collecting locations from users without their knowledge or permission by scanning nearby Bluetooth beacons or Wi-Fi networks to infer where a person is.

We were curious and ran some of the apps that use Radar — like Joann, GasBuddy, DraftKings and others — through a network traffic analyzer to see what was going on under the hood. We found that Radar only activated when location permissions were granted on the device — something apps have tried to get around in the past. The apps we checked instantly sent our precise location data back to Radar — which was to be expected — along with the device type, software version, and little else. The data collected by Radar is significantly less than what other comparable apps share with their developers, but it still allows integrations with third-party platforms to make use of that location data. Via, a popular ride-sharing app, uses a person’s location, collected by Radar, to deliver notifications and promotions to users at airports and other places of interest.

The company boasts its technology is used in apps on more than 100 million device installs.

“We see a ton of opportunity around enabling folks to build location, but we also see that the space has been mishandled,” said Patrick. “We think the location space [is] in need of a technical leader but also an ethical leader that can enable the stuff in a privacy-conscious way.”

It was a convincing pitch for Radar’s investors, which just injected $20 million into the company in a Series B round led by Accel, a substantial step up from its $8 million Series A round. Patrick said the round will help the company build out the platform further. One feature on Radar’s to-do list is to allow the platform to take advantage of on-device processing, so that “no user event data ever touches Radar’s servers,” said Patrick. The raise will help the company expand its physical footprint on the West Coast by opening an office in San Francisco. Its home base in New York will expand too, he said, increasing the company’s headcount from its current two dozen employees.

“Radar stands apart due to its focus on infrastructure rather than ad tech,” said Vas Natarajan, a partner at Accel, who also took a seat on Radar’s board.

Two Sigma Ventures, Heavybit, Prime Set, and Bedrock Capital participated in the round.

Patrick said his pitch is also working for apps and developers, which recognize that their users are becoming more aware of privacy issues. He’s seen companies, some of which he now calls customers, that are increasingly looking for more privacy-focused partners and vendors, not least to bolster their own respective reputations.

It’s healthy to be skeptical. Given the past year, it’s hard to have any faith in any location data company, let alone embrace one. And yet it’s a compelling pitch for the app community that only through years of misdeeds and a steady stream of critical headlines is being forced to repair its image.

But a company’s words are only as strong as its actions, and only time will tell if they hold up.

UK names its pick for social media ‘harms’ watchdog

The UK government has taken the next step in its grand policymaking challenge to tame the worst excesses of social media by regulating a broad range of online harms — naming the existing communications watchdog, Ofcom, as its preferred pick for enforcing rules around ‘harmful speech’ on platforms such as Facebook, Snapchat and TikTok in future.

Last April the previous Conservative-led government laid out populist but controversial proposals to legislate for a duty of care on internet platforms — responding to growing public concern about the types of content kids are being exposed to online.

Its white paper covers a broad range of online content — from terrorism, violence and hate speech, to child exploitation, self-harm/suicide, cyber bullying, disinformation and age-inappropriate material — with the government setting out a plan to require platforms to take “reasonable” steps to protect their users from a range of harms.

However, digital and civil rights campaigners warn the plan will have a huge impact on online speech and privacy, arguing it will put a legal requirement on platforms to closely monitor all users and apply speech-chilling filtering technologies to uploads in order to comply with very broadly defined concepts of harm — dubbing it state censorship. Legal experts are also critical.

The (now) Conservative majority government has nonetheless said it remains committed to the legislation.

Today it responded to some of the concerns being raised about the plan’s impact on freedom of expression, publishing a partial response to the public consultation on the Online Harms White Paper, although a draft bill remains pending, with no timeline confirmed.

“Safeguards for freedom of expression have been built in throughout the framework,” the government writes in an executive summary. “Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

It says it’s planning to set a different bar for content deemed illegal vs content that has “potential to cause harm,” with the heaviest content removal requirements being planned for terrorist and child sexual exploitation content. By contrast, companies will not be forced to remove “specific pieces of legal content,” as the government puts it.

Ofcom, as the online harms regulator, will also not be investigating or adjudicating on “individual complaints”.

“The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content,” it writes.

“Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently. The proposed approach will improve transparency for users about which content is and is not acceptable on different platforms, and will enhance users’ ability to challenge removal of content where this occurs.”

Another requirement will be that companies have “effective and proportionate user redress mechanisms” — enabling users to report harmful content and challenge content takedown “where necessary”.

“This will give users clearer, more effective and more accessible avenues to question content takedown, which is an important safeguard for the right to freedom of expression,” the government suggests, adding that: “These processes will need to be transparent, in line with terms and conditions, and consistently applied.”

Ministers say they have not yet made a decision on what kind of liability senior management of covered businesses may face under the planned law, nor on additional business disruption measures — with the government saying it will set out its final policy position in the Spring.

“We recognise the importance of the regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way. It is equally essential that company executives are sufficiently incentivised to take online safety seriously and that the regulator can take action when they fail to do so,” it writes.

It’s also not clear how businesses will be assessed as being in (or out of) scope of the regulation.

“Just because a business has a social media page that does not bring it in scope of regulation,” the government response notes. “To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions. We will introduce this legislation proportionately, minimising the regulatory burden on small businesses. Most small businesses where there is a lower risk of harm occurring will not have to make disproportionately burdensome changes to their service to be compliant with the proposed regulation.”

The government is clear in the response that the online harms legislation remains “a key legislative priority”.

“We have a comprehensive programme of work planned to ensure that we keep momentum until legislation is introduced as soon as parliamentary time allows,” it writes, describing today’s response report “an iterative step as we consider how best to approach this complex and important issue” — and adding: “We will continue to engage closely with industry and civil society as we finalise the remaining policy.”

In the meantime, the government says it’s working on a package of measures “to ensure progress now on online safety” — including interim codes of practice, with guidance for companies on tackling terrorist and child sexual abuse and exploitation content online; an annual government transparency report, which it says it will publish “in the next few months”; and a media literacy strategy, to support public awareness of online security and privacy.

It adds that it expects social media platforms to “take action now to tackle harmful content or activity on their services” — ahead of the more formal requirements coming in.

Facebook-owned Instagram has come in for high-level pressure from ministers over how it handles content promoting self-harm and suicide, after the media picked up on a campaign by the family of a schoolgirl who killed herself after being exposed to Instagram content encouraging self-harm.

Instagram subsequently announced changes to its policies for handling content that encourages or depicts self harm/suicide — saying it would limit how it could be accessed. This later morphed into a ban on some of this content.

The government said today that companies offering online services that involve user generated content or user interactions are expected to make use of what it dubs “a proportionate range of tools” — including age assurance, and age verification technologies — to prevent kids from accessing age-inappropriate content and “protect them from other harms”.

This is also the piece of the planned legislation intended to pick up the baton of the Digital Economy Act’s porn block proposals — which the government dropped last year, saying it would bake equivalent measures into the forthcoming Online Harms legislation.

The Home Office has been consulting with social media companies on devising robust age verification technologies for many months.

In its own response statement today, Ofcom — which would be responsible for policy detail under the current proposals — said it will work with the government to ensure “any regulation provides effective protection for people online”, and, pending appointment, “consider what we can do before legislation is passed”.

The Online Harms plan is not the only Internet-related work ongoing in Whitehall, with ministers noting that: “Work on electoral integrity and related online transparency issues is being taken forward as part of the Defending Democracy programme together with the Cabinet Office.”

Back in 2018 a UK parliamentary committee called for a levy on social media platforms to fund digital literacy programs to combat online disinformation and defend democratic processes, during an enquiry into the use of social media for digital campaigning. However the UK government has been slower to act on this front.

The former chair of the DCMS committee, Damian Collins, called today for any future social media regulator to have “real powers in law” — including the ability to “investigate and apply sanctions to companies which fail to meet their obligations”.

In the DCMS committee’s final report parliamentarians called for Facebook’s business to be investigated, raising competition and privacy concerns.

Jam lets you safely share streaming app passwords

Can’t afford Netflix and HBO and Spotify and Disney+…? Now there’s an app specially built for giving pals your passwords while claiming to keep your credentials safe. It’s called Jam, and the questionably legal service launched in private beta this morning. Founder John Backus tells TechCrunch in his first interview about Jam that it will let users save login details with local encryption, add friends whom they can then authorize to access their password for a chosen service, and broadcast to friends which of their subscriptions have room for people to piggyback on.

Jam is just starting to add users off its rapidly growing waitlist that you can join here, but when users get access, it’s designed to stay free to use. In the future, Jam could build a business by helping friends split the costs of subscriptions. There’s clearly demand. Over 80% of 13- to 24-year-olds have given out or used someone else’s online TV password, according to a study by Hub of over 2,000 US consumers.

“The need for Jam was obvious. I don’t want to find out my ex-girlfriend’s roommate has been using my account again. Everyone shares passwords, but for consumers there isn’t a secure way to do that. Why?” Backus asks. “In the enterprise world, team password managers reflect the reality that multiple people need to access the same account, regularly. Consumers don’t have the same kind of system, and that’s bad for security and coordination.”

Thankfully, Backus isn’t some amateur when it comes to security. The Stanford computer science dropout and Thiel Fellow founded identity verification startup Cognito and decentralized credit scoring app Bloom. “Working in crypto at Bloom and with sensitive data at Cognito, I have a lot of experience building secure products with cryptography at the core,” he says.

He also tells me that since everything saved in Jam is locally encrypted, even he can’t see it, and nothing would be exposed if the company was hacked. It uses protocols similar to 1Password’s: “Plaintext login information is never sent to our server, nor is your master password” and “we use pretty straightforward public key cryptography.” Remember, your friend could always try to hijack and lock you out, though. And while those protocols may be hardened, TechCrunch can’t verify they’re perfectly implemented and fully secure within Jam.
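For a sense of what public key cryptography with no plaintext on the server can look like in practice, here is a minimal sketch using PyNaCl sealed boxes. It is an assumption-laden illustration of the general pattern Backus describes, not Jam’s actual implementation, and the credential shown is made up.

```python
# pip install pynacl
from nacl.public import PrivateKey, SealedBox

# Each user generates a keypair on their own device; private keys never leave it.
friend_private = PrivateKey.generate()
friend_public = friend_private.public_key  # shared with the account owner via the service

# The account owner encrypts the credential locally, to the friend's public key.
ciphertext = SealedBox(friend_public).encrypt(b"streaming-service password: hunter2")

# Only the opaque ciphertext would ever be stored or relayed by a server.
# The friend decrypts on their own device with their private key.
plaintext = SealedBox(friend_private).decrypt(ciphertext)
assert plaintext == b"streaming-service password: hunter2"
```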

Whether facilitating password sharing is legal, and whether Netflix and its peers will send an army of lawyers to destroy Jam, remain open questions. We’ve reached out to several streaming companies for comment. When asked on Twitter about Jam helping users run afoul of their terms of service, Backus claimed that “plenty of websites give you permission to share your account with others (with vary[ing] degrees of constraints) but users often don’t know these rules.”

However, sharing is typically supposed to be amongst a customer’s own devices or within their household, or they’re supposed to pay for a family plan. We asked Netflix, Hulu, CBS, Disney, and Spotify for comment, and did not receive any on-the-record comments. Spotify’s terms of service, for example, specifically prohibit “providing your password to any other person or using any other person’s username and password”. Netflix’s terms insist that “the Account Owner should maintain control over the Netflix ready devices that are used to access the service and not reveal the password or details of the Payment Method associated to the account to anyone.”

Some might see Jam as ripping off the original content creators, though Backus claims that “Jam isn’t trying to take money out of anyone’s pocket. Spotify offers [family plan sharing for people under the same roof]. Many other companies offer similar bundled plans. I think people just underutilize things like this and it’s totally fair game.”

Netflix’s Chief Product Officer said in October that the company is monitoring password sharing and is looking at “consumer-friendly ways to push on the edges of that.” Meanwhile, the Alliance for Creativity and Entertainment, which includes Netflix, Disney, Amazon, Comcast, and major film studios, announced that its members will collaborate to address “piracy,” including “what facilitates unauthorized access, including improper password sharing and inadequate encryption.”

That could lead to expensive legal trouble for Jam. “My past startups have done well, so I’ve had the pleasure of self-funding Jam so far,” Backus says. But if lawsuits emerge or the app gets popular, he might need to find outside investors. “I only launched about 5 hours ago, but I’ll just say that I’m already in the process of upgrading my database tier due to signup growth.”

Eventually, the goal is to monetize, though not through a monthly subscription, which Backus expects competitors including password-sharing browser extensions might charge. Instead, “Jam will make money by helping users save money. We want to make it easy for users to track what they’re sharing and with whom so that they can settle up the difference at the end of each month,” Backus explains. It could charge “either a small fee in exchange for automatically settling debts between users and/or charging a percentage of the money we save users by recommending more efficient sharing setups.” Later, he sees a chance to provide recommendations for optimizing account management across networks of people while building native mobile apps.

“I think Jam is timed perfectly to line up with multiple different booming trends in how people are using the internet,” particularly among younger people, says Backus. Hub says 42% of all US consumers have used someone else’s online TV service password, while amongst 13- to 24-year-olds, 69% have watched Netflix on someone else’s password. “When popularity and exclusivity are combined with often ambiguous, even sometimes nonexistent, rules about legitimate use, it’s almost an invitation to subscribers to share the enjoyment with friends and family,” says Peter Fondulas, the principal at Hub and co-author of the study. “Wall Street has already made its displeasure clear, but in spite of that, password sharing is still very much alive and well.”

From that perspective, you could liken Jam to sex education. Password sharing abstinence has clearly failed. At least people should learn how to do it safely.

California’s new privacy law is off to a rocky start

California’s new privacy law was years in the making.

The law, the California Consumer Privacy Act — or CCPA — took effect on January 1, allowing state residents to reclaim their right to access and control their personal data. Inspired by Europe’s GDPR, the CCPA is the largest statewide privacy law change in a generation. The new law lets users request a copy of the data that tech companies have on them, delete the data when they no longer want a company to have it, and demand that their data isn’t sold to third parties. All of this is much to the chagrin of the tech giants, some of which had spent millions to comply with the law and have many more millions set aside to deal with the anticipated influx of consumer data access requests.

But to say things are going well is a stretch.

Many of the tech giants that kicked and screamed in resistance to the new law have acquiesced and accepted their fate — at least until something different comes along. Though the California tech scene had more than a year to prepare, some companies have made it downright difficult and — ironically — more invasive in some cases for users to exercise their rights, largely because every company has a different interpretation of what compliance should look like.

Alex Davis is just one California resident who tried to use his new rights under the law to make a request to delete his data. He vented his annoyance on Twitter, saying companies have responded to CCPA by making requests “as confusing and difficult as possible in new and worse ways.”

“I’ve never seen such deliberate attempts to confuse with design,” he told TechCrunch. He referred to what he described as “dark patterns,” a type of user interface design that tries to trick users into making certain choices, often against their best interests.

“I tried to make a deletion request but it bogged me down with menus that kept redirecting… things to be turned on and off,” he said.

Despite his frustration, Davis got further than others. Just as some companies have made it easy for users to opt out of having their data sold by adding the legally required “Do not sell my info” links on their websites, many have not. Some have made it near-impossible to find these “data portals,” which companies set up so users can request a copy of their data or delete it altogether. For now, California companies are still in a grace period — but only until July, when the CCPA’s enforcement provisions kick in. Until then, users are finding ways around it — by collating and sharing links to data portals to help others access their data.

“We really see a mixed story on the level of CCPA response right now,” said Jay Cline, who heads up consulting giant PwC’s data privacy practice, describing it as a patchwork of compliance.

PwC’s own data found that only 40% of the largest 600 U.S. companies had a data portal. Only a fraction, Cline said, extended their portals to users outside of California, even though other states are gearing up to push similar laws to the CCPA.

But not all data portals are created equally. Given how much data companies store on us — personal or otherwise — the risks of getting things wrong are greater than ever. Tech companies are still struggling to figure out the best way to verify each data request to access or delete a user’s data without inadvertently giving it away to the wrong person.

Last year, security researcher James Pavur impersonated his fiancee and tricked tech companies into turning over vast amounts of data about her, including credit card information, account logins and passwords and, in one case, a criminal background check. Only a few of the companies asked for verification. Two years ago, Akita founder Jean Yang described someone hacking into her Spotify account and requesting her account data as an “unfortunate consequence” of GDPR, which mandated that companies operating on the continent allow users access to their data.

(Image: Twitter/@jeanqasaur)

The CCPA says companies should verify a person’s identity to a “reasonable degree of certainty.” For some that’s just an email address to send the data.

Others require sending in even more sensitive information just to prove it’s them.

Indeed, i360, a little-known advertising and data company, until recently asked California residents for their full Social Security number. This recently changed to just the last four digits. Verizon (which owns TechCrunch) wants its customers and users to upload their driver’s license or state ID to verify their identity. Comcast asks for the same, but goes the extra step by asking for a selfie before it will turn over any of a customer’s data.

Comcast asks for the same amount of information to verify a data request as the controversial facial recognition startup, Clearview AI, which recently made headlines for creating a surveillance system made up of billions of images scraped from Facebook, Twitter and YouTube to help law enforcement trace a person’s movements.

As much as CCPA has caused difficulties, it has helped forge an entirely new class of compliance startups ready to help large and small companies alike handle the regulatory burdens to which they are subject. Several startups in the space are taking advantage of the $55 billion expected to be spent on CCPA compliance in the next year — like Segment, which gives customers a consolidated view of the data they store; Osano, which helps companies comply with CCPA; and Securiti, which just raised $50 million to help expand its CCPA offering. With CCPA and GDPR under their belts, their services are designed to scale to accommodate new state or federal laws as they come in.

Another startup, Mine, which lets users “take ownership” of their data by acting as a broker to allow users to easily make requests under CCPA and GDPR, had a somewhat bumpy debut.

The service asks users to grant it access to their inbox, scanning for email subject lines that contain company names and using that data to determine which companies a user can request their data from or have their data deleted. (The service requests access to a user’s Gmail, but the company claims it will “never read” users’ emails.) Last month during a publicity push, Mine inadvertently copied a couple of emailed data requests to TechCrunch, allowing us to see the names and email addresses of two requesters who wanted Crunch, a popular gym chain with a similar name, to delete their data.

(Screenshot: Zack Whittaker/TechCrunch)
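To make the mechanics concrete, here is a deliberately naive Python sketch of the general approach described above: matching inbox subject lines against a list of known company names and their data-protection contacts. It is purely illustrative — the company names and addresses are made up, and it is not Mine’s actual engine.

```python
# Hypothetical sketch of subject-line matching; not Mine's actual engine.
known_companies = {
    "crunch fitness": "privacy@crunch.example",   # gym chain (made-up address)
    "crunchbase": "privacy@crunchbase.example",   # similarly named, different company
}

subjects = [
    "Your Crunch Fitness membership renewal",
    "Weekly digest from Crunchbase",
]

def companies_in_inbox(subjects, known_companies):
    """Return the data-protection contacts for companies mentioned in subject lines."""
    found = {}
    for subject in subjects:
        for name, contact in known_companies.items():
            if name in subject.lower():
                found[name] = contact
    return found

print(companies_in_inbox(subjects, known_companies))
# A naive substring match like this is exactly how near-namesakes
# (Crunch vs. TechCrunch, say) can end up attached to the wrong contact address.
```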

TechCrunch alerted Mine — and the two requesters — to the security lapse.

“This was a mix-up on our part where the engine that finds companies’ data protection offices’ addresses identified the wrong email address,” said Gal Ringel, co-founder and chief executive at Mine. “This issue was not reported during our testing phase and we’ve immediately fixed it.”

For now, many startups have caught a break.

The smaller, early-stage startups that don’t yet make $25 million in annual revenue or store the personal data on more than 50,000 users or devices will largely escape having to immediately comply with CCPA. But it doesn’t mean startups can be complacent. As early-stage companies grow, so will their legal responsibilities.

“For those who did launch these portals and offer rights to all Americans, they are in the best position to be ready for these additional states,” said Cline. “Smaller companies in some ways have an advantage for compliance if their products or services are commodities, because they can build in these controls right from the beginning,” he said.

CCPA may have gotten off to a bumpy start, but time will tell if things get easier. Just this week, California’s attorney general Xavier Becerra released newly updated guidance aimed at trying to “fine tune” the rules, per his spokesperson. It goes to show that even California’s lawmakers are still trying to get the balance right.

But with the looming threat of hefty fines just months away, time is running out for the non-compliant.

ACLU says it’ll fight DHS efforts to use app locations for deportations

The American Civil Liberties Union plans to fight newly revealed practices by the Department of Homeland Security which used commercially available cell phone location data to track suspected illegal immigrants.

“DHS should not be accessing our location information without a warrant, regardless whether they obtain it by paying or for free. The failure to get a warrant undermines Supreme Court precedent establishing that the government must demonstrate probable cause to a judge before getting some of our most sensitive information, especially our cell phone location history,” said Nathan Freed Wessler, a staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

Earlier today, The Wall Street Journal reported that Homeland Security, through its Immigration and Customs Enforcement (ICE) and Customs & Border Protection (CBP) agencies, was buying geolocation data from commercial entities to investigate suspects of alleged immigration violations.

The location data, which aggregators acquire from cellphone apps including games, weather, shopping, and search services, is being used by Homeland Security to detect undocumented immigrants and others entering the U.S. unlawfully, the Journal reported.

According to privacy experts interviewed by the Journal, since the data is publicly available for purchase, the government practices don’t appear to violate the law — despite being what may be the largest dragnet ever conducted by the U.S. government using the aggregated data of its citizens.

It’s also an example of how the commercial surveillance apparatus put in place by private corporations in democratic societies can be legally accessed by state agencies to create the same kind of surveillance networks used in more authoritarian countries like China, India, and Russia.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws, told the newspaper.

Behind the government’s use of commercial data is a company called Venntel. Based in Herndon, Va., the company acts as a government contractor and shares a number of its executive staff with Gravy Analytics, a mobile-advertising marketing analytics company. In all, ICE and the CBP have spent nearly $1.3 million on licenses for software that can provide location data for cell phones. Homeland Security says that the data from these commercially available records is used to generate leads about border crossing and detecting human traffickers.

The ACLU’s Wessler has won these kinds of cases in the past. He successfully argued before the Supreme Court in the case of Carpenter v. United States that geographic location data from cellphones was a protected class of information and couldn’t be obtained by law enforcement without a warrant.

CBP explicitly excludes cell tower data from the information it collects from Venntel, a spokesperson for the agency told the Journal — in part because it has to under the law. The agency also said that it only accesses limited location data and that the data is anonymized.

However, anonymized data can be linked to specific individuals by correlating that anonymous cell phone information with the real-world movements of specific people, which can be either easily deduced or tracked through other types of public records and publicly available social media.
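A toy example shows why this works. Given a handful of pseudonymous location tracks and two publicly knowable facts about a person, such as home and workplace coordinates, matching the tracks against those points is enough to single out the device. The sketch below is purely illustrative, with made-up device IDs and coordinates.

```python
# Hypothetical illustration of re-identifying "anonymized" location pings.
# Device IDs are pseudonymous, but movement patterns are not.
pings = {
    "device-a1": [(38.889, -77.035), (38.897, -77.036), (38.889, -77.035)],
    "device-b2": [(40.741, -73.990), (40.748, -73.985), (40.741, -73.990)],
}

# Publicly knowable facts about a specific person (home and workplace coordinates).
known_places = [(40.741, -73.990), (40.748, -73.985)]

def matches(track, places, tolerance=0.002):
    """True if every known place appears somewhere in the device's track."""
    return all(
        any(abs(lat - p_lat) < tolerance and abs(lon - p_lon) < tolerance
            for lat, lon in track)
        for p_lat, p_lon in places
    )

suspects = [dev for dev, track in pings.items() if matches(track, known_places)]
print(suspects)  # ['device-b2'] -- the pseudonym now maps to a named person
```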

ICE is already being sued by the ACLU for another potential privacy violation. Late last year the ACLU said that it was taking the government to court over the DHS service’s use of so-called “stingray” technology that spoofs a cell phone tower to determine someone’s location.

At the time, the ACLU cited a government oversight report in 2016 which indicated that both CBP and ICE collectively spent $13 million on buying dozens of stingrays, which the agencies used to “locate people for arrest and prosecution.”

Blackbox welfare fraud detection system breaches human rights, Dutch court rules

An algorithmic risk scoring system deployed by the Dutch state to try to predict the likelihood that social security claimants will commit benefits or tax fraud breaches human rights law, a court in the Netherlands has ruled.

The Dutch government’s System Risk Indication (SyRI) legislation uses a non-disclosed algorithmic risk model to profile citizens and has been exclusively targeted at neighborhoods with mostly low-income and minority residents. Human rights campaigners have dubbed it a ‘welfare surveillance state’.

A number of civil society organizations in the Netherlands and two citizens instigated the legal action against SyRI — seeking to block its use. The court has today ordered an immediate halt to the use of the system.

The ruling is being hailed as a landmark judgement by human rights campaigners, with the court basing its reasoning on European human rights law — specifically the right to a private life that’s set out by Article 8 of the European Convention on Human Rights (ECHR) — rather than a dedicated provision in the EU’s data protection framework (GDPR) which relates to automated processing.

GDPR’s Article 22 includes the right for individuals not to be subject to solely automated individual decision-making where such decisions produce significant legal effects. But there can be some fuzziness around whether this applies if there’s a human somewhere in the loop, such as to review a decision on objection.

In this instance the court has sidestepped such questions by finding SyRI directly interferes with rights set out in the ECHR.

Specifically, the court found that the SyRI legislation fails a balancing test in Article 8 of the ECHR, which requires that any social interest be weighed against the violation of individuals’ private life — with a fair and reasonable balance being required.

In its current form the automated risk assessment system failed this test, in the court’s view.

Legal experts suggest the decision sets some clear limits on how the public sector in the UK can make use of AI tools — with the court objecting in particular to the lack of transparency about how the algorithmic risk scoring system functioned.

In a press release about the judgement (translated to English using Google Translate) the court writes that the use of SyRI is “insufficiently clear and controllable”. While, per Human Rights Watch, the Dutch government refused during the hearing to disclose “meaningful information” about how SyRI uses personal data to draw inferences about possible fraud.

The court clearly took a dim view of the state trying to circumvent scrutiny of human rights risk by pointing to an algorithmic ‘blackbox’ and shrugging.

The UN special rapporteur on extreme poverty and human rights, Philip Alston — who intervened in the case by providing the court with a human rights analysis — welcomed the judgement, describing it as “a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights”.

“This decision sets a strong legal precedent for other courts to follow. This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds,” he added in a press statement.

Back in 2018 Alston warned that the UK government’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale risked having an immense impact on the human rights of the most vulnerable.

So the decision by the Dutch court could have some near-term implications for UK policy in this area.

The judgement does not shut the door on the use by states of automated profiling systems entirely — but does make it clear that in Europe human rights law must be central to the design and implementation of rights-risking tools.

It also comes at a key time when EU policymakers are working on a framework to regulate artificial intelligence — with the Commission pledging to devise rules that ensure AI technologies are applied ethically and in a human-centric way.

It remains to be seen whether the Commission will push for pan-EU limits on specific public sector uses of AI — such as for social security assessments. A recent leaked draft of a white paper on AI regulation suggests it’s leaning towards risk-assessments and a patchwork of risk-based rules. 

Russia’s push back against Big Tech has major consequences for Apple

Tech companies are getting so large that Russia is fast-tracking laws aimed at developing “digital sovereignty,” and its next target is Apple. How will these regulations affect tech companies looking to do business in the country?

This month, Donald Trump took to Twitter to criticize Apple for not unlocking two iPhones belonging to the Pensacola shooter, another volley in the struggle between big tech and the world’s governing bodies. But even the White House’s censure pales in comparison to the Kremlin’s ongoing plans. Apple, as the timing would have it, also happens to be in Vladimir Putin’s sights.

The company’s long-running policy of not preloading third-party software onto its devices is coming up against a new piece of Russian legislation requiring every smart device to be sold with certain applications already installed, many of which are produced by the government. Inside the country, the policy has even been called the “law against Apple” for how it disproportionately affects the tech giant. While the law was passed last November, the Russian Federal Antimonopoly Service released the full list of apps only last week.

These regulations form the latest move in what’s turning out to be one of the largest national campaigns for digital control outside of Asia. These laws have been steadily accumulating since 2014 and are described as a way of consolidating sovereignty over the digital space — threatening to push companies out of the country if they fail to comply. Apple, for instance, will have to choose by July 1 whether maintaining access to the Russian market is worth making a revolutionary change in its policy. The same choice is given to any company wishing to do business in the country.

Ancestry.com rejected a police warrant to access user DNA records on a technicality

DNA profiling company Ancestry.com has narrowly avoided complying with a search warrant in Pennsylvania after the warrant was rejected on technical grounds, a move that is likely to help law enforcement refine its efforts to obtain user information despite the company’s efforts to keep the data private.

Little is known about the demands of the search warrant, only that a court in Pennsylvania approved law enforcement to “seek access” to Utah-based Ancestry.com’s database of more than 15 million DNA profiles.

TechCrunch was not able to identify the search warrant or its associated court case, which was first reported by BuzzFeed News on Monday. But it’s not uncommon for criminal cases still in the early stages of gathering evidence to remain under seal and hidden from public records until a suspect is apprehended.

DNA profiling companies like Ancestry.com are increasingly popular with customers hoping to build up family trees by discovering new family members and better understanding their cultural and ethnic backgrounds. But these companies are also ripe for the picking by law enforcement agencies, which want access to genetic databases to try to solve crimes from DNA left at crime scenes.

In an email to TechCrunch, the company confirmed that the warrant was “improperly served” on the company and was flatly rejected.

“We did not provide any access or customer data in response,” said spokesperson Gina Spatafore. “Ancestry has not received any follow-up from law enforcement on this matter.”

Ancestry.com, the largest of the DNA profiling companies, would not go into specifics, but the company’s transparency report said it rejected the warrant on “jurisdictional grounds.”

“I would guess it was just an out of state warrant that has no legal effect on Ancestry.com in its home state,” said Orin S. Kerr, law professor at the University of California, Berkeley, in an email to TechCrunch. “Warrants normally are only binding within the state in which they are issued, so a warrant for Ancestry.com issued in a different state has no legal effect,” he added.

But the rejection is likely to only stir tensions between police and the DNA profiling services over access to the data they store.

Ancestry.com’s Spatafore said it would “always advocate for our customers’ privacy and seek to narrow the scope of any compelled disclosure, or even eliminate it entirely.” It’s a sentiment shared by 23andMe, another DNA profiling company, which last year said that it had “successfully challenged” all of its seven legal demands, and as a result has “never turned over any customer data to law enforcement.”

The statements were in response to criticism that rival GEDmatch had controversially allowed law enforcement to search its database of more than a million records. The decision to allow in law enforcement was later revealed as crucial in helping to catch the notorious Golden State Killer, one of the most prolific murderers in U.S. history.

But the move was widely panned by privacy advocates for accepting a warrant to search its database without exhausting its legal options.

It’s not uncommon for companies to receive law enforcement demands for user data. Most tech giants, like Apple, Facebook, Google and Microsoft, publish transparency reports detailing the number of legal demands and orders they receive for user data each year or half-year.

Although both Ancestry.com and 23andMe provide transparency reports detailing the number of law enforcement demands for user data they receive, not all companies are as forthcoming. GEDmatch still does not publish its data demand figures, nor does MyHeritage, which said it “does not cooperate” with law enforcement. FamilyTreeDNA said it was “working” on publishing a transparency report.

But as police continue to demand data from DNA profiling and genealogy companies, they risk turning customers away — a lose-lose for both police and the companies.

Vera Eidelman, staff attorney with the ACLU’s Speech, Privacy, and Technology Project, said it would be “alarming” if law enforcement were able to get access to these databases containing millions of people’s information.

“Ancestry did the right thing in pushing back against the government request, and other companies should follow suit,” said Eidelman.