Will online privacy make a comeback in 2020?

Last year was a landmark for online privacy in many ways, with something of a consensus emerging that consumers deserve protection from the companies that sell their attention and behavior for profit.

The debate now is largely around how to regulate platforms, not whether it needs to happen.

The consensus among key legislators acknowledges that privacy is not just of benefit to individuals but can be likened to public health: a level of protection afforded to each of us helps inoculate democratic societies against manipulation by vested and vicious interests.

The pervasive profiling of Internet users — a surveillance business dominated in the West by tech giants Facebook and Google, and fed by the adtech and data broker industry — means human rights are being systematically abused at population scale. That abuse was the subject of an Amnesty International report in November 2019, which urges legislators to take a human rights-based approach to setting rules for Internet companies.

“It is now evident that the era of self-regulation in the tech sector is coming to an end,” the charity predicted.

Democracy disrupted

The dystopian outgrowth of surveillance capitalism was certainly on awful display in 2019, with elections around the world attacked, cheaply and at scale, by malicious propaganda that relies on adtech platforms’ targeting tools to hijack and skew public debate, while the chaos agents themselves are shielded from democratic view.

Platform algorithms are also still steering Internet eyeballs towards polarized and extremist views, feeding a radicalizing, data-driven diet that panders to prejudices in the name of maintaining engagement — despite plenty of raised voices calling out the programmed antisocial behavior. Such tweaks as there have been still look like fiddling around the edges of an existential problem.

Worse still, vulnerable groups remain at the mercy of online hate speech which platforms not only can’t (or won’t) weed out, but which their algorithms often actively amplify — the technology itself complicit in whipping up violence against minorities. It’s social division as a profit-turning service.

The outrage-loving tilt of these attention-hogging adtech giants has also continued directly influencing political campaigning in the West this year — with cynical attempts to steal votes by shamelessly platforming and amplifying misinformation.

From the Trump tweet-bomb we have moved to full-blown digital disinformation operations underpinning entire election campaigns, such as the UK Conservative Party’s strategy in the December 2019 General Election, which featured doctored videos seeded to social media and keyword-targeted attack ads pointing to outright online fakes in a bid to hack voters’ opinions.

Political microtargeting divides the electorate as a strategy to conquer the poll. The problem is it’s inherently anti-democratic.

No wonder, then, that repeated calls to beef up digital campaigning rules and properly protect voters’ data have so far fallen on deaf ears: the political parties all have their hands in the voter-data cookie jar, yet it’s elected politicians whom we rely upon to update the law. This remains a grave problem for democracies going into 2020 — and a looming U.S. presidential election.

So it’s been a year when, even with rising awareness of the societal cost of letting platforms suck up everyone’s data and repurpose it to sell population-scale manipulation, not much has actually changed. Certainly not enough.

Yet looking ahead there are signs the writing is on the wall for the ‘data industrial complex’ — or at least that change is coming. Privacy can make a comeback.

Adtech under attack

Developments in late 2019 such as Twitter banning all political ads and Google shrinking how political advertisers can microtarget Internet users are notable steps — even as they don’t go far enough.

But it’s also a relatively short hop from banning microtargeting sometimes to banning profiling for ad targeting entirely.

Alternative online ad models (contextual targeting) are proven and profitable — just ask search engine DuckDuckGo. Meanwhile, the ad industry gospel that only behavioral targeting will do now has academic critics, who suggest it offers far less uplift than claimed, even as — in Europe — scores of data protection complaints underline the high individual cost of maintaining the status quo.
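To make the distinction concrete, here is a minimal, hypothetical sketch of contextual targeting in Python (the ad inventory, keywords and function names are all invented): the ad is matched to the page’s content alone, so no user profile, cookie or tracking identifier is needed.

# Hypothetical sketch: contextual targeting picks an ad from the page's
# content alone -- no user profile, cookie or tracking ID required.
AD_INVENTORY = {
    "travel": "Cheap flights to Lisbon",
    "cooking": "Cast-iron skillets, 20% off",
    "privacy": "Try a tracker-blocking browser",
}

def pick_contextual_ad(page_keywords):
    """Return an ad matching the page topic, or a generic fallback."""
    for keyword in page_keywords:
        if keyword in AD_INVENTORY:
            return AD_INVENTORY[keyword]
    return "Generic brand ad"

# The only input is the page itself -- nothing about the reader.
print(pick_contextual_ad(["privacy", "regulation"]))

Behavioral targeting, by contrast, keys the ad to who is looking rather than what they are looking at; that is the data-hungry model the complaints and bans are aimed at.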

Startups are also innovating in the pro-privacy adtech space (see, for example, the Brave browser).

Changing the system — turning the adtech tanker — will take huge effort, but there is a growing opportunity for just such systemic change.

This year, it might be too much to hope that regulators will get their act together enough to outlaw consent-less profiling of Internet users entirely. But it may be that those who have sought to proclaim ‘privacy is dead’ will find their unchecked data gathering facing death by a thousand regulatory cuts.

Or, tech giants like Facebook and Google may simply outrun the regulators by reengineering their platforms to cloak vast personal data empires with end-to-end encryption, making it harder for outsiders to regulate them, even as they retain enough of a fix on the metadata to stay in the surveillance business. Fixing that would likely require much more radical regulatory intervention.

European regulators are, whether they like it or not, in this race and under major pressure to enforce the bloc’s existing data protection framework. It seems likely to ding some current-gen digital tracking and targeting practices. And depending on how key decisions on a number of strategic GDPR complaints go, 2020 could see an unpicking — great or otherwise — of components of adtech’s dysfunctional ‘norm’.

Among the technologies under investigation in the region is real-time bidding (RTB), a system that powers a large chunk of programmatic digital advertising.

The complaint is that RTB breaches the bloc’s General Data Protection Regulation (GDPR) because it is inherently insecure: granular personal data is broadcast to scores of entities involved in the bidding chain.
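To give a sense of what “broadcast” means here, below is a simplified, illustrative bid request in the style of the OpenRTB protocol that underpins most programmatic auctions (the field names follow the public OpenRTB spec; the values are hypothetical). Something like this can be sent to dozens or hundreds of bidders before a page finishes loading, whether or not they win the auction.

import json

# Illustrative OpenRTB-style bid request with hypothetical values.
bid_request = {
    "id": "auction-8f3a",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
    "site": {
        "domain": "example-news.com",
        # the page URL can itself reveal sensitive interests
        "page": "https://example-news.com/health/depression-support",
    },
    "device": {
        "ua": "Mozilla/5.0 (Linux; Android 10) ...",  # fingerprinting surface
        "ip": "203.0.113.7",                           # approximate location
        "geo": {"lat": 51.5, "lon": -0.12, "country": "GBR"},
    },
    "user": {"id": "cookie-synced-uid-123"},  # pseudonymous but persistent ID
}

# Every recipient can log and retain this payload; the complainants' point
# is that once broadcast, the data is impossible to control downstream.
print(json.dumps(bid_request, indent=2))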

A recent event held by the UK’s data watchdog confirmed plenty of troubling findings. Google responded by removing some information from bid requests — though critics say the change does not go far enough. In their view, nothing short of removing personal data entirely will do, which amounts to ads that are contextually (not micro)targeted.

Powers that EU data protection watchdogs have at their disposal to deal with violations include not just big fines but data processing orders — which means corrective relief could be coming to take chunks out of data-dependent business models.

As noted above, the adtech industry has already been put on watch this year over current practices, even as it was given a generous half-year grace period to adapt.

In the event it seems likely that turning the ship will take longer. But the message is clear: change is coming. The UK watchdog is due to publish another report in 2020, based on its review of the sector. Expect that to further dial up the pressure on adtech.

Web browsers have also been doing their bit by baking in more tracker blocking by default. And this summer Marketing Land proclaimed the third-party cookie dead — asking what’s next?

Alternatives and workarounds are springing up, and more will (such as stuffing more data into first-party cookies). But the notion of background tracking by default is under attack, if not quite yet coming unstuck.
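To illustrate that workaround (a hypothetical sketch; the cookie name and domain are invented): because browsers increasingly block cookies set from third-party domains, trackers instead have the publisher’s own server set a first-party cookie carrying an ID, which can later be shared with ad partners server-to-server, out of the browser’s sight.

from http.cookies import SimpleCookie

# Hypothetical sketch: the publisher sets a *first-party* cookie carrying
# a tracking ID. Browsers that block third-party cookies still accept it,
# because it comes from the site the user is actually visiting.
cookie = SimpleCookie()
cookie["_pub_uid"] = "uid-4b1d"                     # invented ID
cookie["_pub_uid"]["domain"] = "example-news.com"   # publisher's own domain
cookie["_pub_uid"]["path"] = "/"
cookie["_pub_uid"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year
cookie["_pub_uid"]["samesite"] = "Lax"

# Emitted as a Set-Cookie response header from the publisher's domain; the
# ID can then be forwarded to ad partners in server-to-server calls.
print(cookie.output())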

Ireland’s DPC is also progressing on a formal investigation of Google’s online Ad Exchange. Further real-time bidding complaints have been lodged across the EU too. This is an issue that won’t be going away soon, however much the adtech industry might wish it.

Year of the GDPR banhammer?

2020 is the year that privacy advocates are really hoping that Europe will bring down the hammer of regulatory enforcement. Thousands of complaints have been filed since the GDPR came into force but precious few decisions have been handed down. Next year looks set to be decisive — even potentially make or break for the data protection regime.

Facebook data misuse and voter manipulation back in the frame with latest Cambridge Analytica leaks

More details are emerging about the scale and scope of disgraced data company Cambridge Analytica’s activities in elections around the world — via a cache of internal documents that’s being released by former employee and self-styled whistleblower, Brittany Kaiser.

The now-shut-down data modelling company, which infamously used stolen Facebook data to target voters for President Donald Trump’s campaign in the 2016 U.S. election, was at the center of the data misuse scandal that, in 2018, wiped billions off Facebook’s share price and contributed to a $5BN FTC fine for the tech giant last summer.

However plenty of questions remain, including where, for whom and exactly how Cambridge Analytica and its parent entity SCL Elections operated; as well as how much Facebook’s leadership knew about the dealings of the firm that was using its platform to extract data and target political ads — helped by some of Facebook’s own staff.

Certain Facebook employees were referring to Cambridge Analytica as a “sketchy” company as far back as September 2015 — yet the tech giant only pulled the plug on platform access after the scandal went global in 2018.

Facebook CEO Mark Zuckerberg has also continued to maintain that he only personally learned about CA from a December 2015 Guardian article, which broke the story that Ted Cruz’s presidential campaign was using psychological data based on research covering tens of millions of Facebook users, harvested largely without permission. (It wasn’t until March 2018 that further investigative journalism blew the lid off the story — turning it into a global scandal.)

Former Cambridge Analytica business development director Kaiser, who had a central role in last year’s Netflix documentary about the data misuse scandal (The Great Hack), began her latest data dump late last week — publishing links to scores of previously unreleased internal documents via a Twitter account called @HindsightFiles. (At the time of writing Twitter has placed a temporary limit on viewing the account — citing “unusual activity”, presumably as a result of the volume of downloads it’s attracting.)

Since becoming part of the public CA story Kaiser has been campaigning for Facebook to grant users property rights over their data. She claims she’s releasing new documents from her former employer now because she’s concerned this year’s US election remains at risk of the same type of big-data-enabled voter manipulation that tainted the 2016 result.

“I’m very fearful about what is going to happen in the US election later this year, and I think one of the few ways of protecting ourselves is to get as much information out there as possible,” she told The Guardian.

“Democracies around the world are being auctioned to the highest bidder,” is the tagline on the Twitter account Kaiser is using to distribute the previously unpublished documents — more than 100,000 of which are set to be released over the coming months, per the newspaper’s report.

The releases are being grouped by country — with documents to date covering Brazil, Kenya and Malaysia. There is also a themed release dealing with issues pertaining to Iran, and another covering CA/SCL’s work for Republican John Bolton’s Political Action Committee in the U.S.

The releases look set to underscore the global scale of CA/SCL’s social media-fuelled operations, with Kaiser writing that the previously unreleased emails, project plans, case studies and negotiations span at least 65 countries.

A spreadsheet of associate officers included in the current cache lists SCL associates in a large number of countries and regions including Australia, Argentina, the Balkans, India, Jordan, Lithuania, the Philippines, Switzerland and Turkey, among others. A second tab listing “potential” associates covers political and commercial contacts in various other places including Ukraine and even China.

A UK parliamentary committee which investigated online political campaigning and voter manipulation in 2018 — taking evidence from Kaiser and CA whistleblower Chris Wylie, among others — urged the government to audit the PR and strategic communications industry, warning in its final report how “easy it is for discredited companies to reinvent themselves and potentially use the same data and the same tactics to undermine governments, including in the UK”.

“Data analytics firms have played a key role in elections around the world. Strategic communications companies frequently run campaigns internationally, which are financed by less than transparent means and employ legally dubious methods,” the DCMS committee also concluded.

The committee’s final report highlighted election and referendum campaigns in around thirty countries that SCL Elections (and its myriad “associated companies”) had been involved in. But per Kaiser’s telling, its activities — and/or ambitions — appear to have been considerably broader and even global in scope.

Documents released to date include a case study of work that CA was contracted to carry out in the U.S. for Bolton’s Super PAC — where it undertook what is described as “a personality-targeted digital advertising campaign with three interlocking goals: to persuade voters to elect Republican Senate candidates in Arkansas, North Carolina and New Hampshire; to elevate national security as an issue of importance and to increase public awareness of Ambassador Bolton’s Super PAC”.

Here CA writes that it segmented “persuadable and low-turnout voter populations to identify several key groups that could be influenced by Bolton Super PAC messaging”, targeting them with online and Direct TV ads — designed to “appeal directly to specific groups’ personality traits, priority issues and demographics”. 

Psychographic profiling — derived from CA’s modelling of Facebook user data — was used to segment U.S. voters into targetable groups, including for serving microtargeted online ads. The company badged voters with personality-specific labels such as “highly neurotic” — targeting individuals with customized content designed to prey on their fears and/or hopes, based on its analysis of voters’ personality traits.

The process of segmenting voters by personality and sentiment was made commercially possible by access to identity-linked personal data — which puts Facebook’s population-scale collation of identities and individual-level personal data squarely in the frame.

It was a cache of tens of millions of Facebook profiles, along with responses to a personality quiz app linked to Facebook accounts, that was sold to Cambridge Analytica in 2014 by a company called GSR and used to underpin its psychographic profiling of U.S. voters.
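To make the mechanics concrete, here is a toy illustration in Python (synthetic data and invented thresholds, not CA’s actual models) of how personality scores can be clustered into targetable segments of the kind described above:

import numpy as np
from sklearn.cluster import KMeans

# Toy illustration: synthetic Big Five ("OCEAN") scores standing in for
# the per-voter personality profiles CA derived from Facebook data.
rng = np.random.default_rng(0)
scores = rng.random((1000, 5))  # columns: openness, conscientiousness,
                                # extraversion, agreeableness, neuroticism

# Cluster voters into a handful of targetable segments.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)

# Badge each segment by its dominant trait -- e.g. the "highly neurotic"
# label -- and route messaging accordingly (0.6 is an invented threshold).
for segment, center in enumerate(kmeans.cluster_centers_):
    label = "highly neurotic" if center[4] > 0.6 else "other"
    print(f"segment {segment}: {label}")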

In evidence to the DCMS committee last year GSR’s co-founder, Aleksandr Kogan, argued that Facebook did not have a “valid” developer policy at the time, since he said the company did nothing to enforce the stated T&Cs — meaning users’ data was wide open to misappropriation and exploitation.

The UK’s data protection watchdog also took a dim view. In 2018 it issued Facebook with the maximum fine possible, under relevant national law, for the CA data breach — and warned in a report that democracy is under threat. The country’s information commissioner also called for an “ethical pause” of the use of online microtargeting ad tools for political campaigning.

No such pause has taken place.

Meanwhile for its part, since the Cambridge Analytica scandal snowballed into global condemnation of its business, Facebook has made loud claims to be ‘locking down’ its platform — including saying it would conduct an app audit and “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; and “ban any developer from our platform that does not agree to a thorough audit”.

However, close to two years later, there’s still no final report from the company on the upshot of this self-‘audit’.

And while Facebook was slapped with a headline-grabbing FTC fine on home soil, there was in fact no proper investigation; no requirement for it to change its privacy-hostile practices; and blanket immunity for top execs — even for any unknown data violations in the 2012 to 2018 period. So, um.

In another highly curious detail, GSR’s other co-founder, a data scientist called Joseph Chancellor, was in fact hired by Facebook in late 2015. The tech giant has never satisfactorily explained how it came to recruit one of the two individuals at the center of a voter manipulation data misuse scandal which continues to wreak hefty reputational damage on Zuckerberg and his platform. But being able to ensure Chancellor was kept away from the press during a period of intense scrutiny looks pretty convenient.

Last fall, the GSR co-founder was reported to have left Facebook — as quietly, and with as little explanation given, as when he arrived on the tech giant’s payroll.

So Kaiser seems quite right to be concerned that the data industrial complex will do anything to keep its secrets — given it’s designed and engineered to sell access to yours. Even as she has her own reasons to want to keep the story in the media spotlight.

Platforms whose profiteering purpose is to track and target people at global scale — which function by leveraging an asymmetrical ‘attention economy’ — have zero incentive to change or have change imposed upon them. Not when the propaganda-as-a-service business remains in such high demand, whether for selling actual things like bars of soap, or for hawking ideas with a far darker purpose.

BigID bags another $50M round as data privacy laws proliferate

Almost four months to the day after BigID announced a $50 million Series C, the company was back today with another $50 million round. The Series D came entirely from Tiger Global Management. The company has now raised a total of $144 million.

What warrants $100 million in investor interest in just four months is BigID’s mission: understanding what data a company holds and managing it in the context of increasing privacy regulation, including GDPR in Europe and CCPA in California, which went into effect this month.

BigID CEO and co-founder Dimitri Sirota admits that his company launched at the right moment in 2016, but says he and his co-founders had an inkling that there would be a shift in how governments view data privacy.

“Fortunately for us, some of the requirements that we said were going to be critical, like being able to understand what data you collect on each individual across your entire data landscape, have come to [pass],” Sirota told TechCrunch. While he understands that there are lots of competing companies going after this market, he believes that being early helped his startup establish a brand identity earlier than most.

Meanwhile, the privacy regulation landscape continues to evolve. Even as California privacy legislation is taking effect, many other states and countries are looking at similar regulations. Canada is looking at overhauling its existing privacy regulations.

Sirota says that he wasn’t actually looking to raise either the C or the D, and in fact still has B money in the bank, but when big investors want to give you money on decent terms, you take it while the money is there. These investors clearly see the data privacy landscape expanding and want to get involved. He recognizes that economic conditions can change quickly, and it can’t hurt to have money in the bank for when that happens.

That said, Sirota says you don’t raise money to keep it in the bank. At some point, you put it to work. The company has big plans to expand beyond its privacy roots and into other areas of security in the coming year. Although he wouldn’t go into too much detail about that, he said to expect some announcements soon.

For a company that is only four years old, it has been amazingly proficient at raising money: a $14 million Series A and a $30 million Series B in 2018, followed by the $50 million Series C last year and the $50 million round today. And, Sirota said, he didn’t even have to go looking for the latest funding. Investors came to him — no trips to Sand Hill Road, no pitch decks. Sirota wasn’t willing to discuss the company’s valuation, saying only that the investment was minimally dilutive.

BigID, which is based in New York City, already has some employees in Europe and Asia, but Sirota expects additional international expansion in 2020. Overall, the company has around 165 employees at the moment, and he sees that going up to 200 by mid-year as it pushes into some new adjacencies.

ByteDance & TikTok have secretly built a Deepfakes maker

TikTok parent company ByteDance has teamed up with one of the most controversial apps to let you insert your face into videos starring someone else. TechCrunch has learned that ByteDance has developed an unreleased feature using life-like Deepfakes technology that the app’s code refers to as Face Swap. Code in both TikTok and its Chinese sister app Douyin asks users to take a multi-angle biometric scan of their face, then choose from a selection of videos they want to add their face to and share.

Users scan themselves, pick a video, and have their face overlaid on the body of someone in the clip with ByteDance’s new Face Swap feature

The Face Swap option was built atop the API of Chinese Deepfakes app Zao, which uses artificial intelligence to blend one person’s face into another’s body as they move and synchronize their expressions. Zao went viral in September despite privacy and security concerns about how users’ facial scans might be abused.

The Deepfakes feature, if launched in Douyin and TikTok, could create a more controlled environment where face-swapping technology plus a limited selection of source videos can be used for fun instead of spreading misinformation. It might also raise awareness of the technology, so more people are aware that they shouldn’t believe everything they see online. But it’s also likely to heighten fears about what Zao and ByteDance could do with such sensitive biometric data — similar to what’s used to set up Face ID on iPhones. Zao was previously blocked by China’s WeChat for presenting “security risks”.

Several other tech companies have recently tried to consumerize watered-down versions of Deepfakes. The app Morphin lets you overlay a computerized rendering of your face on actors in GIFs. Snapchat offered a FaceSwap option for years that would switch the visages of two people in frame, or replace one on camera with one from your camera roll, and there are standalone apps that do that too like Face Swap Live. Then last month, TechCrunch spotted Snapchat’s new Cameos for inserting a real selfie into video clips it provides, though the results aren’t meant to look confusingly realistic.

But ByteDance’s teamup with Zao could bring convincingly life-like Deepfakes to TikTok and Douyin, two of the world’s most popular apps with over 1.5 billion downloads.

Zao in the Chinese iOS App Store

Hidden Inside TikTok and Douyin

TechCrunch received a tip about the news from Israeli in-app market research startup Watchful.ai. The company had discovered code for the Deepfakes feature in the latest version of TikTok’s and Douyin’s Android apps. Watchful.ai was able to activate the code in Douyin to generate screenshots of the feature, though it’s not currently available to the public.

First, users scan their face into TikTok. This also serves as an identity check to make sure you’re only submitting your own face, so you can’t make unconsented Deepfakes of anyone else using an existing photo or a single shot of their face. By asking you to blink, nod, and open and close your mouth while in focus and under proper lighting, Douyin can ensure you’re a live human and create a manipulable scan of your face that it can stretch and move to express different emotions or fill different scenes.

You’ll then be able to pick from videos ByteDance claims to have the rights to use, and it will replace the face of whoever’s in the clip with your own. You can then share or download the Deepfake video, though it will include an overlaid watermark the company claims will help distinguish the content as not being real.

Code in the apps reveals that the Face Swap feature relies on the Zao API. There are many references to this API, including strings like:

zaoFaceParams
/media/api/zao/video/create
enter_zaoface_preview_page
isZaoVideoType
zaoface_clip_edit_page

Watchful also discovered unpublished updates to TikTok and Douyin’s terms of service that cover privacy and usage of the Deepfakes feature. Inside the US version of TikTok’s Android app, English text in the code explains the feature and some of its terms of use:

“Your facial pattern will be used for this feature. Read the Drama Face Terms of Use and Privacy Policy for more details. Make sure you’ve read and agree to the Terms of Use and Privacy Policy before continuing. 1. To make this feature secure for everyone, real identity verification is required to make sure users themselves are using this feature with their own faces. For this reason, uploaded photos can’t be used; 2. Your facial pattern will only be used to generate face-change videos that are only visible to you before you post it. To better protect your personal information, identity verification is required if you use this feature later. 3. This feature complies with Internet Personal Information Protection Regulations for Minors. Underage users won’t be able to access this feature. 4. All video elements related to this feature provided by Douyin have acquired copyright authorization.”

A longer terms of use and privacy policy was also found in Chinese within Douyin. Translated into English, some highlights from the text include:

  • “The ‘face-changing’ effect presented by this function is a fictional image generated by the superimposition of our photos based on your photos. In order to show that the original work has been modified and the video generated using this function is not a real video, we will mark the video generated using this function. Do not erase the mark in any way.”

  • “The information collected during the aforementioned detection process and using your photos to generate face-changing videos is only used for live detection and matching during face-changing. It will not be used for other purposes… And matches are deleted immediately and your facial features are not stored.”

  • “When you use this function, you can only use the materials provided by us, you cannot upload the materials yourself. The materials we provide have been authorized by the copyright owner”.

  • “According to the ‘Children’s Internet Personal Information Protection Regulations’ and the relevant provisions of laws and regulations, in order to protect the personal information of children / youths, this function restricts the use of minors”.

We reached out to TikTok and Douyin for comment regarding the Deepfakes feature, when it might launch, how the privacy of biometric scans are protected, the age limit, and the nature of its relationship with Zao. However, TikTok declined to answer those questions. Instead a spokesperson insisted that “after checking with the teams I can confirm this is definitely not a function in TikTok, nor do we have any intention of introducing it. I think what you may be looking at is something slated for Douyin – your email includes screenshots that would be from Douyin, and a privacy policy that mentions Douyin. That said, we don’t work on Douyin here at TikTok.” They later told TechCrunch that “The inactive code fragments are being removed to eliminate any confusion”, which implicitly confirms that Face Swap code was found in TikTok.

A Douyin spokesperson told TechCrunch that “Douyin has no cooperation with Zao” despite references to Zao in the code. They also denied that the Face Swap terms of service appear in TikTok despite TechCrunch reviewing code from the app showing those terms of service and the feature’s functionality.

This is suspicious, and doesn’t explain why code for the Deepfakes feature, and special terms of service in English for the feature, appear in TikTok, not just in Douyin, where the feature can already be activated and a longer terms of service was spotted. TikTok’s US entity has previously denied complying with censorship requests from the Chinese government, in contradiction to sources who told the Washington Post that TikTok did censor some political and sexual content at China’s behest.

It’s possible that the Deepfakes Face Swap feature never officially launches in China or the US. But it’s fully functional, even if unreleased, and demonstrates ByteDance’s willingness to embrace the controversial technology despite its reputation for misinformation and non-consensual pornography. At least it’s restricting the use of the feature by minors, only letting you face-swap yourself, and preventing users from uploading their own source videos. That avoids it being used to create dangerous misinformation Deepfakes, like the one that made House Speaker Nancy Pelosi seem drunk.

“It’s very rare to see a major social networking app restrict a new, advanced feature to their users 18 and over only,” Watchful.ai co-founder and CEO Itay Kahana tells TechCrunch. “These deepfake apps might seem like fun on the surface, but they should not be allowed to become trojan horses, compromising IP rights and personal data, especially personal data from minors who are overwhelmingly the heaviest users of TikTok to date.”

TikTok has already been banned by the US Navy, and ByteDance’s acquisition and merger of Musical.ly into TikTok is under investigation by the Committee on Foreign Investment in the United States. Deepfake fears could further heighten scrutiny.

With the proper safeguards, though, face-changing technology could usher in a new era of user generated content where the creator is always at the center of the action. It’s all part of a new trend of personalized media that could be big in 2020. Social media has evolved from selfies to Bitmoji to Animoji to Cameos and now consumerized Deepfakes. When there are infinite apps and videos and notifications to distract us, making us the star could be the best way to hold our attention.

Here’s where California residents can stop companies selling their data

California’s new privacy law is now in effect, allowing state residents to take better control of the data that’s collected on them — from social networks, banks, credit agencies, and more.

There’s just one catch: the companies, many of which lobbied against the law, don’t make it easy.

California’s Consumer Privacy Act (CCPA) gives anyone who resides in the state the right to access and obtain copies of the data that companies store on them, to delete that data, and to opt out of companies selling or monetizing their data. It’s the biggest state-level overhaul of privacy rules in a generation. State regulators can impose fines and other sanctions on companies that violate the law — although the law’s enforcement provisions do not take effect until July. That’s probably a good thing for companies, given most major tech giants operating in the state are not ready to comply with the law.

Just as with Europe’s GDPR, many companies have spun up new privacy policies and data portals in preparation, which allow consumers to access their data and to opt out of their data being sold on to third parties, such as advertisers. But good luck finding them. Most companies aren’t transparent about where their data portals are; often they’re out of sight and buried in privacy policies, near-guaranteeing that nobody will find them.

Just two days into the new law, some are already making it easier for the average Californian.

Damian Finol created a running directory of company pages that allow California residents to opt-out of their data being sold, and request their information. The directory is updated frequently, and so far includes banks, retail giants, airlines, car rental services, gaming giants, and cell companies — to name a few.

caprivacy.me is a simple directory of links to where California residents can tell companies not to sell their data, and request what data companies store on them. (Screenshot: TechCrunch)

The project is still in its infancy but relies on community contributions (and anyone can submit a suggestion), he said. In less than a day, it had already racked up more than 80 links.

“I’m passionate about privacy and allowing people to declare what their personal privacy model is,” Finol told TechCrunch.

“I grew up queer in Latin America in the 1990s, so keeping private the truth about me was vital. Nowadays, I think of my LGBTQ siblings in places like the Middle East where, if their privacy is violated, they can face capital punishment,” he said, explaining his motivations behind the directory.

There’s no easy way — yet — to opt out in one go. Anyone in California who wants to opt out has to go through each link. But once it’s done, it’s done. Put on a pot of coffee and get started.

The California Consumer Privacy Act officially takes effect today

California’s much-debated privacy law officially takes effect today, a year and a half after it was passed and signed — but it’ll be six more months before you see the hammer drop on any scofflaw tech companies that sell your personal data without your permission.

The California Consumer Privacy Act, or CCPA, is a state-level law that requires, among other things, that companies notify users of the intent to monetize their data, and give them a straightforward means of opting out of said monetization.

Here’s a top-level summary of some of its basic tenets (a minimal sketch of the opt-out mechanic follows the list):

  • Businesses must disclose what information they collect, what business purpose they collect it for and any third parties they share that data with.
  • Businesses will be required to comply with official consumer requests to delete that data.
  • Consumers can opt out of their data being sold, and businesses can’t retaliate by changing the price or level of service.
  • Businesses can, however, offer “financial incentives” for being allowed to collect data.
  • California authorities are empowered to fine companies for violations.
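For businesses, the opt-out mechanic is simple to state even if compliance at scale is not; the hypothetical sketch below (names invented; real compliance also covers disclosure, deletion requests, identity verification and response deadlines) shows the bookkeeping it implies.

# Hypothetical sketch of CCPA "do not sell" bookkeeping -- the gate a
# business must check before selling or sharing personal data.
opted_out = set()

def record_opt_out(consumer_id):
    """Honor a 'Do Not Sell My Personal Information' request."""
    opted_out.add(consumer_id)

def may_sell(consumer_id):
    """Must be checked before any sale of this consumer's data."""
    return consumer_id not in opted_out

record_opt_out("consumer-42")
assert not may_sell("consumer-42")  # sale now blocked for this consumer
assert may_sell("consumer-7")       # no request on file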

The law is described in considerably more detail here, but the truth is that it will probably take years before its implications for businesses and regulators are completely understood and brought to bear. In the meantime the industries that will be most immediately and obviously affected are panicking.

A who’s-who of internet-reliant businesses has publicly opposed the CCPA. While they have been careful to avoid saying such regulation is unnecessary, they have said that this regulation is unnecessary. What we need, they say, is a federal law.

That’s true as far as it goes — it would protect more people and there would be less paperwork for companies that now must adapt their privacy policies and reporting to CCPA’s requirements. But the call for federal regulation is transparently a stall tactic, and an adequate bill at that level would likely take a year or more of intensive work even at the best of times, let alone during an election year while the President is being impeached.

So California wisely went ahead and established protections for its own residents, though as a consequence it will have aroused the ire of many companies based there.

A six-month grace period follows today’s official activation of the CCPA; this is a normal and necessary part of breaking in such a law, when honest mistakes can go unpunished and the inevitable bugs in the system can be squelched.

But starting in July, offenses will be assessed, with fines at the scale of thousands of dollars per violation — something that adds up quickly at the scale companies like Google and Facebook operate at.

Adapting to the CCPA will be difficult, but as the establishment of GDPR in Europe has shown, it’s far from impossible, and at any rate the CCPA’s requirements are considerably less stringent than the GDPR’s. Still, if your company isn’t already working on getting into compliance, better get started.