Facebook’s photo transfer tool opens to more users in Europe, LatAm and Africa

Facebook is continuing to open up access to a data porting tool it launched in Ireland in December. The tool lets users of its network transfer photos and videos they have stored on its servers directly to another photo storage service, such as Google Photos, via encrypted transfer.

A Facebook spokesman confirmed to TechCrunch that access to the transfer tool is being rolled out today to the UK, the rest of the European Union and additional countries in Latin America and Africa.

Late last month Facebook also opened up access to multiple markets in APAC and LatAm, per the spokesman. The tech giant has previously said the tool will be available worldwide in the first half of 2020.

The setting to “transfer a copy of your photos and videos” is accessed via the Your Facebook Information settings menu.

The tool is based on code developed via Facebook’s participation in the Data Transfer Project (DTP) — a collaborative effort launched in 2018 and backed by the likes of Apple, Facebook, Google, Microsoft and Twitter, which committed to building a common framework using open source code for connecting any two online service providers in order to support “seamless, direct, user initiated portability of data between the two platforms”.

In recent years the dominance of tech giants has led to an increase in competition complaints — garnering the attention of policymakers and regulators.

In the EU, for instance, competition regulators are now eyeing the data practices of tech giants including Amazon, Facebook and Google, while in the US tech giants including Google, Facebook, Amazon, Apple and Microsoft are also facing antitrust scrutiny. As more questions are asked about competition, big tech has come under pressure to respond — hence the collective push on portability.

Last September Facebook also released a white paper laying out its thinking on data portability, which seeks to frame portability as a challenge to privacy — in what looks like an attempt to lobby for a regulatory moat to limit portability of the personal data mountain it’s amassed on users.

At the same time, the release of a portability tool gives Facebook something to point regulators to when they come calling — even as the tool only allows users to port a very small portion of the personal data the service holds on them. Such tools are also only likely to be sought out by the minority of more tech-savvy users.

Facebook’s transfer tool also currently only supports direct transfer to Google’s cloud storage — greasing a pipe for users to pass a copy of their facial biometrics from one tech giant to another.

We checked, and from our location in the EU, Google Photos is the only direct destination offered via Facebook’s drop-down menu thus far.

However, the spokesman implied wider utility could be coming — saying the DTP project has updated adapters for photo APIs from SmugMug (which owns Flickr); and added new integrations for the music streaming service Deezer; the decentralized social network Mastodon; and Tim Berners-Lee’s decentralization project Solid.

Though it’s not clear why there’s no option offered as yet within Facebook to port directly to any of these other services. Presumably additional development work is still required on the third party’s side to implement the direct data transfer. (We’ve asked Facebook for more on this and will update if we get a response.)

The aim of the DTP is to develop a standardized framework to make it easier for others to join without having to “recreate the wheel every time they want to build portability tools”, as the spokesman put it, adding: “We built this tool with the support of current DTP partners, and hope that even more companies and partners will join us in the future.”

He also emphasized that the code is open source and claimed it’s “fairly straightforward” for a company that wishes to plug its service into the framework, especially if it already has a public API.

“They just need to write a DTP adapter against that public API,” he suggested.
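
To make that concrete, here’s a minimal sketch of what such an adapter can look like. Note the real DTP framework is written in Java and defines its own exporter and importer interfaces; the class, endpoint and field names below are hypothetical, purely to illustrate the pattern of wrapping an existing public API behind a common, pageable export contract.

```python
import requests  # assumes the service exposes a plain HTTPS/JSON public API


class PhotoExportAdapter:
    """Hypothetical DTP-style exporter: wraps a service's existing public
    photos API behind a generic paging interface a transfer framework can
    drive. (Illustrative only; the real Data Transfer Project is Java.)"""

    def __init__(self, api_base_url, access_token):
        self.api_base_url = api_base_url
        self.session = requests.Session()
        # Transfers are user-initiated, so the adapter acts with a
        # user-granted OAuth token rather than any privileged access.
        self.session.headers["Authorization"] = f"Bearer {access_token}"

    def export_page(self, page_token=None):
        """Fetch one page of the user's photos, mapped to a neutral shape."""
        params = {"pageToken": page_token} if page_token else {}
        resp = self.session.get(f"{self.api_base_url}/photos", params=params)
        resp.raise_for_status()
        payload = resp.json()
        # Translate the service-specific response into the shared data model
        # the destination side's importer understands.
        photos = [
            {"title": item.get("title", ""), "download_url": item["url"]}
            for item in payload.get("photos", [])
        ]
        return photos, payload.get("nextPageToken")  # token is None when done
```

A matching importer on the destination side would do the mirror-image job, accepting items in that shared shape and writing them via its own service’s upload endpoint; the framework can then connect any two services that both ship adapters.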

“Now that the tool has launched, we look forward to working with even more experts and companies – especially startups and new platforms looking to provide an on-ramp for this type of service,” the spokesman added.

Adtech giant Criteo is being investigated by France’s data watchdog

Adtech giant Criteo is under investigation by the French data protection watchdog, the CNIL, following a complaint filed by privacy rights campaign group Privacy International.

“I can confirm that the CNIL has opened up an investigation into Criteo. We are in the trial phase, so we can’t communicate at this stage,” a CNIL spokesperson told us.

Privacy International has been campaigning for more than a year for European data protection agencies to investigate several adtech players and data brokers involved in programmatic advertising.

Yesterday it said the French regulator has finally opened a probe of Criteo.

“CNIL’s confirmation that they are investigating Criteo is important and we warmly welcome it,” it said in the statement. “The AdTech ecosystem is based on vast privacy infringements, exploiting people’s data on a daily basis. Whether it’s through deceptive consent banners or by infesting mental health websites, these companies enable a surveillance environment where all your moves online are tracked to profile and target you, with little space to contest.”

We’ve reached out to Criteo for comment.

Back in November 2018, a few months after Europe’s updated data protection framework (GDPR) came into force, Privacy International filed complaints against a number of companies operating in the space — including Criteo.

A subsequent investigation by the rights group last year also found adtech trackers on mental health websites sharing sensitive user data for ad targeting purposes.

Last May Ireland’s Data Protection Commission also opened a formal investigation into Quantcast, following Privacy International’s complaint and a swathe of separate GDPR complaints targeting the real-time bidding (RTB) process involved in programmatic advertising.

The crux of the RTB complaints is that the process is inherently insecure, since it entails the leaky broadcasting of people’s personal data with no way for it to be controlled once it’s out there, which sits at odds with GDPR’s requirement for personal data to be processed securely.
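
For a sense of what that broadcasting involves, below is an illustrative sketch of the user-related fields a bid request can carry, loosely modeled on the OpenRTB spec used in programmatic auctions. The site and all values are invented; real requests vary by exchange.

```python
# Illustrative bid request fields, loosely modeled on the OpenRTB spec
# (all values invented). A request like this is broadcast to many bidders
# simultaneously; the complainants' point is that once it is sent, there
# is no technical means of controlling where the data ends up.
bid_request = {
    "id": "auction-7f3a",                    # auction ID
    "site": {"page": "https://mental-health-example.org/article"},
    "device": {
        "ip": "203.0.113.42",                # can geolocate the user
        "geo": {"lat": 48.85, "lon": 2.35},  # sometimes precise location
        "ua": "Mozilla/5.0 ...",             # input for browser fingerprinting
    },
    "user": {
        "id": "exchange-cookie-id-123",      # persistent pseudonymous ID
        "buyeruid": "bidder-matched-id-456", # links to the bidder's own profile
    },
}
```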

In June the UK’s Information Commissioner’s Office also fired a warning shot at the behavioral ad industry — saying it had “systemic concerns” about the compliance of RTB. The regulator has so far failed to take any enforcement action, though, despite issuing another blog post last December in which it discussed the “industry problem” with lawfulness — preferring instead to encourage adtech to reform itself. (Relevant: Google announcing it will phase out support for third party cookies.)

In its 2018 adtech complaint, Privacy International called for France’s CNIL, the UK’s ICO and Ireland’s DPC to investigate Criteo, Quantcast and a third company called Tapad — arguing that their processing of Internet users’ data (including special category personal data) has no lawful basis, fulfilling GDPR’s requirements for neither consent nor legitimate interest.

Privacy International’s complaint argued that additional GDPR principles — including transparency, fairness, purpose limitation, data minimisation, accuracy, integrity and confidentiality — were also not being fulfilled; and called for further investigation to ascertain compliance with other legal rights and safeguards GDPR gives Europeans over their personal data, including the right to information; access; rights related to automated decision making and profiling; data protection by design and default; and data protection impact assessments.

In specific complaints against Criteo, Privacy International raised concerns about its Shopper Graph tool, which is used to predict real-time product interest and which Criteo has touted as having data on nearly three-quarters of the world’s shoppers, fed by cross-device online tracking of people’s digital activity that is not limited to cookies and gets supplemented by offline data. It also flagged Criteo’s Dynamic Retargeting tool, which enables the retargeting of tracked shoppers with behaviorally targeted ads via Criteo sharing data with scores of ‘partners’, including publishers and ad exchanges involved in the RTB process to auction online ad slots.

At the time of the original complaint Privacy International said Criteo told it that it was relying on consent to track individuals, obtained via its advertising (and publisher) partners — who, per GDPR, would need to obtain informed, specific and freely given consent up-front before dropping any tracking cookies (or other tracer technologies) — as well as claiming the legal base known as legitimate interest, saying it believed this was a valid ground so that it could comply with its contractual obligations toward its clients and partners.

However, the legitimate interests basis requires a balancing test to be carried out to consider impacts on the individual’s interests, as part of a wider assessment process to determine whether it can be applied.

It’s Privacy International’s contention that neither consent nor legitimate interest is valid in Criteo’s case.

Now the CNIL will look in detail at its data processing to determine whether or not there are GDPR violations. If it finds breaches of the law, the regulation allows for monetary penalties to be issued that can scale as high as 4% of a company’s global turnover. EU data protection agencies can also order changes to how data is processed.

Commenting on the CNIL’s investigation of Criteo, Dr Lukasz Olejnik, an independent privacy researcher and consultant — whose research on the privacy implications of RTB predates all the aforementioned complaints — told us: “I am not surprised with the investigation as in Real-Time Bidding transparency and consent were always very problematic and at best non-obvious. I don’t know how retrospective consent could be reconciled.”

“It is rather beyond doubt that a thorough privacy impact assessment (data protection impact assessment) had to be conducted for many aspects of such systems or their uses, so this particular angle of the complaint should not be controversial,” Olejnik added.

“My long views on Real-Time Bidding is that it was not a technology created with particular focus on security and privacy. As a transformative technology in the long-term it also contributed to broader issues like the dissemination of harmful content like political disinformation.”

The CNIL probe certainly adds to Criteo’s business woes, with the company reporting declining revenue last year and predicting more of the same in 2020. More aggressive moves by browser makers to bake in tracker blocking are clearly having an impact on its core business.

In a recent interview with Digiday, Criteo CEO Megan Clarken talked about wanting to broaden the range of services the company offers to advertisers and reduce its reliance on traditional retargeting.

The company has also been investing heavily in artificial intelligence in recent years — ploughing in $23M in 2018 to open an AI lab in Paris.

Australia sues Facebook over Cambridge Analytica, fine could scale to $529BN

Australia’s privacy watchdog is suing Facebook over the Cambridge Analytica data breach — which, back in 2018, became a global scandal that wiped billions off the tech giant’s share price yet only led to Facebook picking up a $5BN FTC fine.

Should Australia prevail in its suit against the tech giant the monetary penalty could be exponentially larger.

Australia’s Privacy Act sets out a provision for a civil penalty of up to $1,700,000 to be levied per contravention — and the national watchdog believes there were around 311,127 local Facebook users in the cache of ~86M profiles lifted by Cambridge Analytica. So the potential fine here is circa $529BN (311,127 contraventions x $1.7M apiece). (A very far cry from the £500k Facebook paid in the UK over the same data misuse scandal.)

In a statement published on its website today the Office of the Australian Information Commissioner (OAIC) says it has lodged proceedings against Facebook in a federal court alleging the company committed serious and/or repeated interferences with privacy.

The suit alleges the personal data of Australian Facebook users was disclosed to the This is Your Digital Life app for a purpose other than that for which it was collected — thereby breaching Australia’s Privacy Act 1988. It further claims the data was exposed to the risk of being disclosed to Cambridge Analytica and used for political profiling purposes, and passed to other third parties.

This is Your Digital Life was an app built by an app developer called GSR that was hired by Cambridge Analytica to obtain and process Facebook users’ data for political ad targeting purposes.

The events from which the suit stems took place on Facebook’s platform between March 2014 and May 2015, when user data was being siphoned off by GSR under contract with Cambridge Analytica — which worked with US political campaigns, including Ted Cruz’s presidential run and, later, Donald Trump’s.

GSR was co-founded by two psychology researchers, Aleksandr Kogan and Joseph Chancellor. And in a still unexplained twist in the saga, Facebook hired Chancellor, in about November 2015, soon after some of its own staffers had warned internally about the “sketchy” business Cambridge Analytica was conducting on its ad platform. Chancellor has never spoken to the press and subsequently departed Facebook as quietly as he arrived.

In a concise statement summing up its legal action against Facebook the OAIC writes:

Facebook disclosed personal information of the Affected Australian Individuals. Most of those individuals did not install the “This is Your Digital Life” App; their Facebook friends did. Unless those individuals undertook a complex process of modifying their settings on Facebook, their personal information was disclosed by Facebook to the “This is Your Digital Life” App by default. Facebook did not adequately inform the Affected Australian Individuals of the manner in which their personal information would be disclosed, or that it could be disclosed to an app installed by a friend, but not installed by that individual.

Facebook failed to take reasonable steps to protect those individuals’ personal information from unauthorised disclosure. Facebook did not know the precise nature or extent of the personal information it disclosed to the “This is Your Digital Life” App. Nor did it prevent the app from disclosing to third parties the personal information obtained. The full extent of the information disclosed, and to whom it was disclosed, accordingly cannot be known. What is known, is that Facebook disclosed the Affected Australian Individuals’ personal information to the “This is Your Digital Life” App, whose developers sold personal information obtained using the app to the political consulting firm Cambridge Analytica, in breach of Facebook’s policies.

As a result, the Affected Australian Individuals’ personal information was exposed to the risk of disclosure, monetisation and use for political profiling purposes.

Commenting in a statement, Australia’s information commissioner and privacy commissioner, Angelene Falk, added: “All entities operating in Australia must be transparent and accountable in the way they handle personal information, in accordance with their obligations under Australian privacy law. We consider the design of the Facebook platform meant that users were unable to exercise reasonable choice and control about how their personal information was disclosed.

“Facebook’s default settings facilitated the disclosure of personal information, including sensitive information, at the expense of privacy. We claim these actions left the personal data of around 311,127 Australian Facebook users exposed to be sold and used for purposes including political profiling, well outside users’ expectations.”

Reached for comment, a Facebook spokesperson sent this statement:

We’ve actively engaged with the OAIC over the past two years as part of their investigation. We’ve made major changes to our platforms, in consultation with international regulators, to restrict the information available to app developers, implement new governance protocols and build industry-leading controls to help people protect and manage their data. We’re unable to comment further as this is now before the Federal Court.

Grindr sold by Chinese owner after US raised national security concerns

Chinese gaming giant Beijing Kunlun has agreed to sell popular gay dating app Grindr for about $608 million, ending a tumultuous four years under Chinese ownership.

Reuters reports that the Chinese company sold its 98% stake in Grindr to a U.S.-based company, San Vicente Acquisition Partners.

The app, originally developed in Los Angeles, raised national security concerns after it was acquired by Beijing Kunlun in 2016 for $93 million. That ownership was later scrutinized by a U.S. government national security panel, the Committee on Foreign Investment in the United States (CFIUS), which reportedly told the Beijing-based parent company that its ownership of Grindr constituted a national security threat.

CFIUS expressed concern that data from the app’s roughly 27 million users could be used by the Chinese government. Last year, it was reported that while under Chinese ownership, Grindr allowed engineers in Beijing access to the personal data of millions of U.S. users, including their private messages and HIV status.

Little is known about San Vicente Acquisition, but a person with knowledge of the deal said that the company is made up of a group of investors that’s fully owned and controlled by Americans. Reuters said that one of those investors is James Lu, a former executive at Chinese search giant Baidu.

The deal is subject to shareholder approval and a review by CFIUS.

A spokesperson for Grindr declined to comment on the record.

Cathay Pacific fined £500k by UK’s ICO over data breach disclosed in 2018

Cathay Pacific has been issued with a £500,000 penalty by the UK’s data watchdog for security lapses which exposed the personal details of some 9.4 million customers globally — 111,578 of whom were from the UK.

The penalty, which is the maximum fine possible under relevant UK law, was announced today by the Information Commissioner’s Office (ICO), following a multi-month investigation. It pertains to a breach disclosed by the airline in fall 2018.

At the time Cathay Pacific said it had first identified unauthorized access to its systems in March 2018, though it did not explain why it took more than six months to make a public disclosure of the breach.

The failure to secure its systems resulted in unauthorised access to passengers’ personal details, including names, passport and identity details, dates of birth, postal and email addresses, phone numbers and historical travel information.

Today the ICO said the earliest date of unauthorised access to Cathay Pacific’s systems was October 14, 2014, while the earliest known date of unauthorised access to personal data was February 7, 2015.

“The ICO found Cathay Pacific’s systems were entered via a server connected to the internet and malware was installed to harvest data,” the regulator writes in a press release, adding that it found “a catalogue of errors” during the investigation, including back-up files that were not password protected; unpatched Internet-facing servers; use of operating systems that were no longer supported by the developer; and inadequate antivirus protection.

Since Cathay’s systems were compromised in this breach, the UK has transposed an update to the European Union’s data protection framework into its national law, which bakes in strict disclosure requirements for breaches involving personal data — requiring data controllers to inform national regulators within 72 hours of becoming aware of a breach.

The General Data Protection Regulation (GDPR) also includes a much more substantial penalties regime — with fines that can scale as high as 4% of global annual turnover.

However, owing to the timing of the unauthorized access, the ICO has treated this breach as falling under previous UK data protection legislation.

Under GDPR the airline would likely have faced a substantially larger fine.

Commenting on Cathay Pacific’s penalty in a statement, Steve Eckersley, the ICO’s director of investigations, said:

People rightly expect when they provide their personal details to a company, that those details will be kept secure to ensure they are protected from any potential harm or fraud. That simply was not the case here.

This breach was particularly concerning given the number of basic security inadequacies across Cathay Pacific’s system, which gave easy access to the hackers. The multiple serious deficiencies we found fell well below the standard expected. At its most basic, the airline failed to satisfy four out of five of the National Cyber Security Centre’s basic Cyber Essentials guidance.

Under data protection law organisations must have appropriate security measures and robust procedures in place to ensure that any attempt to infiltrate computer systems is made as difficult as possible.

Reached for comment the airline reiterated its regret over the data breach and said it has taken steps to enhance its security “in the areas of data governance, network security and access control, education and employee awareness, and incident response agility”.

“Substantial amounts have been spent on IT infrastructure and security over the past three years and investment in these areas will continue,” Cathay Pacific said in the statement. “We have co-operated closely with the ICO and other relevant authorities in their investigations. Our investigation reveals that there is no evidence of any personal data being misused to date. However, we are aware that in today’s world, as the sophistication of cyber attackers continues to increase, we need to and will continue to invest in and evolve our IT security systems.”

“We will continue to co-operate with relevant authorities to demonstrate our compliance and our ongoing commitment to protecting personal data,” it added.

Last summer the ICO slapped another airline, British Airways, with a far more substantial fine for a breach that leaked data on 500,000 customers, also as a result of security lapses.

In that case the airline faced a record £183.39M penalty — totalling 1.5% of its total revenues for 2018 — as the breach occurred when the GDPR applied.

FCC proposes $200M in fines for wireless carriers that sold your location for years

The FCC has officially and finally determined that the major wireless carriers in the U.S. broke the law by secretly selling subscribers’ location data for years with almost no constraints or disclosure. But some of its Commissioners decry the $200 million penalty proposed to be paid by these enormously rich corporations, calling it far too small to be proportionate to the harm caused to consumers.

Under the proposed fines, T-Mobile would pay $91M; AT&T, $57M; Verizon, $48M; and Sprint, $12M. (Disclosure: TechCrunch is owned by Verizon Media. This does not affect our coverage in the slightest.)

The case has stretched on for more than a year and a half after initial reports that private companies were accessing and selling real-time subscriber location data to anyone willing to pay. Such a blatant abuse of consumers’ privacy caused an immediate outcry, and carriers responded with apparent chagrin — but failed to terminate or even evaluate these programs in a timely fashion. It turns out they were run with almost no oversight at all, with responsibility delegated to the third party companies to ensure compliance.

Meanwhile the FCC was called on to investigate the nature of these offenses, and spent more than a year doing so in near-total silence, with even its own Commissioners calling out the agency’s lack of communication on such a serious issue.

Finally, in January, FCC Chairman Ajit Pai — who, it really must be noted here, formerly worked for one of the main companies implicated, Securus — announced that the investigation had found the carriers had indeed violated federal law and would soon be punished.

Today brings the official documentation of the fines, as well as commentary from the Commission. The general feeling seems to be that while it’s commendable to recognize this violation and propose what could be considered substantial fines, the whole thing is, as Commissioner Rosenworcel put it, “a day late and a dollar short.”

The scale of the fines, they say, has little to do with the scale of the offenses — and that’s because the FCC did not adequately establish, or even attempt to establish, the scale of those offenses. Essentially, the agency didn’t look at the number or nature of actual instances of harm — it just asked the carriers to provide the number of contracts entered into.

And why not go after the third-party companies that bought and resold the location data? They’re not being fined at all. Even if the FCC lacked the authority to do so, it could have handed off the case to the Justice Department or local authorities that could determine whether these companies violated other laws.

As Rosenworcel notes in her own statement, the fines are also extraordinarily generous even beyond this minimal method of calculating harm:

The agency proposes a $40,000 fine for the violation of our rules—but only on the first day. For every day after that, it reduces to $2,500 per violation. The FCC heavily discounts the fines the carriers potentially owe under the law and disregards the scope of the problem. On top of that, the agency gives each carrier a thirty-day pass from this calculation. This thirty day “get-out-of-jail-free” card is plucked from thin air.

Given that this investigation took place over such a long period, it’s strange that it did not seek to hear from the public or subpoena further details from the companies facilitating the violations. Meanwhile the carriers sought to declare a huge proportion of their responses to the FCC’s questions confidential, including publicly available information, and the agency didn’t question these assertions until Starks and Rosenworcel intervened.

$200M sounds like a lot, but divided among several billion-dollar communications organizations it’s peanuts, especially when you consider that these location-selling agreements may have netted far more than that in the years they were active. Only the carriers know exactly how many times their subscribers’ privacy was violated, and how much money they made from that abuse. And because the investigation has ended without the authority over these matters asking about it, we likely never will know.

The proposed fines, called a Notice of Apparent Liability, are only a tentative finding, and the carriers have 30 days to respond or ask for an extension — the latter being the more likely. Once they respond (perhaps challenging the amount or something else) the FCC can take as long as it wants to come up with a final fine amount. And once that is issued, there is no requirement that the fine actually be collected — the FCC has in fact declined to collect fines before, once the heat died down, though not with a penalty of this scale.

The only thing that led to this case being investigated at all was public attention, and apparently public attention is necessary to ensure the federal government follows through on its duties.

Clearview said its facial recognition app was only for law enforcement as it courted private companies

After claiming that it would only sell its controversial facial recognition software to law enforcement agencies, a new report suggests that Clearview AI is less than discerning about its client base. According to Buzzfeed News, the small, secretive company looks to have shopped its technology far and wide. While Clearview counts ICE, the U.S. Attorney’s Office for the Southern District of New York and the retail giant Macy’s among its paying customers, many more private companies are testing the technology through 30-day free trials. Non-law enforcement entities that appeared on Clearview’s client list include Walmart, Eventbrite, the NBA, Coinbase, Equinox, and many others.

According to the report, even if a company or organization has no formal relationship with Clearview, its individual employees might be testing the software. “In some cases… officials at a number of those places initially had no idea their employees were using the software or denied ever trying the facial recognition tool,” Buzzfeed News reports.

In one example, the NYPD denied a relationship with Clearview, even though as many as 30 officers within the department had conducted 11,000 searches through the software, according to internal logs.

A week ago, Clearview’s CEO Hoan Ton-That was quoted on Fox Business stating that his company’s technology is “strictly for law enforcement”—a claim the company’s budding client list appears to contradict.

“This list, if confirmed, is a privacy, security, and civil liberties nightmare,” ACLU Staff Attorney Nathan Freed Wessler said of the revelations. “Government agents should not be running our faces against a shadily assembled database of billions of our photos in secret and with no safeguards against abuse.”

On top of its reputation as an invasive technology, critics argue that facial recognition tech isn’t accurate enough to be used in the high-consequence settings it’s often touted for. Facial recognition software has notoriously struggled to accurately identify non-white, non-male faces, a phenomenon that undergirds arguments that biased data has the potential to create devastating real-world consequences.

Little is known about the technology that powers Clearview’s own algorithms and accuracy beyond that the company scrapes public images from many online sources, aggregates that data, and allows users to search it for matches. In light of Clearview’s reliance on photos from social networks, Facebook, YouTube, and Twitter have all issued the company cease-and-desist letters for violating their terms of use.

Clearview’s small pool of early investors includes the private equity firm Kirenaga Partners and famed investor and influential tech conservative Peter Thiel. Thiel, who sits on the board of Facebook, also co-founded Palantir, a data analytics company that’s become a favorite of law enforcement.

Amazon Transcribe can now automatically redact personally identifiable information

Amazon Transcribe, the AWS-based speech-to-text service, launched a small but important new feature this morning that, if implemented correctly, can automatically hide your personally identifiable information from call transcripts.

One of the most popular use cases for Transcribe is to create a record of customer calls. Almost by default, that involves exchanging information like your name, address or a credit card number. In my experience, some call centers stop the recording when you’re about to exchange credit card numbers, for example, but that’s not always the case.

With this new feature, Transcribe can automatically identify information like a social security number, credit card number, bank account number, name, email address, phone number and mailing address and redact that. The tool automatically replaces this information with ‘[PII]’ in the transcript.
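
For developers, redaction is enabled per transcription job. Here’s a minimal sketch using the boto3 Transcribe client; the job, bucket and file names below are placeholders.

```python
import boto3

transcribe = boto3.client("transcribe")

# Start a transcription job with PII redaction enabled (job, bucket and key
# names are placeholders). With RedactionOutput set to "redacted", Transcribe
# replaces identified PII with "[PII]" in the transcript it produces.
transcribe.start_transcription_job(
    TranscriptionJobName="customer-call-001",
    LanguageCode="en-US",
    Media={"MediaFileUri": "s3://my-call-recordings/call-001.wav"},
    OutputBucketName="my-redacted-transcripts",
    ContentRedaction={
        "RedactionType": "PII",
        "RedactionOutput": "redacted",  # "redacted_and_unredacted" keeps both versions
    },
)
```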

There are, of course, other tools that can remove PII from existing documents. Often, though, these are data loss prevention tools, aimed at keeping data from leaking out of the company when you share documents with outsiders. With this new Transcribe tool, at least some of this data will never be available for sharing (unless, of course, you keep a copy of the audio).

In total, Transcribe currently supports 31 languages. Of those, it can transcribe 6 in real-time for captioning and other use cases.

Facebook has paused election reminders in Europe after data watchdog raises transparency concerns

Big tech’s lead privacy regulator in Europe has intervened to flag transparency concerns about a Facebook election reminder feature — asking the tech giant to provide it with information about what data it collects from users who interact with the notification and how their personal data is used, including whether it’s used for targeting them with ads.

Facebook confirmed to TechCrunch it has paused use of the election reminder feature in the European Union while it works on addressing the Irish Data Protection Commission (DPC)’s concerns.

Facebook’s Election Day Reminder (EDR) feature is a notification the platform can display to users on the day of an election — ostensibly to encourage voter participation. However, as ever with the data-driven ad business, there’s a whole wrapper of associated questions about what information Facebook’s platform might be harvesting when it chooses to deploy the nudge — and how the (ad) business is making use of the data.

On an FAQ on its website about the election reminder Facebook writes vaguely that users “may see reminders and posts about elections and voting”.

Facebook does not explain what criteria it uses to determine whether to target (or not to target) a particular user with an election reminder.

Yet a study carried out by Facebook in 2012, working with academics from the University of California at San Diego, found an election day reminder sent via its platform on the day of the 2010 US congressional elections boosted voter turnout by about 340,000 people — which has led to concern that selective deployment of election reminders by Facebook could have the potential to influence poll outcomes.

Facebook could, for example, choose to target an election reminder at certain types of users who it knows, via its profiling of them, are likely to lean towards voting a particular way. Or the reminder could be targeted at key regions where a poll result could be swung with a small shift in voter turnout. So the lack of transparency around how the tool is deployed by Facebook is also concerning.

Under EU law, meanwhile, entities processing personal data that reveals political opinions must also meet a higher standard of regulatory compliance for this so-called “special category data” — including around transparency and consent. (If relying on user consent to collect this type of data it would need to be explicit — requiring a clear, purpose-specific statement that the user affirms, for instance.)

In a statement today the DPC writes that it notified Facebook of a number of “data protection concerns” related to the EDR ahead of the recent Irish General Election — which took place February 8 — raising particular concerns about “transparency to users about how personal data is collected when interacting with the feature and subsequently used by Facebook”.

The DPC said it asked Facebook to make some changes to the feature but because these “remedial actions” could not be implemented in advance of the Irish election it says Facebook decided not to activate the EDR during that poll.

We understand the main issue for the regulator centers on the provision of in-context transparency for users on how their personal data would be collected and used when they engaged with the feature — such as the types of data being collected and the purposes the data is used for, including whether it’s used for advertising purposes.

In its statement, the DPC says that following its intervention Facebook has paused use of the EDR across the EU, writing: “Facebook has confirmed that the Election Day Reminder feature will not be activated during any EU elections pending a response to the DPC addressing the concerns raised.”

It’s not clear how long this intervention-triggered pause will last — neither the DPC nor Facebook has given a timeframe for when the transparency problems might be resolved.

We reached out to Facebook with questions on the DPC’s intervention.

The company sent this statement, attributed to a spokesperson:

We are committed to processing people’s information lawfully, fairly, and in a transparent manner. However, following concerns raised by the Irish Data Protection Commission around whether we give users enough information about how the feature works, we have paused this feature in the EU for the time being. We will continue working with the DPC to address their concerns.

“We believe that the Election Day reminder is a positive feature which reminds people to vote and helps them find their polling place,” Facebook added.

Forthcoming elections in Europe include Slovak parliamentary elections this month; North Macedonian and Serbian parliamentary elections, which are due to take place in April; and UK local elections in early May.

The intervention by the Irish DPC against Facebook is the second such public move in around a fortnight — after the regulator also published a statement revealing it had raised concerns about Facebook’s planned launch of a dating feature in the EU.

That launch was also put on ice following its intervention, although Facebook claimed it chose to postpone the rollout to get the launch “right”; while the DPC said it’s waiting for adequate responses and expects the feature won’t be launched before it gets them.

It looks like public statements of concern could be a new tactic by the regulator to try to address the sticky challenge of reining in big tech.

The DPC is certainly under huge pressure to deliver key decisions to prove that the EU’s flagship General Data Protection Regulation (GDPR) is functioning as intended. Critics say it’s taking too long, even as its case load continues to pile up.

No GDPR decisions on major cases involving tech giants including Facebook and Google have yet been handed down in Dublin — despite the GDPR fast approaching its second birthday.

At the same time it’s clear tech giants have no shortage of money, resources and lawyers to inject friction into the regulatory process — with the aim of slowing down any enforcement.

So it’s likely the DPC is looking for avenues to bag some quick wins — making more of its interventions public, and thereby pressuring a major player like Facebook to respond to the publicity generated when the regulator airs its “concerns”.