A Christian-friendly payments processor spilled 6 million transaction records online

A little-known payments processor, which bills itself as a Christian-friendly company that does “not process credit card transactions for morally objectionable businesses,” left a database containing years’ worth of customer payment transactions online.

The database contained 6.7 million records dating back to 2013, and was being updated daily. But the database was not protected with a password, allowing anyone to look inside.

Security researcher Anurag Sen found the database. TechCrunch identified its owner as Cornerstone Payment Systems, which provides payment processing to ministries, non-profits, and other morally aligned businesses across the U.S., including churches, religious radio personalities, and pro-life groups.

Payment processors handle credit and debit card transactions on behalf of a business.

A review of a portion of the database showed that each record contained payee names, email addresses and, in many but not all cases, postal addresses. Each record also had the name of the merchant being paid, the card type, the last four digits of the card number, and its expiry date.

The data also contained the specific date and time of each transaction. Each record also indicated whether a payment was successful or declined. Some of the records also contained notes from the customer, often describing what the payment was for — such as a donation or a commemoration.

Although there was some evidence of tokenization — a way of replacing sensitive information with a unique string of letters and numbers — the database itself was not encrypted.
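
For readers unfamiliar with the technique, tokenization swaps a sensitive value for a meaningless stand-in and keeps the real value in a separate vault. A minimal Python sketch of the general idea (an illustration of the concept, not Cornerstone's actual implementation) might look like this:

    import secrets

    # Toy token vault: the real card number lives in one access-controlled store,
    # while transaction records (and any exposed database) only ever hold the token.
    _vault = {}

    def tokenize(card_number: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        _vault[token] = card_number
        return token

    def detokenize(token: str) -> str:
        # Only a system with access to the vault can map the token back.
        return _vault[token]

    token = tokenize("4242424242424242")
    print(token)              # e.g. tok_3f9c1a2b... is safe to store with the transaction
    print(detokenize(token))  # the real card number never needs to sit in the payments database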

We used some of the email addresses to contact a number of affected customers. Two people whose names and transactions were found in the database confirmed their information was accurate.

After TechCrunch contacted Cornerstone, the company pulled the database offline.

“Cornerstone Payment Systems has secured all server access,” said spokesperson Tony Adamo.

“It is vital to note that Cornerstone Payment Systems does not store complete credit card data or check data. We have put in place enhanced security measures locking down all URLs. We are currently reviewing all logs for any potential access,” he added.

Cornerstone did not say whether it will inform state regulators of the security lapse, which it’s required to do under California’s data breach notification law.

Securiti.ai scores $50M Series B to modernize data governance

Securiti.ai, a San Jose startup, is working to bring a modern twist to data governance and security. Today the company announced a $50 million Series B led by General Catalyst with participation from Mayfield.

The company, which only launched in 2018, reports it has already raised $81 million. What is attracting all of this investment in such a short time is that the company is going after a problem that has become an increasing pain point for companies all over the world, thanks to a growing body of data privacy regulation like GDPR and CCPA.

These laws force companies to understand the data they hold, find ways to pull that data into a single view and, if needed, respond to customer requests to remove or redact some of it. It’s a hard problem to solve when customer data is spread across multiple applications and often shared with third parties and partners.

Company CEO and founder Rehan Jalil says the goal of his startup is to provide an operations platform for customer data, an area he has coined “PrivacyOps”, with the aim of helping companies give customers more control over their data, as laws increasingly require.

“In the end it’s all about giving individuals the rights on the data: the right to privacy, the right to deletion, the right to redaction, the right to stop the processing. That’s the charter and the mission of the company,” he told TechCrunch.

You begin by defining your data sources, then a bot goes out and gathers customer data across all of the sources you have defined. The company has links to over 250 common modern and legacy data sources out of the box. Once the bot grabs the data and creates a central record, humans come in to review the results and make any adjustments and final decisions on how to handle a data request.
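
In rough terms, that gather-then-review loop is simple to sketch. The names and structures below are hypothetical illustrations of the workflow described above, not Securiti.ai's actual product or API:

    from dataclasses import dataclass, field

    # Hypothetical sketch of the gather-then-review workflow described above.
    # None of these names correspond to Securiti.ai's actual product or API.
    @dataclass
    class DataSource:
        name: str
        records: list  # each record is a dict of customer attributes

    @dataclass
    class SubjectRequest:
        email: str
        action: str  # e.g. "access", "delete" or "redact"
        findings: list = field(default_factory=list)

    def gather(request: SubjectRequest, sources: list) -> SubjectRequest:
        """The 'bot' pass: sweep every defined source for records tied to the data subject."""
        for source in sources:
            for record in source.records:
                if record.get("email") == request.email:
                    request.findings.append((source.name, record))
        return request  # a human reviewer then approves, adjusts or rejects each finding

    crm = DataSource("crm", [{"email": "jane@example.com", "name": "Jane"}])
    billing = DataSource("billing", [{"email": "jane@example.com", "card_last4": "4242"}])
    print(gather(SubjectRequest("jane@example.com", "access"), [crm, billing]).findings)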

It has a number of security templates for different kinds of privacy regulations, such as GDPR and CCPA. The bot finds data that must meet these requirements and lets the governance team see how many records could be in violation or meet a set of criteria you define.

Securiti.ai data view. Screenshot: Securiti.ai (cropped)

There are a number of tools in the package, including ways to look at your privacy readiness, vendor assessments, data maps and data breaches, giving a broad view of data privacy.

The company launched in 2018, and in 15 months has already grown to 185 employees, a number that is expected to increase in the next year with the new funding.

An adult sexting site exposed thousands of models’ passports and driver’s licenses

A popular sexting website has exposed thousands of photo IDs belonging to models and sex workers who earn commissions from the site.

SextPanther, an Arizona-based adult site, stored over 11,000 identity documents, including passports, driver’s licenses, and Social Security numbers, on an exposed Amazon Web Services (AWS) storage bucket without a password. The company says on its website that it uses the documents to verify the ages of the models with whom users communicate.
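
For context, buckets like this are typically exposed because public access restrictions were never applied. As a general illustration (not a description of SextPanther's setup), an operator with the right credentials could lock down a bucket using boto3, AWS's Python SDK; the bucket name below is hypothetical:

    import boto3

    s3 = boto3.client("s3")

    # Block every form of public access on the bucket (hypothetical bucket name).
    s3.put_public_access_block(
        Bucket="example-id-documents",
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

    # Confirm the settings took effect.
    print(s3.get_public_access_block(Bucket="example-id-documents")["PublicAccessBlockConfiguration"])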

Most of the exposed identity documents contain personal information, such as names, home addresses, dates of birth, biometrics, and photos.

Although most of the data came from models in the U.S., some of the documents were supplied by workers in Canada, India, and the United Kingdom.

The site allows models and sex workers to earn money by exchanging text messages, photos, and videos with paying users, including explicit and nude content. The exposed storage bucket also contained over a hundred thousand photos and videos sent and received by the workers.

It was not immediately clear who owned the storage bucket. TechCrunch asked U.K.-based penetration testing company Fidus Information Security, which has experience in discovering and identifying exposed data, to help.

Researchers at Fidus quickly found evidence suggesting the exposed data could belong to SextPanther.

An hour after we alerted the site’s owner, Alexander Guizzetti, to the exposed data, the storage bucket was pulled offline.

“We have passed this on to our security and legal teams to investigate further. We take accusations like this very seriously,” Guizzetti said in an email. He did not explicitly confirm that the bucket belonged to his company.

Using information from identity documents matched against public records, we contacted several models whose information was exposed by the security lapse.

“I’m sure I sent it to them,” said one model, referring to her driver’s license, which was exposed. (We agreed to withhold her name given the sensitivity of the data.) We passed along a photo of her license as it was found in the exposed bucket. She confirmed it was her license, but said the information on it is no longer current.

“I truly feel awful for others whom have signed up with their legit information,” she said.

The security lapse comes a week after researchers found a similar cache of highly sensitive personal information of sex workers on the adult webcam streaming site PussyCash.

More than 850,000 documents were insecurely stored in another unprotected storage bucket.

Facebook’s dodgy defaults face more scrutiny in Europe

Italy’s competition authority, the AGCM, has launched proceedings against Facebook for failing to fully inform users about the commercial uses it makes of their data.

At the same time a German court has today upheld a consumer group’s right to challenge the tech giant over data and privacy issues in the national courts.

Lack of transparency

The Italian authority’s action, which could result in a fine of €5 million for Facebook, follows an earlier decision by the regulator in November 2018, when it found the company had not been dealing plainly with users about the underlying value exchange involved in signing up to the ‘free’ service and fined it €5 million for failing to properly inform users how their information would be used commercially.

In a press notice about its latest action, the watchdog notes Facebook has removed a claim from its homepage — which had stated that the service ‘is free and always will be’ — but finds users are still not being informed, “with clarity and immediacy”, about how the tech giant monetizes their data.

The Authority has prohibited Facebook from continuing what it dubs a “deceptive practice” and ordered it to publish an amending declaration on its homepage in Italy, as well as on the Facebook app and on the personal page of each registered Italian user.

In a statement responding to the watchdog’s latest action, a Facebook spokesperson told us:

We are reviewing the Authority decision. We made changes last year — including to our Terms of Service — to further clarify how Facebook makes money. These changes were part of our ongoing commitment to give people more transparency and control over their information.

Last year Italy’s data protection agency also fined Facebook $1.1M — in that case for privacy violations attached to the Cambridge Analytica data misuse scandal.

Dodgy defaults

In separate but related news, a ruling by a German court today found that Facebook can continue to use the advertising slogan that its service is ‘free and always will be’ — on the grounds that it does not require users to hand over monetary payments in exchange for using the service.

A local consumer rights group, vzbv, had sought to challenge Facebook’s use of the slogan — arguing it’s misleading, given the platform’s harvesting of user data for targeted ads. But the court disagreed.

However that was only one of a number of data protection complaints filed by the group — 26 in all. And the Berlin court found in its favor on a number of other fronts.

Significantly, vzbv has won the right to bring data protection-related legal challenges within Germany even with the pan-EU General Data Protection Regulation in force — opening the door to strategic litigation by consumer advocacy bodies and privacy rights groups in what is a very pro-privacy market.

This looks interesting because one of Facebook’s favored legal arguments in a bid to derail privacy challenges at an EU Member State level has been to argue those courts lack jurisdiction — given that its European HQ is sited in Ireland (and GDPR includes provision for a one-stop shop mechanism that pushes cross-border complaints to a lead regulator).

But this ruling looks like it will make it tougher for Facebook to funnel all data and privacy complaints via the heavily backlogged Irish regulator — which has, for example, been sitting on a GDPR complaint over forced consent by adtech giants (including Facebook) since May 2018.

The Berlin court also agreed with vzbv’s argument that Facebook’s privacy settings and T&Cs violate laws around consent — such as a location service already being activated in the Facebook mobile app, and a pre-ticked setting that made users’ profiles indexable by search engines by default.

The court also agreed that certain pre-formulated conditions in Facebook’s T&C do not meet the required legal standard — such as a requirement that users agree to their name and profile picture being used “for commercial, sponsored or related content”, and another stipulation that users agree in advance to all future changes to the policy.

Commenting in a statement, Heiko Dünkel from the law enforcement team at vzbv, said: “It is not the first time that Facebook has been convicted of careless handling of its users’ data. The Chamber of Justice has made it clear that consumer advice centers can take action against violations of the GDPR.”

We’ve reached out to Facebook for a response.

London’s Met Police switches on live facial recognition, flying in face of human rights concerns

While EU lawmakers are mulling a temporary ban on the use of facial recognition to safeguard individuals’ rights, as part of a risk-focused plan to regulate AI, London’s Met Police has today forged ahead with deploying the privacy-hostile technology — flipping the switch on operational use of live facial recognition in the UK capital.

The deployment comes after a multi-year period of trials by the Met and police in South Wales.

The Met says its use of the controversial technology will be targeted to “specific locations… where intelligence suggests we are most likely to locate serious offenders”.

“Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences,” it adds.

It also claims cameras will be “clearly signposted”, adding that officers “deployed to the operation will hand out leaflets about the activity”.

“At a deployment, cameras will be focused on a small, targeted area to scan passers-by,” it writes. “The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.”

The biometric system is being provided to the Met by Japanese IT and electronics giant, NEC.

In a press statement, assistant commissioner Nick Ephgrave claimed the force is taking a balanced approach to using the controversial tech.

“We all want to live and work in a city which is safe: the public rightly expect us to use widely available technology to stop criminals. Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance,” he said.

London has seen a rise in violent crime in recent years, with murder rates hitting a ten-year peak last year.

The surge in violent crime has been linked to cuts to policing services — although the new Conservative government has pledged to reverse cuts enacted by earlier Tory administrations.

The Met says its hope is that the AI-powered tech will help it tackle serious crime, including serious violence, gun and knife crime and child sexual exploitation, and “help protect the vulnerable”.

However, its phrasing is more than a little ironic, given that facial recognition systems can be prone to racial bias, owing to factors such as bias in the data sets used to train AI algorithms.

So in fact there’s a risk that police-use of facial recognition could further harm vulnerable groups who already face a disproportionate risk of inequality and discrimination.

Yet the Met’s PR doesn’t mention the risk of the AI tech automating bias.

Instead it takes pains to couch the technology as an “additional tool” to assist its officers.

“This is not a case of technology taking over from traditional policing; this is a system which simply gives police officers a ‘prompt’, suggesting “that person over there may be the person you’re looking for”, it is always the decision of an officer whether or not to engage with someone,” it adds.

While the use of a new tech tool may start with small deployments, as is being touted here, the history of software development underlines how the potential to scale is readily baked in.

A ‘targeted’ small-scale launch also prepares the ground for London’s police force to push for wider public acceptance of a highly controversial and rights-hostile technology via a gradual building out process. Aka surveillance creep.

On the flip side, the text of the draft of an EU proposal for regulating AI which leaked last week — floating the idea of a temporary ban on facial recognition in public places — noted that a ban would “safeguard the rights of individuals”. Although it’s not yet clear whether the Commission will favor such a blanket measure, even temporarily.

UK rights groups have reacted with alarm to the Met’s decision to ignore concerns about facial recognition.

Liberty accused the force of ignoring the conclusion of a report it commissioned during an earlier trial of the tech — which it says concluded the Met had failed to consider human rights impacts.

It also suggested such use would not meet key legal requirements.

“Human rights law requires that any interference with individuals’ rights be in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society’,” the report notes, suggesting the Met’s earlier trials of facial recognition tech “would be held unlawful if challenged before the courts”.

A petition set up by Liberty to demand a stop to facial recognition in public places has passed 21,000 signatures.

Discussing the legal framework around facial recognition and law enforcement last week, Dr Michael Veale, a lecturer in digital rights and regulation at UCL, told us that in his view the EU’s data protection framework, GDPR, forbids facial recognition by private companies “in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate”.

A UK man who challenged a Welsh police force’s trial of facial recognition has a pending appeal after losing the first round of a human rights challenge. Although in that case the challenge pertains to police use of the tech — rather than, as in the Met’s case, a private company (NEC) providing the service to the police.

UN says malware built by NSO Group ‘most likely’ used in Bezos phone hack

A United Nations analysis of a forensic report says a mobile hacking tool built by spyware maker NSO Group was “most likely” used to hack into Amazon founder Jeff Bezos’ phone.

The remarks, published by U.N. human rights experts on Wednesday, said the Israel-based spyware maker likely used its Pegasus mobile spyware to exfiltrate gigabytes of data from Bezos’ phone in May 2018, about six months after the Saudi government first obtained the spyware.

It comes a day after news emerged, citing a forensics report commissioned to examine the Amazon founder’s phone, that the malware was delivered from a number belonging to Saudi crown prince Mohammed bin Salman. The forensics report, carried out by FTI Consulting, said it was “highly probable” that the phone hack was triggered by a malicious video sent over WhatsApp to Bezos’ phone. Within hours, large amounts of data on Bezos’ phone had been exfiltrated.

U.N. experts Agnes Callamard and David Kaye, who were given a copy of the forensics report, said the breach of Bezos’ phone was part of “a pattern of targeted surveillance of perceived opponents and those of broader strategic importance to the Saudi authorities.”

But the report left open the possibility that technology developed by another mobile spyware maker may have been used.

The Saudi government has rejected the claims, calling them “absurd.”

NSO Group said in a statement that its technology “was not used in this instance,” saying its technology “cannot be used on U.S. phone numbers.” The company said any suggestion otherwise was “defamatory” and threatened legal action.

Forensics experts are said to have begun looking at Bezos’ phone after he accused the National Enquirer of blackmail last year. In a tell-all Medium post, Bezos described how he was targeted by the tabloid, which obtained and published private text messages and photos from his phone, prompting an investigation into the leak.

The subsequent forensic report, which TechCrunch has not yet seen, claims the initial breach began after Bezos and the Saudi crown prince exchanged phone numbers in April 2018, a month before the hack.

The report said several other prominent figures, including Saudi dissidents and political activists, also had their phones infected with the same mobile malware around the time of the Bezos phone breach. Some of those whose phones were infected were close to Jamal Khashoggi, a prominent Saudi critic and columnist for the Washington Post — which Bezos owns — who was murdered five months later.

“The information we have received suggests the possible involvement of the Crown Prince in surveillance of Mr. Bezos, in an effort to influence, if not silence, The Washington Post’s reporting on Saudi Arabia,” the U.N. experts said.

U.S. intelligence concluded that bin Salman ordered Khashoggi’s death.

The U.N. experts said the Saudis purchased the Pegasus malware, and used WhatsApp as a way to deliver the malware to Bezos’ phone.

WhatsApp, which is owned by Facebook, filed a lawsuit against the NSO Group for creating and using the Pegasus malware, which exploits a since-fixed vulnerability in the messaging platform. Once a device is exploited, sometimes silently and without the target knowing, the operators can download data from it. Facebook said at the time the malware was delivered to more than 1,400 targeted devices.

The U.N. experts said they will continue to investigate the “growing role of the surveillance industry” used for targeting journalists, human rights defenders, and owners of media outlets.

Amazon did not immediately comment.

Where top VCs are investing in adtech and martech

Lately, the venture community’s relationship with advertising tech has been a rocky one.

Advertising is no longer the venture oasis it was in the past, with the flow of VC dollars in the space dropping dramatically in recent years. According to data from Crunchbase, adtech deal flow has fallen at a roughly 10% compounded annual growth rate over the last five years.

While subsectors like privacy or automation still manage to pull in funding, with an estimated 90%-plus of digital ad spend growth going to incumbent behemoths like Facebook and Google, the pool of high-growth opportunities in the adtech space seems to narrow by the week.

Despite these pains, funding for marketing technology has remained much more stable and healthy; over the last five years, deal flow in marketing tech has only dropped at a 3.5% compounded annual growth rate according to Crunchbase, with annual invested capital in the space hovering just under $2 billion.
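
As a reminder of what those figures mean, compound annual growth rate is the constant yearly rate that turns a starting value into an ending value over a number of years. Here is a quick Python sketch with illustrative, made-up deal-flow figures (not Crunchbase's data) showing how the roughly -10% adtech rate compares with martech's -3.5% over five years:

    def cagr(start_value: float, end_value: float, years: int) -> float:
        """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
        return (end_value / start_value) ** (1 / years) - 1

    # Illustrative, made-up deal-flow figures over a five-year span.
    print(f"adtech:  {cagr(100, 59.0, 5):+.1%}")   # about -10% a year, roughly -41% overall
    print(f"martech: {cagr(100, 83.7, 5):+.1%}")   # about -3.5% a year, roughly -16% overall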

Given the movement in the adtech and martech sectors, we wanted to try to gauge where opportunity still exists in the verticals and which startups may have the best chance at attracting venture funding today. We asked four leading VCs who work at firms spanning early to growth stages to share what’s exciting them most and where they see opportunity in marketing and advertising:

Several of the firms we spoke to (both included and not included in this survey) stated that they are not actively investing in advertising tech at present.

UK watchdog sets out “age appropriate” design code for online services to keep kids’ privacy safe

The UK’s data protection watchdog has today published a set of design standards for Internet services which are intended to help protect the privacy of children online.

The Information Commissioner’s Office (ICO) has been working on the Age Appropriate Design Code since the 2018 update of domestic data protection law — as part of a government push to create ‘world-leading’ standards for children when they’re online.

UK lawmakers have grown increasingly concerned about the ‘datafication’ of children when they go online and may be too young to legally consent to being tracked and profiled under existing European data protection law.

The ICO’s code comprises 15 standards of what it calls “age appropriate design”, which the regulator says reflect a “risk-based approach”. These include stipulating that settings should be set to ‘high privacy’ by default; that only the minimum amount of data needed to provide the service should be collected and retained; and that children’s data should not be shared unless there’s a reason to do so that’s in their best interests.

Profiling should also be off by default. The code also takes aim at dark-pattern UI designs that seek to manipulate users into acting against their own interests, saying “nudge techniques” should not be used to “lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections”.
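
To make the default-settings idea concrete, a service following this approach would ship with its data-hungry options switched off until a child (or a parent) actively turns them on. The snippet below is a hypothetical sketch of what such defaults could look like, not text drawn from the ICO code:

    # Hypothetical 'high privacy by default' settings for a service likely to be
    # accessed by children; everything data-hungry starts switched off.
    DEFAULT_CHILD_SETTINGS = {
        "geolocation": False,                    # location options off by default
        "profiling": False,                      # behavioural profiling off by default
        "share_data_with_third_parties": False,  # no disclosure without a compelling reason
        "location_visible_to_others": False,     # and must reset to off after each session
    }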

“The focus is on providing default settings which ensures that children have the best possible access to online services whilst minimising data collection and use, by default,” the regulator writes in an executive summary.

While the age appropriate design code is focused on protecting children, it applies to a very broad range of online services — with the regulator noting that “the majority of online services that children use are covered” and also stipulating “this code applies if children are likely to use your service” [emphasis ours].

This means it could be applied to anything from games, to social media platforms to fitness apps to educational websites and on-demand streaming services — if they’re available to UK users.

“We consider that for a service to be ‘likely’ to be accessed [by children], the possibility of this happening needs to be more probable than not. This recognises the intention of Parliament to cover services that children use in reality, but does not extend the definition to cover all services that children could possibly access,” the ICO adds.

Here are the 15 standards in full as the regulator describes them:

  1. Best interests of the child: The best interests of the child should be a primary consideration when you design and develop online services likely to be accessed by a child.
  2. Data protection impact assessments: Undertake a DPIA to assess and mitigate risks to the rights and freedoms of children who are likely to access your service, which arise from your data processing. Take into account differing ages, capacities and development needs and ensure that your DPIA builds in compliance with this code.
  3. Age appropriate application: Take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users. Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.
  4. Transparency: The privacy information you provide to users, and other published terms, policies and community standards, must be concise, prominent and in clear language suited to the age of the child. Provide additional specific ‘bite-sized’ explanations about how you use personal data at the point that use is activated.
  5. Detrimental use of data: Do not use children’s personal data in ways that have been shown to be detrimental to their wellbeing, or that go against industry codes of practice, other regulatory provisions or Government advice.
  6. Policies and community standards: Uphold your own published terms, policies and community standards (including but not limited to privacy policies, age restriction, behaviour rules and content policies).
  7. Default settings: Settings must be ‘high privacy’ by default (unless you can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child).
  8. Data minimisation: Collect and retain only the minimum amount of personal data you need to provide the elements of your service in which a child is actively and knowingly engaged. Give children separate choices over which elements they wish to activate.
  9. Data sharing: Do not disclose children’s data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the child.
  10. Geolocation: Switch geolocation options off by default (unless you can demonstrate a compelling reason for geolocation to be switched on by default, taking account of the best interests of the child). Provide an obvious sign for children when location tracking is active. Options which make a child’s location visible to others must default back to ‘off’ at the end of each session.
  11. Parental controls: If you provide parental controls, give the child age appropriate information about this. If your online service allows a parent or carer to monitor their child’s online activity or track their location, provide an obvious sign to the child when they are being monitored.
  12. Profiling: Switch options which use profiling ‘off’ by default (unless you can demonstrate a compelling reason for profiling to be on by default, taking account of the best interests of the child). Only allow profiling if you have appropriate measures in place to protect the child from any harmful effects (in particular, being fed content that is detrimental to their health or wellbeing).
  13. Nudge techniques: Do not use nudge techniques to lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections.
  14. Connected toys and devices: If you provide a connected toy or device ensure you include effective tools to enable conformance to this code.
  15. Online tools: Provide prominent and accessible tools to help children exercise their data protection rights and report concerns.

The Age Appropriate Design Code also defines children as under the age of 18 — a higher bar than current UK data protection law, which, for example, sets 13 as the age at which children can legally consent to being tracked online.

So, assuming (very wildly) that Internet services were suddenly to decide to follow the code to the letter, setting trackers off by default and not nudging users to weaken privacy-protecting defaults by manipulating them to give up more data, the code could — in theory — raise the level of privacy both children and adults typically get online.

However it’s not legally binding — so there’s a pretty fat chance of that.

Although the regulator does make a point of noting that the standards in the code are backed by existing data protection laws, which it does regulate and can legally enforce (and which include clear principles like ‘privacy by design and default’) — pointing out it has powers to take action against law breakers, including “tough sanctions” such as orders to stop processing data and fines of up to 4% of a company’s global turnover.

So, in a way, the regulator appears to be saying: ‘Are you feeling lucky data punk?’

Last April the UK government published a white paper setting out its proposals for regulating a range of online harms — including seeking to address concern about inappropriate material that’s available on the Internet being accessed by children.

The ICO’s Age Appropriate Design Code is intended to support that effort. So there’s also a chance that some of the same sorts of stipulations could be baked into the planned online harms bill.

“This is not, and will not be, ‘law’. It is just a code of practice,” said Neil Brown, an Internet, telecoms and tech lawyer at Decoded Legal, discussing the likely impact of the suggested standards. “It shows the direction of the ICO’s thinking, and its expectations, and the ICO has to have regard to it when it takes enforcement action but it’s not something with which an organisation needs to comply as such. They need to comply with the law, which is the GDPR [General Data Protection Regulation] and the DPA [Data Protection Act] 2018.

“The code of practice sits under the DPA 2018, so companies which are within the scope of that are likely to want to understand what it says. The DPA 2018 and the UK GDPR (the version of the GDPR which will be in place after Brexit) covers controllers established in the UK, as well as overseas controllers which target services to people in the UK or monitor the behaviour of people in the UK. Merely making a service available to people in the UK should not be sufficient.”

“Overall, this is consistent with the general direction of travel for online services, and the perception that more needs to be done to protect children online,” Brown also told us.

“Right now, online services should be working out how to comply with the GDPR, the ePrivacy rules, and any other applicable laws. The obligation to comply with those laws does not change because of today’s code of practice. Rather, the code of practice shows the ICO’s thinking on what compliance might look like (and, possibly, goldplates some of the requirements of the law too).”

Organizations that choose to take note of the code — and are in a position to be able to demonstrate they’ve followed its standards — stand a better chance of persuading the regulator they’ve complied with relevant privacy laws, per Brown.

“Conversely, if they want to say that they comply with the law but not with the code, that is (legally) possible, but might be more of a struggle in terms of engagement with the ICO,” he added.

Zooming back out, the government said last fall that it’s committed to publishing draft online harms legislation for pre-legislative scrutiny “at pace”.

But at the same time it dropped a controversial plan included in a 2017 piece of digital legislation which would have made age checks for accessing online pornography mandatory — saying it wanted to focus on developing “the most comprehensive approach possible to protecting children”, i.e. via the online harms bill.

How comprehensive the touted ‘child protections’ will end up being remains to be seen.

Brown suggests age verification could come through as a “general requirement”, given the age verification component of the Digital Economy Act 2017 was dropped — and “the government has said that these will be swept up in the broader online harms piece”.

The government has also been consulting with tech companies on possible ways to implement age verification online.

However the difficulties of regulating perpetually iterating Internet services — many of which are also operated by companies based outside the UK — have been writ large for years. (And are now mired in geopolitics.)

While the enforcement of existing European digital privacy laws remains, to put it politely, a work in progress…

Adblock Plus’s Till Faida on the shifting shape of ad blocking

Publishers hate ad blockers, but millions of internet users embrace them — and many browsers even bake ad blocking in as a feature, including Google’s own Chrome. At the same time, growing numbers of publishers are walling off free content from visitors who hard-block ads, or even asking users directly to whitelist their sites.

It’s a fight for attention from two very different sides.

Some form of ad blocking is here to stay, so long as advertisements are irritating and the adtech industry remains deaf to genuine privacy reform. But the nature of the ad-blocking business is generally closer to filtering than blocking, so where is it headed?
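
The filtering-versus-blocking distinction comes down to exception rules: requests are matched against block patterns, but allowlist entries (such as whitelisted ‘acceptable ads’) override them. Here is a toy Python sketch of that logic, using simplified stand-in patterns rather than Adblock Plus's real filter syntax or engine:

    import fnmatch

    # Toy filter engine: block-list patterns with allow-list exceptions layered on top,
    # which is what makes this filtering rather than blanket blocking.
    BLOCK_PATTERNS = ["*://ads.example.com/*", "*/banner/*"]
    ALLOW_PATTERNS = ["*://ads.example.com/acceptable/*"]  # e.g. whitelisted 'acceptable ads'

    def should_block(url: str) -> bool:
        if any(fnmatch.fnmatch(url, p) for p in ALLOW_PATTERNS):
            return False  # exception rules win, so the request goes through
        return any(fnmatch.fnmatch(url, p) for p in BLOCK_PATTERNS)

    print(should_block("https://ads.example.com/track.js"))          # True  -> filtered out
    print(should_block("https://ads.example.com/acceptable/ad.js"))  # False -> allowed through
    print(should_block("https://news.example.com/story.html"))       # False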

We chatted with Till Faida, co-founder and CEO of eyeo, maker of Adblock Plus (ABP), to take the temperature of an evolving space that’s never been a stranger to controversy — including fresh calls for his company to face antitrust scrutiny.

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

Alphabet and Google CEO, Sundar Pichai, is the latest tech giant kingpin to make a public call for AI to be regulated while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.

In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.

Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.

It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.

“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).

“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”

For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)

Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.

Funny that.

Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.

The only thing that’s better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.

Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)

The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.

It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)

Last week a leaked draft of the Commission’s proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light-touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.

The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.

While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.

In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.

For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped-for vast additional revenue scale it can get by supercharging expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everywhere else in between) isn’t derailed by hard legal limits on where AI can actually be applied.

“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.

The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.

Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.

You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.

But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”. 

And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.

What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot apply.

Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there’s no actual legal binds there.)

At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.

But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.

Some far-sighted regulators have called for laws that contain at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on.

And a ban would be far harder for platform giants to simply bend to their will.