UK offers cash for CSAM detection tech targeted at e2e encryption

The UK government is preparing to spend over half a million dollars to encourage the development of detection technologies for child sexual abuse material (CSAM) that can be bolted onto end-to-end encrypted messaging platforms to scan for the illegal material, as part of its ongoing policy push around Internet and child safety.

In a joint initiative today, the Home Office and the Department for Digital, Culture, Media and Sport (DCMS) announced a “Tech Safety Challenge Fund” — which will distribute up to £425,000 (~$584k) to five organizations (£85k/$117k each) to develop “innovative technology to keep children safe in environments such as online messaging platforms with end-to-end encryption”.

A Challenge statement for applicants to the program adds that the focus is on solutions that can be deployed within e2e encrypted environments “without compromising user privacy”.

“The problem that we’re trying to fix is essentially the blindfolding of law enforcement agencies,” a Home Office spokeswoman told us, arguing that if tech platforms go ahead with their “full end-to-end encryption plans, as they currently are… we will be completely hindered in being able to protect our children online”.

While the announcement does not name any specific platforms of concern, Home Secretary Priti Patel has previously attacked Facebook’s plans to expand its use of e2e encryption — warning in April that the move could jeopardize law enforcement’s ability to investigate child abuse crime.

Facebook-owned WhatsApp already uses e2e encryption, so that platform is a clear target for whatever ‘safety’ technologies might result from this taxpayer-funded challenge.

Apple’s iMessage and FaceTime are among other existing mainstream messaging tools which use e2e encryption.

So there is potential for very widespread application of any ‘child safety tech’ developed through this government-backed challenge. (Per the Home Office, technologies submitted to the Challenge will be evaluated by “independent academic experts”. The department was unable to provide details of who exactly will assess the projects.)

Patel, meanwhile, is continuing to apply high level pressure on the tech sector on this issue — including aiming to drum up support from G7 counterparts.

Writing in a paywalled op-ed in the Tory-friendly newspaper The Telegraph, she trails a meeting she’ll be chairing today where she says she’ll push the G7 to collectively pressure social media companies to do more to address “harmful content on their platforms”.

“The introduction of end-to-end encryption must not open the door to even greater levels of child sexual abuse. Hyperbolic accusations from some quarters that this is really about governments wanting to snoop and spy on innocent citizens are simply untrue. It is about keeping the most vulnerable among us safe and preventing truly evil crimes,” she adds.

“I am calling on our international partners to back the UK’s approach of holding technology companies to account. They must not let harmful content continue to be posted on their platforms or neglect public safety when designing their products. We believe there are alternative solutions, and I know our law enforcement colleagues agree with us.”

In the op-ed, the Home Secretary singles out Apple’s recent move to add a CSAM detection tool to iOS and macOS to scan content on users’ devices before it’s uploaded to iCloud — welcoming the development as a “first step”.

“Apple state their child sexual abuse filtering technology has a false positive rate of 1 in a trillion, meaning the privacy of legitimate users is protected whilst those building huge collections of extreme child sexual abuse material are caught out. They need to see th[r]ough that project,” she writes, urging Apple to press ahead with the (currently delayed) rollout.

Last week the iPhone maker said it would delay implementing the CSAM detection system — following a backlash led by security experts and privacy advocates who raised concerns about vulnerabilities in its approach, as well as the contradiction of a ‘privacy-focused’ company carrying out on-device scanning of customer data. They also flagged the wider risk of the scanning infrastructure being seized upon by governments and states who might order Apple to scan for other types of content, not just CSAM.

Patel’s description of Apple’s move as just a “first step” is unlikely to do anything to assuage concerns that once such scanning infrastructure is baked into e2e encrypted systems it will become a target for governments to widen the scope of what commercial platforms must legally scan for.

However the Home Office’s spokeswoman told us that Patel’s comments on Apple’s CSAM tech were only intended to welcome its decision to take action in the area of child safety — rather than being an endorsement of any specific technology or approach. (And Patel does also write: “But that is just one solution, by one company. Greater investment is essential.”)

The Home Office spokeswoman wouldn’t comment on which types of technologies the government is aiming to support via the Challenge fund, either, saying only that they’re looking for a range of solutions.

She told us the overarching goal is to support ‘middleground’ solutions — denying the government is trying to encourage technologists to come up with ways to backdoor e2e encryption.

In recent years, the UK’s GCHQ has also floated the controversial idea of a so-called ‘ghost protocol’ — which would allow state intelligence or law enforcement agencies to be invisibly CC’d by service providers into encrypted communications on a targeted basis. That proposal was met with widespread criticism, including from the tech industry, which warned it would undermine trust and security and threaten fundamental rights.

It’s not clear if the government has such an approach — albeit with a CSAM focus — in mind here now as it tries to encourage the development of ‘middleground’ technologies that are able to scan e2e encrypted content for specifically illegal material.

In another concerning development, earlier this summer, guidance put out by DCMS for messaging platforms recommended that they “prevent” the use of e2e encryption for child accounts altogether.

Asked about that, the Home Office spokeswoman told us the tech fund is “not too different” and “is trying to find the solution in between”.

“Working together and bringing academics and NGOs into the field so that we can find a solution that works for both what social media companies want to achieve and also make sure that we’re able to protect children,” she said, adding: “We need everybody to come together and look at what they can do.”

There is not much more clarity in the Home Office guidance to suppliers applying for the chance to bag a tranche of funding.

There it writes that proposals must “make innovative use of technology to enable more effective detection and/or prevention of sexually explicit images or videos of children”.

“Within scope are tools which can identify, block or report either new or previously known child sexual abuse material, based on AI, hash-based detection or other techniques,” it goes on, further noting that proposals need to address “the specific challenges posed by e2ee environments, considering the opportunities to respond at different levels of the technical stack (including client-side and server-side).”
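The “hash-based detection” the guidance mentions generally means comparing a fingerprint of an image against a database of fingerprints of known abuse material. Below is a deliberately simplified, hypothetical sketch of that idea: real deployments (Microsoft’s PhotoDNA, Apple’s NeuralHash) use *perceptual* hashes that survive resizing and re-encoding, plus privacy-preserving matching protocols, whereas a plain cryptographic hash like the one used here only matches byte-identical files. The blocklist entry is an arbitrary example value, not real data.

```python
import hashlib

# Hypothetical blocklist of hashes of known illegal images, of the kind
# distributed to platforms by a clearinghouse. (Example value only — this
# is the SHA-256 of the bytes b"test", used here as a stand-in.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_hex(data: bytes) -> str:
    """Cryptographic fingerprint of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_image(image_bytes: bytes) -> bool:
    """Check an image's hash against the blocklist. Run client-side
    (before upload/encryption) or server-side, per the Challenge's note
    about responding 'at different levels of the technical stack'."""
    return sha256_hex(image_bytes) in KNOWN_HASHES

print(is_known_image(b"test"))   # matches the example blocklist entry
print(is_known_image(b"other"))  # does not match
```

The client-side variant of this check is essentially what Apple proposed: the comparison happens on the device before content is encrypted and uploaded — which is also why critics worry the same infrastructure could later be pointed at other categories of content.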

General information about the Challenge — which is open to applicants based anywhere, not just in the UK — can be found on the Safety Tech Network website.

The deadline for applications is October 6.

Selected applicants will have five months, between November 2021 and March 2022, to deliver their projects.

When exactly any of the tech might be pushed at the commercial sector isn’t clear — but the government may be hoping that by keeping up the pressure on the tech sector platform giants will develop this stuff themselves, as Apple has been.

The Challenge is just the latest UK government initiative to bring platforms in line with its policy priorities — back in 2017, for example, it was pushing them to build tools to block terrorist content — and you could argue it’s a form of progress that ministers are not simply calling for e2e encryption to be outlawed, as they frequently have in the past.

That said, talk of ‘preventing’ the use of e2e encryption — or even fuzzy suggestions of “in between” solutions — may not end up being so very different.

What is different is the sustained focus on child safety as the political cudgel to make platforms comply. That seems to be getting results.

Wider government plans to regulate platforms — set out in a draft Online Safety bill, published earlier this year — have yet to go through parliamentary scrutiny. But in one change that’s already baked in, the country’s data protection watchdog is now enforcing a children’s design code which stipulates that platforms need to prioritize kids’ privacy by default, among other recommended standards.

The Age Appropriate Design Code was appended to the UK’s data protection bill as an amendment — meaning it sits under wider legislation that transposed Europe’s General Data Protection Regulation (GDPR) into law, which brought in supersized penalties for violations like data breaches. And in recent months a number of social media giants have announced changes to how they handle children’s accounts and data — which the ICO has credited to the code.

So the government may be feeling confident that it has finally found a blueprint for bringing tech giants to heel.

Microsoft launches a personalized news service, Microsoft Start

Microsoft today is introducing its own personalized news reading experience called Microsoft Start, available as both a website and mobile app, in addition to being integrated with other Microsoft products, including Windows 10 and 11 and its Microsoft Edge web browser. The feed will combine content from news publishers, but in a way that’s tailored to users’ individual interests, the company says — a customization system that could help Microsoft to better compete with the news reading experiences offered by rivals like Apple or Google, as well as popular third-party apps like Flipboard or SmartNews.

Microsoft says the product builds on the company’s legacy with online and mobile consumer services like MSN and Microsoft News. However, it won’t replace MSN. That service will remain available, despite the launch of this new, in-house competitor.

To use Microsoft Start, consumers can visit the standalone website MicrosoftStart.com, which works on both Google Chrome and Microsoft Edge (but not Safari), or they can download the Microsoft Start mobile app for iOS or Android.

The service will also power the News and Interests experience on the Windows 10 taskbar and the Widgets experience on Windows 11. In Microsoft Edge, it will be available from the New Tab page, too.

Image Credits: Microsoft

At first glance, the Microsoft Start website is very much like any other online portal, offering a collection of news from a variety of publishers alongside widgets for things like weather, stocks, sports scores and traffic. When you click to read an article, you’re taken to a syndicated version hosted on Microsoft’s domain, which includes the Microsoft Start navigation bar at the top and emoji reaction buttons below the headline.

Users can also react to stories with emojis while browsing the home page itself.

This emoji set is similar to the one being offered today by Facebook, except that Microsoft has replaced Facebook’s controversial laughing face emoji with a thinking face. (It’s worth noting that the Facebook laughing face has been increasingly criticized for being used to openly ridicule posts and mock people — even on stories depicting tragic events, like Covid deaths, for instance.)

Microsoft has made another change with its emoji, as well: after you react to a story with an emoji, you only see your emoji instead of the top three and total reaction count. 

Image Credits: Microsoft

But while online web portals tend to be static aggregators of news content, Microsoft Start’s feed will adjust to users’ interests in several different ways.

Users can click a “Personalize” button to be taken to a page where they can manually add and remove interests from across a number of high-level categories like news, entertainment, sports, technology, money, finance, travel, health, shopping, and more. Or they can search for categories and interests that could be more specific or more niche. (Instead of “parenting,” for instance, “parenting teenagers.”)  This recalls the recent update Flipboard made to its own main page, the For You feed, which lets users make similar choices.

As users then begin to browse their Microsoft Start feed, they can also click a button to thumbs up or thumbs down an article to better adjust the feed to their preferences. Over time, the more the user engages with the content, the better refined the feed becomes, says Microsoft. This customization will leverage A.I. and machine learning, as well as human moderation, the company notes.

The feed, like other online portals, is supported by advertising. As you scroll down, you’ll notice every few rows will feature one ad unit, where the URL is flagged with a green “Ad” badge. Initially, these mostly appear to be product ads, making them distinct from the news content. Since Microsoft isn’t shutting down MSN and is integrating this news service into a number of other products, it’s expanding the available advertising real estate it can offer with this launch.

The website, app and integrations are rolling out starting today. (If you aren’t able to find the app yet, you can try scanning the QR code from your mobile device.)

 

Driven by live streams, consumer spending in social apps to hit $17.2B in 2025

The live streaming boom is driving a significant uptick in the creator economy, as a new forecast estimates consumers will spend $6.78 billion in social apps in 2021. That figure will grow to $17.2 billion annually by 2025, according to mobile data firm App Annie, which notes the upward trend represents a five-year compound annual growth rate (CAGR) of 29%. By that point, the lifetime total spend in social apps will reach $78 billion, the firm reports.
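For reference, a compound annual growth rate follows the standard formula CAGR = (end/start)^(1/years) − 1. The quick sketch below applies it to the report's figures; note the 2021→2025 window alone implies roughly 26% a year, so App Annie's 29% five-year figure presumably measures from a lower pre-2021 baseline.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that
    grows `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# App Annie's figures: $6.78B (2021) growing to $17.2B (2025).
# That four-year window implies ~26% per year; the report's 29%
# five-year CAGR suggests it was computed from a 2020 baseline.
rate = cagr(6.78, 17.2, 4)
print(f"{rate:.1%}")
```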

Image Credits: App Annie

Initially, much of the livestream economy was based on one-off purchases like sticker packs, but today, consumers are gifting content creators directly during their live streams. Some of these donations can be incredibly high, at times. Twitch streamer ExoticChaotic was gifted $75,000 during a live session on Fortnite, which was one of the largest ever donations on the game streaming social network. Meanwhile, App Annie notes another platform, Bigo Live, is enabling broadcasters to earn up to $24,000 per month through their live streams.

Apps that offer live streaming as a prominent feature are also those that are driving the majority of today’s social app spending, the report says. In the first half of this year, $3 out of every $4 spent in the top 25 social apps came from apps that offered live streams, for example.

Image Credits: App Annie

During the first half of 2021, the U.S. became the top market for consumer spending inside social apps, with 1.7x the spend of the next largest market, Japan, and representing 30% of the market by spend. China, Saudi Arabia, and South Korea followed to round out the top 5.

Image Credits: App Annie

While both creators and the platforms are financially benefitting from the live streaming economy, the platforms are benefitting in other ways beyond their commissions on in-app purchases. Live streams are helping to drive demand for these social apps and they help to boost other key engagement metrics, like time spent in app.

One top app that’s significantly gaining here is TikTok.

Last year, TikTok surpassed YouTube in the U.S. and the U.K. in terms of the average monthly time spent per user. It often continues to lead in the former market, and more decisively leads in the latter.

Image Credits: App Annie

Image Credits: App Annie

In other markets, like South Korea and Japan, TikTok is making strides, but YouTube still leads by a wide margin. (In South Korea, YouTube leads by 2.5x, in fact.)

Image Credits: App Annie

Beyond just TikTok, consumers spent 740 billion hours in social apps in the first half of the year, which is equal to 44% of the time spent on mobile globally. Time spent in these apps has continued to trend upwards over the years, with growth that’s up 30% in the first half of 2021 compared to the same period in 2018.

Today, the apps that enable live streaming are outpacing those that focus on chat, photo or video. This is why companies like Instagram are now announcing dramatic shifts in focus, like how they’re “no longer a photo sharing app.” They know they need to more fully shift to video or they will be left behind.

The total time spent in the top five social apps with an emphasis on live streaming is now set to surpass half a trillion hours on Android phones alone this year, not including China. That’s a three-year CAGR of 25% versus just 15% for apps in the Chat and Photo & Video categories, App Annie noted.

Image Credits: App Annie

Thanks to growth in India, the Asia-Pacific region now accounts for 60% of the time spent in social apps. As India’s growth in this area has increased over the past 3.5 years, it has narrowed the gap between itself and China from 115% in 2018 to just 7% in the first half of this year.

Social app downloads are also continuing to grow, due to the growth in live streaming.

To date, consumers have downloaded social apps 74 billion times and that demand remains strong, with 4.7 billion downloads in the first half of 2021 alone — up 50% year-over-year. In the first half of the year, Asia was the largest region for social app downloads, accounting for 60% of the market.

This is largely due to India, the top market by a factor of 5x, which surpassed the U.S. back in 2018. India is followed by the U.S., Indonesia, Brazil and China, in terms of downloads.

Image Credits: App Annie

The shift towards live streaming and video has also impacted what sort of apps consumers are interested in downloading, not just the number of downloads.

A chart that shows the top global apps from 2012 to the present highlights Facebook’s slipping grip. While its apps (Facebook, Messenger, Instagram and WhatsApp) have dominated the top spots over the years in various positions, TikTok popped into the number one position last year, and continues to maintain that ranking in 2021.

Further down the chart, other apps that aid in video editing have also overtaken others that had been more focused on photos or chat.

Image Credits: App Annie

Video apps like YouTube (#1), TikTok (#2), Tencent Video (#4), Bigo Live (#5), Twitch (#6), and others also now rank at the top of the global charts by consumer spending in the first half of 2021.

But YouTube (#1) still dominates in time spent compared with TikTok (#5) and Facebook’s other apps — the company holds the next three spots with Facebook, WhatsApp and Instagram, respectively.

This could explain why TikTok is now exploring the idea of allowing users to upload even longer videos, by increasing the limit from 3 minutes to 5, for instance.

In addition, because of live streaming’s ability to drive growth in terms of time spent, it’s also likely the reason why TikTok has been heavily investing in new features for its TikTok LIVE platform, including things like events, support for co-hosts, Q&As and more, and why it made the “LIVE” button a more prominent feature in its app and user experience.

App Annie’s report also digs into the impact live streaming has had on specific platforms, like Twitch and Bigo Live, the former which doubled its monthly active user base from the pre-pandemic era, and the latter which saw $314.2 million in consumer spend during H1 2021.

“The ability of social media users to communicate with each other using live video – or watch others’ live broadcasts – has not only maintained the growth of a social media app market, but contributed to its exponential growth in engagement metrics like time spent, that might otherwise have saturated some time ago,” wrote App Annie’s Head of Insights, Lexi Sydow, when announcing the new report.

The full report is available here.

Social network Peanut expands to include more women with launch of Peanut Menopause

Peanut, a social networking app for women, initially found traction connecting women in the earlier stages of their motherhood journey. But over the years, the network expanded to support women through other life stages. Now, that will include menopause, as well — a life stage that will impact nearly half the world’s population at some point, but for which there are few online communities where women can connect and learn.

“We’ve been thinking about this life stage for a long time, in terms of how it is so underserved,” explains Peanut founder and CEO Michelle Kennedy. “By 2025, there are going to be a billion women who are in menopause at that moment…and yet, when you think about what is there and accessible in terms of community, social [networking], and support — there’s literally nothing,” she says.

The company saw the opportunity in this market by observing what women were already discussing on the app, Kennedy says.

Although the app had historically skewed towards younger women just getting started with marriages and family, there were a number of women who had undergone surgical or chemically-induced menopause because of something like breast cancer or some other medical condition. This had put them into early or premature menopause, and they began to discuss how that was impacting their life — particularly as younger parents. There were also women who felt like they may have begun to experience menopause but were having their concerns dismissed by their doctors because they were too young. They wanted to talk about their symptoms with others who were going through the same thing. Others, meanwhile, were older and entering menopause, and were in search of community.

Image Credits: Peanut

To address this market, Peanut is expanding with the launch of Peanut Menopause, a dedicated space in the app where women can meet others who are at a similar life stage — whether that’s other premenopausal, menopausal, or postmenopausal women.

Women can join groups, ask questions, and get advice, or even join live audio conversations hosted by experts, through Peanut’s newer live audio rooms feature, Peanut Pods.  And they can use the app’s matchmaking feature to discover other women who are also in their same demographic, where they can chat using messaging or video.

Kennedy notes that the topic of menopause is something women have historically kept quiet about, often suffering in silence due to the lack of resources available to them when it comes to online networking and support groups.

“Men are never going to build this for us, so we have to build it for ourselves,” she says. “We have to build what we want and what we need.”

Image Credits: Peanut

The expansion may bring a broader group of women to Peanut. Today, the average age of the Peanut user is around 32, but the menopause-focused communities may attract women in the 49-plus age demographic, in addition to those who are going through the experience at a younger age, for other reasons.

Unfortunately for Peanut, not all investors see the opportunity in addressing the needs of menopausal women. In fact, on a recent phone call, Kennedy said one investor seemed dismayed about the expansion, noting they had really loved “the younger age demo.” Kennedy said this comment blew her away.

“They are women who are at a stage in their life where they probably have more disposable income,” she said of the new demographic Peanut is now including. “They are more considered users, in many respects. They’re not as flighty. They don’t have 30 apps on their phone, and the ones they have on their phone they’re really invested in. It’s just astonishing to me that someone in the investment community would make a comment like that,” she adds.

Peanut is not yet monetizing its users and doesn’t intend to do so using ads. Instead, the company’s plan is to eventually introduce a freemium model where women will pay to unlock a set of premium features — a model that has worked well in the dating app industry, where Kennedy has roots as the former deputy CEO of dating app Badoo and an inaugural board member at Bumble.

The feature is the latest in a long line of expansions over the years — including Q&A forums, Peanut Pages, Peanut Groups, and recently, Peanut Pods — that have helped Peanut evolve into an online community that serves over 2 million users. The Peanut app is available as a free download across both iOS and Android, while a preview of its communities is available on the web.

WhatsApp faces $267M fine for breaching Europe’s GDPR

It’s been a long time coming but Facebook is finally feeling some heat from Europe’s much trumpeted data protection regime: Ireland’s Data Protection Commission (DPC) has just announced a €225 million (~$267M) fine for WhatsApp.

The Facebook-owned messaging app has been under investigation by the Irish DPC, its lead data supervisor in the European Union, since December 2018 — several months after the first complaints were fired at WhatsApp over how it processes user data under Europe’s General Data Protection Regulation (GDPR), once it began being applied in May 2018.

Despite receiving a number of specific complaints about WhatsApp, the investigation undertaken by the DPC that’s been decided today was what’s known as an “own volition” enquiry — meaning the regulator selected the parameters of the investigation itself, choosing to fix on an audit of WhatsApp’s ‘transparency’ obligations.

A key principle of the GDPR is that entities which are processing people’s data must be clear, open and honest with those people about how their information will be used.

The DPC’s decision today (which runs to a full 266 pages) concludes that WhatsApp failed to live up to the standard required by the GDPR.

Its enquiry considered whether or not WhatsApp fulfils transparency obligations to both users and non-users of its service (WhatsApp may, for example, upload the phone numbers of non-users if a user agrees to it ingesting their phone book which contains other people’s personal data); as well as looking at the transparency the platform offers over its sharing of data with its parent entity Facebook (a highly controversial issue at the time the privacy U-turn was announced back in 2016, although it predated GDPR being applied).

In sum, the DPC found a range of transparency infringements by WhatsApp — spanning articles 5(1)(a); 12, 13 and 14 of the GDPR.

In addition to issuing a sizeable financial penalty, it has ordered WhatsApp to take a number of actions to improve the level of transparency it offers users and non-users — giving the tech giant a three-month deadline for making all the ordered changes.

In a statement responding to the DPC’s decision, WhatsApp disputed the findings and dubbed the penalty “entirely disproportionate” — as well as confirming it will appeal, writing:

“WhatsApp is committed to providing a secure and private service. We have worked to ensure the information we provide is transparent and comprehensive and will continue to do so. We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate. We will appeal this decision.” 

It’s worth emphasizing that the scope of the DPC enquiry which has finally been decided today was limited to only looking at WhatsApp’s transparency obligations.

The regulator was explicitly not looking into wider complaints — which have also been raised against Facebook’s data-mining empire for well over three years — about the legal basis WhatsApp claims for processing people’s information in the first place.

So the DPC will continue to face criticism over both the pace and approach of its GDPR enforcement.

 

Indeed, prior to today, Ireland’s regulator had only issued one decision in a major cross-border case addressing ‘Big Tech’ — against Twitter when, back in December, it knuckle-tapped the social network over a historical security breach with a fine of $550k.

WhatsApp’s first GDPR penalty is, by contrast, considerably larger — reflecting what EU regulators (plural) evidently consider to be a far more serious infringement of the GDPR.

Transparency is a key principle of the regulation. And while a security breach may indicate sloppy practice, systematic opacity towards people whose data your adtech empire relies upon to turn a fat profit looks rather more intentional; indeed, it’s arguably the whole business model.

And — at least in Europe — such companies are going to find themselves being forced to be up front about what they’re doing with people’s data.

Is the GDPR working?  

The WhatsApp decision will rekindle the debate about whether the GDPR is working effectively where it counts most: Against the most powerful companies in the world, which are also of course Internet companies.

Under the EU’s flagship data protection regulation, decisions on cross border cases require agreement from all affected regulators — across the 27 Member States — so while the GDPR’s “one-stop-shop” mechanism seeks to streamline the regulatory burden for cross-border businesses by funnelling complaints and investigations via a lead regulator (typically where a company has its main legal establishment in the EU), objections can be raised to that lead supervisory authority’s conclusions (and any proposed sanctions), as has happened here in this WhatsApp case.

Ireland originally proposed a far more low-ball penalty of up to €50M for WhatsApp. However other EU regulators objected to its draft decision on a number of fronts — and the European Data Protection Board (EDPB) ultimately had to step in and take a binding decision (issued this summer) to settle the various disputes.

Through that (admittedly rather painful) joint-working, the DPC was required to increase the size of the fine issued to WhatsApp — mirroring what happened with its draft Twitter decision, where the DPC had also suggested an even smaller penalty in the first instance.

While there is a clear time cost in settling disputes between the EU’s smorgasbord of data protection agencies — the DPC submitted its draft WhatsApp decision to the other DPAs for review back in December, so it has taken well over half a year to hash out all the disputes about WhatsApp’s lossy hashing and so forth — the fact that ‘corrections’ are being made to its decisions, and that conclusions can land (if not jointly agreed then at least arriving via a consensus pushed through by the EDPB), is a sign that the process, while slow and creaky, is working. At least technically.

Even so, Ireland’s data watchdog will continue to face criticism for its outsized role in handling GDPR complaints and investigations — with some accusing the DPC of essentially cherry-picking which issues to examine in detail (by its choice and framing of cases) and which to elide entirely (those issues it doesn’t open an enquiry into or complaints it simply drops or ignores), with its loudest critics arguing it’s therefore still a major bottleneck on effective enforcement of data protection rights across the EU.

The associated conclusion for that critique is that tech giants like Facebook are still getting a pretty free pass to violate Europe’s privacy rules.

But while it’s true that a $267M penalty is the equivalent of a parking ticket for Facebook’s business empire, orders to change how such adtech giants are able to process people’s information at least have the potential to be a far more significant correction on problematic business models.

Again, though, time will be needed to tell whether such wider orders are having the sought-after impact.

In a statement reacting to the DPC’s WhatsApp decision today, noyb, the privacy advocacy group founded by long-time European privacy campaigner Max Schrems, said: “We welcome the first decision by the Irish regulator. However, the DPC gets about ten thousand complaints per year since 2018 and this is the first major fine. The DPC also proposed an initial €50M fine and was forced by the other European data protection authorities to move towards €225M, which is still only 0.08% of the turnover of the Facebook Group. The GDPR foresees fines of up to 4% of the turnover. This shows how the DPC is still extremely dysfunctional.”

Schrems also noted that he and noyb still have a number of pending cases before the DPC — including on WhatsApp.

In further remarks, they raised concerns about the length of the appeals process and whether the DPC would make a muscular defence of a sanction it had been forced to increase by other EU DPAs.

“WhatsApp will surely appeal the decision. In the Irish court system this means that years will pass before any fine is actually paid. In our cases we often had the feeling that the DPC is more concerned with headlines than with actually doing the hard groundwork. It will be very interesting to see if the DPC will actually defend this decision fully, as it was basically forced to make this decision by its European counterparts. I can imagine that the DPC will simply not put many resources on the case or ‘settle’ with WhatsApp in Ireland. We will monitor this case closely to ensure that the DPC is actually following through with this decision.”

UK now expects compliance with children’s privacy design code

In the UK, a 12-month grace period for compliance with a design code aimed at protecting children online expires today — meaning app makers offering digital services in the market which are “likely” to be accessed by children (defined in this context as users under 18 years old) are expected to comply with a set of standards intended to safeguard kids from being tracked and profiled.

The age appropriate design code came into force on September 2 last year; however, the UK’s data protection watchdog, the ICO, allowed the maximum grace period for hitting compliance to give organizations time to adapt their services.

But from today it expects the standards of the code to be met.

Services covered by the code include connected toys and games and edtech, but also online retail and for-profit online services such as social media and video sharing platforms which have a strong pull for minors.

Among the code’s stipulations are that a level of ‘high privacy’ should be applied to settings by default if the user is (or is suspected to be) a child — including specific provisions that geolocation and profiling should be off by default (unless there’s a compelling justification for such privacy-hostile defaults).

The code also instructs app makers to provide parental controls while also providing the child with age-appropriate information about such tools — warning against parental tracking tools that could be used to silently/invisibly monitor a child without them being made aware of the active tracking.

Another standard takes aim at dark pattern design — with a warning to app makers against using “nudge techniques” to push children to provide “unnecessary personal data or weaken or turn off their privacy protections”.

The full code contains 15 standards but is not itself baked into legislation — rather it’s a set of design recommendations the ICO wants app makers to follow.

The regulatory stick to make them do so is that the watchdog is explicitly linking compliance with its children’s privacy standards to passing muster with wider data protection requirements that are baked into UK law.

The risk for apps that ignore the standards is thus that they draw the attention of the watchdog — either through a complaint or proactive investigation — with the potential of a wider ICO audit delving into their whole approach to privacy and data protection.

“We will monitor conformance to this code through a series of proactive audits, will consider complaints, and take appropriate action to enforce the underlying data protection standards, subject to applicable law and in line with our Regulatory Action Policy,” the ICO writes in guidance on its website. “To ensure proportionate and effective regulation we will target our most significant powers, focusing on organisations and individuals suspected of repeated or wilful misconduct or serious failure to comply with the law.”

It goes on to warn it would view a lack of compliance with the kids’ privacy code as a potential black mark against (enforceable) UK data protection laws, adding: “If you do not follow this code, you may find it difficult to demonstrate that your processing is fair and complies with the GDPR [General Data Protection Regulation] or PECR [Privacy and Electronic Communications Regulations].”

In a blog post last week, Stephen Bonner, the ICO’s executive director of regulatory futures and innovation, also warned app makers: “We will be proactive in requiring social media platforms, video and music streaming sites and the gaming industry to tell us how their services are designed in line with the code. We will identify areas where we may need to provide support or, should the circumstances require, we have powers to investigate or audit organisations.”

“We have identified that currently, some of the biggest risks come from social media platforms, video and music streaming sites and video gaming platforms,” he went on. “In these sectors, children’s personal data is being used and shared, to bombard them with content and personalised service features. This may include inappropriate adverts; unsolicited messages and friend requests; and privacy-eroding nudges urging children to stay online. We’re concerned with a number of harms that could be created as a consequence of this data use, which are physical, emotional and psychological and financial.”

“Children’s rights must be respected and we expect organisations to prove that children’s best interests are a primary concern. The code gives clarity on how organisations can use children’s data in line with the law, and we want to see organisations committed to protecting children through the development of designs and services in accordance with the code,” Bonner added.

The ICO’s enforcement powers — at least on paper — are fairly extensive, with GDPR, for example, giving it the ability to fine infringers up to £17.5M or 4% of their annual worldwide turnover, whichever is higher.
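That “whichever is higher” ceiling can be sketched as a quick calculation; the turnover figures in the comments and test values below are illustrative placeholders, not any company’s actual financials.

```python
def max_gdpr_fine(annual_worldwide_turnover_gbp: float) -> float:
    """UK GDPR ceiling: the greater of a fixed £17.5M
    or 4% of annual worldwide turnover."""
    return max(17_500_000, 0.04 * annual_worldwide_turnover_gbp)

# For a firm turning over £100M, the fixed £17.5M prong applies
# (4% would only be £4M); for a giant turning over £60B, the
# turnover prong dominates at £2.4B.
```

The two-pronged cap is why the same law produces modest ceilings for small businesses but multi-billion-pound exposure for the largest platforms.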

The watchdog can also issue orders banning data processing or otherwise requiring changes to services it deems non-compliant. So apps that choose to flout the children’s design code risk setting themselves up for regulatory bumps or worse.

In recent months there have been signs some major platforms have been paying mind to the ICO’s compliance deadline — with Instagram, YouTube and TikTok all announcing changes to how they handle minors’ data and account settings ahead of the September 2 date.

In July, Instagram said it would default teens to private accounts — doing so for under 18s in certain countries which the platform confirmed to us includes the UK — among a number of other child-safety focused tweaks. Then in August, Google announced similar changes for accounts on its video sharing platform, YouTube.

A few days later TikTok also said it would add more privacy protections for teens. Though it had also made earlier changes limiting privacy defaults for under 18s.

Apple also recently got itself into hot water with the digital rights community following the announcement of child safety-focused features — including a child sexual abuse material (CSAM) detection tool which scans photo uploads to iCloud; and an opt-in parental safety feature that lets iCloud Family account users turn on alerts related to the viewing of explicit images by minors using its Messages app.

The unifying theme underpinning all these mainstream platform product tweaks is clearly ‘child protection’.

And while there’s been growing attention in the US to online child safety and the nefarious ways in which some apps exploit kids’ data — as well as a number of open probes in Europe (such as this Commission investigation of TikTok, acting on complaints) — the UK may be having an outsized impact here given its concerted push to pioneer age-focused design standards.

The code also combines with incoming UK legislation which is set to apply a ‘duty of care’ on platforms to take a broad-brush, safety-first stance toward users, also with a big focus on kids (and there it’s being broadly targeted to cover all children, rather than just applying to kids under 13 as with the US’ COPPA, for example).

In the blog post ahead of the compliance deadline expiring, the ICO’s Bonner sought to take credit for what he described as “significant changes” made in recent months by platforms like Facebook, Google, Instagram and TikTok, writing: “As the first-of-its-kind, it’s also having an influence globally. Members of the US Senate and Congress have called on major US tech and gaming companies to voluntarily adopt the standards in the ICO’s code for children in America.”

“The Data Protection Commission in Ireland is preparing to introduce the Children’s Fundamentals to protect children online, which links closely to the code and follows similar core principles,” he also noted.

And there are other examples in the EU: France’s data watchdog, the CNIL, looks to have been inspired by the ICO’s approach — issuing its own set of child-protection focused recommendations this June (which also, for example, encourage app makers to add parental controls with the clear caveat that such tools must “respect the child’s privacy and best interests”).

The UK’s focus on online child safety is not just making waves overseas but sparking growth in a domestic compliance services industry.

Last month, for example, the ICO announced the first clutch of GDPR certification scheme criteria — including two schemes which focus on the age appropriate design code. Expect plenty more.

Bonner’s blog post also notes that the watchdog will formally set out its position on age assurance this autumn, so it will be providing further steering to organizations in scope of the code on how to tackle that tricky piece, although it’s still not clear how hard a requirement the ICO will support, with Bonner suggesting it could mean actually “verifying ages or age estimation”. Watch that space. Whatever the recommendations are, age assurance services are set to spring up with compliance-focused sales pitches.

Children’s safety online has been a huge focus for UK policymakers in recent years, although the wider (and long in train) Online Safety (née Harms) Bill remains at the draft law stage.

An earlier attempt by UK lawmakers to bring in mandatory age checks to prevent kids from accessing adult content websites — dating back to 2017’s Digital Economy Act — was dropped in 2019 after widespread criticism that it would be both unworkable and a massive privacy risk for adult users of porn.

But the government did not drop its determination to find a way to regulate online services in the name of child safety. And online age verification checks look set to be — if not a blanket, hardened requirement for all digital services — increasingly brought in by the backdoor, through a sort of ‘recommended feature’ creep (as the ORG has warned). 

The current recommendation in the age appropriate design code is that app makers “take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users”, suggesting they: “Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.” 
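Read literally, that recommendation is a two-branch decision: either establish age with confidence proportionate to the processing risk, or treat every user as a child. A minimal sketch of that logic follows; the numeric confidence thresholds are invented for illustration (the code itself sets no such numbers).

```python
# Hypothetical reading of the design code's recommendation. The risk
# tiers and confidence thresholds below are illustrative assumptions,
# not figures from the ICO.

def child_standards_apply(age_confidence: float, processing_risk: str,
                          estimated_age: int) -> bool:
    """Return True if the code's child standards should apply to this user."""
    required = {"low": 0.5, "medium": 0.75, "high": 0.9}[processing_risk]
    if age_confidence < required:
        # Age can't be established well enough for the risk level,
        # so the fallback is to apply the code to all users.
        return True
    # Age is established with sufficient certainty: apply the
    # child standards only to users under 18.
    return estimated_age < 18
```

The design choice the code forces is notable: a service that cannot (or will not) invest in proportionate age assurance must default everyone to the child-level protections.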

At the same time, the government’s broader push on online safety risks conflicting with some of the laudable aims of the ICO’s non-legally binding children’s privacy design code.

For instance, while the code includes the (welcome) suggestion that digital services gather as little information about children as possible, in an announcement earlier this summer UK lawmakers put out guidance for social media platforms and messaging services — ahead of the planned Online Safety legislation — that recommends they prevent children from being able to use end-to-end encryption.

That’s right; the government’s advice to data-mining platforms — which it suggests will help prepare them for requirements in the incoming legislation — is not to use ‘gold standard’ security and privacy (e2e encryption) for kids.

So the official UK government messaging to app makers appears to be that, in short order, the law will require commercial services to access more of kids’ information, not less, in the name of keeping them ‘safe’. Which is quite a contradiction of the data minimization push in the design code.

The risk is that a tightening spotlight on kids’ privacy ends up being fuzzed and complicated by ill-thought-through policies that push platforms to monitor kids to demonstrate ‘protection’ from a smorgasbord of online harms, be it adult content, pro-suicide postings, cyber bullying or CSAM.

The law looks set to encourage platforms to ‘show their workings’ to prove compliance — which risks resulting in ever closer tracking of children’s activity, retention of data — and maybe risk profiling and age verification checks (that could even end up being applied to all users; think sledgehammer to crack a nut). In short, a privacy dystopia.

Such mixed messages and disjointed policymaking seem set to pile increasingly confusing — and even conflicting — requirements on digital services operating in the UK, making tech businesses legally responsible for divining clarity amid the policy mess — with the simultaneous risk of huge fines if they get the balance wrong.

Complying with the ICO’s design standards may therefore actually be the easy bit.


How a Vungle-owned mobile marketer sent Fontmaker to the top of the App Store

Does this sound familiar? An app goes viral on social media, often including TikTok, then immediately climbs to the top of the App Store where it gains even more new installs thanks to the heightened exposure. That’s what happened with the recent No. 1 on the U.S. App Store, Fontmaker, a subscription-based fonts app which appeared to benefit from word-of-mouth growth thanks to TikTok videos and other social posts. But what we’re actually seeing here is a new form of App Store marketing — and one which now involves one of the oldest players in the space: Vungle.

Fontmaker, at first glance, seems to be just another indie app that hit it big.

The app, published by an entity called Mango Labs, promises users a way to create fonts using their own handwriting which they can then access from a custom keyboard for a fairly steep price of $4.99 per week. The app first launched on July 26. Nearly a month later, it was the No. 2 app on the U.S. App Store, according to Sensor Tower data. By August 26, it climbed up one more position to reach No. 1, before slowly dropping down in the top overall free app rankings in the days that followed.

By Aug. 27, it was No. 15, before briefly surging again to No. 4 the following day, then declining once more. Today, the app is No. 54 overall and No. 4 in the competitive Photo & Video category — still, a solid position for a brand-new and somewhat niche product targeting mainly younger users. To date, it’s generated $68,000 in revenue, Sensor Tower reports.

But Fontmaker may not be a true organic success story, despite its Top Charts success driven by a boost in downloads coming from real users, not bots. Instead, it’s an example of how mobile marketers have figured out how to tap into the influencer community to drive app installs. It’s also an example of how it’s hard to differentiate between apps driven by influencer marketing and those that hit the top of the App Store because of true demand — like walkie-talkie app Zello, whose recent trip to No. 1 can be attributed to Hurricane Ida.

As it turns out, Fontmaker is not your typical “indie app.” In fact, it’s unclear who’s really behind it. Its publisher, Mango Labs, LLC, is actually an iTunes developer account owned by the mobile growth company JetFuel, which was recently acquired by the mobile ad and monetization firm Vungle — a longtime and sometimes controversial player in this space, itself acquired by Blackstone in 2019.

Vungle was primarily interested in JetFuel’s main product, an app called The Plug, aimed at influencers.

Through The Plug, mobile app developers and advertisers can connect to JetFuel’s network of over 15,000 verified influencers who have a combined 4 billion Instagram followers, 1.5 billion TikTok followers, and 100 million daily Snapchat views.

While marketers could use the built-in advertising tools on each of these networks to try to reach their target audience, JetFuel’s technology allows marketers to quickly scale their campaigns to reach high-value users in the Gen Z demographic, the company claims. This system can be less labor-intensive than traditional influencer marketing, in some cases. Advertisers pay on a cost-per-action (CPA) basis for app installs. Meanwhile, all influencers have to do is scroll through The Plug to find an app to promote, then post it to their social accounts to start making money.

Image Credits: The Plug’s website, showing influencers how the platform works

So while yes, a lot of influencers may have made TikTok videos about Fontmaker, which prompted consumers to download the app, the influencers were paid to do so. (And often, from what we saw browsing the Fontmaker hashtag, without disclosing that financial relationship in any way — an increasingly common problem on TikTok, and area of concern for the FTC.)

Where things get tricky is in trying to sort out Mango Labs’ relationship with JetFuel/Vungle. As a consumer browsing the App Store, it looks like Mango Labs makes a lot of fun consumer apps of which Fontmaker is simply the latest.

JetFuel’s website helps to promote this image, too.

It had showcased its influencer marketing system using a case study from an “indie developer” called Mango Labs and one of its earlier apps, Caption Pro. Caption Pro launched in Jan. 2018. (App Annie data indicates it was removed from the App Store on Aug. 31, 2021…yes, yesterday).

Image Credits: App Annie

Vungle, however, told TechCrunch “The Caption Pro app no longer exists and has not been live on the App Store or Google Play for a long time.” (We can’t find an App Annie record of the app on Google Play).

They also told us that “Caption Pro was developed by Mango Labs before the entity became JetFuel,” and that the case study was used to highlight JetFuel’s advertising capabilities. (But without clearly disclosing their connection.)

“Prior to JetFuel becoming the influencer marketing platform that it is today, the company developed apps for the App Store. After the company pivoted to become a marketing platform, in February 2018, it stopped creating apps but continued to use the Mango Labs account on occasion to publish apps that it had third-party monetization partnerships with,” the Vungle spokesperson explained.

In other words, the claim being made here is that while Mango Labs was originally the same team that long since pivoted to become JetFuel (and the maker of Caption Pro), all the newer apps published under “Mango Labs, LLC” were not created by JetFuel’s team itself.

“Any apps that appear under the Mango Labs LLC name on the App Store or Google Play were in fact developed by other companies, and Mango Labs has only acted as a publisher,” the spokesperson said.

Image Credits: JetFuel’s website describing Mango Labs as an “indie developer”

There are reasons why this statement doesn’t quite sit right, and not only because JetFuel’s partners seem happy to hide behind Mango Labs’ name, or because Mango Labs was a JetFuel project in the past. It’s also odd that Mango Labs and another entity, Takeoff Labs, claim the same set of apps. And like Mango Labs, Takeoff Labs is associated with JetFuel too.

Breaking this down, as of the time of writing, Mango Labs has published several consumer apps on both the App Store and Google Play.

On iOS, this includes the recent No. 1 app Fontmaker, as well as FontKey, Color Meme, Litstick, Vibe, Celebs, FITme Fitness, CopyPaste, and Part 2. On Google Play, it has two more: Stickered and Mango.

Image Credits: Mango Labs

Most of Mango Labs’ App Store listings point to JetFuel’s website as the app’s “developer website,” which would be in line with what Vungle says about JetFuel acting as the apps’ publisher.

What’s odd, however, is that the Mango Labs app Part 2 links to Takeoff Labs’ website from its App Store listing.

The Vungle spokesperson initially told us that Takeoff Labs is “an independent app developer.”

And yet, the Takeoff Labs’ website shows a team which consists of JetFuel’s leadership, including JetFuel co-founder and CEO Tim Lenardo and JetFuel co-founder and CRO JJ Maxwell. Takeoff Labs’ LLC application was also signed by Lenardo.

Meanwhile, Takeoff Labs’ co-founder and CEO Rhai Goburdhun, per his LinkedIn and the Takeoff Labs website, still works there. Asked about this connection, Vungle told us it had not realized the website hadn’t been updated, and that neither JetFuel nor Vungle has an ownership stake in Takeoff Labs following the acquisition.

Image Credits: Takeoff Labs’ website showing its team, including JetFuel’s co-founders.

Takeoff Labs’ website also shows off its “portfolio” of apps, which includes Celeb, Litstick, and FontKey — three apps that are published by Mango Labs on the App Store.

On Google Play, Takeoff Labs is the developer credited with Celebs, as well as two other apps, Vibe and Teal, a neobank. But on the App Store, Vibe is published by Mango Labs.

Image Credits: Takeoff Labs’ website, showing its app portfolio.

(Not to complicate things further, but there’s also an entity called RealLabs which hosts JetFuel, The Plug and other consumer apps, including Mango — the app published by Mango Labs on Google Play. Someone sure likes naming things “Labs!”)

Vungle claims the confusion here has to do with how it now uses the Mango Labs iTunes account to publish apps for its partners, which is a “common practice” on the App Store. It says it intends to transfer the apps published under Mango Labs to the developers’ accounts, because it agrees this is confusing.

Vungle also claims that JetFuel “does not make nor own any consumer apps that are currently live on the app stores. Any of the apps made by the entity when it was known as Mango Labs have long since been taken down from the app stores.”

JetFuel’s system is messy and confusing, but so far successful in its goals. Fontmaker did make it to No. 1, essentially growth hacked to the top by influencer marketing.

But as a consumer, what this all means is that you’ll never know who actually built the app you’re downloading or whether you were “influenced” to try it through what were, essentially, undisclosed ads.

Fontmaker isn’t the first to growth hack its way to the top through influencer promotions. Summertime hit Poparazzi also hyped itself to the top of the App Store in a similar way, as have many others. But Poparazzi has since sunk to No. 89 in Photo & Video, which shows influence can only take you so far.

As for Fontmaker, paid influence got it to No. 1, but its Top Chart moment was brief.

Twitter is testing a new anti-abuse feature called ‘Safety Mode’

Twitter’s newest test could provide some long-awaited relief for anyone facing harassment on the platform.

The new product test introduces a feature called “Safety Mode” that puts up a temporary line of defense between an account and the waves of toxic invective that Twitter is notorious for. The mode can be enabled from the settings menu, which toggles on an algorithmic screening process, lasting seven days, that filters out potential abuse.

“Our goal is to better protect the individual on the receiving end of Tweets by reducing the prevalence and visibility of harmful remarks,” Twitter Product Lead Jarrod Doherty said.

Safety Mode won’t be rolling out broadly — not yet, anyway. The new feature will first be available to what Twitter describes as a “small feedback group” of about 1,000 English language users.

In deciding what to screen out, Twitter’s algorithmic approach assesses a tweet’s content — hateful language, repetitive, unreciprocated mentions — as well as the relationship between an account and the accounts replying. The company notes that accounts you follow or regularly exchange tweets with won’t be subject to the blocking features in Safe Mode.
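Twitter hasn’t published Safety Mode’s actual logic, but the signals it describes (hostile language, repetitive unreciprocated mentions, and the existing relationship between accounts) could be combined along these lines; every name, term list and threshold below is invented for illustration only.

```python
# Hypothetical sketch of a Safety Mode-style filter. This is NOT
# Twitter's implementation; it only illustrates how the described
# signals might combine. The lexicon and thresholds are made up.

HOSTILE_TERMS = {"idiot", "trash", "loser"}  # stand-in for a real abuse lexicon

def should_autoblock(reply_text: str, mention_count: int,
                     recipient_follows_sender: bool,
                     prior_exchanges: int) -> bool:
    # Accounts the recipient follows or regularly talks to are exempt,
    # per Twitter's description of the feature.
    if recipient_follows_sender or prior_exchanges > 0:
        return False
    words = {w.strip(".,!?").lower() for w in reply_text.split()}
    hostile = bool(words & HOSTILE_TERMS)          # hateful language signal
    spammy = mention_count >= 3                    # repetitive, unreciprocated mentions
    return hostile or spammy
```

The relationship check doing the exempting is the interesting design detail: it biases the filter toward strangers piling on, rather than heated arguments between mutuals.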

For anyone in the test group, Safety Mode can be toggled on in the privacy and safety options. Once enabled, an account will stay in the mode for the next seven days. After the seven day period expires, it can be activated again.

In crafting the new feature, Twitter says it spoke with experts in mental health, online safety and human rights. The partners Twitter consulted with were able to contribute to the initial test group by nominating accounts that might benefit from the feature, and the company hopes to focus on female journalists and marginalized communities in its test of the new product. Twitter says that it will start reaching out to accounts that meet the criteria of the test group — namely accounts that often find themselves on the receiving end of some of the platform’s worst impulses.

Earlier this year, Twitter announced that it was working on developing new anti-abuse features, including an option to let users “unmention” themselves from tagged threads and a way for users to prevent serial harassers from mentioning them moving forward. The company also hinted at a feature like Safety Mode that could give users a way to defuse situations during periods of escalating abuse.

Being “harassed off of Twitter” is, unfortunately, not that uncommon. When hate and abuse get bad enough, people tend to abandon Twitter altogether, taking extended breaks or leaving outright. That’s obviously not great for the company either, and while it’s been slow to offer real solutions to harassment, it’s obviously aware of the problem and working toward some possible solutions.

TikTok adds educational resources for parents as part of its Family Pairing feature

TikTok is expanding its in-app parental controls feature, Family Pairing, with educational resources designed to help parents better support their teenage users, the company announced this morning. The pairing feature, which launched to global users last year, allows parents of teens aged 13 and older to connect their accounts with the child’s so the parent can set controls related to screen time use, who the teen can direct message, and more. But the company heard from teens that they also want their voices to be heard when it comes to parents’ involvement in their digital life.

To create the new educational content, TikTok partnered with the online safety nonprofit, Internet Matters. The organization developed a set of resources in collaboration with teens that aim to offer parents tips about navigating the TikTok landscape and teenage social media usage in general.

Teens said they want parents to understand the rules they’re setting when they use features like Family Pairing and they want them to be open to having discussions about the time teens spend online. And while teens don’t mind when parents set boundaries, they also want to feel they’ve earned some level of trust from the adults in their life.

The older teens get, the more autonomy they want to have on their own device and social networks, as well. They may even tell mom or dad that they don’t want them to follow them on a given platform.

This doesn’t necessarily mean the teen is up to no good, the new resources explain to parents. The teens just want to feel like they can hang out with their friends online without being so closely monitored. This has become an important part of the online experience today, in the pandemic era, where many younger people are spending more time at home instead of socializing with friends in real-life or participating in other in-person group activities.

Image Credits: TikTok

Teens said they also want to be able to come to parents when something goes wrong, without fearing that they’ll be harshly punished or that the parent will panic about the situation. The teens know there’ll be consequences if they break the rules, but they want parents to work through other tough situations with them and devise solutions together, not just react in anger.

All this sounds like straightforward, common sense advice, but parents on TikTok often have varying degrees of comfort with their teens’ digital life and use of social networks. Some basic guidelines explaining what teens want and how they feel therefore make sense to include. That said, the parents who are technically savvy enough to enable a parental control feature like Family Pairing may already be clued into best practices.

Image Credits: TikTok

In addition, this sort of teen-focused privacy and safety content is also designed to help TikTok better establish itself as a platform working to protect its younger users — an increasingly necessary stance in light of the potential regulation which big tech has been trying to get ahead of, as of late. TikTok, for instance, announced in August it would roll out more privacy protections for younger teens aimed at making the app safer. Facebook, Google and YouTube also did the same.

TikTok says parents or guardians who have currently linked their account to a teen’s account via the Family Pairing feature will receive a notification that prompts them to find out more about the teens’ suggestions and how to approach those conversations about digital literacy and online safety. Parents who sign up and enable Family Pairing for the first time will also be guided to the resources.

LinkedIn is scrapping its Stories feature to work on short-form video

What do LinkedIn and Twitter have in common? They both introduced ephemeral story features that were pretty fleeting. LinkedIn announced today that it will suspend its Stories feature on September 30 and begin working on a different way to add short-form videos to the platform.

LinkedIn announced the upcoming change to warn advertisers who might have already purchased ads that would run in between Stories. Those will instead be shared on the LinkedIn feed, but users who promoted or sponsored Stories directly from their page will need to remake them.

LinkedIn introduced Stories in September, around the same time that Twitter rolled out Fleets to all users before doing away with the feature. This was part of a larger web and mobile redesign, which also added integrations with Zoom, BlueJeans and Teams to help professionals stay connected while working from home. But according to LinkedIn, these temporary posts didn’t quite work on the platform.

“In developing Stories, we assumed people wouldn’t want informal videos attached to their profile, and that ephemerality would reduce barriers that people feel about posting,” wrote LinkedIn’s Senior Director of Product Liz Li in a blog post today. “Turns out, you want to create lasting videos that tell your professional story in a more personal way and that showcase both your personality and expertise.”

Li also noted that users want “more creative tools to make engaging videos.” While Stories included stickers and prompts, users wanted more creative functionality.

If LinkedIn is successful in its plans to create a short-form video feature, it would join platforms like Snapchat and Instagram that have built their own TikTok-like feeds. Sure, most users probably don’t post the same content on LinkedIn and their personal social media accounts, but there actually are some prominent TikTokers sharing career advice, interview tips, and resume guidance, so LinkedIn’s pivot to video might not be as weird as it seems.