The FTC looks to change children’s privacy law following complaints about YouTube

The U.S. Federal Trade Commission is considering an update to the rules governing children’s privacy online, known as the COPPA Rule (the regulation that implements the Children’s Online Privacy Protection Act). The Rule first went into effect in 2000 and was amended in 2013 to address changes in how children use mobile devices and social networking sites. Now, the FTC believes it may be due for more revisions. The agency is seeking input and comments on possible updates, some of which are specifically focused on how to address sites that aren’t necessarily aimed at children but have large numbers of child users.

In other words, sites like YouTube.

The FTC’s announcement comes only weeks after U.S. consumer advocacy groups and Senator Ed Markey (D-Mass.) sent complaint letters to the FTC, urging the regulators to investigate YouTube for potential COPPA violations.

The advocacy groups allege that YouTube is hiding behind its terms of service, which claim YouTube is “not intended for children under 13” — a statement that’s clearly no longer true. Today, the platform is filled with videos designed for viewing by kids. Google even offers a YouTube Kids app aimed at preschool- to tween-aged children.

According to the letter written by the Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD), Google has now collected personal information from nearly 25 million children in the U.S., and it used this data to engage in “very sophisticated digital marketing techniques.”

The groups want YouTube to delete the children’s data, set up an age gate on the site, and separate out any kids’ content into its own app, where YouTube would have to properly follow COPPA guidelines.

Demands like these appear to be among the pressures pushing the FTC toward this action.

The Commission says it wants input as to whether COPPA should be updated to better address websites and online services that are not traditionally aimed at children but are used by kids, as well as whether these “general audience platforms” should have to identify and police the child-directed content that’s uploaded by third parties.

In other words, should the FTC amend COPPA so it can protect the privacy of the kids using YouTube?

“In light of rapid technological changes that impact the online children’s marketplace, we must ensure COPPA remains effective,” said FTC Chairman Joe Simons, in a published statement. “We’re committed to strong COPPA enforcement, as well as industry outreach and a COPPA business hotline to foster a high level of COPPA compliance. But we also need to regularly revisit and, if warranted, update the Rule,” he added.

While YouTube is a key focus, the FTC will also seek comment on whether there should be an exception to the parental consent requirement for the use of educational technology in schools. And it wants to better understand COPPA’s implications for interactive media, like interactive TV (think Netflix’s Minecraft: Story Mode, for example) or interactive gaming.

More broadly, the FTC says it wants to know how COPPA has affected the availability of sites and services aimed at children.

The decision to initiate a review of COPPA was unanimous among the FTC’s five commissioners, who include three Republicans and two Democrats.

Led by Simons, the FTC in February took action against Musical.ly (now TikTok), issuing a record $5.7 million fine for its COPPA violations. Similar to YouTube, the app was used by many kids under 13 without parental consent. The company knew this was the case, but continued to collect the kids’ personal information regardless.

“This record penalty should be a reminder to all online services and websites that target children: We take enforcement of COPPA very seriously, and we will not tolerate companies that flagrantly ignore the law,” Simons had said at the time.

The settlement with TikTok required the company to delete children’s videos and data and restrict underage users from being able to film videos.

It’s unclear why the FTC can’t now require the same of YouTube, given the similarities between the two services, without amending the law.

“They absolutely can and should fine YouTube, not to mention force YouTube to make significant changes, under the current regulations,” says Josh Golin, executive director of the CCFC. “As for the YouTube decision – by far the most important COPPA case in the agency’s history – it’s extremely concerning that the Commission appears to be signaling they do not have the authority under the current rules to hold YouTube accountable,” he says.

“COPPA rules could use some updating but the biggest problem with the law is the FTC’s lack of enforcement, which is something the Commission could address right away without a lengthy comment period,” Golin adds.

The FTC says it will hold a public workshop on October 7, 2019 to examine the COPPA Rule.

U.S. Senator and consumer advocacy groups urge FTC to take action on YouTube’s alleged COPPA violations

The groups behind a push to get the U.S. Federal Trade Commission to investigate YouTube’s alleged violation of children’s privacy law, COPPA, have today submitted a new letter to the FTC laying out the sanctions they now want the agency to impose. The letter comes shortly after news broke that the FTC was in the final stages of its probe into YouTube’s business practices regarding this matter.

They’re joined in pressing the FTC to act by COPPA co-author, Senator Ed Markey, who penned a letter of his own, which was also submitted today.

The groups’ formal complaint with the FTC was filed back in April 2018. The coalition, which then included 20 child advocacy, consumer and privacy groups, claimed YouTube doesn’t get parental consent before collecting data from children under the age of 13 — as is required by the Children’s Online Privacy Protection Act, also known as COPPA.

The organizations said, effectively, that YouTube was hiding behind its terms of service, which claim that YouTube is “not intended for children under 13.”

This simply isn’t true, as any YouTube user knows. YouTube is filled with videos that explicitly cater to children, from cartoons to nursery rhymes to toy ads — the latter of which often come about by way of undisclosed sponsorships between toymakers and YouTube stars. The video creators will excitedly unbox or demo toys they received for free or were paid to feature, and kids eat it all up.

In addition, YouTube curates much of its kid-friendly content into a separate YouTube Kids app that’s designed for the under-13 crowd — even preschoolers.

Meanwhile, YouTube treats children’s content like any other. That means targeted advertising and commercial data collection are taking place, the groups’ complaint states. YouTube’s algorithms also recommend videos and autoplay suggestions — a practice that has led to kids being exposed to inappropriate content in the past.

Today, two of the leading groups behind the original complaint — the Campaign for a Commercial-Free Childhood (CCFC) and Center for Digital Democracy (CDD) — are asking the FTC to impose the maximum civil penalties on YouTube because, as they’ve said:

Google had actual knowledge of both the large number of child-directed channels on YouTube and the large numbers of children using YouTube. Yet, Google collected personal information from nearly 25 million children in the U.S. over a period of years, and used this data to engage in very sophisticated digital marketing techniques. Google’s wrongdoing allowed it to profit in two different ways: Google has not only made a vast amount of money by using children’s personal information as part of its ad networks to target advertising, but has also profited from advertising revenues from ads on its YouTube channels that are watched by children.

The groups are asking the FTC to impose a 20-year consent decree on YouTube.

They want the FTC to order YouTube to destroy all data from children under 13, including any inferences drawn from the data, that’s in Google’s possession. YouTube should also stop collecting data from anyone under 13, including anyone viewing a channel or video directed at children. Kids’ ages also need to be identified so they can be prevented from accessing YouTube.
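For illustration, an age gate in this sense is a neutral age screen at signup: ask for a birth date without hinting at the cutoff, then route under-13 users into a parental-consent flow or away from the service. Here is a minimal sketch of that logic in Python; the function names and the consent flag are hypothetical, not any actual YouTube implementation.

```python
from datetime import date

MIN_AGE = 13  # COPPA's threshold for parental-consent requirements

def age_on(birth_date: date, today: date) -> int:
    """Age in whole years as of `today`."""
    years = today.year - birth_date.year
    # Subtract one if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def gate_signup(birth_date: date, has_verified_parental_consent: bool) -> str:
    """Return an access decision for a new signup (illustrative only)."""
    if age_on(birth_date, date.today()) >= MIN_AGE:
        return "allow"
    # Under 13: no data collection unless a parent has given verifiable
    # consent via one of the FTC-approved methods.
    return "allow_with_consent" if has_verified_parental_consent else "deny"
```

In practice a neutral age screen also remembers failed attempts, so a child who is denied can’t simply re-enter an older birth date on the next try.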

Meanwhile, the groups suggest that all the channels in the Parenting and Family lineup, plus any other channels or videos directed at children, be removed from YouTube and placed on a separate platform for children (e.g., the YouTube Kids app).

This is something YouTube is already considering, according to a report from The Wall Street Journal last week.

This separate kids’ platform would have a variety of restrictions, including no commercial data collection; no links out to other sites or online services; no targeted marketing; no product or brand integration; no influencer marketing; and even no recommendations or autoplay.

The removal of autoplaying videos and recommendations, in particular, would be a radical change to how YouTube operates, but one that could protect kids from inappropriate content that slips in. It’s also a change that some employees inside YouTube itself were pushing for, according to The WSJ’s report.

The groups also urge the FTC to require Google to fund educational campaigns around the true nature of Google’s data-driven marketing systems, admit publicly that it violated the law, and submit to annual audits to ensure its ongoing compliance. They want Google to commit $100 million to establish a fund that supports the production of noncommercial, high-quality and diverse content for kids.

Finally, the groups are asking that Google face the maximum possible civil penalties — $42,530 per violation, which could be counted as either per child or per day. This monetary relief needs to be severe, the groups argue, so Google and YouTube will be deterred from ever violating COPPA in the future.
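For a sense of scale, the back-of-the-envelope arithmetic looks like this; it is purely illustrative, since the actual violation count and the counting method (per child vs. per day) would be determined in any enforcement action.

```python
# Statutory maximum per COPPA violation (USD) and the complaint's
# alleged count of affected U.S. children -- both taken from the letter.
per_violation = 42_530
children = 25_000_000

print(f"${per_violation * children:,}")  # $1,063,250,000,000
```

Real penalties are negotiated far below such ceilings (TikTok’s record fine was $5.7 million), but per-child counting is why the groups can frame the maximum exposure in the trillions.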

This laundry list of suggestions reads more like a wish list for the ideal resolution; the FTC won’t necessarily follow through on all of them.

However, it seems likely that the Commission would at least require YouTube to delete the improperly collected data and isolate the kids’ YouTube experience in some way. After all, that’s precisely what it just did with TikTok (previously Musical.ly), which earlier this year paid a record $5.7 million fine for its own COPPA violations. TikTok also had to implement an age gate that restricts under-13 users from publishing content.

The advocacy groups aren’t the only ones making suggestions to the FTC.

Senator Ed Markey (D-Mass.) also sent the FTC a letter today about YouTube’s violations of COPPA — a piece of legislation that he co-authored.

In his letter, he urges the FTC to take a similar set of actions, saying:

“I am concerned that YouTube has failed to comply with COPPA. I therefore, urge the Commission to use all necessary resources to investigate YouTube, demand that YouTube pay all monetary penalties it owes as a result of any legal violations, and instruct YouTube to institute policy changes that put children’s well-being first.”

His suggestions are similar to those being pushed by the advocacy groups. They include demands for YouTube to delete the children’s data and cease data collection on those under 13; implement an age gate on YouTube to come into compliance with COPPA; prohibit targeted and influencer marketing; offer detailed explanations of any data collected for “internal purposes”; undergo a yearly audit; provide documentation of compliance upon request; and establish a fund for noncommercial content.

He also wants Google to sponsor a consumer education campaign warning parents that no one under 13 should use YouTube, and wants Google to be prohibited from launching any new child-directed product until it’s been reviewed by an independent panel of experts.

The FTC’s policy doesn’t allow it to confirm or deny nonpublic investigations. YouTube hasn’t yet commented on the letters.

Alexa, does the Echo Dot Kids protect children’s privacy?

A coalition of child protection and privacy groups has filed a complaint with the Federal Trade Commission (FTC) urging it to investigate a kid-focused edition of Amazon’s Echo smart speaker.

The complaint against Amazon Echo Dot Kids, which has been lodged with the FTC by groups including the Campaign for a Commercial-Free Childhood, the Center for Digital Democracy and the Consumer Federation of America, argues that the ecommerce giant is violating the Children’s Online Privacy Protection Act (Coppa) — including by failing to obtain proper consents for the use of kids’ data.

As with Amazon’s other Echo smart speakers, the Echo Dot Kids continually listens for a wake word and then responds to voice commands by recording and processing users’ speech. The difference with this Echo is that it’s intended for children to use — which makes it subject to US privacy regulation intended to protect kids from commercial exploitation online.

The complaint, which can be read in full via the groups’ complaint website, argues that Amazon fails to provide adequate information to parents about what personal data will be collected from their children when they use the Echo Dot Kids; how their information will be used; and which third parties it will be shared with — meaning parents do not have enough information to make an informed decision about whether to give consent for their child’s data to be processed.

They also accuse Amazon of providing at best “unclear and confusing” information per its obligation under Coppa to notify parents and obtain consent when children’s information is collected by third parties via the online service — such as those providing Alexa “skills” (aka apps the AI can interact with to expand its utility).

A number of other concerns are also being raised about Amazon’s device with the FTC.

Amazon released the Echo Dot Kids a year ago — and, as we noted at the time, it’s essentially a brightly bumpered iteration of the company’s standard Echo Dot hardware.

There are differences in the software, though. In parallel, Amazon updated its Alexa smart assistant — adding parental controls, aka its FreeTime software, to the child-focused smart speaker.

Amazon said the free version of FreeTime that comes bundled with the Echo Dot Kids provides parents with controls to manage their kids’ use of the product, including device time limits; parental controls over skills and services; and the ability to view kids’ activity via a parental dashboard in the app. The software also removes the ability for Alexa to be used to make phone calls outside the home (while keeping an intercom functionality).

A paid premium tier of FreeTime (called FreeTime Unlimited) also bundles additional kid-friendly content, including Audible books, ad-free radio stations from iHeartRadio Family, and premium skills and stories from the likes of Disney, National Geographic and Nickelodeon.

At the time it announced the Echo Dot Kids, Amazon said it had tweaked its voice assistant to support kid-focused interactions — saying it had trained the AI to understand children’s questions and speech patterns, and incorporated new answers targeted specifically at kids (such as jokes).

But while the company was ploughing resources into adding a parental control layer to the Echo and making Alexa’s speech recognition kid-friendly, the Coppa complaint argues it failed to pay enough attention to the data protection and privacy obligations that apply to products targeted at children — as the Echo Dot Kids clearly is.

Or, to put it another way, Amazon offers parents some controls over how their children can interact with the product — but not enough controls over how Amazon (and others) can interact with their children’s data via the same always-on microphone.

More specifically, the groups argue that Amazon is failing to meet its obligation as the operator of a child-directed service to provide notice and obtain consent for third parties operating on the Alexa platform to use children’s data — noting that its Children’s Privacy Disclosure policy states it does not apply to third-party services and skills.

Instead, the complaint says, Amazon tells parents they should review the skill’s policies concerning data collection and use. “Our investigation found that only about 15% of kid skills provide a link to a privacy policy. Thus, Amazon’s notice to parents regarding data collection by third parties appears designed to discourage parental engagement and avoid Amazon’s responsibilities under Coppa,” the groups write in a summary of their complaint.

They are also objecting to how Amazon obtains parental consent — arguing its system for doing so is inadequate because it merely asks that a credit card or debit gift card number be entered.

“It does not verify that the person “consenting” is the child’s parent as required by Coppa,” they argue. “Nor does Amazon verify that the person consenting is even an adult because it allows the use of debit gift cards and does not require a financial transaction for verification.”

Another objection is that Amazon is retaining audio recordings of children’s voices far longer than necessary — keeping them indefinitely unless a parent actively goes in and deletes the recordings, despite Coppa requiring that children’s data be held for no longer than is reasonably necessary.
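Coppa doesn’t prescribe a fixed retention window; services typically operationalize “no longer than is reasonably necessary” as a scheduled purge job. A minimal sketch, with an assumed storage interface and an arbitrary 30-day window (neither reflects Amazon’s actual systems):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # hypothetical; "reasonably necessary" is context-dependent

def purge_expired_recordings(store) -> None:
    """Delete audio recordings -- and data derived from them -- past the window."""
    now = datetime.now(timezone.utc)
    for rec in store.list_recordings():  # `store` is an assumed interface
        if now - rec.created_at > RETENTION:
            store.delete_recording(rec.id)   # the raw audio
            store.delete_transcript(rec.id)  # derived transcripts must go too,
                                             # per the complaint's objection
```

The key detail the complaint raises is the second delete: removing the audio alone still leaves a transcript residue behind.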

They found that additional data (such as transcripts of audio recordings) was still retained even after the audio recordings themselves had been deleted. To remove that residue, a parent must contact Amazon customer service and explicitly request deletion of their child’s entire profile — which also removes the parent’s access to parental controls and the child’s access to content provided via FreeTime. The complaint therefore argues that Amazon’s process for parents to delete children’s information is “unduly burdensome” too.

Their investigation also found the company’s process for letting parents review children’s information to be similarly arduous, with no ability for parents to search the collected data — meaning they have to listen to or read every recording of their child to understand what has been stored.

The complaint further highlights that children’s audio recordings can of course include sensitive personal details — such as when a child uses Alexa’s “remember” feature to ask the AI to remember personal data like their address and contact details, or personal health information like a food allergy.

The groups’ complaint also flags the risk of other children having their data collected and processed by Amazon without their parents’ consent — such as when a child has a friend or family member visiting on a playdate and they end up playing with the Echo together.

Responding to the complaint, Amazon has denied it is in breach of Coppa. In a statement a company spokesperson said: “FreeTime on Alexa and Echo Dot Kids Edition are compliant with the Children’s Online Privacy Protection Act (COPPA). Customers can find more information on Alexa and overall privacy practices here: https://www.amazon.com/alexa/voice.”

An Amazon spokesperson also told us it only allows kid skills to collect personal information from children outside of FreeTime Unlimited (i.e. the paid tier) — and then only if the skill has a privacy policy and the developer separately obtains verified consent from the parent, adding that most kid skills do not have a privacy policy because they do not collect any personal information.

At the time of writing the FTC had not responded to a request for comment on the complaint.

Over in Europe, there has been growing concern over the use of children’s data by online services. A report by England’s children’s commissioner late last year warned that kids are being “datafied”, and suggested profiling at such an early age could lead to a data-disadvantaged generation.

Responding to rising concerns, the UK privacy regulator last month launched a consultation on a draft Code of Practice for age-appropriate design, asking for feedback on 16 proposed standards online services must meet to protect children’s privacy — including requiring that product makers put the best interests of the child at the fore, deliver transparent T&Cs, minimize data use and set high privacy defaults.

The UK government has also recently published a white paper setting out a policy plan to regulate internet content, with a heavy focus on child safety.

Oath agrees to pay $5M to settle charges it violated children’s privacy

TechCrunch’s Verizon-owned parent, Oath, the ad tech division formed from the merger of AOL and Yahoo, has agreed to pay around $5 million to settle charges that it violated a federal children’s privacy law.

The penalty is said to be the largest ever issued under COPPA.

The New York Times reported the story yesterday, saying the settlement will be announced by the New York attorney general’s office today.

At the time of writing the AG’s office could not be reached for comment.

We reached out to Oath with a number of questions about this privacy failure. But a spokesman did not engage with any of them directly — emailing a short statement instead, in which the company writes: “We are pleased to see this matter resolved and remain wholly committed to protecting children’s privacy online.”

The spokesman also neither confirmed nor disputed the contents of the NYT report.

According to the newspaper, which cites the as-yet unpublished settlement documents, AOL, via its ad exchange, helped place adverts on hundreds of websites that it knew were targeted at children under 13 — such as Roblox.com and Sweetyhigh.com.

The ads were placed using children’s personal data, including cookies and geolocation, which the attorney general’s office said violated the Children’s Online Privacy Protection Act (COPPA) of 1998.

The NYT quotes attorney general Barbara D. Underwood describing AOL’s actions as “flagrantly” in violation of COPPA.

The $5M fine for Oath comes at a time when scrutiny is being dialled up on online privacy and ad tech generally, and around kids’ data specifically — with rising concern about how children are being tracked and ‘datafied’ online.

Earlier this year, a coalition of child advocacy, consumer and privacy groups in the US filed a complaint with the FTC asking it to investigate Google-owned YouTube over COPPA violations — arguing that while the site’s terms claim it’s aimed at users 13 and older, content on YouTube is clearly targeting younger children, including cartoon videos, nursery rhymes and toy ads.

COPPA requires that companies provide direct notice to parents and obtain verifiable parental consent before collecting information online from children under 13.

Consent must also be sought for using or disclosing personal data from children. Or indeed for targeting kids with adverts linked to what they do online.

Personal data under COPPA includes persistent identifiers (such as cookies) and geolocation information, as well as data such as real names or screen names.

In the case of Oath, the NYT reports that even though AOL’s policies technically prohibited the use of its display ad exchange to auction ad space on kids’ websites, the company did so anyway — citing settlement documents covering the ad tech firm’s practices between October 2015 and February 2017.

According to these documents, an account manager for AOL in New York repeatedly — and erroneously — told a client, Playwire Media (which represents children’s websites such as Roblox.com), that AOL’s ad exchange could be used to sell ad space while complying with Coppa.

Playwire then used the exchange to place more than a billion ads on space that should have been covered by Coppa, the newspaper adds.

The paper also reports that AOL (via Advertising.com) bought ad space on websites flagged as COPPA-covered from other ad exchanges.

It says Oath has since introduced technology to identify when ad space is deemed to be covered by Coppa and ‘adjust its practices’ accordingly — again citing the settlement documents.
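The settlement documents aren’t quoted in detail, but in programmatic advertising the standard signal for child-directed inventory is the `coppa` field in an OpenRTB bid request’s `regs` object. Here’s a hedged sketch of what such an adjustment might look like; only the OpenRTB field names (`regs.coppa`, `user.id`, `user.buyeruid`, `device.ifa`, `device.geo`) are standard, and the scrubbing logic itself is illustrative, not Oath’s actual remediation.

```python
def scrub_for_coppa(bid_request: dict) -> dict:
    """Strip COPPA-covered personal data from an OpenRTB 2.x bid request."""
    if bid_request.get("regs", {}).get("coppa") != 1:
        return bid_request  # not flagged as child-directed inventory

    user = bid_request.setdefault("user", {})
    device = bid_request.setdefault("device", {})

    # Persistent identifiers count as personal data under COPPA...
    user.pop("id", None)
    user.pop("buyeruid", None)
    device.pop("ifa", None)  # the device advertising ID

    # ...and so does geolocation information.
    device.pop("geo", None)

    return bid_request
```

This mirrors the definition of personal data noted above: cookies and other persistent identifiers, plus geolocation, are exactly the fields a Coppa-aware exchange has to drop before auctioning the impression.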

As part of the settlement the ad tech division of Verizon has agreed to create a COPPA compliance program, to be overseen by a dedicated executive or officer; and to provide annual training on COPPA compliance to account managers and other employees who work with ads on kids’ websites.

Oath also agreed to destroy personal information it has collected from children.

It’s not clear whether the censured practices ended in February 2017 or continued until more recently. We asked Oath for clarification but it did not respond to the question.

It’s also not clear whether AOL was tracking and targeting adverts at children in the EU. If Oath was doing so but stopped before May 25 this year, it should avoid the possibility of any penalty under Europe’s tough new privacy framework, GDPR, which came into force that day — beefing up protection around children’s data by setting the age at which children can consent to their own data being processed at 16, with member states able to lower it to as young as 13.

GDPR also steeply hikes penalties for privacy violations (up to a maximum of 4% of global annual turnover).

Prior to the regulation, a European data protection directive was in force across the bloc, but it’s GDPR that has strengthened protections in this area with the new provision on children’s data.

Facebook and Instagram change to crack down on underage children

Facebook and Instagram will more proactively lock the accounts of users its moderators encounter and suspect are below the age of 13. The former policy was to investigate accounts only if they were reported specifically for being potentially underage. But Facebook confirmed to TechCrunch that an “operational” change to its policy for reviewers made this week will see them lock the accounts of any underage user they come across, even if the account was reported for something else, such as objectionable content, or is otherwise discovered by reviewers. Facebook will require these users to provide proof that they’re over 13, such as a government-issued photo ID, to regain access. The problem stems from Facebook not requiring any proof of age upon signup.

Facebook Messenger Kids is purposefully aimed at users under age 13

A tougher stance here could reduce Facebook and Instagram’s user counts and advertising revenue. The apps’ formerly more hands-off approach allowed them to hook young users so that by the time they turned 13, they had already invested in building a social graph and a history of content that tethers them to the Facebook corporation. While Facebook has lost cachet with the youth over time as their parents joined, Instagram is still wildly popular with them and likely counts many tweens or even younger children as users.

The change comes in response to an undercover documentary report by the U.K.’s Channel 4 and Firecrest Films that saw a journalist become a Facebook content reviewer through a third-party firm called CPL Resources in Dublin, Ireland. A reviewer there claims they were instructed to ignore users who appeared under 13, saying “We have to have an admission that the person is underage. If not, we just like pretend that we are blind and that we don’t know what underage looks like.” The report also outlined how far-right political groups are subject to different thresholds for deletion than other Pages or accounts if they post hateful content in violation of Facebook’s policies.

In response, Facebook published a blog post on July 16 claiming that high-profile Pages and registered political groups may receive a second layer of review from Facebook employees. But in an update on July 17, Facebook noted that “Since the program, we have been working to update the guidance for reviewers to put a hold on any account they encounter if they have a strong indication it is underage, even if the report was for something else.”

Now a Facebook spokesperson confirms to TechCrunch that this is a change to how reviewers are trained to enforce its age policy for both Facebook and Instagram. This does not mean Facebook will begin a broad sweep of its site hunting for underage users, but it will stop ignoring those it comes across. Facebook’s spokesperson stressed that its terms of service that already bar underage users remain the same, but the operational guidance given to moderators for enforcing that policy has changed.

Facebook prohibits users under 13 to comply with the U.S. Children’s Online Privacy Protection Act, which requires parental consent to collect data about children. The change could see more underage users have their accounts terminated. That might in turn reduce the site’s utility for their friends, over or under age 13, making them less engaged with the social network.

The news comes in contrast to Facebook purposefully trying to attract underage users through its Messenger Kids app that lets children ages 6 to 12 chat with those approved by their parents, which today expanded beyond the U.S. to Mexico, Canada and Peru. With one hand, Facebook is trying to make under-13 users dependent on the social network… while pushing them away with the other.

Child signups lead to problems as users age

A high-ranking source who worked at Facebook in its early days previously told me that one repercussion of a hands-off approach to policing underage users was that as some got older, Facebook would wrongly believe they were over 18 or over 21.

That’s problematic because it could make minors improperly eligible to see ads for alcohol, real-money gambling, loans or subscription services. They’d also be able to see potentially offensive content such as graphic violence that only appears to users over 18 and is hidden behind a warning interstitial. Facebook might also expose their contact info, school and birthday in public search results, which it hides for users under 18.

Users who request to change their birth date may have their accounts suspended, deterring users from coming clean about their real age. A Facebook spokesperson confirmed that in the U.S., Canada and EU, if a user listed as over 18 tries to change their age to be under 18 or vice versa, they would be prompted to provide proof of age.

Facebook might be wise to offer an amnesty period to users who want to correct their age without having their accounts suspended. Getting friends to confirm friend requests and building up a profile takes time and social capital that formerly underage users who are now actually over 13 might not want to risk just to be able to display their accurate birth date. If the company wants to correct the problem, it may need to offer a temporary, consequence-free way for users to correct their age. It could then promote this option to its youngest users, or to those whom algorithms suggest might be under 13 based on their connections.

Facebook doesn’t put any real roadblock in front of underage users beyond a self-certification that they are of age, likely to keep it easy to join the social network and grow its business. It’s understandable that some 9- or 11-year-olds would lie to gain access. Blindly believing self-certifications is what led to the Cambridge Analytica scandal: the data research firm promised Facebook it had deleted surreptitiously collected user data, but Facebook failed to verify that.

There are plenty of other apps that flout COPPA by making it easy for underage children to sign up. Lip-syncing app Musical.ly is particularly notorious for featuring girls under 13 dancing provocatively to modern pop songs in front of audiences of millions — which worryingly include adults. The company’s CEO Alex Zhu angrily denied that it violates COPPA when I confronted him with evidence at TechCrunch Disrupt London in 2016.

Facebook’s reckoning

The increased scrutiny brought on by the Cambridge Analytica debacle, Russian election interference, screen-time addiction, lack of protections against fake news and lax policy toward conspiracy theorists and dangerous content has triggered a reckoning for Facebook.

Yesterday Facebook announced a content moderation policy update, telling TechCrunch, “There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down. We will begin implementing the policy during the coming months.” That comes in response to false rumors spreading through WhatsApp and leading to lynch mobs murdering people in countries like India. The policy could impact conspiracy theorists and publications spreading false news on Facebook, some of which claim to be practicing free speech.

Across safety, privacy and truth, Facebook will have to draw the line on how proactively to police its social network. It’s left trying to balance its mission to connect the world, its business that thrives on maximizing user counts and engagement, its brand as a family-friendly utility, its responsibility to protect society and democracy from misinformation and its values that endorse free speech and a voice for everyone. Something’s got to give.

Over 20 advocacy groups complain to FTC that YouTube is violating children’s privacy law

A coalition of over 20 child advocacy, consumer and privacy groups filed a complaint with the U.S. Federal Trade Commission asking it to investigate and sanction YouTube for violation of federal children’s privacy laws. Specifically, the groups say YouTube is violating COPPA, the Children’s Online Privacy Protection Act, which requires that websites directed at children under the age of 13 get parental consent before they’re allowed to collect children’s data. YouTube doesn’t do this, the complaint says.

Technically, YouTube’s site is aimed at users 13 and older, its terms state. That’s how it gets around having to abide by child privacy laws. But the complaint points to several examples of how the site is clearly targeting younger children, including its hosting of cartoon videos, nursery rhymes and toy ads. Some of the most popular channels on YouTube are also those aimed at young kids, like ChuChu TV Nursery Rhymes & Kids Songs, which has 15.9 million subscribers and over 10 billion channel views, and LittleBabyBum, which has 14.6 million subscribers and over 14 billion channel views.

Plus, advertisers can target kids’ programming through the “Parenting and Family” lineup in the Google Preferred ads platform, the complaint states.

“Google has acted duplicitously by falsely claiming in its terms of service that YouTube is only for those who are age 13 or older, while it deliberately lured young people into an ad-filled digital playground,” said Jeff Chester of the Center for Digital Democracy, one of the groups who signed the complaint, in a statement. “Just like Facebook, Google has focused its huge resources on generating profits instead of protecting privacy.”

The groups have a point. The proliferation of children’s videos on YouTube, and the potential harm of having them mixed in with more adult-oriented content, is what led YouTube to launch the YouTube Kids app several years ago. So it’s hard to argue, simply, that “YouTube is not for kids.”

However, even YouTube Kids has come under fire in recent days because it doesn’t fully protect children from inappropriate videos, including those whose creators took advantage of YouTube’s suggested-videos algorithms to attract views to their shady and sometimes downright disturbing content.

YouTube has tried to crack down on these videos and demonetize them, but inappropriate content continues to slip through – most recently, with conspiracy videos suggested to children. Now the company is reportedly preparing to release a version of the YouTube Kids app without the algorithmic suggestions.

YouTube pointed to the creation of the Kids app in response to the groups’ complaint, according to CNET’s report, saying the app was built for younger viewers because YouTube itself is “not for children.”

But the advocacy groups say that’s not true. The same content is available both on YouTube Kids and YouTube.com, and the latter benefits from the ability to collect data without having to play by the rules.

“Kids have been watching videos on YouTube for years, something the company has known, and profited off of, by targeting content and ads at children under 13. It is time for Google to be completely transparent with all the facts and institute fundamentally responsible new policies moving forward to protect the privacy of kids,” said James P. Steyer, CEO of Common Sense. “We fully expect Google to work closely with advocates and reach out to parents with information about parental controls, content, and collection practices on YouTube so parents can make informed choices about what content they allow their kids to access and how to protect their privacy,” he added.

The complaint arrives at a time of heightened awareness and concern over the data collection and data privacy practices of major tech companies in the U.S., as well as how those systems can be compromised by foreign governments. Most notably, the news cycle is focused on Facebook’s involvement in the Cambridge Analytica scandal, which will see CEO Mark Zuckerberg testify before Congress. But YouTube, too, has had issues with Russian trolls spreading propaganda on its site, as they had done on Facebook and Twitter.

The combination of poor data privacy practices across social media, abuse of social media platforms by foreign actors, YouTube’s misbehaving algorithms, and the company’s other missteps with regard to children may mean this complaint is taken more seriously than those in the past.

Others who signed the complaint include:

Campaign for a Commercial-Free Childhood (CCFC); Berkeley Media Studies Group; Center for Media Justice; Common Sense; Consumer Action; Consumer Federation of America; Consumer Federation of California; Consumers Union, the advocacy division of Consumer Reports; Consumer Watchdog; Corporate Accountability; Defending the Early Years; Electronic Privacy Information Center (“EPIC”); New Dream; Obligation, Inc.; Parent Coalition for Student Privacy; Parents Across America; Parents Television Council; Privacy Rights Clearinghouse; Public Citizen; The Story of Stuff Project; TRUCE (Teachers Resisting Unhealthy Childhood Entertainment); and USPIRG.