Snapchat adds new parental controls that block ‘sensitive’ and ‘suggestive’ content from viewing by teens

Snapchat launched parental controls on its app last year through the new ‘Family Center’ feature. Today, the company announced through a post on its online Privacy and Safety Hub that it is adding content filtering capabilities that will allow parents to restrict teens from being exposed to content identified as sensitive or suggestive.

To enable the feature, parents can toggle on the “Restrict Sensitive Content” filter in Snapchat’s Family Center. Once enabled, teens will no longer see the blocked content on Stories and Spotlight — the platform’s short video section. The text under the toggle specifies that turning on this filter won’t have an impact on content shared in Chat, Snaps, and Search.

Accompanying this change, Snapchat is also publishing its content guidelines for the first time to give creators on Stories and Spotlight more insights into what kind of posts may be recommended on its platform and what content will now be considered “sensitive” under its community guidelines. The platform said that it had shared these guidelines with a set of creators under the Snap Stars program and with its media partners, but now the company is making them available to everyone through a page on its website.

The company already prohibits hateful content, terrorism, violent extremism, illegal activity, harmful false or deceptive information, harassment and bullying, threats of violence, and more from appearing on its platform. But now, the guidelines specify what content under various categories will be considered “sensitive.” This is content that may be eligible for recommendation but may be blocked from teen users under these new controls, or from others on the app based on their age, location or personal preferences.

For example, under the sexual content category, Snap explains that content will be considered “sensitive” if it includes “all nudity, as well as all depictions of sexual activity, even if clothed, and even if the imagery is not real” (such as in the case of AI-generated images). The category also covers “explicit language” describing sex acts and other topics related to sex, like sex work, taboos, genitalia and sex toys, as well as “overtly suggestive imagery,” “insensitive or demeaning sexual content” and “manipulated media.”

It addresses what will be considered sensitive in other categories as well, including harassment, disturbing or violent content, false or deceptive info, illegal or regulated activities, hateful content, terrorism and violent extremism, and commercial content (overt solicitation to buy from non-approved creators). This covers a range of content, like depictions of drugs, engagement bait (“wait for it”), self-harm, body modifications, gore, violence in the news, graphic imagery of human physical maladies, animal suffering, sensationalized coverage of disturbing incidents, like violent or sexual crimes, dangerous behavior, and much more.

The changes come long after a 2021 Congressional hearing where Snap was grilled about showing adult-oriented content in the app’s Discover feed, such as invites to sexualized video games and articles about going to bars or porn. As senators rightly pointed out, Snap’s app was listed as 12+ in the App Store, but the content it was sharing was clearly intended for a more adult audience. Even some of the video games it advertised were rated as being aimed at older users.

“We hope these new tools and guidelines help parents, caregivers, trusted adults and teens not only personalize their Snapchat experience, but empower them to have productive conversations about their online experiences,” the social media company said in a blog post.

However, while the new feature may go a long way toward limiting sensitive content for teen viewers in some areas, it doesn’t tackle one of the areas Congress had called out — the Discover feed. Here, Snap features publisher content, including publishers whose content could be considered “sensitive” under its guidelines. It’s frankly a lot of clickbait. And yet this area is not being addressed with the new controls.

Plus, the feature requires parents to take action by turning on a toggle they likely know nothing about.

In short, this is another example of how the lack of legislation and regulations regarding social media companies has led to self-policing, which doesn’t go far enough to protect young users from harm.

In addition to the content controls, Snap said that it is working on adding tools to give parents more “visibility and control” around teens’ usage of the new My AI chatbot.

Last month, the social network launched this chatbot powered by OpenAI’s GPT tech under the Snapchat+ subscription. Incidentally, Snapchat’s announcement comes after the chatbot went rogue while chatting with a Washington Post columnist pretending to be a teen. The bot allegedly advised the columnist about hiding the smell of pot and alcohol while having a birthday party. Separately, researchers at the Center for Humane Technology found that the bot gave sex advice to a user pretending to be 13 years old.

The additional tools targeting the chatbot have not yet been rolled out.


Snapchat adds new parental controls that block ‘sensitive’ and ‘suggestive’ content from viewing by teens by Ivan Mehta originally published on TechCrunch

Twitter woos developers back with an app platform

Twitter has a novel idea as to how to win back developer trust: It’s going to allow them to build on the platform itself, then promote their tools and services directly to users on Twitter at key moments.

The company today is launching an experiment that may be one of the more significant efforts in recent years — beyond the launch of the rebuilt Twitter API, of course — aimed at demonstrating its intent to collaborate with developers and be more responsive to their needs. With the new test, Twitter says it will promote select developers’ apps directly on its platform at the moment they may be useful to the end user.

The experiment will start off small by highlighting apps from Twitter’s recently launched “Twitter Toolbox” collection of ready-made services, specifically in the area of developer-built safety tools, including Block Party, Bodyguard and Moderate.

When Twitter users block or mute someone using Twitter’s built-in tools via the web app, they’ll see a new, non-intrusive prompt that suggests various third-party services that are capable of providing more advanced levels of safety and protection.

Twitter first introduced the Twitter Toolbox in February.

Currently, the Toolbox is an online hub featuring ready-to-use, self-serve apps and services built by Twitter’s developer community and aimed at the public. In addition to the trio of safety tools, the Toolbox today includes scheduling apps Chirr App and Buffer; the Thread Reader app; and measurement tools ilo, Direcon Inc., Followerwonk and Tweepsmap.

Twitter says the idea to feature the Toolbox apps on its platform came from its discussions with the developer community. Developers told the company they wanted to improve users’ experience on Twitter, and, more critically, they wanted distribution so people were aware of their product.

“This is the first step in us working with developers to actually enable those needs,” explains Amir Shevat, Twitter’s head of Product for its developer platform, who came to the company via its 2021 Reshuffle acquisition after filling similar roles at Twitch, Slack, Google and Microsoft.

“[Developers] want users and we want to provide them with the right users at the right time — giving value both to Twitter users, who are at the moment needing those developer innovations, and giving distribution to those developers,” he says.

Image Credits: Twitter

By only showing the prompts when the user intent is the strongest — when the user is already blocking or muting an account, in this case — Twitter expects users will show an increased willingness to interact with the developers’ apps. For a developer like Block Party, which currently gains users via word-of-mouth recommendations, the ability to acquire users on Twitter itself could be highly impactful for its business.

“We’re very enthused about the opportunity to get in front of users who are really going to benefit from what we’re doing,” says Block Party founder and CEO Tracy Chou. “This way, we are much more directly getting in front of the people who want this kind of tooling.”

Bodyguard’s founder and CEO, Charles Cohen, is already expecting his business to grow substantially as a result of the new test.

“We currently estimate that 25% of users installing Bodyguard come from the Twitter Toolbox website, and we expect that figure to rise to 50% within the next few days,” he says. “We’re also excited about this new experiment that will bring Bodyguard to Twitter users who need an immediate, free, real-time, customizable and high-quality protection against toxic content directed at them on their social media.”

Twitter’s olive branch to developers

Although Twitter has been operating in a more open capacity with its developer community for several years, it still has a lot of history to overcome to fully reset its developer relations. And it will be no small effort to repair its reputation.

In earlier years, Twitter’s attitude toward developers was disorganized and constantly in flux. It hosted its first developer conference, Chirp, in 2010, setting the expectation that it would continue to engage directly with those building with its API and developer tools for years to come. The following year, the event was canceled and it didn’t return.

In 2012, Twitter pulled the rug from under the feet of developers who had been building third-party Twitter clients, and a few years later, it shafted its own partners who had agreements to resell Twitter’s firehose data — the unfiltered, full stream of tweets and their metadata — after its 2014 acquisition of the partners’ competitor Gnip.

And, as some have suggested, Twitter was not always honest about the reasons behind its API changes. A recent Twitter thread by early Twitter employee Evan Henshaw-Plath (@rabble) explains that Twitter’s decision to limit third-party clients was actually due to a competitive threat from UberMedia, which had been buying up clients with the goal of shifting users over to its own microblogging service.

Lamented Henshaw-Plath, “Twitter thought the open ecosystem was going to be captured, so they shut down the ecosystem to save the company. I wish they’d partnered with app developers, [been] honest about the attack on the company …”

Today, Twitter is trying to do just that.

“I think we need to earn trust. And I think we need to be transparent. I think we need to build in the open,” says Shevat.

Today, Twitter wants to collaborate with developers and build the tools they need to be successful, he explains.

With the rebuilt API, for example, the company focused on including features that were missing from earlier versions — like conversation threading, poll results, pinned tweets, spam filtering, stream filtering and more. It also opened up its platform, reduced the onboarding burden, added more free access and removed some of its earlier policy constraints (including those impacting third-party apps).
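To make the feature list above concrete, here is a minimal sketch of how a developer might assemble a request against the public API v2 recent-search endpoint, asking for `conversation_id` (the field that enables conversation threading) alongside each tweet. The endpoint and parameter names follow Twitter’s published v2 documentation; the query string and the surrounding helper are illustrative, and a real call would also need a bearer token in an `Authorization` header.

```python
from urllib.parse import urlencode

# Public API v2 recent-search endpoint (per Twitter's v2 docs).
BASE_URL = "https://api.twitter.com/2/tweets/search/recent"

def build_search_url(query: str, tweet_fields: list[str]) -> str:
    """Assemble a v2 recent-search URL requesting extra tweet fields.

    An HTTP client would send this URL with an
    'Authorization: Bearer <token>' header; no request is made here.
    """
    params = urlencode({
        "query": query,
        # Comma-separated expansions such as conversation_id let clients
        # reconstruct threads, one of the v2 additions mentioned above.
        "tweet.fields": ",".join(tweet_fields),
    })
    return f"{BASE_URL}?{params}"

# Hypothetical example query against the @TwitterDev account.
url = build_search_url("from:TwitterDev", ["conversation_id", "created_at"])
print(url)
```

Keeping URL construction separate from the HTTP call like this also makes the request logic easy to unit-test without network access.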

By November 2021, the company said 90% of all existing apps built on the API v1.1 could be fully supported on v2, while also accessing its key new features.

“Twitter’s API v2 has shown a real listening to the needs and feedback of developers and all of this is going in the right direction,” says Cohen.

And Chou agrees: “… It does seem like there has been a change from the early more chaotic days, where it seems like the folks at Twitter now really see the value of having this developer ecosystem and really want to invest in it — because they see that it’s a good thing for their users to have more of these options, around things like moderation and safety.”

But collaboration and communication are only part of serving the developer community. Helping apps get discovered and generate revenue are also critical factors.

“As we have more surfaces on Twitter, we want to create better ways to discover [developers’ apps],” Shevat says. “Developers, in order to be successful on the platform, need engagement surfaces … they need to get discovered, which is what we’re trying to do with the next few experiments,” he adds. “And they need the ability to be successful monetarily with their innovation.”

This strategy is not just good for the developers, it’s also good for end users who benefit from access to these third-party tools and services, he notes.

“This is basically the underpinning assumption behind the things that we’re launching,” says Shevat.

Another factor driving this strategy is that Twitter is coming to understand that it can’t build everything that its users need, even with its increased pace of product development.

Meanwhile, its decentralization efforts through the BlueSky project will only drive more demand for developer-led innovations. To highlight a current example, there’s a growing demand for algorithm choice and user-set levels of content filtering or moderation. Elon Musk’s bid for Twitter is, in part, about wanting a different set of moderation rules.

Apps like Block Party, which filters out unwanted mentions and blocks trolls, could ultimately help with some of these issues.

“One way to think about Block Party is that it is a different algorithm for what you’re going to look at, in terms of notifications and when people tag you,” explains Chou.

“The default algorithm is that anytime somebody tags you, you see a mention. The new algorithm, with something like Block Party installed — even if users aren’t thinking of it as like I’m choosing my own algorithm — is a different experience. Just because somebody attacked me doesn’t mean that I’m necessarily going to have to see it immediately,” she says. “I can choose not to see it. So now there’s a different algorithm controlling the experience of mentions.”

While Block Party’s focus is on filtering mentions, there are other areas throughout Twitter’s product where users might want to customize their experience more, rather than going with the platform default.

One such area involves creators. Shevat points out that the new API added endpoints for Twitter’s Super Follow creator subscriptions product, which allows users to subscribe to favorite accounts through in-app purchases to receive exclusive content. With the API, a developer could make a Super Follows app that serves different sets of content to followers versus subscribers. Or it could filter the end-user experience to focus on the paid content from the user’s Super Follow subscriptions.

Such an app could be promoted within Twitter when the user engages with a Super Follow creator or subscribes for the first time. He also imagines how developers could build within Twitter Spaces — imagine Wordle in Spaces, he suggests — or even promote their apps on the Twitter Timeline. Perhaps if someone posts a link to a song, you could play it with your favorite streaming app.

It’s easy to also imagine the other Twitter Toolbox apps being similarly promoted in various places on Twitter’s platform — like when users look for a way to batch-post tweets or check their analytics, for instance.

“I think of Twitter right now as the old Nokia phone … it was a good phone. But the only app on it was Snake, if you remember,” says Shevat. “I see the future of Twitter as an iPhone, where the value that you get is actually through developer innovation.”

Twitter’s own “app store”

Shevat notes there’s now a dedicated team focused on solving the app discovery problem specifically, with solutions like Toolbox and this latest Twitter test. Further down the road, the apps and services could be more deeply integrated with the Twitter platform, as Twitter in December added support for OAuth 2.0, which he describes as a first step into a much tighter and seamless integration in the future between Twitter and the apps it features.

Other models could also be considered — like a Twitter “app store.”

“I can definitely see that as one of the possibilities for our future,” says Shevat, when asked about this option. “From my experience, as a pure product person, I want to experiment to see where’s the best way because … the world is full of successful marketplaces, but it’s also full of graveyards of old marketplaces that nobody wants to go to,” he continues.

“So I need to make sure that users find value — maybe I can just surface it in the product instead of a separate marketplace. Maybe I can create a section within the product that surfaces developer innovation.”

“But we’re definitely going to solve the problem of discovery,” he says.