Elon Musk says Twitter is ‘aiming’ to roll out encrypted DMs this month

Among a ton of product changes meant to make Twitter a more attractive platform, Elon Musk has repeatedly mentioned his desire to make direct messages better and more secure, so much so that he wants DMs to "superset Signal," the encrypted messaging app.

Over the weekend, Musk said that the end-to-end encrypted DM feature will roll out this month. Along with that, users will also get the ability to reply to individual messages and use any reaction emoji. “Aiming to roll out ability to reply to individual DMs, use any reaction emoji & encryption later this month,” Musk wrote. Currently, users can only choose among seven emojis as reactions.

End-to-end encryption means that no one, including Twitter, will be able to read your chats except the recipients of your messages. Several other apps and messaging protocols, including WhatsApp, Signal and iMessage, already use this kind of encryption. Currently, Twitter employees can potentially read the contents of direct messages on the platform. It's not clear at the moment whether encryption will be available for both individual and group chats, or whether end-to-end encryption will be enabled by default or offered as an opt-in feature.
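
For readers unfamiliar with the mechanics, here is a minimal sketch of the idea using the PyNaCl library. It is purely illustrative and not Twitter's implementation (which hasn't been detailed): the point is that only the holder of the recipient's private key can decrypt, so the service relaying the message sees only ciphertext.

```python
# Minimal end-to-end encryption sketch using the PyNaCl library
# (pip install pynacl). Illustrative only; this is not Twitter's actual protocol.
from nacl.public import PrivateKey, SealedBox

# The recipient generates a keypair on their own device; the private key
# never leaves that device. Only the public key is shared with senders.
recipient_private = PrivateKey.generate()
recipient_public = recipient_private.public_key

# The sender encrypts against the recipient's public key.
ciphertext = SealedBox(recipient_public).encrypt(b"this DM is for your eyes only")

# The platform only ever relays `ciphertext` and cannot decrypt it.
# Only the recipient, who holds the private key, can recover the message.
plaintext = SealedBox(recipient_private).decrypt(ciphertext)
assert plaintext == b"this DM is for your eyes only"
```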

Encrypted DMs are not exactly a new project. Twitter started working on them back in 2018 but abandoned its efforts later. Last year, app researcher Jane Manchun Wong discovered new code suggesting that the social network has resumed its work on the feature under the new management.

What’s more, Twitter designer Andrea Conway showed off a concept in February suggesting that DMs will display a banner at the top of a conversation to indicate that it is protected by end-to-end encryption.

Other new functions are simply part of Twitter's push to achieve feature parity with chat apps like WhatsApp and Telegram. WhatsApp, the Meta-owned app, expanded its emoji reaction feature last year, while Telegram put custom reactions behind a paywall.


TikTok expands its DM settings to let users choose who can message them

TikTok has quietly expanded its direct messaging settings to give users a choice of who they want to receive messages from. The options are now: everyone, suggested friends, mutual friends, people you’ve sent messages to, or no one. Prior to this change, users could only exchange DMs with people they had identified as friends or who were recommended to them. The change was first spotted by The Information.

The company’s website explains that if you choose the “Everyone” option, that means anyone can send you a DM. Messages from mutual friends and people you follow will appear in your inbox, and messages from people you don’t follow will appear in Message requests. You can choose to accept, delete, or report these messages.

If you choose the “Suggested Friends” option, this means that recommended friends, including synced Facebook friends and phone contacts, can send you a DM. The “Mutual Friends” option means that anyone who follows you and you follow back can send you a message. If you select the “No one” option, then you can’t receive direct messages from anyone. TikTok notes that you can still access your message history in your inbox, but you can’t receive new direct messages in those chats.
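
To make the behavior concrete, here is a hypothetical sketch of how a permission setting like this might gate new DMs and route messages under the "Everyone" option. The option names mirror TikTok's settings, but the logic is an illustrative assumption, not TikTok's code.

```python
# Hypothetical sketch of DM-permission gating. The option names mirror TikTok's
# settings; the logic itself is illustrative, not TikTok's actual implementation.
from enum import Enum

class DMSetting(Enum):
    EVERYONE = "Everyone"
    SUGGESTED_FRIENDS = "Suggested friends"
    MUTUAL_FRIENDS = "Mutual friends"
    PEOPLE_YOUVE_MESSAGED = "People you've sent messages to"
    NO_ONE = "No one"

def can_receive_new_dm(setting: DMSetting, is_suggested_friend: bool,
                       is_mutual: bool, recipient_has_messaged_sender: bool) -> bool:
    """Decide whether a sender may start a new DM thread with the recipient."""
    if setting is DMSetting.NO_ONE:
        return False  # existing message history stays readable, but no new DMs arrive
    if setting is DMSetting.EVERYONE:
        return True
    if setting is DMSetting.SUGGESTED_FRIENDS:
        return is_suggested_friend  # e.g. synced phone contacts or Facebook friends
    if setting is DMSetting.MUTUAL_FRIENDS:
        return is_mutual  # you follow each other
    if setting is DMSetting.PEOPLE_YOUVE_MESSAGED:
        return recipient_has_messaged_sender
    return False

def route_under_everyone(recipient_follows_sender: bool) -> str:
    """Under "Everyone": messages from people you follow land in the inbox;
    messages from strangers land in Message requests for review."""
    return "inbox" if recipient_follows_sender else "message_requests"
```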

To change your direct messaging settings, you need to tap the Profile icon at the bottom of the TikTok home screen. Next, you need to tap on the Menu button at the top and select “Settings and privacy” and then tap “Privacy.” From there, you need to select “Direct messages” and then you will be able to choose who you would like to allow to send you DMs.

The change marks the latest way that TikTok is expanding social features on its platform in a bid to compete against Instagram. Last year, the company introduced a new “Friends” tab that replaced the “Discover” tab, signaling that it was looking to offer a new way to recommend content based on your actual friendships. Last September, TikTok launched a BeReal clone called TikTok Now that encourages users to post content every day at a specific time in exchange for viewing posts from their friends.

TikTok has already proven itself as a successful entertainment platform, and is now likely looking to expand its social features to get users to spend even more time on its app.


Instagram is developing a nudity filter for direct messages

Instagram is testing a new way to filter out unsolicited nude photos sent over direct messages, confirming reports of the development posted by app researcher Alessandro Paluzzi earlier this week. Paluzzi's screenshots indicated Instagram was working on technology that would cover up photos that may contain nudity, while noting that the company would not be able to access the photos itself.

The development was first reported by The Verge and Instagram confirmed the feature to TechCrunch. The company said the feature is in the early stages of development and it’s not testing this yet.

“We’re developing a set of optional user controls to help people protect themselves from unwanted DMs, like photos containing nudity,” Meta spokesperson Liz Fernandez told TechCrunch. “This technology doesn’t allow Meta to see anyone’s private messages, nor are they shared with us or anyone else. We’re working closely with experts to ensure these new features preserve people’s privacy while giving them control over the messages they receive,” she added.

Screenshots of the feature posted by Paluzzi suggest that Instagram will process all images for this feature on the device, so nothing is sent to its servers. Plus, you can choose to see the photo if you think it’s from a trusted person. When the feature rolls out widely, it will be an optional setting for users who want to weed out messages with nude photos.
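
Based on that description (on-device scoring, covered by default, tap to reveal), a hypothetical sketch of the client-side flow might look like the following. The classifier and threshold are stand-ins; Instagram hasn't said what model it uses.

```python
# Hypothetical client-side flow for an optional nudity filter, based on the
# behavior described above. The classifier and threshold are stand-ins;
# nothing here would be sent to Instagram's servers.
from dataclasses import dataclass
from typing import Callable

NUDITY_THRESHOLD = 0.8  # assumed tunable cutoff, illustrative only

@dataclass
class IncomingPhoto:
    image_bytes: bytes
    covered: bool = False  # True -> shown blurred/covered in the chat

def process_incoming_photo(photo: IncomingPhoto, filter_enabled: bool,
                           classifier: Callable[[bytes], float]) -> IncomingPhoto:
    """`classifier` is any on-device callable mapping image bytes to a nudity
    probability in [0, 1]; the image never leaves the device."""
    if filter_enabled and classifier(photo.image_bytes) >= NUDITY_THRESHOLD:
        photo.covered = True  # kept covered until the recipient taps to view it
    return photo

def reveal(photo: IncomingPhoto) -> IncomingPhoto:
    """The recipient can still choose to view the photo, e.g. from a trusted sender."""
    photo.covered = False
    return photo
```

Keeping the classification entirely on the device is what lets Meta say the technology doesn't allow it to see anyone's private messages.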

Last year, Instagram launched DM controls that enable keyword-based filters for abusive words, phrases and emojis. Earlier this year, the company introduced a “Sensitive Content” filter that keeps certain kinds of content, including nudity and graphic violence, out of users’ experience.

Social media platforms have long grappled badly with the problem of unsolicited nude photos. While some apps, like Bumble, have tried tools such as AI-powered blurring, the likes of Twitter have struggled to catch child sexual abuse material (CSAM) and non-consensual nudity at scale.

Because of the lack of solid steps from platforms, lawmakers have been forced to look at the issue with a stern eye. For instance, the UK’s upcoming Online Safety Bill aims to make cyber flashing a crime. Last month, California passed a rule that allows recipients of unsolicited graphic material to sue the senders. Texas passed a cyber flashing law in 2019, classifying it as a misdemeanor punishable by a fine of up to $500.


Twitter rolls out a series of improvements to its Direct Message system

Have you ever tried to share a funny tweet with a few friends via Twitter DM, only to accidentally start a group chat? You’re not alone. Today, Twitter announced that it will roll out a few quality-of-life improvements to its direct messaging system over the next few weeks, including the ability to DM a tweet to multiple people at once in individual conversations. Researcher Jane Manchun Wong noticed that Twitter was working on this functionality last month.

A potential downside of this update is that it might invite more spam — you can’t send a message to more than 20 people at once, but that’s still a lot of people. And users receiving these messages now may not realize they were a part of group spam, as the individual DMs will seem like private 1:1 messages.

Twitter says Android users will have to wait a bit longer than iOS and web tweeters to gain access to this feature — and it’s unclear how long that will take, because in the past, it’s taken years for iOS DM updates to reach Android. But as a consolation prize, on both Android and iOS, if you scroll up in a DM conversation, you’ll be able to return to the latest message by pressing a down arrow button to quick-scroll.

Twitter’s other two DM improvements are so far rolling out only on iOS. Instead of timestamping individual messages with the date and time, messages will be grouped by day. Individual DMs will still have a timestamp, but Twitter says that this change will yield “less timestamp clutter.”

Finally, in DMs, iOS users will be able to access the “add reaction” menu from both double-tapping and long-pressing on a message. Long-pressing a friend’s message also gives you the option to delete the message on your account only, report the message, or copy the text.

[Image: A demonstration of new Twitter DM features. Image Credits: Twitter, screenshot by TechCrunch]

Twitter also announced today that it’s testing a feature that puts users’ Revue newsletters on their profile (Twitter acquired the newsletter platform earlier this year). But last week, it unveiled more noticeable UI updates that experts believe made the platform less accessible. Within two days of the update, Twitter made contrast changes on its buttons and identified issues with its proprietary font Chirp on Windows.

Instagram launches tools to filter out abusive DMs based on keywords and emojis, and to block people, even on new accounts

Facebook and its family of apps have long grappled with the issue of how to better manage — and eradicate — bullying and other harassment on its platform, turning to both algorithms and humans in its efforts to tackle the problem. In the latest development, today, Instagram is announcing some new tools of its own.

First, it’s introducing a new way for people to shield themselves from harassment in their direct messages, specifically in message requests: a filter based on a set of words, phrases and emojis that might signal abusive content, which will also cover common misspellings of those terms, sometimes used to try to evade the filters. Second, it’s giving users the ability to proactively block people, even if those people try to contact them over a new account.

The blocking account feature is going live globally in the next few weeks, Instagram said, and it confirmed to me that the feature to filter out abusive DMs will start rolling out in the UK, France, Germany, Ireland, Canada, Australia and New Zealand in a few weeks’ time before becoming available in more countries over the next few months.

Notably, these features are only being rolled out on Instagram — not Messenger, and not WhatsApp, Facebook’s other two hugely popular apps that enable direct messaging. The spokesperson confirmed that Facebook hopes to bring it to other apps in the stable later this year. (Instagram and others have regularly issued updates on single apps before considering how to roll them out more widely.)

Instagram said that the feature to scan DMs for abusive content — which will be based on a list of words and emojis that Facebook compiles with the help of anti-discrimination and anti-bullying organizations (it did not specify which), along with terms and emojis that you might add yourself — has to be turned on proactively, rather than being made available by default.
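
Instagram hasn't published how it matches terms, but a hypothetical sketch of a filter that covers words, emojis and simple character-substitution misspellings could look like this; the term list and normalization rules below are illustrative assumptions.

```python
# Hypothetical Hidden Words-style filter for message requests. The term list
# and normalization rules are illustrative; Instagram hasn't published its logic.
import re
import unicodedata

HIDDEN_TERMS = {"loser", "🤮"}  # platform-compiled list plus user-added terms

# Undo a few common character substitutions used to dodge filters.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "l", "3": "e", "4": "a",
                               "5": "s", "7": "t", "$": "s", "@": "a"})

def normalize(text: str) -> str:
    """Lowercase, fold compatibility forms, and undo simple substitutions."""
    return unicodedata.normalize("NFKC", text).lower().translate(SUBSTITUTIONS)

def should_hide(message: str, hidden_terms=HIDDEN_TERMS) -> bool:
    """Return True if the message request should be moved to the hidden folder."""
    norm = normalize(message)
    words = set(re.findall(r"\w+", norm))
    normalized_terms = {normalize(term) for term in hidden_terms}
    # Whole-word match for text terms; substring match catches emoji.
    return any(term in words or term in norm for term in normalized_terms)

# e.g. should_hide("you're such a l0ser") -> True, via the 0 -> o substitution
```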

Why? To give users more control, it seems, and to keep conversations private if users want them to be. “We want to respect people’s privacy and give people control over their experiences in a way that works best for them,” a spokesperson said, pointing out that this is similar to how its comment filters work. The control will live in Settings > Privacy > Hidden Words for those who want to turn it on.

There are a number of third-party services out there building content moderation tools that sniff out harassment and hate speech — they include the likes of Sentropy and Hive — but, interestingly, the larger technology companies have so far opted to build these tools themselves. That is also the case here, the company confirmed.

The system is completely automated, although Facebook noted that it reviews any content that gets reported. While it doesn’t keep data from those interactions, it confirmed that it will use reported words to keep building out its database of terms that trigger blocking, and that it will subsequently delete, block and report the people who send such content.

As for those people: it has taken a long time, but Facebook is starting to get smarter about the fact that people with genuinely ill intent waste no time creating multiple accounts to pick up the slack when their primary profiles get blocked. Users have been aggravated by this loophole for as long as DMs have been around, even though Facebook’s harassment policies already prohibited people from repeatedly contacting someone who doesn’t want to hear from them, and the company already prohibited recidivism, which, as Facebook describes it, means “if someone’s account is disabled for breaking our rules, we would remove any new accounts they create whenever we become aware of it.”

The company’s approach to Direct Messages has been something of a template for how other social media companies have built these out.

In essence, they are open-ended by default, with one inbox reserved for actual contacts, but a second one for anyone at all to contact you. While some people just ignore that second box altogether, the nature of how Instagram works and is built is for more, not less, contact with others, and that means people will use those second inboxes for their DMs more than they might, for example, delve into their spam inboxes in email.

The bigger issue remains a game of whack-a-mole, however, and it’s not just users who are asking for more help in solving it. As Facebook continues to find itself under the scrutinizing eye of regulators, harassment, and better management of it, has emerged as a key area the company will need to solve before others do the solving for it.

Twitter to test a new filter for spam and abuse in the Direct Message inbox

Twitter is testing a new way to filter unwanted messages out of your Direct Message inbox. Today, Twitter allows users to set their Direct Message inbox to be open to messages from anyone, but that can invite a lot of unwanted messages, including abuse. While one solution is to adjust your settings so only those you follow can send you private messages, that doesn’t work for everyone. Some people — like reporters, for example — want to have an open inbox in order to have private conversations and receive tips.

This new experiment will test a filter that will move unwanted messages, including those with offensive content or spam, to a separate tab.

Instead of lumping all your messages into a single view, the Message Requests section will include the messages from people you don’t follow, and below that, you’ll find a way to access these newly filtered messages.

Users would have to click on the “Show” button to even read these, which protects them from having to face the stream of unwanted content that can pour in at times when the inbox is left open.

And even upon viewing this list of filtered messages, all the content itself isn’t immediately visible.

In the case that Twitter identifies content that’s potentially offensive, the message preview will say the message is hidden because it may contain offensive content. That way, users can decide if they want to open the message itself or just click the delete button to trash it.
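
Putting the described behavior together, a routing sketch might look like the following; the spam and offensiveness signals are assumptions, since Twitter hasn't said how it classifies messages.

```python
# Hypothetical sketch of the DM routing described above. How Twitter actually
# scores spam or offensive content hasn't been disclosed; these flags are assumptions.
from dataclasses import dataclass

@dataclass
class RoutedDM:
    folder: str           # "inbox", "requests" or "filtered"
    preview_hidden: bool  # True -> preview replaced with an offensive-content notice

def route_dm(recipient_follows_sender: bool, looks_like_spam: bool,
             may_be_offensive: bool) -> RoutedDM:
    if recipient_follows_sender:
        return RoutedDM(folder="inbox", preview_hidden=False)
    if looks_like_spam or may_be_offensive:
        # Filtered messages sit below Message Requests behind a "Show" button;
        # potentially offensive ones also get their preview hidden until opened.
        return RoutedDM(folder="filtered", preview_hidden=may_be_offensive)
    return RoutedDM(folder="requests", preview_hidden=False)
```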

The change could allow Direct Messages to become a more useful tool for those who prefer an open inbox, as well as an additional means of clamping down on online abuse.

It’s also similar to how Facebook Messenger handles requests — those from people you aren’t friends with are relocated to a separate Message Requests area. And those that are spammy or more questionable are in a hard-to-find Filtered section below that.

It’s not clear why a feature like this really requires a “test,” however — arguably, most people would want junk and abuse filtered out. And those who for some reason did not, could just toggle a setting to turn the filter off.

Instead, this feels like another example of Twitter’s slow pace when it comes to making changes to clamp down on abuse. Facebook Messenger has been filtering messages in this way since late 2017. Twitter should just launch a change like this, instead of “testing” it.

The idea of hiding — instead of entirely deleting — unwanted content is something Twitter has been testing in other areas, too. Last month, for example, it began piloting a new “Hide Replies” feature in Canada, which allows users to hide unwanted replies to their tweets so they’re not visible to everyone. The tweets aren’t deleted, but rather placed behind an extra click — similar to this Direct Message change.

Twitter is updating its Direct Message system in other ways, too.

At a press conference this week, Twitter announced several changes coming to its platform including a way to follow topics, plus a search tool for the Direct Message inbox, as well as support for iOS Live Photos as GIFs, the ability to reorder photos, and more.