New code suggests Twitter is reviving its work on encrypted DMs

Under Elon Musk, Twitter may be reviving a project that would bring end-to-end encryption to its Direct Messaging system. Work appears to have resumed on the feature in the latest version of the Android app, according to independent researcher Jane Manchun Wong, who spotted the changes to Twitter’s code. While Musk himself recently expressed interest in making Twitter DMs more secure, Twitter itself abandoned its earlier efforts in this space after prototyping an encrypted “secret conversations” feature back in 2018.

Had the encrypted DMs feature launched, it would have allowed Twitter to better challenge other secure messaging platforms like Signal or WhatsApp. But work on the project stopped, and Twitter never publicly explained why, nor did it comment on the prototype Wong had also found being developed in the app years ago.

Now, Wong says she’s seen work on encrypted DMs resume, tweeting out a screenshot of Twitter’s code that references encryption keys and their use in end-to-end encrypted conversations. Another screenshot shows a “Conversation key,” which the app explains is a number generated from the encryption keys used in the conversation. “If it matches the number in the recipient’s phone, end-to-end encryption is guaranteed,” the message reads.
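That description matches the kind of key-fingerprint comparison that other end-to-end encrypted messengers expose as a “safety number.” Twitter hasn’t documented its scheme, so purely as an illustration of the pattern, the sketch below derives a shared comparison code by hashing both participants’ public keys in a canonical order; the key values and formatting are hypothetical.

```python
# A rough, hypothetical sketch of how a "conversation key"-style comparison
# number can be derived from two parties' public keys. Twitter hasn't
# documented its actual scheme; this only illustrates the general pattern
# (similar to Signal's safety numbers): hash both keys in a canonical order
# so each device independently computes the same short code.
import hashlib

def conversation_code(my_public_key: bytes, their_public_key: bytes) -> str:
    # Sort the keys so both participants get the same digest regardless of
    # which side runs the calculation.
    digest = hashlib.sha256(b"".join(sorted([my_public_key, their_public_key]))).digest()
    # Render the first 8 bytes as four groups of digits that are easy to
    # read aloud or compare on screen.
    number = int.from_bytes(digest[:8], "big")
    digits = f"{number:020d}"
    return " ".join(digits[i:i + 5] for i in range(0, 20, 5))

# Both users should see the same code; a mismatch suggests one of the keys
# was swapped out, e.g. by a man-in-the-middle. (Keys here are placeholders.)
alice_view = conversation_code(b"alice-public-key", b"bob-public-key")
bob_view = conversation_code(b"bob-public-key", b"alice-public-key")
assert alice_view == bob_view
print(alice_view)
```

In schemes like this, the app can only display the number; it’s the out-of-band comparison between the two users that actually rules out a substituted key.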

In response to Wong’s tweets, Musk replied with a winking face emoji — an apparent confirmation, or at least what stands in for one these days, given that Twitter laid off its communications staff and no longer responds to reporters’ requests for comment.

Unlike the other projects Musk’s Twitter has in the works, like a relaunch of the Twitter Blue subscription now due out later this month, end-to-end encryption is something that cannot — and should not — be rushed out the gate.

Meta, for example, took years to fully roll out end-to-end encryption (E2EE) in Messenger, after first testing the feature in 2016. It wasn’t until this summer that Meta announced it would finally expand its E2EE test to individual Messenger chats. The company explained the delay in launching was, in part, due to the need to address concerns from child safety advocates, who had warned the changes could shield abusers from detection. Meta also intended to use AI and machine learning to scan non-encrypted parts of its platform, like user profiles and photos, for other signals that could indicate malicious activity. Plus, it needed to ensure that its abuse-reporting features would continue to work in an E2EE environment.

In short, beyond the technical work required to introduce E2EE itself, there are complicating factors that need to be taken into consideration. If Musk were to ship encrypted DMs on a compressed timeline, it would raise concerns about how secure and well-built the feature actually is.

Plus, with Twitter’s 50% workforce reduction and the departure of key staff — including chief information security officer Lea Kissner, who would understand the cryptographic challenges of such a project — it’s unclear whether the remaining team has the expertise to tackle such a complex feature in the first place.

Musk, however, seems to believe encryption is the right direction for Twitter’s DM product, having recently tweeted that “the goal of Twitter DMs is to superset Signal.” And, in response to a user’s question about whether Twitter would move into telecommunications or become a WhatsApp replacement, Musk responded simply that “X will be the everything app.”

“X” here refers to Musk’s plan to transform Twitter into a “super app” that would combine payments, social networking, entertainment and more into a single experience. Last week, he spoke in more detail about his plans for the payments portion, suggesting Twitter could one day let users hold cash balances and send money to one another, and could even offer high-yield money market accounts.

Smart Eye’s latest acquisition points to consolidation among driver monitoring system suppliers

Smart Eye, a supplier of driver monitoring systems for automakers, has agreed to acquire human behavior software company iMotions for $46.6 million just five months after it snapped up emotion-detection software startup Affectiva.

Smart Eye, a publicly traded Swedish company, said Tuesday this is a cash-and-stock deal: it will provide $23.2 million (200 million Swedish kronor) in shares and pay the remaining amount in cash. iMotions, which employs 63 people, will operate as a standalone company within the Smart Eye Group. The company’s structure and management team will remain in place, according to Smart Eye.

The acquisition is notable because it signals growing consolidation within the driver monitoring systems segment, a trend that Smart Eye CEO and founder Martin Krantz confirmed in comments to TechCrunch.

“We expect to see continued consolidation of DMS vendors due to increased demand for DMS and interior sensing, which is already ramping up amongst OEMs,” Krantz told TechCrunch in an email. “With regulatory requirements in Europe — that are sure to follow in other regions of the world — we believe that nearly all global OEMs will procure their first or second generation DMS during the next couple of years. By joining forces with Affectiva, and now with iMotions, we are perfectly positioned for this development.”

Attention on DMS has increased as automakers roll out so-called Level 2 advanced driver assistance systems. SAE International’s standards define six levels of driving automation, from Level 0 to Level 5. Level 2 means two primary functions — like adaptive cruise control and lane keeping — are automated, with a human driver still in the loop at all times. Level 2 is not considered full self-driving; these are advanced driver assistance systems that require a human being to be engaged and ready to take over.

The DMS typically involves a camera that watches the driver to ensure they’re paying attention and not abusing or stretching the capabilities of the system. GM and Ford use DMS to allow for hands-free driving. For years, Tesla either lacked an in-cabin camera or, in the case of the Model 3 and Model Y, didn’t use it to monitor the attentiveness of drivers using its Autopilot system. Instead, Tesla has relied on a sensor to gauge whether the driver’s hands were on the wheel.

While Tesla now equips Model S and Model X vehicles produced in 2021 or later with an in-cabin camera, camera-based driver monitoring is currently only enabled on Model 3 and Model Y vehicles equipped with Tesla Vision. That has been a sore spot for safety advocates in the United States, who have called on Tesla to change the system design of Autopilot to ensure it isn’t misused.

Regulators in Europe have already weighed in on what vehicles must be equipped with, opening up an opportunity for Smart Eye and other competitors.

While Affectiva and iMotions are related, Smart Eye contends that they offer different but complementary capabilities that can be folded into its own AI-based eye-tracking technology. The two companies actually used to work together, according to Smart Eye.

Affectiva, which spun out of the MIT Media Lab in 2009, uses computer vision, speech analytics and software to study facial expressions and analyze human emotion and cognitive states. Meanwhile, iMotions developed a software layer that brings together data coming in from multiple sensors and provides analytics that can then be used to improve driver safety and the driving (or riding) experience.

Affectiva and iMotions’ tech could help its new parent company establish market share in “interior sensing,” in which software and hardware are combined to monitor the entire cabin of a vehicle and deliver services in response to the occupant’s emotional state.

Twitter rolls out a series of improvements to its Direct Message system

Have you ever tried to share a funny tweet with a few friends via Twitter DM, only to accidentally start a group chat? You’re not alone. Today, Twitter announced that it will roll out a few quality of life improvements to its direct messaging system over the next few weeks, including the ability to DM a tweet to multiple people at once in individual conversations. Researcher Jane Manchun Wong noticed that Twitter was working on this functionality last month.

A potential downside of this update is that it might invite more spam — you can’t send a message to more than 20 people at once, but that’s still a lot of people. And users receiving these messages may not realize they were part of group spam, since the individual DMs will look like private 1:1 messages.

Twitter says Android users will have to wait a bit longer than iOS and web tweeters to gain access to this feature — and it’s unclear how long that will take, because in the past, it’s taken years for iOS DM updates to reach Android. But as a consolation prize, on both Android and iOS, if you scroll up in a DM conversation, you’ll be able to return to the latest message by pressing a down arrow button to quick-scroll.

Twitter’s other two DM improvements are only rolling out on iOS so far — instead of timestamping individual messages with the date and time, messages will be grouped by day. Individual DMs will still have a timestamp, but Twitter says that this change will yield “less timestamp clutter.”

Finally, in DMs, iOS users will be able to access the “add reaction” menu from both double-tapping and long-pressing on a message. Long-pressing a friend’s message also gives you the option to delete the message on your account only, report the message, or copy the text.

A demonstration of new Twitter DM features

Image Credits: Twitter, screenshot by TechCrunch

Twitter also announced today that it’s testing a feature that puts users’ Revue newsletters on their profile (Twitter acquired the newsletter platform earlier this year). But last week, it unveiled more noticeable UI updates that experts believe made the platform less accessible. Within two days of the update, Twitter made contrast changes on its buttons and identified issues with its proprietary font Chirp on Windows.

In the wake of recent racist attacks, Instagram rolls out more anti-abuse features

Instagram today is rolling out a set of new features aimed at helping people protect their accounts from abuse, including offensive and unwanted comments and messages. The company will introduce tools for filtering abusive direct message (DM) requests as well as a way for users to limit other people from posting comments or sending DMs during spikes of increased attention — like when going viral. In addition, those who attempt to harass others on the service will also see stronger warnings against doing so, which detail the potential consequences.

The company recently confirmed it was testing the new anti-harassment tool, Limits, which Instagram head Adam Mosseri referenced in a video update shared with the Instagram community last month. The feature aims to give Instagram users an easy way to temporarily lock down their accounts when they’re targeted with a flood of harassment.

Such an addition could have been useful in combating the recent racist attacks that took place on Instagram following the Euro 2020 final, which saw several England footballers viciously harassed by angry fans after the team’s defeat. The incidents, which included racist comments and emoji, raised awareness of how little Instagram users can do to protect themselves when they go viral in a negative way.

Image Credits: Instagram

During these sudden spikes of attention, Instagram users see an influx of unwanted comments and DM requests from people they don’t know. The Limits feature allows users to choose who can interact with them during these busy times.

From Instagram’s privacy settings, you’ll be able to toggle on limits that restrict accounts that are not following you as well as those belonging to recent followers. When limits are enabled, these accounts can’t post comments or send DM requests for a period of time of your choosing, like a certain number of days or even weeks.

Twitter had been eyeing a similar set of tools for users who go viral, but has yet to put them into action.

Instagram’s Limits feature had already been in testing, but is now becoming globally available.

The company says it’s currently experimenting with using machine learning to detect a spike in comments and DMs in order to prompt people to turn on Limits with a notification in the Instagram app.

Another feature, Hidden Words, is also being expanded.

Designed to protect users from abusive DM requests, Hidden Words automatically filters requests that contain offensive words, phrases and emoji and places them into a Hidden Folder, which you can choose to never view. It also filters out requests that are likely spam or otherwise low-quality. Instagram doesn’t provide a list of which words it blocks, to prevent people from gaming the system, but it has now updated that database with new types of offensive language, adding strings of emoji — like those that were used to abuse the footballers — to the filter.
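Mechanically, a filter like this checks each incoming request against the flagged terms and emoji and diverts matches into the hidden folder instead of the normal request inbox. Below is a minimal sketch of that routing; since Instagram doesn’t publish its list or internals, every term, emoji and data structure here is a made-up placeholder.

```python
# Hypothetical sketch of a Hidden Words-style filter for DM requests.
# Instagram doesn't publish its block list or implementation; the terms,
# emoji and folder structure below are illustrative placeholders only.
from dataclasses import dataclass, field

BLOCKED_TERMS = {"offensiveword", "offensive phrase"}  # placeholder word/phrase list
BLOCKED_EMOJI = {"🐒", "🍌"}  # placeholder emoji strings of the kind cited in the abuse

@dataclass
class Inbox:
    requests: list = field(default_factory=list)  # normal message requests
    hidden: list = field(default_factory=list)    # the "Hidden Folder" a user can ignore

def route_request(inbox: Inbox, message: str) -> None:
    """Send flagged requests to the hidden folder instead of the request inbox."""
    text = message.lower()
    flagged = any(term in text for term in BLOCKED_TERMS) or any(
        emoji in message for emoji in BLOCKED_EMOJI
    )
    (inbox.hidden if flagged else inbox.requests).append(message)

inbox = Inbox()
route_request(inbox, "Hey, loved your post!")
route_request(inbox, "🐒🐒🐒")
print(len(inbox.requests), len(inbox.hidden))  # -> 1 1
```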

Hidden Words had already been rolled out to a handful of countries earlier this year, but will reach all Instagram users globally by the end of the month. Instagram will push accounts with a larger following to use it, with messages both in their DM inbox and in their Stories tray.

The feature was also expanded with a new option to “Hide More Comments,” which allows users to easily hide comments that are potentially harmful but don’t go against Instagram’s rules.

Another change involves the warnings that are displayed when someone posts a potentially abusive comment. Previously, Instagram would warn users the first time they tried to post a potentially offensive comment, and it would only display an even stronger warning when they tried to post such comments multiple times. Now, the company says users will see the stronger message the first time around.

Image Credits: Instagram

The message clearly states the comment may “contain racist language” or other content that goes against its guidelines, and reminds users that the comment may be hidden as a result when it’s posted. It also warns the user that if they continue to break the community guidelines, their account “may be deleted.”

While systems to counteract online abuse are necessary and underdeveloped, there’s also the potential for such tools to be misused to silence dissent. For example, if a creator was spreading misinformation or conspiracies, or had people calling them out in the comments, they could turn to anti-abuse tools to hide the negative interactions. This would allow the creator to paint an inaccurate picture of their account as one that was popular and well-liked. And that, in turn, can be leveraged into marketing power and brand deals.

As Instagram puts more power into creators’ hands to handle online abuse, it has to weigh the potential impacts those tools have on the overall creator economy, too.

“We hope these new features will better protect people from seeing abusive content, whether it’s racist, sexist, homophobic or any other type of abuse,” noted Mosseri, in an announcement about the changes. “We know there’s more to do, including improving our systems to find and remove abusive content more quickly, and holding those who post it accountable.”

Instagram launches tools to filter out abusive DMs based on keywords and emojis, and to block people, even on new accounts

Facebook and its family of apps have long grappled with the issue of how to better manage — and eradicate — bullying and other harassment on its platform, turning both to algorithms and humans in its efforts to tackle the problem better. In the latest development, today, Instagram is announcing some new tools of its own.

First, it’s introducing a new way for people to further shield themselves from harassment in their direct messages, specifically in message requests: a filter based on a set of words, phrases and emojis that might signal abusive content, which will also cover common misspellings of those terms, sometimes used to try to evade the filters. Second, it’s giving users the ability to proactively block people, even if they try to contact the user in question over a new account.

The account-blocking feature is going live globally in the next few weeks, Instagram said, and it confirmed to me that the feature to filter out abusive DMs will start rolling out in the UK, France, Germany, Ireland, Canada, Australia and New Zealand in a few weeks’ time before becoming available in more countries over the next few months.

Notably, these features are only being rolled out on Instagram — not Messenger, and not WhatsApp, Facebook’s other two hugely popular apps that enable direct messaging. A spokesperson confirmed that Facebook hopes to bring them to other apps in the stable later this year. (Instagram and others have regularly issued updates on single apps before considering how to roll them out more widely.)

Instagram said that the feature to scan DMs for abusive content — which will be based on a list of words and emojis that Facebook compiles with the help of anti-discrimination and anti-bullying organizations (it did not specify which), along with terms and emojis that you can add yourself — has to be turned on proactively, rather than being enabled by default.

Why? More user license, it seems, and to keep conversations private if users want them to be. “We want to respect peoples’ privacy and give people control over their experiences in a way that works best for them,” a spokesperson said, pointing out that this is similar to how its comment filters work. The setting will live in Settings > Privacy > Hidden Words for those who want to turn on the control.
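Covering “common misspellings” implies some normalization before the keyword check. Facebook hasn’t described how it does this, so the code below is only a hedged sketch of one simple approach: map lookalike character substitutions back to letters and strip punctuation before matching against the combined base and user-added lists. All terms here are placeholders.

```python
# Hypothetical sketch of misspelling-tolerant keyword matching, one plausible
# way to cover the "common misspellings" the feature is said to catch. This is
# an assumption for illustration, not Instagram's actual implementation.
import re

BASE_TERMS = {"offensiveword"}   # placeholder for the maintained block list
USER_TERMS = {"anotherterm"}     # placeholder for terms a user adds in Hidden Words

# Map common lookalike substitutions back to letters before matching.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "$": "s", "@": "a"})

def normalize(text: str) -> str:
    text = text.lower().translate(SUBSTITUTIONS)
    # Drop punctuation and spacing tricks like "o.f.f.e.n.s.i.v.e".
    return re.sub(r"[^a-z]", "", text)

def should_hide(message: str) -> bool:
    normalized = normalize(message)
    return any(term in normalized for term in BASE_TERMS | USER_TERMS)

print(should_hide("0ffen$ive w0rd!!"))  # True: normalizes to "offensiveword"
print(should_hide("nice post"))         # False
```

A production system would go much further (phrases, multiple languages, spam scoring, the reported-words feedback loop described below), but the normalize-then-match step is what defeats simple character swapping.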

There are a number of third-party services out in the wild building content moderation tools that sniff out harassment and hate speech — Sentropy and Hive among them — but, interestingly, the larger technology companies have so far opted to build these tools themselves. That is also the case here, the company confirmed.

The system is completely automated, although Facebook noted that it reviews any content that gets reported. While it doesn’t keep data from those interactions, it confirmed that it will use reported words to keep expanding its database of terms that trigger blocking, and to then delete, block and report the people who send such content.

On the subject of those people, it’s been a long time coming for Facebook to get smarter about the fact that people with truly ill intent waste no time building multiple accounts to pick up the slack when their primary profiles get blocked. Users have been aggravated by this loophole for as long as DMs have been around, even though Facebook’s harassment policies already prohibited people from repeatedly contacting someone who doesn’t want to hear from them, and the company had already prohibited recidivism, which, as Facebook describes it, means “if someone’s account is disabled for breaking our rules, we would remove any new accounts they create whenever we become aware of it.”

The company’s approach to Direct Messages has been something of a template for how other social media companies have built these out.

In essence, they are open-ended by default, with one inbox reserved for actual contacts and a second one for anyone at all to contact you. While some people just ignore that second box altogether, Instagram is designed and built for more contact with others, not less, which means people will dip into that second inbox for their DMs more than they might, say, delve into their spam folders in email.

The bigger issue continues to be a game of whack-a-mole, however, and users aren’t the only ones asking for more help in solving it. As Facebook continues to find itself under the scrutinizing eye of regulators, harassment — and better management of it — has emerged as a key area it will be required to solve before others do the solving for it.

Gwyneth Paltrow invests in The Expert, a video marketplace for high-end interior designers

The pandemic-induced lockdowns halted many a home decoration project, but the irony was that our homes became even more important. So where to get ideas to decorate? Home decor experts could no longer visit. Now an LA-based startup is addressing this digitization of the interior design market, kicking off with a typically LA-oriented, high-end clientele.

The Expert, a platform for video consultations with interior designers, has raised a $3 million seed funding round led by Forerunner Ventures, with participation from Sweet Capital, Promus Ventures, Golden Ventures, Jeffrey Katzenberg’s WndrCo, AD 100 designer Brigette Romanek and movie star Gwyneth Paltrow.

The Expert offers 1:1 video consultations with leading interior designers, it says.

The founders are Jake Arnold, a celebrity interior designer (who has worked with John Legend, Rashida Jones and Cameron Diaz, among others), and YC alumnus Leo Seigal, who previously founded Represent.com and sold it to CustomInk for $100 million in 2015.

After being “inundated” with DMs during lockdown asking for his advice, Arnold says he realized he didn’t have the business model to help non-retainer clients. So he joined Seigal to create The Expert.

The Expert features 85 designers so far. Clients click on their profiles to see rates and availability and then click to book. Clients can upload any relevant floor plans, images of the home, inspiration ideas and so on for the designer to review ahead of time. They then join a Zoom link (the platform uses the Zoom API) to meet with the designer and can leave a review afterward.

The company claims it has 700 designers on its waitlist and that it will hit $1 million in bookings in its first quarter, having launched in early February this year.

The startup has some competition in the form of Modsy and Havenly, but The Expert says it is going for a more high-end experience, where clients are willing to pay $300 to $2,500 for an hour of a designer’s time. The startup takes a 20% cut of each transaction.

Co-founder Leo Seigal said: “We were able to attract a crazy roster of designers partly thanks to co-founder Jake who is so highly regarded in the industry, and partly due to a timeliness of offering which is far above anything that has been tried in the home space.”

In a statement, Gwyneth Paltrow said: “I’ve always felt that access to great design – and those who create it – is too rare of a commodity. It’s a game-changer for someone without the budget for a full-time designer to have this roster of talent on speed dial.”

Nicole Johnson, Partner at Forerunner said: “We’ve been thinking through new models for the interior design sector for years at Forerunner, observing room for improvement for the trade and consumers alike. Interior design is arguably the ultimate, best-suited source of home inspiration and commerce enablement for consumers, but the trade is a famously walled garden. The Expert solves for this, connecting anyone, anywhere with the world’s leading interior designers via video consultation—allowing Experts to broaden their reach and monetization in a predictable, rewarding, and low-friction way.”