Gatheround raises millions from Homebrew, Bloomberg and Stripe’s COO to help remote workers connect

Remote work is no longer a new topic, as much of the world has now been doing it for a year or more because of the COVID-19 pandemic.

Companies — big and small — have had to react in myriad ways. Many of the initial challenges have focused on workflow, productivity and the like. But one aspect of the whole remote work shift that is not getting as much attention is the culture angle.

A 100% remote startup that was tackling the issue way before COVID-19 was even around is now seeing a big surge in demand for its offering that aims to help companies address the “people” challenge of remote work. It started its life with the name Icebreaker to reflect the aim of “breaking the ice” with people with whom you work.

“We designed the initial version of our product as a way to connect people who’d never met, kind of virtual speed dating,” says co-founder and CEO Perry Rosenstein. “But we realized that people were using it for far more than that.” 

So over time, its offering has evolved to include a bigger goal of helping people get together beyond an initial encounter –– hence its new name: Gatheround.

“For remote companies, a big challenge or problem that is now bordering on a crisis is how to build connection, trust and empathy between people that aren’t sharing a physical space,” says co-founder and COO Lisa Conn. “There’s no five-minute conversations after meetings, no shared meals, no cafeterias — this is where connection organically builds.”

Organizations should be concerned, Gatheround maintains, that as work goes more remote, it will become more transactional and people will become more isolated. They can’t ignore that humans are largely social creatures, Conn said.

The startup aims to bring people together online through real-time events such as a range of chats, videos and one-on-one and group conversations. The startup also provides templates to facilitate cultural rituals and learning & development (L&D) activities, such as all-hands meetings and workshops on diversity, equity and inclusion. 

Gatheround’s video conversations aim to be a refreshing complement to Slack conversations, which despite serving the function of communication, still don’t bring users face-to-face.

Image Credits: Gatheround

Since its inception, Gatheround has quietly built up an impressive customer base, including 28 Fortune 500s, 11 of the 15 biggest U.S. tech companies, 26 of the top 30 universities and more than 700 educational institutions. Specifically, those users include Asana, Coinbase, Fiverr, Westfield and DigitalOcean. Universities, academic centers and nonprofits, including Georgetown’s Institute of Politics and Public Service and the Chan Zuckerberg Initiative, are also customers. To date, Gatheround has had about 260,000 users hold 570,000 conversations on its SaaS-based video platform.

All its growth so far has been organic, mostly referrals and word of mouth. Now, armed with $3.5 million in seed funding that builds upon a previous $500,000 raised, Gatheround is ready to aggressively go to market and build upon the momentum it’s seeing.

Venture firms Homebrew and Bloomberg Beta co-led the company’s latest raise, which included participation from angel investors such as Stripe COO Claire Hughes Johnson, Meetup co-founder Scott Heiferman, Li Jin and Lenny Rachitsky. 

Co-founders Rosenstein, Conn and Alexander McCormmach describe themselves as “experienced community builders,” having previously worked on President Obama’s campaigns as well as at companies like Facebook, Change.org and Hustle. 

The trio emphasize that Gatheround is also very different from Zoom and video conferencing apps in that its platform gives people prompts and organized ways to get to know and learn about each other as well as the flexibility to customize events.

“We’re fundamentally a connection platform, here to help organizations connect their people via real-time events that are not just really fun, but meaningful,” Conn said.

Homebrew Partner Hunter Walk says his firm was attracted to the company’s founder-market fit.

“They’re a really interesting combination of founders with all this experience community building on the political activism side, combined with really great product, design and operational skills,” he told TechCrunch. “It was kind of unique that they didn’t come out of an enterprise product background or pure social background.”

He was also drawn to the personalized nature of Gatheround’s platform, considering that it has become clear over the past year that the software powering the future of work “needs emotional intelligence.”

“Many companies in 2020 have focused on making remote work more productive. But what people desire more than ever is a way to deeply and meaningfully connect with their colleagues,” Walk said. “Gatheround does that better than any platform out there. I’ve never seen people come together virtually like they do on Gatheround, asking questions, sharing stories and learning as a group.” 

James Cham, partner at Bloomberg Beta, agrees with Walk that the founding team’s knowledge of behavioral psychology, group dynamics and community building gives them an edge.

“More than anything, though, they care about helping the world unite and feel connected, and have spent their entire careers building organizations to make that happen,” he said in a written statement. “So it was a no-brainer to back Gatheround, and I can’t wait to see the impact they have on society.”

The 14-person team will likely expand with the new capital, which will also go toward adding more functionality and detail to the Gatheround product.

“Even before the pandemic, remote work was accelerating faster than other forms of work,” Conn said. “Now that’s intensified even more.”

Gatheround is not the only company attempting to tackle this space. Ireland-based Workvivo raised $16 million last year, and earlier this year Microsoft launched Viva, its new “employee experience platform.”

Instagram Live takes on Clubhouse with options to mute and turn off the video

In addition to Facebook’s Clubhouse competitor built within Messenger Rooms and its experiments with a Clubhouse-like Q&A platform on the web, the company is now leveraging yet another of its largest products to take on the Clubhouse threat: Instagram Live. Today, Instagram announced it’s adding new features that will allow users to mute their microphones and even turn their video off while using Instagram Live.

Instagram explains these new features will give hosts more flexibility during their livestream experiences, as they can decrease the pressure to look or sound a certain way while broadcasting live. While that may be true, the reality is that Facebook is simply taking another page from Clubhouse’s playbook by enabling a “video off” experience that encourages more serendipitous conversations.

When people don’t have to worry about how they look, they’ll often be more amenable to jumping into a voice chat. In addition, being audio-only allows creators to engage with their community while multitasking — perhaps they’re doing chores or moving around, and can’t sit and stare right at the camera. To date, this has been one of the advantages of using Clubhouse versus live video chat. You could participate in Clubhouse’s voice chat rooms without always having to give the conversation your full attention or worrying about background noise.

For the time being, hosts will not be able to turn video on or off for others in the livestream, or mute them, but Instagram tells us it’s working on offering more of these types of capabilities to the broadcaster, and expects to roll them out soon.

Instagram notes it tested the new features publicly earlier this week during an Instagram Live between Facebook CEO Mark Zuckerberg and Head of Instagram Adam Mosseri.

This isn’t the first feature Instagram has added in recent weeks to lure the creator community to its platform instead of Clubhouse or other competitors. In March, Instagram rolled out the option for creators to host Live Rooms that allow up to four people to broadcast at the same time. The Rooms were meant to appeal to creators who wanted to host live talk shows, expanded Q&As, and more — all experiences that are often found on Clubhouse. It also added the ability for fans to buy badges to support the hosts, to cater to the needs of professional creators looking to monetize their reach.

Although Instagram parent company Facebook already has a more direct Clubhouse clone in development with Live Audio Rooms on Facebook and Messenger, the company said it doesn’t expect it to launch into testing until this summer. And it will first be available to Groups and public figures, not the broader public.

Instagram Live’s new features, meanwhile, are rolling out to Instagram’s global audience on both iOS and Android starting today.

Click Studios asks customers to stop tweeting about its Passwordstate data breach

Australian security software house Click Studios has told customers not to post emails sent by the company about its data breach, in which hackers pushed a malicious update to its flagship enterprise password manager Passwordstate to steal customer passwords.

Last week, the company told customers to “commence resetting all passwords” stored in its flagship password manager after the hackers pushed the malicious update to customers over a 28-hour window between April 20-22. The malicious update was designed to contact the attacker’s servers to retrieve malware designed to steal and send the password manager’s contents back to the attackers.

In an email to customers, Click Studios did not say how the attackers compromised the password manager’s update feature, but included a link to a security fix.

But news of the breach only became public after Danish cybersecurity firm CSIS Group published a blog post with details of the attack hours after Click Studios emailed its customers.

Click Studios claims Passwordstate is used by “more than 29,000 customers,” including in the Fortune 500, government, banking, defense and aerospace, and most major industries.

In a Wednesday advisory posted to its website, Click Studios said customers are “requested not to post Click Studios correspondence on Social Media,” adding: “It is expected that the bad actor is actively monitoring Social Media, looking for information they can use to their advantage, for related attacks.”

“It is expected the bad actor is actively monitoring social media for information on the compromise and exploit. It is important customers do not post information on Social Media that can be used by the bad actor. This has happened with phishing emails being sent that replicate Click Studios email content,” the company said.

Besides publishing a handful of advisories since the breach was discovered, the company has refused to comment or respond to questions.

It’s also not clear if the company has disclosed the breach to authorities in the U.S. and EU, where the company has customers and where data breach notification rules obligate companies to disclose incidents in a timely manner. Companies can be fined up to 4% of their annual global revenue for falling foul of Europe’s GDPR rules.

Click Studios chief executive Mark Sandford has not responded to repeated requests for comment by TechCrunch. Instead, TechCrunch received the same canned autoresponse from the company’s support email saying that the company’s staff are “focused only on assisting customers technically.”

TechCrunch emailed Sandford again on Thursday for comment on the latest advisory, but did not hear back.

At social media hearing, lawmakers circle algorithm-focused Section 230 reform

Rather than a CEO-slamming sound bite free-for-all, Tuesday’s big tech hearing on algorithms aimed for more of a listening session vibe — and in that sense it mostly succeeded.

The hearing centered on testimony from the policy leads at Facebook, YouTube and Twitter rather than the chief executives of those companies for a change. The resulting few hours didn’t offer any massive revelations but were still probably more productive than squeezing some of the world’s most powerful men for their commitments to “get back to you on that.”

In the hearing, lawmakers bemoaned social media echo chambers and the ways that the algorithms pumping content through platforms are capable of completely reshaping human behavior.

“… This advanced technology is harnessed into algorithms designed to attract our time and attention on social media, and the results can be harmful to our kids’ attention spans, to the quality of our public discourse, to our public health, and even to our democracy itself,” said Chris Coons (D-DE), chair of the Senate Judiciary’s subcommittee on privacy and tech, which held the hearing.

Coons struck a cooperative note, observing that algorithms drive innovation but that their dark side comes with considerable costs.

None of this is new, of course. But Congress is crawling closer to solutions, one repetitive tech hearing at a time. The Tuesday hearing highlighted some zones of bipartisan agreement that could determine the chances of a tech reform bill passing the Senate, which is narrowly controlled by Democrats. Coons expressed optimism that a “broadly bipartisan solution” could be reached.

What would that look like? Probably changes to Section 230 of the Communications Decency Act, which we’ve written about extensively over the years. That law protects social media companies from liability for user-created content and it’s been a major nexus of tech regulation talk, both in the newly Democratic Senate under Biden and the previous Republican-led Senate that took its cues from Trump.

Lauren Culbertson, head of U.S. public policy at Twitter Inc., speaks remotely during a Senate Judiciary Subcommittee hearing in Washington, D.C., U.S., on Tuesday, April 27, 2021. Photographer: Al Drago/Bloomberg via Getty Images

A broken business model

In the hearing, lawmakers pointed to flaws inherent to how major social media companies make money as the heart of the problem. Rather than criticizing companies for specific failings, they mostly focused on the core business model from which social media’s many ills spring forth.

“I think it’s very important for us to push back on the idea that really complicated, qualitative problems have easy quantitative solutions,” Sen. Ben Sasse (R-NE) said. He argued that because social media companies make money by keeping users hooked to their products, any real solution would have to upend that business model altogether.

“The business model of these companies is addiction,” Sen. Josh Hawley (R-MO) echoed, calling social media an “attention treadmill” by design.

Ex-Googler and frequent tech critic Tristan Harris didn’t mince words about how tech companies talk around that central design tenet in his own testimony. “It’s almost like listening to a hostage in a hostage video,” Harris said, likening the engagement-seeking business model to a gun just offstage.

Spotlight on Section 230

One big way lawmakers propose to disrupt those deeply entrenched incentives? Adding algorithm-focused exceptions to the Section 230 protections that social media companies enjoy. A few bills floating around take that approach.

One bill from Sen. John Kennedy (R-LA) and Reps. Paul Gosar (R-AZ) and Tulsi Gabbard (D-HI) would require platforms with 10 million or more users to obtain consent before serving users content based on their behavior or demographic data if they want to keep Section 230 protections. The idea is to revoke 230 immunity from platforms that boost engagement by “funneling information to users that polarizes their views” unless a user specifically opts in.

In another bill, the Protecting Americans from Dangerous Algorithms Act, Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) propose suspending Section 230 protections and making companies liable “if their algorithms amplify misinformation that leads to offline violence.” That bill would amend Section 230 to reference existing civil rights laws.

Section 230’s defenders argue that any insufficiently targeted changes to the law could disrupt the modern internet as we know it, resulting in cascading negative impacts well beyond the intended scope of reform efforts. An outright repeal of the law is almost certainly off the table, but even small tweaks could completely realign internet businesses, for better or worse.

During the hearing, Hawley made a broader suggestion for companies that use algorithms to chase profits. “Why shouldn’t we just remove section 230 protection from any platform that engages in behavioral advertising or algorithmic amplification?” he asked, adding that he wasn’t opposed to an outright repeal of the law.

Sen. Amy Klobuchar (D-MN), who leads the Senate’s antitrust subcommittee, connected the algorithmic concerns to anti-competitive behavior in the tech industry. “If you have a company that buys out everyone from under them… we’re never going to know if they could have developed the bells and whistles to help us with misinformation because there is no competition,” Klobuchar said.

Subcommittee members Klobuchar and Sen. Mazie Hirono (D-HI) have their own major Section 230 reform bill, the Safe Tech Act, but that legislation is less concerned with algorithms than ads and paid content.

At least one more major bill looking at Section 230 through the lens of algorithms is still on the way. Prominent big tech critic House Rep. David Cicilline (D-RI) is due out soon with a Section 230 bill that could suspend liability protections for companies that rely on algorithms to boost engagement and line their pockets.

“That’s a very complicated algorithm that is designed to maximize engagement to drive up advertising prices to produce greater profits for the company,” Cicilline told Axios last month. “…That’s a set of business decisions for which, it might be quite easy to argue, that a company should be liable for.”

Social networking app for women Peanut adds live audio rooms

Peanut, a mobile social networking app for women, is today becoming the latest tech company to integrate audio into its product following the success of Clubhouse. Peanut, which began with a focus on motherhood, has expanded over the years to support women through all life stages, including pregnancy, marriage and even menopause. It sees its voice chat feature, which it’s calling “Pods,” as a way women on its app can make better connections in a more supportive, safer environment than other platforms may provide.

The pandemic, of course, likely drove some of the interest in audio-based social networking, as people stuck at home found it helped fill the gap left by in-person networking and social events. However, voice chat social networking leader Clubhouse has since seen its model turned into just another feature for companies like Facebook, Twitter, Reddit, LinkedIn, Discord and others to adopt.

Like many of the Clubhouse clones to date, Peanut’s Pods offer the basics, including a muted audience of listeners who virtually “raise their hand” to speak, emoji reactions, and hosts who can moderate the conversations and invite people to speak, among other things. For now, the company is doing its own in-house moderation of the audio pods to ensure the conversations don’t violate its terms. In time, it plans to bring in additional moderators. (The company pays over two dozen moderators to help it manage the rest of its app, but this team has not yet been trained on audio, Peanut notes.)

Though there are similarities with Clubhouse in its design, what Peanut believes will differentiate its audio experience from the rest of the pack is where these conversations are taking place — on a network designed for women built with safety and trust in mind. It’s also a network where chasing clout is not the reason people participate.

Traditional social networks are often based on how many likes you have, how many followers you have, or whether you’re verified with a blue check, explains Peanut founder and CEO Michelle Kennedy.

“It’s kind of all based around status and popularity,” she says. “What we’ve only ever seen on Peanut is this ‘economy of care,’ where women are really supportive of one another. It’s really never been about, ‘I’ve got X number of followers.’ We don’t even have that concept. It’s always been about: ‘I need support; I have this question; I’m lonely or looking for a friend;’ or whatever it might be,” Kennedy adds.

In Peanut Pods, the company says it will continue to enforce the safety standards that make women feel comfortable socializing on its network. This focus in particular could attract some of the women, and particularly women of color, who have been targeted with harassment on other voice-based networking platforms.

“The one thing I would say is we’re a community, and we have standards,” notes Kennedy. “When you have standards and you let everyone know what those standards are, it’s very clear. You’re allowed an opinion but what you’re not allowed to do are listed here…Here are the things we expect of you as a user and we’ll reward you if you do it and if you don’t, we’re going to ask you to leave,” she says.

Freedom of speech is not what Peanut’s about, she adds.

“We have standards and we ask you to adhere to them,” says Kennedy.

In time, Peanut envisions using the audio feature to help connect women with people who have specific expertise, like lactation consultants for new moms or fertility doctors, for example. But these will not be positioned as lectures where listeners are held hostage as a speaker drones on and on. In fact, Peanut’s design does away with the “stage” concept from Clubhouse to give everyone equal status — whether they’re speaking or not.

In the app, users will be able to find interesting chats based on what topics they’re already following — and, importantly, they can avoid being shown other topics by muting them.

The Pods feature is rolling out to Peanut’s app starting today, where it will reach the company’s now 2 million-plus users. It will be free to use, like all of Peanut, though the company plans to eventually launch a freemium model with some paid products further down the road.

TikTok to open a ‘Transparency’ Center in Europe to take content and security questions

TikTok will open a center in Europe where outside experts will be shown information on how it approaches content moderation and recommendation, as well as platform security and user privacy, it announced today.

The European Transparency and Accountability Centre (TAC) follows the opening of a U.S. center last year — and is similarly being billed as part of its “commitment to transparency”.

Soon after announcing its U.S. TAC, TikTok also created a content advisory council in the market — and went on to replicate the advisory body structure in Europe this March, with a different mix of experts.

It’s now fully replicating the U.S. approach with a dedicated European TAC.

To date, TikTok said, more than 70 experts and policymakers have taken part in a virtual U.S. tour, where they’ve been able to learn operational details and pose questions about its safety and security practices.

The short-form video social media site has faced growing scrutiny over its content policies and ownership structure in recent years, as its popularity has surged.

Concerns in the U.S. have largely centered on the risk of censorship and the security of user data, given the platform is owned by a Chinese tech giant and subject to Internet data laws defined by the Chinese Communist Party.

In Europe, meanwhile, lawmakers, regulators and civil society have been raising a broader mix of concerns, including around issues of child safety and data privacy.

In one notable development earlier this year, the Italian data protection regulator made an emergency intervention after the death of a local girl who had reportedly been taking part in a content challenge on the platform. TikTok agreed to recheck the age of all users on its platform in Italy as a result.

TikTok said the European TAC will start operating virtually, owing to the ongoing COVID-19 pandemic. But the plan is to open a physical center in Ireland — where it bases its regional HQ — in 2022.

EU lawmakers have recently proposed a swathe of updates to digital legislation that look set to dial up emphasis on the accountability of AI systems — including content recommendation engines.

A draft AI regulation presented by the Commission last week also proposes an outright ban on subliminal uses of AI technology to manipulate people’s behavior in a way that could be harmful to them or others. So content recommender engines that, for example, nudge users into harming themselves by suggestively promoting pro-suicide content or risky challenges may fall under the prohibition. (The draft law suggests fines of up to 6% of global annual turnover for breaching prohibitions.)

It’s certainly interesting to note TikTok also specifies that its European TAC will offer detailed insight into its recommendation technology.

“The Centre will provide an opportunity for experts, academics and policymakers to see first-hand the work TikTok teams put into making the platform a positive and secure experience for the TikTok community,” the company writes in a press release, adding that visiting experts will also get insights into how it uses technology “to keep TikTok’s community safe”; how trained content review teams make decisions about content based on its Community Guidelines; and “the way human reviewers supplement moderation efforts using technology to help catch potential violations of our policies”.

Another component of the EU’s draft AI regulation sets a requirement for human oversight of high-risk applications of artificial intelligence, though it’s not clear whether a social media platform would fall under that specific obligation, given the current set of categories in the draft regulation.

However, the AI regulation is just one piece of the Commission’s platform-focused rule-making.

Late last year it also proposed broader updates to rules for digital services, under the DSA and DMA, which will place due diligence obligations on platforms — and also require larger platforms to explain any algorithmic rankings and hierarchies they generate. And TikTok is very likely to fall under that requirement.

The UK — which is now outside the bloc, post-Brexit — is also working on its own Online Safety regulation, due to be presented this year. So, in the coming years, there will be multiple content-focused regulatory regimes for platforms like TikTok to comply with in Europe. And opening algorithms to outside experts may be a hard legal requirement, not soft PR.

Commenting on the launch of its European TAC in a statement, Cormac Keenan, TikTok’s head of trust and safety, said: “With more than 100 million users across Europe, we recognise our responsibility to gain the trust of our community and the broader public. Our Transparency and Accountability Centre is the next step in our journey to help people better understand the teams, processes, and technology we have to help keep TikTok a place for joy, creativity, and fun. We know there’s lots more to do and we’re excited about proactively addressing the challenges that lie ahead. I’m looking forward to welcoming experts from around Europe and hearing their candid feedback on ways we can further improve our systems.”

Facebook introduces a new miniplayer that streams Spotify within the Facebook app

Facebook announced last week an expanded partnership with streaming music service Spotify that would bring a new way to listen to music or podcasts directly within Facebook’s app, which it called Project Boombox. Today, the companies are rolling out this integration via a new “miniplayer” experience that will allow Facebook users to stream from Spotify through the Facebook app on iOS or Android. The feature will be available to both free Spotify users and Premium subscribers.

The miniplayer itself is an extension of the social sharing option already supported within Spotify’s app. Now, when Spotify users are listening to content they want to share to Facebook, they’ll be able to tap the existing “Share” menu (the three-dot menu at the upper right of the screen) and then tap either “Facebook” or “Facebook News Feed.”

When a user posts an individual track or podcast episode to Facebook through this sharing feature, the post will now display in a new miniplayer that allows other people who come across their post to also play the content as they continue to scroll, or reshare it. (Cue MySpace vibes!)

Spotify’s paid subscribers will be able to access full playback, the company says. Free users, meanwhile, will be able to hear the full shared track, not a clip. But afterwards, they’ll continue to listen to ad-supported content on Shuffle mode, just as they would in Spotify’s own app.

One important thing to note about how all this works is that the integration allows the music or podcast content to actually play from within the Spotify app. When a user presses play on the miniplayer, an app switch takes place so the user can log into Spotify. The miniplayer activates and controls the launch and playback in the Spotify app — which is how the playback is able to continue even as the user scrolls on Facebook or if they minimize the Facebook app altogether.

This setup means users will need to have the Spotify mobile app installed on their phone, along with a Spotify account, for the miniplayer to work. First-time Spotify users will have to sign up for a free account in order to listen to the music shared via the miniplayer.

Spotify notes that it’s not possible to sign up for a paid account through the miniplayer experience itself, so there’s no revenue share with Facebook on new subscriptions. (Users have to download the Spotify app and sign up for paid accounts from there if they want to upgrade.)

The partnership allows Spotify to leverage Facebook’s reach to gain distribution and to drive both sign-ups and repeat usage of its app just as the Covid bump to subscriber growth may be wearing off. However, it’s still responsible for the royalties paid on streams, just as it was before, the company told TechCrunch, because its app is the one actually doing the streaming. It’s also fully in charge of the music catalog and audio ads that play alongside the content.

For Facebook, this deal means it now has a valuable tool to keep users spending time on its site — a metric that has been declining over the years, reports have indicated.

Spotify and Facebook have a long history of working together on music efforts. Facebook back in 2011 had been planning an update that would allow music subscription users to engage with music directly on Facebook, much like this. But those plans were later dialed back, possibly over music rights or technical issues. Spotify had also been one of the first media partners on Facebook’s ticker, which would show you in real-time what friends were up to on Facebook and other services. And Spotify had once offered Facebook Login as the default for its mobile app. Today, as it has for years, Spotify users on the desktop can see what their Facebook friends are streaming on its app, thanks to social networking integrations.

The timing for this renewed and extended partnership is interesting. Now, both Facebook and Spotify have a mutual enemy with Apple, whose privacy-focused changes are impacting Facebook’s ad business and whose investments in Apple Music and Podcasts are a threat to Spotify. As Facebook’s own music efforts in more recent years have shifted towards partnership efforts — like music video integrations enabled by music label agreements — it makes sense that it would turn to a partner like Spotify to power a new streaming feature that supports Facebook’s broader efforts around monetizable tools and services aimed at the creator economy.

The miniplayer feature had been tested in two markets outside the U.S., Mexico and Thailand, ahead of its broader global launch today.

In addition to the U.S., the integration is fully rolling out to users in Argentina, Australia, Bolivia, Brazil, Canada, Chile, Colombia, Costa Rica, Dominican Republic, Ecuador, El Salvador, Guatemala, Honduras, Indonesia, Israel, Japan, Malaysia, Mexico, New Zealand, Nicaragua, Panama, Paraguay, Peru, South Africa, Thailand, and Uruguay.

Instagram launches tools to filter out abusive DMs based on keywords and emojis, and to block people, even on new accounts

Facebook and its family of apps have long grappled with the issue of how to better manage — and eradicate — bullying and other harassment on its platform, turning both to algorithms and humans in its efforts to tackle the problem better. In the latest development, today, Instagram is announcing some new tools of its own.

First, it’s introducing a new way for people to further shield themselves from harassment in their direct messages — specifically in message requests — via a filter built on a set of words, phrases and emoji that might signal abusive content. The filter will also catch common misspellings of those key terms, which are sometimes used to try to evade it. Second, it’s giving users the ability to proactively block people even if they try to contact the user in question from a new account.

The blocking account feature is going live globally in the next few weeks, Instagram said, and it confirmed to me that the feature to filter out abusive DMs will start rolling out in the UK, France, Germany, Ireland, Canada, Australia and New Zealand in a few weeks’ time before becoming available in more countries over the next few months.

Notably, these features are only being rolled out on Instagram — not Messenger, and not WhatsApp, Facebook’s other two hugely popular apps that enable direct messaging. The spokesperson confirmed that Facebook hopes to bring it to other apps in the stable later this year. (Instagram and others have regularly issued updates on single apps before considering how to roll them out more widely.)

Instagram said that the feature to scan DMs for abusive content — which will be based on a list of words and emoji that Facebook compiles with the help of anti-discrimination and anti-bullying organizations (it did not specify which), along with terms and emoji that you can add yourself — has to be turned on proactively, rather than being enabled by default.

Why? More user control, it seems, and to keep conversations private if users want them to be. “We want to respect peoples’ privacy and give people control over their experiences in a way that works best for them,” a spokesperson said, pointing out that this is similar to how its comment filters also work. The control will live in Settings>Privacy>Hidden Words for those who want to turn it on.
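Instagram has not published how its filter works under the hood, but the behavior described above — an opt-in filter over a compiled term list plus user-added terms, tolerant of evasive misspellings — can be illustrated with a minimal sketch. All names here (`HiddenWordsFilter`, `normalize`, the leetspeak map) are hypothetical, invented for illustration:

```python
import re

# Simple substitutions people use to dodge keyword filters ("l0ser", "sp@m").
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    """Lowercase, undo common character swaps, and collapse repeated letters
    ("loooser" -> "loser") so simple misspellings match the base term."""
    text = text.lower().translate(LEET_MAP)
    return re.sub(r"(.)\1+", r"\1", text)

class HiddenWordsFilter:
    def __init__(self, base_terms, user_terms=(), enabled=False):
        # Opt-in, like the setting described above: disabled unless turned on.
        self.enabled = enabled
        self.terms = {normalize(t) for t in (*base_terms, *user_terms)}

    def should_hide(self, message: str) -> bool:
        """Return True if the message request should be routed to the hidden folder."""
        if not self.enabled:
            return False
        norm = normalize(message)
        return any(term in norm for term in self.terms)
```

For example, a filter built with the base term "loser" and turned on would hide a message request reading "you l0oser", while the same filter left at its default (off) would hide nothing. A production system would be far more sophisticated — handling word boundaries, multiple languages and machine-learned signals — but the opt-in gate and normalization step capture the mechanics the announcement describes.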

There are a number of third-party services out there in the wild now building content moderation tools that sniff out harassment and hate speech — they include the likes of Sentropy and Hive — but what has been interesting is that the larger technology companies up to now have opted to build these tools themselves. That is also the case here, the company confirmed.

The system is completely automated, although Facebook noted that it reviews any content that gets reported. While it doesn’t keep data from those interactions, it confirmed that it will use reported words to keep building out its database of terms that trigger blocking — with offending content deleted, and the people sending it blocked and reported.

On the subject of those people, it has been a long time coming for Facebook to get smarter about the fact that people with genuinely ill intent waste no time building multiple accounts to pick up the slack when their primary profiles get blocked. Users have been aggravated by this loophole for as long as DMs have existed, even though Facebook’s harassment policies already prohibited repeatedly contacting someone who doesn’t want to hear from you. The company also already prohibited recidivism, which, as Facebook describes it, means “if someone’s account is disabled for breaking our rules, we would remove any new accounts they create whenever we become aware of it.”

The company’s approach to Direct Messages has been something of a template for how other social media companies have built these out.

In essence, DMs are open-ended by default, with one inbox reserved for actual contacts and a second one for anyone at all to contact you. While some people just ignore that second box altogether, Instagram is built to encourage more contact with others, not less, which means people dip into those second inboxes far more often than they might, say, delve into the spam folder of their email.

The bigger issue continues to be a game of whack-a-mole, however, and users aren’t the only ones asking for it to be solved. As Facebook remains under the scrutinizing eye of regulators, harassment — and better management of it — has emerged as a key area it will be required to address before others do the addressing for it.

Discord walked away from Microsoft talks, may pursue an IPO

A month after reports that Microsoft sought to buy the hot voice chat app Discord surfaced, those talks are off, a source familiar with the deal confirmed to TechCrunch.

Discord is considering plans to stay independent, possibly charting a path to its own IPO in the not-too-distant future. The Wall Street Journal first reported news that the deal was off.

The two companies were deep in acquisition talks that valued Discord at around $10 billion before Discord walked away. According to the WSJ, three companies were exploring the possible acquisition, though only Microsoft was named.

Discord’s valuation doubled in less than six months last year and its stock is only looking hotter in 2021. A well-loved voice chat app originally built for gamers, Discord was in the right place well ahead of the current voice chat trend that Clubhouse ignited. As companies from Facebook to Twitter scramble to build voice-based community tools, Discord rolled out its own support for curated audio events last month.

Discord’s decision to veer away from a sale makes sense for a company keen to keep its unique DNA rather than being rolled into an existing product at a bigger company. The choice could also keep the company distant from a protracted antitrust headache, as lawmakers mull legislation that could block big tech deals to prevent further consolidation in the industry.

Who’s funding privacy tech?

Privacy isn’t dead, contrary to what many would have you believe. New regulations, stricter cross-border data transfer rules and increasing calls for data sovereignty have helped the privacy startup space grow, thanks to an uptick in investor support.

This is how we got here, and where investors are spending.

The rise of privacy tech

With strict privacy laws such as GDPR and CCPA already listing big-ticket penalties — and a growing number of countries following suit — businesses have little option but to comply. It’s not just bigger, established businesses offering privacy and compliance tech; brand-new startups are filling in the gaps in this emerging and growing space.

“For the last decade, privacy tech was trumpeted as one of the next ‘big things’ for investors, but never delivered. Startup business models were too academic, complex and did not appeal to VCs, or crucially, consumers were used to getting free web services,” Gilbert Hill, chief executive at Tapmydata, told Extra Crunch.

Some privacy companies — including privacy hardware companies — are chasing profits and less focused on hustling for outside investment.

Today, privacy is big business. Crunchbase lists 207 privacy startups (as of April 2021) that have together raised more than $3.5 billion over hundreds of individual rounds of funding. The number of privacy companies rockets if you take into account enterprise privacy players. Crunchbase currently has 809 listed under the wider “privacy” category.

The latest Privacy Tech Vendor Report 2021 names 356 companies exclusively dealing in enterprise privacy technology solutions, up from 304 companies a year earlier.

“Since 2017, the privacy landscape underwent a metamorphosis,” the report said. “The emergence of the California Consumer Privacy Act, Brazilian General Data Protection Law and other privacy laws around the world have forced organizations to adhere to a new array of compliance requirements, and in response, the demand for privacy tech grew exponentially.”

That also presents an opportunity for investors.

Increasing investments

Privacy tech was catching the attention of investors even before the recent wave of new privacy laws came into effect. The sector amassed nearly $10 billion in investment in 2019, according to Crunchbase, compared to just $1.7 billion in 2010. Investments remained active in 2020, despite the pandemic.

Case in point: In December, enterprise privacy and compliance firm OneTrust announced a $300 million Series C funding. The deal valued the 4-year-old privacy tech firm at $5.1 billion, making it one of the first modern privacy unicorns. Three months later, it extended its Series C funding, with SoftBank Vision Fund 2 and Franklin Templeton pumping in another $210 million.