AntiToxin sells safetytech to clean up poisoned platforms

The big social networks and video games have failed to prioritize user well-being over their own growth. As a result, society is losing the battle against bullying, predators, hate speech, misinformation and scammers. Typically, when a whole class of tech companies has a dire problem it can't cost-effectively solve itself, a software-as-a-service industry emerges to fill the gap, as happened with web hosting, payment processing, etc. So along comes AntiToxin Technologies, a new Israeli startup that wants to help web giants fix their abuse troubles with its safety-as-a-service.

It all started on Minecraft. AntiToxin co-founder Ron Porat is a cybersecurity expert who'd started the popular ad blocker Shine. Yet right under his nose, one of his kids was being mercilessly bullied on the hit children's game. If even the most internet-savvy parents were being blindsided by online abuse, Porat realized, the issue was bigger than could be addressed by victims trying to protect themselves. The platforms had to do more, research confirmed.

A recent Ofcom study found almost 80% of children had a potentially harmful online experience in the past year. Indeed, 23% said they'd been cyberbullied, and 28% of 12 to 15-year-olds said they'd received unwelcome friend or follow requests from strangers. A Ditch The Label study found that of 12 to 20-year-olds who'd been bullied online, 42% were bullied on Instagram.

Unfortunately, the massive scale of the threat combined with a late start on policing by top apps makes progress tough without tremendous spending. Facebook tripled the headcount of its content moderation and security team, taking a noticeable hit to its profits, yet toxicity persists. Other mainstays like YouTube and Twitter have yet to make concrete commitments to safety spending or staffing, and the result is non-stop scandals of child exploitation and targeted harassment. Smaller companies like Snap or Fortnite-maker Epic Games may not have the money to develop sufficient safeguards in-house.

“The tech giants have proven time and time again we can’t rely on them. They’ve abdicated their responsibility. Parents need to realize this problem won’t be solved by these companies,” says AntiToxin CEO Zohar Levkovitz, who previously sold his mobile ad company Amobee to Singtel for $321 million. “You need new players, new thinking, new technology. A company where ‘Safety’ is the product, not an after-thought. And that’s where we come in.” The startup recently raised a multimillion-dollar seed round from Mangrove Capital Partners and is allegedly prepping for a double-digit millions Series A.

AntiToxin’s technology plugs into the backends of apps with social communities that either broadcast or message with each other and are thereby exposed to abuse. AntiToxin’s systems privately and securely crunch all the available signals regarding user behavior and policy violation reports, from text to videos to blocking. It can then flag a wide range of toxic actions and let the client decide whether to delete the activity, suspend the user responsible or proceed otherwise based on its terms and local laws.

Through the use of artificial intelligence, including natural language processing, machine learning and computer vision, AntiToxin can identify the intent of behavior to determine if it’s malicious. For example, the company tells me it can distinguish between a married couple consensually exchanging nude photos on a messaging app versus an adult sending inappropriate imagery to a child. It also can determine if two teens are swearing at each other playfully as they compete in a video game or if one is verbally harassing the other. The company says that beats using static dictionary blacklists of forbidden words.
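AntiToxin's actual models are proprietary, so as a purely illustrative sketch, the toy code below contrasts the static dictionary-blacklist approach the company says it beats with a classifier that also weighs contextual signals about the relationship between users (the signal names here are assumptions, not AntiToxin's real features):

```python
# Illustrative only: AntiToxin's real systems are proprietary.
# This toy shows why the same words can warrant different verdicts
# once relationship context is taken into account.

BLACKLIST = {"noob", "trash"}

def blacklist_flag(message: str) -> bool:
    """Static approach: flag any message containing a forbidden word."""
    return any(word in message.lower().split() for word in BLACKLIST)

def context_flag(message: str, same_match: bool, mutual_friends: bool) -> bool:
    """Toy context-aware approach: identical words are treated differently
    depending on (hypothetical) signals about the users' relationship."""
    if not blacklist_flag(message):
        return False
    # Friends trash-talking inside a game they both joined is likely banter.
    if same_match and mutual_friends:
        return False
    return True

msg = "you play like trash"
print(blacklist_flag(msg))                                        # always flags
print(context_flag(msg, same_match=True, mutual_friends=True))    # treated as banter
print(context_flag(msg, same_match=False, mutual_friends=False))  # treated as harassment
```

The point of the sketch is the shape of the decision, not the specific signals: a real system would replace the two boolean inputs with learned features from text, imagery and behavior.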

AntiToxin is under NDA, so it can’t reveal its client list, but claims recent media attention and looming regulation regarding online abuse has ramped up inbound interest. Eventually the company hopes to build better predictive software to identify users who’ve shown signs of increasingly worrisome behavior so their activity can be more closely moderated before they lash out. And it’s trying to build a “safety graph” that will help it identify bad actors across services so they can be broadly deplatformed similar to the way Facebook uses data on Instagram abuse to police connected WhatsApp accounts.

“We’re approaching this very human problem like a cybersecurity company, that is, everything is a Zero-Day for us,” says Levkovitz, discussing how AntiToxin indexes new patterns of abuse it can then search for across its clients. “We’ve got intelligence unit alums, PhDs and data scientists creating anti-toxicity detection algorithms that the world is yearning for.” AntiToxin is already having an impact. TechCrunch commissioned it to investigate a tip about child sexual imagery on Microsoft’s Bing search engine. We discovered Bing was actually recommending child abuse image results to people who’d conducted innocent searches, leading Bing to make changes to clean up its act.

AntiToxin identified publicly listed WhatsApp Groups where child sexual abuse imagery was exchanged

One major threat to AntiToxin’s business is what’s often seen as boosting online safety: end-to-end encryption. AntiToxin claims that when companies like Facebook expand encryption, they’re purposefully hiding problematic content from themselves so they don’t have to police it.

Facebook claims it still can use metadata about connections on its already encrypted WhatsApp network to suspend those who violate its policy. But AntiToxin provided research to TechCrunch for an investigation that found child sexual abuse imagery sharing groups were openly accessible and discoverable on WhatsApp — in part because encryption made them hard to hunt down for WhatsApp’s automated systems.

AntiToxin believes abuse would proliferate if encryption becomes a wider trend, and it claims the harm that it causes outweighs fears about companies or governments surveilling unencrypted transmissions. It’s a tough call. Political dissidents, whistleblowers and perhaps the whole concept of civil liberty rely on encryption. But parents may see sex offenders and bullies as a more dire concern that’s reinforced by platforms having no idea what people are saying inside chat threads.

What seems clear is that the status quo has got to go. Shaming, exclusion, sexism, grooming, impersonation and threats of violence have started to feel commonplace. A culture of cruelty breeds more cruelty. Tech’s success stories are being marred by horror stories from their users. Paying to pick up new weapons in the fight against toxicity seems like a reasonable investment to demand.

Facebook can be told to cast a wider net to find illegal content, says EU court advisor

How much of an obligation should social media platforms be under to hunt down illegal content?

An influential advisor to Europe’s top court has taken the view that social media platforms like Facebook can be required to seek out and identify posts that are equivalent to content that an EU court has deemed illegal — such as hate speech or defamation — if the comments have been made by the same user.

Platforms can also be ordered to hunt for identical repostings of the illegal content.

But there should not be an obligation for platforms to identify equivalent defamatory comments that have been posted by any user, with the advocate general opining that such a broad requirement would not ensure a fair balance between the fundamental rights concerned — flagging risks to free expression and free access to information.

“An obligation to identify equivalent information originating from any user would not ensure a fair balance between the fundamental rights concerned. On the one hand, seeking and identifying such information would require costly solutions. On the other hand, the implementation of those solutions would lead to censorship, so that freedom of expression and information might well be systematically restricted.”

We covered this referral to the CJEU last year.

It’s an interesting case that blends questions of hate speech moderation and the limits of robust political speech, given that the original 2016 complaint of defamation was made by the former leader of the Austrian Green Party, Eva Glawischnig.

An Austrian court agreed with Glawischnig that hate speech posts made about her on Facebook were defamatory and ordered the company to remove them. Facebook did so, but only in Austria. Glawischnig challenged its partial takedown and in May 2017 a local appeals court ruled that it must remove both the original posts and any verbatim repostings and do so worldwide, not just in Austria. 

Further legal appeals led to the referral to the CJEU which is being asked to determine where the line should be drawn for similarly defamatory postings, and whether takedowns can be applied globally or only locally.

On the global takedowns point, the advocate general believes that existing EU law does not present an absolute blocker to social media platforms being ordered to remove information worldwide.

“Both the question of the extraterritorial effects of an injunction imposing a removal obligation and the question of the territorial scope of such an obligation should be analysed, in particular, by reference to public and private international law,” runs the non-binding opinion.

Another element relates to the requirement under existing EU law that platforms should not be required to carry out general monitoring of information they store — and specifically whether that directive precludes platforms from being ordered to remove “information equivalent to the information characterised as illegal” when they have been made aware of it by the person concerned, third parties or another source. 

On that, the AG takes the view that the EU’s e-Commerce Directive does not prevent platforms from being ordered to take down equivalent illegal content when it’s been flagged to them by others — writing that, in that case, “the removal obligation does not entail general monitoring of information stored”.

Advocate General Maciej Szpunar’s opinion — which can be read in full here — is not the last word on the matter, with the court still to deliberate and issue its final decision (usually within three to six months of an AG opinion). However advisors to the CJEU are influential and tend to predict which way the court will jump.

We’ve reached out to Facebook for comment.

Daily Crunch: Twitter acquires Fabula AI

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Twitter bags deep learning talent behind London startup, Fabula AI

Fabula AI has been developing technology to identify online disinformation by looking at patterns in how fake stuff versus genuine news spreads online — making it an obvious fit for the rumor-riled social network.

Twitter says the acquisition of Fabula AI will help it build out its internal machine learning capabilities, writing that the startup’s “world-class team of machine learning researchers” will become part of an internal research group led by Sandeep Pandey.

2. Review: Ring’s new outdoor lighting products are brilliant

Twenty minutes after opening the box and throwing the instructions aside, Matt Burns says he had five new lights installed around his house and configured to his home’s network.

3. Live from WWDC 2019

The keynote starts at 10am Pacific today.

4. Amazon sellers to hit UK high streets in year-long pop-up pilot

The Amazon pop-up pilot program — which is couched as an exploration of “a new model to help up-and-coming online brands grow their high street presence” — will see more than 100 small online businesses selling on the UK high street for the first time.

5. A look at the many ways China suppresses online discourse about the Tiananmen Square protests

The effects of the crackdown appear to have spread beyond China, with the suspension of many Chinese-language Twitter accounts critical of China, even if they originated outside the country. (For what it’s worth, Twitter said this was part of a “routine action.”)

6. ‘Weirdo’ fintech VC Anthemis marches to its own drummer

Apparently, visiting the Anthemis office is like stepping into a Wes Anderson film. (Extra Crunch membership required.)

7. This week’s TechCrunch podcasts

The latest episode of Equity features a discussion of whether the tech press is too positive in its coverage of startups, while the Original Content team reviews the (excellent) animated Netflix series “Tuca & Bertie.”

Facebook introduces Avatars, its Bitmoji competitor

Ditch those generic emoji. Facebook’s new Avatars feature lets you customize a virtual lookalike of yourself for use as stickers in chat and comments. Once you personalize your Avatar’s face, hair, and clothes, they’ll star in a range of frequently updated stickers conveying common emotions and phrases. From Likes to Reactions to Avatars, you could see this as the natural progression of self-expression on Facebook…or as a ruthless clone of Snapchat’s wildly popular Bitmoji selfie stickers.

Facebook Avatars launches today in Australia for use in Messenger and News Feed comments before coming to the rest of the world in late 2019 or early 2020. The feature could make Facebook feel more fun, youthful, and visually communicative at a time when the 15-year-old social network increasingly seems drab and uncool. Avatars aren’t quite as cute or hip to modern slang as Bitmoji. But they could still become a popular way to add some flair to replies without resorting to cookie-cutter emoticons or cliche GIFs.

“There’s been a ton of work put into this from the product and design perspective to find out, with how many people on Facebook, how to make this as representative as possible,” says Facebook Avatars communication manager Jimmy Raimo. From offering religious clothing like hijabs to a rainbow of skin colors and hairstyles, Facebook didn’t want any demographic left out. “They’re a bit more realistic so they can be your personal avatar vs trying to make them cute, funny, and cartoony,” Raimo explains.

How To Make A Facebook Avatar

Users will start to see a smiley-face button in the News Feed comment composer and Messenger sticker chooser they can tap to create their Facebook Avatar. For now, only people in Australia can make Avatars but everyone will be able to see them around Facebook.

The creation process begins with a gender-neutral blank avatar that people can customize from scratch across 18 traits. For now there’s no option to start with a selfie or profile pic and have Facebook automatically generate you an avatar. Facebook is researching that technique, but Raimo acknowledges that “We want to make sure we don’t show you something totally opposite of the photo. There’s sensitivity around facial recognition.”

The Facebook Avatar customizer

Facebook won’t be monetizing Avatars directly, at least at first. There are no sponsored clothing options from fashion brands or ways to buy fancy jewelry or other accessories for your mini-me. Raimo said Facebook is open to these ideas, though. “It would help personalize it for sure and from a business perspective that would be smart.” It’s easy to imagine Nike or The North Face paying to let Avatars sport their logos, or fans coughing up cash to wear their favorite labels. Perhaps that’s a micropayment use case for Facebook’s upcoming cryptocurrency.

Facebook is also considering expanding Avatars for use as profile pics or in Groups — two areas of the app where you interact with strangers and you might enjoy the anonymity of a 2D drawing instead of a photo of your face. However, Facebook hasn’t deeply considered turning Avatars into a platform so you could use them in other apps through an API or keyboard you install on your phone.

Instead, step 1 was just to make sure the Avatar creation flow is easy and most people can make one that actually looks like them. When asked about the ability to change the perceived age of Avatars to be inclusive of Facebook’s older users, Raimo admitted that’s a gap in the product. There’s no way to add wrinkles so mom and dad’s avatars might look a bit too similar to yours.

Avatars have come a long way since Facebook’s v1 prototype a year ago

The biggest problem with Facebook Avatars is that they’re hopelessly late to the market. TechCrunch broke the news that Facebook was working on Avatars a year ago. And while it’s good that it took the time to research how to make them inclusive, Snapchat gained a ton of ground in the meantime.

Bitmoji have been around since 2014, and since Snapchat acquired it in 2016, it’s been a mainstay of the top 10 apps chart. Sensor Tower estimates Bitmoji has been downloaded over 330 million times. And Bitmoji now has its own developer kit that’s spawning hit apps like YOLO. Like Snapchat Stories, Bitmoji may be too entrenched in popular culture for Facebook Avatars to ever escape their rap as a rip-off.

Facebook Avatars (left) vs Snapchat Bitmoji (right)

Yet that doesn’t mean there isn’t a huge opportunity here. The need for humans to expressively represent themselves on the internet is only growing as we shift from text-based to image-based communication. Whether for privacy, creativity, or convenience, there’s a wide range of use cases for having a parallel virtual likeness. There are still billions of Facebook users without a Bitmoji. And now Mark Zuckerberg has the foundation of a digital identity layer that could travel with us alongside our data, personalizing every online interaction.

Twitter bags deep learning talent behind London startup, Fabula AI

Twitter has just announced it has picked up London-based Fabula AI. The deep learning startup has been developing technology to try to identify online disinformation by looking at patterns in how fake stuff vs genuine news spreads online — making it an obvious fit for the rumor-riled social network.

Social media giants remain under increasing political pressure to get a handle on online disinformation to ensure that manipulative messages don’t, for example, get a free pass to fiddle with democratic processes.

Twitter says the acquisition of Fabula will help it build out its internal machine learning capabilities — writing that the UK startup’s “world-class team of machine learning researchers” will feed an internal research group it’s building out, led by Sandeep Pandey, its head of ML/AI engineering.

This research group will focus on “a few key strategic areas such as natural language processing, reinforcement learning, ML ethics, recommendation systems, and graph deep learning” — now with Fabula co-founder and chief scientist, Michael Bronstein, as a leading light within it.

Bronstein is chair in machine learning & pattern recognition at Imperial College, London — a position he will retain while leading graph deep learning research at Twitter.

Fabula’s chief technologist, Federico Monti — another co-founder, who began the collaboration that underpins the patented technology with Bronstein while at the University of Lugano, Switzerland — is also joining Twitter.

“We are really excited to join the ML research team at Twitter, and work together to grow their team and capabilities. Specifically, we are looking forward to applying our graph deep learning techniques to improving the health of the conversation across the service,” said Bronstein in a statement.

“This strategic investment in graph deep learning research, technology and talent will be a key driver as we work to help people feel safe on Twitter and help them see relevant information,” Twitter added. “Specifically, by studying and understanding the Twitter graph, comprised of the millions of Tweets, Retweets and Likes shared on Twitter every day, we will be able to improve the health of the conversation, as well as products including the timeline, recommendations, the explore tab and the onboarding experience.”

Terms of the acquisition have not been disclosed.

We covered Fabula’s technology and business plan back in February when it announced its “new class” of machine learning algorithms for detecting what it colloquially badged ‘fake news’.

Its approach to the problem of online disinformation looks at how it spreads on social networks — and therefore who is spreading it — rather than focusing on the content itself, as some other approaches do.

Fabula has patented algorithms that use the emergent field of “Geometric Deep Learning” to detect online disinformation — where the datasets in question are so large and complex that traditional machine learning techniques struggle to find purchase. Which does really sound like a patent designed with big tech in mind.

Fabula likens the way ‘fake news’ spreads on social media, versus real news, to “a very simplified model of how a disease spreads on the network”.

One advantage of the approach is that it looks to be language-agnostic (at least barring any cultural differences that might also impact how fake news spreads).
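Fabula's patented geometric deep learning algorithms are not public, but the underlying intuition of judging content by how it spreads rather than what it says can be sketched with simple structural features of a share cascade. The feature names and the example cascades below are illustrative assumptions, not Fabula's actual model:

```python
# Toy sketch of spread-based (content-agnostic) features, assuming a
# cascade represented as {user: [users they spread the item to]}.
# Because no text is inspected, the features are language-agnostic.
from collections import deque

def cascade_features(shares: dict, root: str) -> dict:
    """Breadth-first walk of the cascade to measure its size and shape."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in shares.get(node, []):
            depth[child] = depth[node] + 1
            queue.append(child)
    size = len(depth)
    max_depth = max(depth.values())
    # Research on rumor cascades suggests deep, chain-like spread patterns,
    # versus shallow, broad broadcast from a single credible source.
    return {"size": size, "max_depth": max_depth,
            "breadth_ratio": size / (max_depth + 1)}

broadcast = {"news_org": ["a", "b", "c", "d", "e"]}          # shallow and broad
rumor = {"u0": ["u1"], "u1": ["u2"], "u2": ["u3"],
         "u3": ["u4"], "u4": ["u5"]}                          # deep and narrow
print(cascade_features(broadcast, "news_org"))
print(cascade_features(rumor, "u0"))
```

A real system would feed far richer graph features (and the user graph itself) into a learned model; the sketch only shows why the signal survives translation across languages.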

Back in February the startup told us it was aiming to build an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency, just focused on content not cash.

It’s not clear from Twitter’s blog post whether the core technologies it will be acquiring with Fabula will now stay locked up within its internal research department — or be shared more widely, to help other platforms grappling with online disinformation challenges.

The startup had intended to offer an API for platforms and publishers later this year.

But of course building a platform is a major undertaking. And, in the meanwhile, Twitter — with its pressing need to better understand the stuff its network spreads — came calling.

A source close to the matter told us that Fabula’s founders decided to sell to Twitter, instead of pushing for momentum behind a vision of a decentralized, open platform, because the exit offered them more opportunity to have “real and deep impact, at scale”.

Though it is also still not certain what Twitter will end up doing with the technology it’s acquiring. And it at least remains possible that Twitter could choose to make it open across platforms.

“That’ll be for the team to figure out with Twitter down the line,” our source added.

A spokesman for Twitter did not respond directly when we asked about its plans for the patented technology but he told us: “There’s more to come on how we will integrate Fabula’s technology where it makes sense to strengthen our systems and operations in the coming months. It will likely take us some time to be able to integrate their graph deep learning algorithms into our ML platform. We’re bringing Fabula in for the team, tech and mission, which are all aligned with our top priority: Health.”

Twitter takes down ‘a large number’ of Chinese-language accounts ahead of Tiananmen Square anniversary

Twitter has suspended a large number of Chinese-language user accounts, including those belonging to critics of China’s government. It seems like a particularly ill-timed move, occurring just days before the thirtieth anniversary of the Tiananmen Square massacre on June 4.

“A large number of Chinese @Twitter accounts are being suspended today,” wrote Yaxue Cao, founder and editor of the U.S.-based publication China Change. “They ‘happen’ to be accounts critical of China, both inside and outside China.”

Cao then went on to highlight a number of the suspended accounts in a Twitter thread.

The Chinese government reportedly began cracking down late last year on people who post criticism on Twitter. The author of that story, The New York Times’ Paul Mozur, has also been tweeting about the takedowns, noting that “suspensions seem not limited to accounts critical of China” and that it appears to be “an equal opportunity purge of Chinese language accounts.”

In response, Twitter’s Public Policy account said it suspended “a number of accounts this week” mostly for “engaging in mix of spamming, inauthentic behavior, & ban evasion.” It acknowledged, however, that some of the accounts “were involved in commentary about China.”

“These accounts were not mass reported by the Chinese authorities — this was a routine action on our part,” the company said. “Sometimes our routine actions catch false positives or we make errors. We apologize. We’re working today to ensure we overturn any errors but that we remain vigilant in enforcing our rules for those who violate them.”

By this point, the deletions had attracted broader political notice, with Florida Senator Marco Rubio declaring, “Twitter has become a Chinese govt censor.”

And while Cao acknowledged Twitter’s official explanation, as well as help she’s received from the company in the past, she said, “Per @Twitter’s explanation, it’s cleaning up CCP bots but accidentally suspended 1000s anti-CCP accts. That doesn’t make sense.”

Spotify is building shared queue Social Listening

Want to rock out together even when you’re apart? Spotify has prototyped an unreleased feature called “Social Listening” that lets multiple people add songs to a queue they can all listen to. You just all scan one friend’s QR-style Spotify Social Listening code, and then anyone can add songs to the real-time playlist. Spotify could potentially expand the feature to synchronize playback so you’d actually hear the same notes at the same time, but for now it’s just a shared queue.

Social Listening could give Spotify a new viral growth channel, as users could urge friends to download the app to sync up. The intimate experience of co-listening might lead to longer sessions with Spotify, boosting ad plays or subscription retention. Plus it could differentiate Spotify from Apple Music, YouTube Music, Tidal, and other competing streaming services.

A Spotify spokesperson tells TechCrunch that “We’re always testing new products and experiences, but have no further news to share at this time.” Spotify already offers Collaborative Playlists friends can add to, but Social Listening is designed for real-time sharing. The company refused to provide further details on the prototype or when it might launch.

The feature is reminiscent of Turntable.fm, a 2011 startup that let people DJ in virtual rooms on their desktop that other people could join where they could chat, vote on the next song, and watch everyone’s avatars dance. But the company struggled to properly monetize through ad-free subscriptions and shut down in 2014. Facebook briefly offered its own version called “Listen With…” in 2012 that let Spotify or Rdio users synchronize music playback.

Spotify Social Listening was first spotted by reverse engineering sorceress and frequent TechCrunch tipster Jane Manchun Wong. She discovered code for the feature buried in Spotify’s Android app, but for now it’s only available to Spotify employees. Social Listening appears in the menu of connected devices you can open while playing a song, beside nearby Wi-Fi and Bluetooth devices. “Connect with friends: Your friends can add tracks by scanning this code – You can also scan a friend’s code,” the feature explains.

A help screen describes Social Listening as “Listen to music together. 1. On your phone, play a song and select (Connected Devices). You’ll see a code at the bottom of the screen. 2. On your friend’s phone, select the same (Connected Devices) icon, tap SCAN CODE, and point the camera at your code. 3. Now you can control the music together.” You’ll then see friends who are part of your Social Listening session listed in the Connected Devices menu. Users can also copy and share a link to join their Social Listening session that starts with the URL prefix https://open.spotify.com/socialsession/. Note that Spotify never explicitly says that playback will be synchronized.
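Spotify hasn't documented the share-link format beyond the prefix spotted in the app, so as a hedged sketch, the snippet below shows how a client might validate such a link and extract whatever session identifier follows the prefix (the identifier structure is an assumption):

```python
# Assumption: a Social Listening link is the observed prefix followed by an
# opaque session identifier. Only the prefix itself comes from the app.
from typing import Optional
from urllib.parse import urlparse

PREFIX = "/socialsession/"

def parse_social_session(url: str) -> Optional[str]:
    """Return the session identifier from a Social Listening share link,
    or None if the link doesn't match the observed prefix."""
    parsed = urlparse(url)
    if parsed.netloc != "open.spotify.com" or not parsed.path.startswith(PREFIX):
        return None
    return parsed.path[len(PREFIX):] or None

print(parse_social_session("https://open.spotify.com/socialsession/abc123"))
print(parse_social_session("https://open.spotify.com/track/xyz"))
```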

With streaming apps largely having the same music catalog and similar $9.99 per month premium pricing, they have to compete on discovery and user experience. Spotify has long been in the lead here with its algorithmically personalized Discover Weekly playlists that were promptly copied by Apple and SoundCloud.

Oddly, Spotify has stripped out some of its own social features over the years, eliminating the in-app messaging inbox and instead pushing users to share songs over third-party messaging apps. The de-emphasis of discovery through friends conveniently puts the focus on Spotify’s owned playlists. That gives it leverage over the record labels during their rate negotiations, since it’s Spotify that influences which songs will become hits, so if labels don’t play nice their artists might not get promoted via playlists.

That’s why it’s good to see Spotify remembering that music is an inherently social experience. Music physically touches us through its vibrations, and when people listen to the same songs and are literally moved by it at the same time, it creates a sense of togetherness we’re too often deprived of on the Internet.

Tinder launches a new a la carte option, Super Boost, only for subscribers

Tinder this morning announced a second, more premium version of its most popular a la carte purchase, Boost, with the launch of Super Boost — an upgrade only offered to Tinder Plus and Tinder Gold premium subscribers. The idea with the new product is to extract additional revenues out of those users who have already demonstrated a willingness to pay for the dating app, while also offering others another incentive to upgrade to a paid Tinder subscription.

Similar to Boost, which puts you on top of the stack of profiles shown to potential matches for 30 minutes, Super Boost also lets you cut the line.

Tinder says the option will be shown to select Tinder Plus and Tinder Gold subscribers during peak activity times, and only at night. Once purchased and activated, Super Boost promises the chance to be seen by up to 100 times more potential matches. By comparison, Boost only increases profile views by up to 10 times.

Also like Boost, Super Boost may not have a set price point. Tinder prices its products dynamically, taking into account various factors like age, location, length of subscription, and other factors. (Tinder’s decision to up its pricing for older users led to an age discrimination class action lawsuit, which the company eventually settled. This limits its ability to price based on age, but only in California.)

The company hasn’t yet settled on a price point — or range — for Super Boost, but is now testing various options in the select markets where the feature is going live. Super Boost is not broadly available across all Tinder markets nor to all premium subscribers at this time, as the company considers this a test for the time being.

The addition, if successful, could have a big impact on Tinder’s bottom line.

As Tinder’s subscriber base grows, its a la carte purchases do the same — the company even noted they reached record levels in Q4 2018, when it also disclosed that a la carte accounts for around 30 percent of direct revenue. Boost and Super Like are the most popular, and Tinder has for a long time hinted that it wants to expand its menu of a la carte features as it grows.

During the first quarter of 2019, Tinder’s average subscribers were 4.7 million, up 384,000 from the previous quarter and 1.3 million year-over-year. Its most recent earnings also topped estimates, thanks to Tinder’s continued growth, bringing parent company Match Group’s net income across its line of dating apps to $123 million, or 42 cents a share, up from $99.7 million, or 33 cents a share, in the year-ago period.

That said, the decision to monetize a user base against a built-in algorithm bias may be a riskier long-term bet for Tinder and other dating apps, which are already the subject of much cultural criticism thanks to articles lamenting their existence, damning documentaries, their connection to everything from racial discrimination to now eating disorders, as well as studies that demonstrate their unfair nature — like this most recent one from Mozilla.

For the near term, dating app makers reliant on this model are raking in the profits due to a lack of other options. But there’s still room for a new competitor to disrupt the status quo. Had Facebook not waited until its name had been dragged through the mud by its numerous privacy scandals, its Facebook Dating product could have been that disruptor. For now, however, Tinder and its rivals are safe — and their users will likely continue to pay for any feature that offers to improve their chances.


Alibaba pumps $100 million into Vmate to grow its video app in India

Chinese tech giant Alibaba is doubling down on India’s burgeoning video market, looking to fight back against rivals ByteDance, Google, and Disney to gain a foothold in the nation. The company said today that it is pumping $100 million into Vmate, a three-year-old social video app owned by its subsidiary UC Web.

Vmate was launched as a video streaming and sharing app in 2016. But in the years since, it has added features such as video downloads and 3-dimensional face emojis to expand its use cases. It has amassed 30 million users globally, and will use the capital to scale its business in India, the company told TechCrunch. Alibaba Group did not respond to TechCrunch’s questions about its ownership of the app.

The move comes as Alibaba revives its attempts to take on the growing market for social video apps, something it has missed out on completely in China. Vmate could potentially help it fill that gap in India. Many of the features Vmate offers are similar to those of ByteDance’s TikTok, which currently has more than 120 million active users in India. ByteDance, with a valuation of about $75 billion, has grown its business without taking money from either Alibaba or Tencent, the latter of which has launched its own TikTok-like apps with limited success.

Alibaba remains one of the biggest global investors in India’s e-commerce and food-tech markets. It has heavily invested in Paytm, BigBasket, Zomato, and Snapdeal. It was also supposedly planning to launch a video streaming service in India last year — a rumor that was fueled after it acquired a majority stake in TicketNew, a Chennai-based online ticketing service.

UC Web, a subsidiary of Alibaba Group, also counts India as one of its biggest markets. The browser maker has attempted to become a super app in India in recent years by including news and videos. In the last two years, it has been in talks with several bloggers and small publishers to host their articles directly on its platform, many people involved in the project told TechCrunch.

UC Web’s eponymous browser rose to stardom in the days of feature phones, but has since lost the lion’s share of the market to Google Chrome as smartphones have become ubiquitous. Chrome ships as the default browser on most Android smartphones.

The major investment by Alibaba Group also serves as a testament to the growing popularity of video apps in India. Once cautious about each megabyte they spent on the internet, thrifty Indians have become heavy video consumers online as mobile data gets significantly cheaper in the country. Video apps are increasingly climbing up the charts on Google Play Store.

In an event for marketers late last year, YouTube said that India was the only nation where it had more unique users than its parent company Google. The video juggernaut had about 250 million active users in India at the end of 2017. The service, used by more than 2 billion users worldwide, has not revealed its India-specific user base since.

T-Series, the largest record label in India, this week became the first YouTube channel to claim more than 100 million subscribers. What’s even more noteworthy is that T-Series took 10 years to reach its first 10 million subscribers; the remaining 90 million signed up in the last two years. Also fighting for users’ attention is Hotstar, which is owned by Disney. Earlier this month, it set a new global record for the most simultaneous views on a live-streamed event.