Revenge of the mutual funds

Listen here or wherever you get your podcasts.

Hello and welcome back to Equity, the podcast about the business of startups, where we unpack the numbers and nuance behind the headlines.

Our Monday show covers the latest in tech news from the weekend and what’s making headlines early in the week. This fine Monday we are celebrating the start of June, and the fact that there are still earnings rounds for us to cover and enjoy! Here’s the show rundown:

  • Stocks are mixed around the world this morning while crypto stays pretty flat. Earnings this week that we’re excited about include Gitlab, Couchbase, Yext, Smartsheet, and Hashicorp. (We’re also keeping an eye on the Atomic Wallet hack.)
  • Reddit’s proposed API changes (charges, more like) are having a pretty big impact on the service’s userbase; there are calls for a blackout of certain forums in response to the proposed updates. Reddit, on the other hand, is a business and needs to make money.
  • Sticking with social media, news broke this morning that Twitter’s revenues are down sharply compared to year-ago totals, at least when we consider its American advertising income. Twitter does more than just ads in North America, but given that U.S. advertising is likely a pretty big chunk of its total top line, it’s not good news.
  • Canva’s valuation was slashed by a mutual fund (something that we have seen a lot lately), the latest in a string of similar headlines for other unicorns.
  • Closing, WWDC is today. Get. Hype.

For episode transcripts and more, head to Equity’s Simplecast website.

Equity drops at 7 a.m. PT every Monday, Wednesday and Friday, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts. TechCrunch also has a great show on crypto, a show that interviews founders, one that details how our stories come together and more!

Revenge of the mutual funds by Alex Wilhelm originally published on TechCrunch

YouTube rolls back its rules against election misinformation

YouTube was the slowest major platform to disallow misinformation during the 2020 U.S. election, and almost three years later, the company will toss that policy out altogether.

The company announced Friday that it would reverse its rules around election denialism, allowing some previously prohibited false claims, effective immediately. Axios first reported the changes.

“In the current environment, we find that while removing this content does curb some misinformation, it could also have the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm,” the company wrote in a blog post.

“With that in mind, and with 2024 campaigns well underway, we will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections.”

YouTube still won’t allow some kinds of false election-related claims, like lying about the location of polling places and other specific efforts to dissuade people from successfully casting a vote.

“All of our election misinformation policies remain in place, including those that disallow content aiming to mislead voters about the time, place, means, or eligibility requirements for voting; false claims that could materially discourage voting, including those disputing the validity of voting by mail; and content that encourages others to interfere with democratic processes,” the company wrote.

There’s certainly an argument that, on the whole, denying the valid results of a presidential election ultimately does more to discourage people from voting than these more targeted hypothetical scenarios. But it doesn’t appear that allowing users to sow broad mistrust in the democratic process fits into the company’s definition of “real-world harm.”

Even if enforcement was challenging, it’s a strange choice to announce that it’s open season for U.S. election denial on YouTube, particularly with the 2024 race gearing up. The company plans to offer more updates around its 2024 election strategy in the next few months, so hopefully YouTube elaborates on its thinking or other planned precautions then.

YouTube rolls back its rules against election misinformation by Taylor Hatmaker originally published on TechCrunch

Gig workers get paid, Fidelity slashes Reddit’s valuation and AI conquers Minecraft

Hey, folks, welcome to Week in Review (WiR), TechCrunch’s regular newsletter that recaps the week in tech. Hope the summer’s treating y’all well — it’s a balmy 90 degrees here in NYC! — and that some much-needed R&R is on the agenda.

Speaking of “agenda,” mark your calendars for Disrupt, TC’s annual conference, kicking off in September. Whether you’re a startup rookie learning the ropes or a founder hell-bent on changing the world, Disrupt will deliver the tools, knowledge and connections to help you make it happen. You don’t want to miss it.

Elsewhere, stay tuned for City Spotlight on June 7 (Wednesday), which will highlight Atlanta, Georgia, this go-round. Atlanta has emerged as one of the buzziest new hubs in the nation, with booming cybersecurity and software-as-a-service sectors as well as a slew of investors looking to back the hot new startups coming from the metro. Among the speakers at City Spotlight will be Mayor Andre Dickens — we’re looking forward to hearing his perspective.

Now with the PSAs out of the way, here’s your WiR!

most read

Fidelity sours on Reddit: This week, Fidelity, the lead investor in Reddit’s most recent funding round in 2021, slashed the estimated worth of its equity stake in the social media platform by 41% since the investment. The devaluation, part of a broader trend that has hit a variety of growth-stage startups across the globe in the past year, raises questions about whether Reddit will stick to its reported plan to go public at a valuation of around $15 billion.

Amazon Prime Data: Amazon is considering offering low-cost or possibly free nationwide mobile phone service to Prime subscribers in the United States, according to a new report from Bloomberg. The tech giant is reportedly in talks with Verizon, T-Mobile, Dish Network and AT&T.

Gig workers get paid: Uber, Lyft, DoorDash and other app-based ride-hail and delivery companies will have to reimburse California gig workers potentially millions of dollars for unpaid vehicle expenses between 2022 and 2023. The back payments come from a provision in Proposition 22, the controversial law that classifies gig workers as independent contractors rather than employees and promises them half-hearted protections and benefits.

Volkswagen’s ace in the hole: Volkswagen is betting big on the upcoming ID.Buzz electric van. With availability of the vehicle still a year out, the automaker is counting on years of pent-up anticipation to not only sell the bus shrouded in nostalgia, but to also have it act as a halo product to bring customers to the brand’s entire EV lineup.

Shopify launches Shop Cash: Shopify’s Shop app is introducing a new rewards program called Shop Cash, the e-commerce platform announced on Friday. The new program is funded by Shopify and earns shoppers 1% back on purchases made using its Shop Pay online checkout service.

Stripe gets into credit: Stripe wants to make it easier for businesses to access credit. The private financial infrastructure giant announced a new charge card program today from Stripe Issuing, its commercial card issuing product. Denise Ho, head of product at Stripe, gave TechCrunch the exclusive details — go read the piece by Mary Ann.

AI conquers Minecraft: AI researchers have built a Minecraft bot that can explore and expand its capabilities in the game’s open world — but unlike other bots, this one basically wrote its own code through trial and error and lots of GPT-4 queries. Called Voyager, this experimental system is an example of an “embodied agent,” an AI that can move and act freely and purposefully in a simulated or real environment.

YouTube Shorts, in minutes: Dumme, a startup putting AI to practical use in video editing, is already generating demand before opening to the public. The Y Combinator–backed company has hundreds of video creators testing its product, which leverages AI to create short-form videos from YouTube content, and it has a waitlist of over 20,000 pre-launch, it says.

audio

Need a new podcast to get your weekend started right? Good news — TC has you covered (and then some). On Equity, the crew took a look at the latest from Web Roulette, Stripe’s acquisition of Okay, what Klarna’s Q1 means for the fintech market and QED and a16z’s early-stage strategies. Found spoke with Dr. Stacy Blain, the co-founder and chief science officer at Concarlo Therapeutics, about the company’s novel therapeutic solutions for drug-resistant cancer. Over at Chain Reaction, Gary Vaynerchuk, the chairman of VaynerX and the CEO of VaynerMedia and NFT collection VeeFriends, spoke on his experiences in the creative media industry. And the TechCrunch Live folks dove into how AI doomerism is overblown — and why the blowhards doing the blowing want it that way.

TechCrunch+

TC+ subscribers get access to in-depth commentary, analysis and surveys — which you know if you’re already a subscriber. If you’re not, consider signing up. Here are a few highlights from this week:

Competition concerns in the age of AI: AI is rapidly changing how businesses sense, reason and adapt in the market. But these groundbreaking capabilities are creating an upheaval in how companies engage with competitors and consumers. Henry Hauser is counsel in Perkins Coie’s antitrust and litigation practice groups. He muses on this in an informative piece.

Salesforce becomes a data company: Could the data exhaust being generated by the Salesforce family of products become more valuable than the products themselves — at least in terms of new revenue adds? This piece explores the possibility.

Why don’t more scientists become founders?: Why is it so common to see outsiders bringing research out of the lab and not the scientists themselves? It’s a complex issue to unravel, but Rebecca does it deftly.

Gig workers get paid, Fidelity slashes Reddit’s valuation and AI conquers Minecraft by Kyle Wiggers originally published on TechCrunch

Meta found liable as court blocks firing of moderators

A Kenyan court has ruled that Meta is the primary employer of content moderators suing the social media giant and its content review partner in Africa, Sama, for unlawful dismissal. The 184 moderators, in the suit filed in March this year, also alleged that Meta’s new content review partner on the continent, Majorel, had blacklisted them on instruction by Meta.

Justice Byram Ongaya of Kenya’s employment and labor relations court on Friday watered down the social media giant’s plan to recuse itself from the case, saying the moderators did Meta’s work, used its technology for the work, and adhered to its performance and accuracy metrics. The court said that Sama was “merely an agent…or manager.” Sama disputed this, saying “Meta is a client of Sama’s and Sama is not legally empowered to act on behalf of Meta.”

Meta has not replied to a request for comment.

The latest development is a blow to Meta, which has sought to distance itself from the petition saying that it is not the moderators’ employer.

“The evidence is that the obligation to provide the digital work of content moderation belong to the first and second respondents who provided the digital or virtual workspace for the applicants. The first and second respondents exercise control by imposing the operational requirements and standards of performance. The first and second respondent then provided the remuneration back through the agent [Sama],” the court said.

“The third respondent [Sama] was acting as an agent of the owner of the work of content moderation the first and second respondents [Meta Platforms Inc and Meta Platforms Ireland Limited], there is nothing in the arrangements to absolve the first and second respondents as the primary and principal employers of the content moderators.”

Additionally, the court directed that moderators’ contracts be extended and also barred Meta and Sama from laying them off, pending the determination of the case. The court issued the directions saying there was no suitable justification for the redundancies, and that it had “found that the job of content moderation is available. The applicants will continue working upon the prevailing or better terms in the interim.”

Moderators, hired from across the continent, including from Ethiopia, Uganda, Somalia and South Africa, sift through social media posts on Meta’s platforms to remove content that perpetrates and perpetuates hate, misinformation and violence.

The moderators allege that Sama fired them illegally after failing to issue them with redundancy notices as required by Kenyan law. The suit also claims, among other issues, that the moderators were not issued with a 30-day termination notice, and that their terminal dues were pegged on their signing of non-disclosure documents.

Sama, in the past, told TechCrunch it observed the Kenyan law, and communicated the decision to discontinue content moderation in a town hall, and through email and notification letters.

Sama, whose clients include OpenAI, dropped Meta’s contract and content review services and issued redundancy notices to 260 moderators to concentrate on labeling work (computer vision data annotation).

Meta and Sama are facing two other suits in Kenya; Daniel Motaung, a South African, sued the company for labor and human trafficking, unfair labor relations, union busting and failure to provide “adequate” mental health and psychosocial support. Motaung alleges he was laid off for organizing a 2019 strike and trying to unionize Sama’s employees.

Ethiopians filed another suit in December last year over claims that the social media giant failed to employ enough safety measures on Facebook, which, in turn, fueled the conflicts that have led to deaths, including that of the father of one of the petitioners, and of 500,000 Ethiopians during the Tigray War.

Meta found liable as court blocks firing of moderators by Annie Njanja originally published on TechCrunch

Snapchat’s AI bot isn’t very smart, but at least it won’t send nudes

Snapchat now has an AI bot that you can send snaps to, and if you’re a premium subscriber, it can even send you pictures back. So, what happens if you send Snapchat’s My AI bot nudes?

This is the obvious question that comes to mind, because on the internet, people will immediately try to test the limits of new technology, especially if it is even tangentially related to sex. When Snapchat’s initial GPT-powered chatbot came out this spring, it lacked appropriate age-gating features, so a reporter who registered on Snapchat as a fifteen-year-old was able to get the bot to give advice on how to cover up the smell of weed or set the mood for sex.

When asked about those findings at the Snap Partner Summit in April, CEO Evan Spiegel said, “I think humans, whenever we come across new technology, the first thing we try to do is break it.”

So, naturally, I tried to break Snapchat’s new My AI Snaps.

Though Snapchat added more guardrails to prevent the bot from having inappropriate conversations with teens, there’s still a lot riding on My AI Snaps. With the amount of controversy that Snap’s text AI stirred up, the company needed My AI Snaps to be as unbreakable as possible. Unfortunately, it’s barely coherent enough to break.

As a very serious reporter, I made a very serious Google Images search: “boobs.” I found a generic picture of boobs, took a snap of my computer screen, and sent it to My AI.

“Sorry, I can’t respond to that Snap,” My AI said.

“Why can’t you respond to that Snap?” I asked.

“I didn’t receive a Snap from you. Could you send it again?”

So, the good news for Snapchat is that My AI Snaps isn’t sexting its users. But the bad news is that the product just isn’t good.

Part of the reason why it can’t really sext is because it doesn’t have memory of the Snaps you’ve sent it once it responds. The most salacious response I got was when I sent a generic photo of a man’s lower torso wearing Tom Ford boxer briefs (… It’s what came up on Google when I searched “penis,” okay). My AI replied with a somewhat misshapen AI woman wearing a black silk robe: “Ooh, loving the Tom Ford vibes! How about adding a sleek black robe to complete the look?”

Image Credits: Screenshots by TechCrunch

My AI will refuse to reply to photos of actual human penises. But it turns out that when you Google “penis,” even with Safe Search turned off, you get a lot of vaguely suggestive, yet technically safe for work images that men’s health blogs use to illustrate articles about erectile dysfunction, and whether or not women really care about penis size. So, I showed this poor robot a photo of a curved banana sticking out of a man’s pants zipper.

In response, I received a picture of some cupcakes.

“That banana in a pocket is so handy! Have you ever tried turning it into a pocket-sized dessert?”

As my editor said when I told her about this: Is that a banana in your pocket, or is this AI literally from outer space?

As I continued defiling my Yahoo-issued work computer with Google Images searches like “dildo” and “vibrator,” the AI’s responses were mixed. When presented with sex toys that are not inherently phallic, it will reply something like, “Your pink object steals the show, just like this fountain!” But the more overtly phallic a dildo appears, the less likely the AI will be to engage.

My AI is a computer, but the way it snaps is alien, even when you aren’t trying to break it.

The AI seems to identify an object in your snap, compliment it, and make a weird comparison to something else. I sent it a picture of a plant, and it sent back a picture of a running trail, with the caption, “Your plant is so cool, it’s giving this trail a run for its money!” When I sent it a puzzle, it responded with a picture of a bike, which said, “Puzzling inside while the bikes outside are gearing up for a ride!” It truly feels like a future life form has gone back in time and is trying desperately to talk like a normal human in the year 2023, but is simply saying gibberish.

Next, I tried sending My AI a tequila bottle. The AI responded, “Someone’s ready for a party!” I was testing this feature on my own Snapchat account, and I am indeed of drinking age, so I’m not sure the AI would respond the same way if I were underage. With other prompts, however, My AI simply chooses to play dumb. When I sent it a Snap of condoms, it commented on the color of my “packet stacks.”

Onto even more exciting things: a bottle of Advil liquid gels. In response, the AI responded with a photo of graffiti, which said, “Advil liquor: for when life’s a pain, but you still want to party like this graffiti wall!” It seems the AI read “liquid gels” as “liquor,” but all in all, it’s a strange response.

Image Credits: Screenshots by TechCrunch

I tried again with a prescription bottle. The AI responded with a photo of a skatepark: “Pill bottle: ‘I’m the life of the party!’ Skatepark: ‘Hold my ramps!’” Make of that what you will.

In Snap’s announcement blog post, the company suggests sharing your grocery haul with your AI to get a recipe recommendation. The results are relatively rudimentary. When presented with cheese and bread, My AI suggested adding tomato slices. When I showed My AI chili, it suggested I make some croutons to go with my soup. Most of its suggestions make sense, though it did tell me to put fruit in my coffee, which it misidentified as simply “liquid.”

Aside from some questionable comments about “Advil liquor,” pocket bananas and the like, My AI Snaps seems pretty docile. But while it likely won’t spark as much controversy as its text-based counterpart, it won’t even be useful, which is a disappointment for a paywalled feature.

Snapchat’s AI bot isn’t very smart, but at least it won’t send nudes by Amanda Silberling originally published on TechCrunch

AI-generated hate is rising: 3 things leaders should consider before adopting this new tech

When you hear the phrase “artificial intelligence,” it may be tempting to imagine the kinds of intelligent machines that are a mainstay of science fiction or extensions of the kinds of apocalyptic technophobia that have fascinated humanity since Dr. Frankenstein’s monster.

But the kinds of AI that are rapidly being integrated into businesses around the world are not of this variety — they are very real technologies that have a real impact on actual people.

While AI has already been present in business settings for years, the advancement of generative AI products such as ChatGPT, ChatSonic, Jasper AI and others will dramatically escalate the ease of use for the average person. As a result, the American public is deeply concerned about the potential for abuse of these technologies. A recent ADL survey found that 84% of Americans are worried that generative AI will increase the spread of misinformation and hate.

Leaders considering adopting this technology should ask themselves tough questions about how it may shape the future — both for good and ill — as we enter this new frontier. Here are three things I hope all leaders will consider as they integrate generative AI tools into organizations and workplaces.

Make trust and safety a top priority

While social media is used to grappling with content moderation, generative AI is being introduced into workplaces that have no previous experience dealing with these issues, such as healthcare and finance. Many industries may soon find themselves suddenly faced with difficult new challenges as they adopt these technologies. If you are a healthcare company whose frontline AI-powered chatbot is suddenly being rude or even hateful to a patient, how will you handle that?

For all of its power and potential, generative AI makes it easy, fast and accessible for bad actors to produce harmful content.

Over decades, social media platforms have developed a new discipline — trust and safety — to try to get their arms around thorny problems associated with user-generated content. Not so with other industries.

For that reason, companies will need to bring in experts on trust and safety to talk about their implementation. They’ll need to build expertise and think through ways these tools can be abused. And they’ll need to invest in staff who are responsible for addressing abuses so they are not caught flat-footed when these tools are abused by bad actors.

Establish high guardrails and insist on transparency

Especially in work or education settings, it is crucial that AI platforms have adequate guardrails to prevent the generation of hateful or harassing content.

While incredibly useful tools, AI platforms are not 100% foolproof. Within a few minutes, for example, ADL testers recently used the Expedia app, with its new ChatGPT functionality, to create an itinerary of famous anti-Jewish pogroms in Europe and a list of nearby art supply stores where one could purchase spray paint, ostensibly to engage in vandalism against those sites.

While we’ve seen some generative AIs improve their handling of questions that can lead to antisemitic and other hateful responses, we’ve seen others fall short when ensuring they will not contribute to the spread of hate, harassment, conspiracy theories and other types of harmful content.

Before adopting AI broadly, leaders should ask critical questions, such as: What kind of testing is being done to ensure that these products are not open to abuse? Which datasets are being used to construct these models? And are the experiences of communities most targeted by online hate being integrated into the creation of these tools?

Without transparency from platforms, there’s simply no guarantee these AI models don’t enable the spread of bias or bigotry.

Safeguard against weaponization

Even with robust trust and safety practices, AI still can be misused by ordinary users. As leaders, we need to encourage the designers of AI systems to build in safeguards against human weaponization.

Unfortunately, for all of their power and potential, AI tools make it easy, fast and accessible for bad actors to produce content for any of those scenarios. They can produce convincing fake news, create visually compelling deepfakes and spread hate and harassment in a matter of seconds. Generative AI content could also contribute to the spread of extremist ideologies — or be used to radicalize susceptible individuals.

In response to these threats, AI platforms should incorporate robust moderation systems that can withstand the potential deluge of harmful content perpetrators might generate using these tools.

Generative AI has almost limitless potential to improve lives and revolutionize how we process the endless amount of information available online. I’m excited about the prospects for a future with AI but only with responsible leadership.

AI-generated hate is rising: 3 things leaders should consider before adopting this new tech by Walter Thompson originally published on TechCrunch

Twitter’s trust and safety lead Ella Irwin resigns

Twitter’s head of trust and safety Ella Irwin has resigned, the now-former executive confirmed to Reuters on Thursday.

Irwin took over for Yoel Roth, who famously left the company during the early days of Elon Musk’s chaotic Twitter takeover. While Roth clashed with Musk over the new Twitter owner’s hands-off approach to content moderation, Irwin appeared well-aligned with Musk’s ideals, which have welcomed neo-Nazis and fostered a rise in hate speech on the platform in recent months.

At the time of writing, Irwin and Musk had yet to comment on the situation. While we don’t yet know the catalyst, the timing does align with Twitter’s drama du jour.

On Thursday, conservative outlet the Daily Wire claimed that Twitter “canceled a deal” for the website to make its film, “What is a Woman,” available on the platform for free. The documentary is a feature-length attack on transgender people — and one that Musk said was perfectly at home on the social network.

“This was a mistake by many people at Twitter,” Musk told Daily Wire co-CEO Jeremy Boreing in a reply. “It is definitely allowed.”

In spite of Musk’s assertion, Twitter had limited the documentary’s reach at the time of writing, citing the platform’s rules against hateful conduct.

While it’s not clear that the Irwin situation is connected, she likely would have been involved in the decision to label the video, which is currently igniting a firestorm among conservative and anti-transgender Twitter users who see Musk as a reliable ally.

Twitter’s trust and safety lead Ella Irwin resigns by Taylor Hatmaker originally published on TechCrunch

Match Group debuts Archer, a new face-first gay dating app for men

Online dating behemoth Match Group is introducing a new gay dating app, just in time for Pride. The company, which owns the lion’s share of the global dating market, is launching Archer, a “social-first” dating experience for gay, bi and queer men.

Archer won’t actually be hitting the App Store just yet — that happens a little later this month. It rolls out in New York in June, with LA and DC lined up in the next few months and a full launch to the rest of the U.S. within the year.

While it’s sort of surprising that Match Group — which owns everything from OkCupid and Tinder to BLK and Plenty of Fish — took this long to add a gay-specific app to its massive portfolio, Archer does offer a few twists. Unlike gay hookup apps like Grindr, Scruff and their ilk, Archer aims to combine dating app necessities (messaging, the grid, etc.) with traditional social media features designed to foster deeper “self expression.”

At launch, that includes interest tags, but Archer has near-term plans to add an ephemeral Stories feature and the ability to follow other users — all things you’d expect in something like Instagram, but not necessarily in a dating app.

“We know that what somebody is looking for on a dating app really evolves every hour, every day, every week,” Archer Director of Brand Marketing and Communications Michael Kaye told TechCrunch. “You might be looking for something on a Friday or Saturday night that is very different than what you might be looking for on a Tuesday afternoon or a Sunday morning.”


Image Credits: Archer

To that end, the app also offers a few different views: A Grindr-style grid with two different zoom levels and a linear layout that gives users looking for love an in-depth view of one profile at a time. Generally, the app looks well-designed and flexible, aiming to meet the many different needs of a busy and diverse dating community. 

“When we first were thinking about Archer, we recognized that relationships are so fluid within this community,” Kaye said. “So they might be single one day, then in a monogamous relationship, and then you might open up your relationship. So we wanted to make sure that whatever this app became it served every stage in your life.”

Archer is yet another experience from corporate dating giant Match Group, and that pedigree comes bundled with an at least marginally less NSFW mindset. In lieu of a sea of torsos, Archer users will be required to display a profile photo that includes their face. (Grindr’s founder similarly resents the sea of headless men, launching his own face-first follow-up app Motto late last year.) Archer uses selfie verification at signup, and every user’s profile picture, face included, will be verified, presumably through the same system that Match Group apps like Tinder use.

On the safety front, Archer will also employ AI in its chat feature to auto-blur potential nudity, masking unsolicited dick pics for users who might not be collecting them at the moment. The lack of torsos and relative dearth of dicks certainly sets a different tone from what gay men have mostly come to expect in hookup apps, but Archer is making a bet that its relatively (and literally) buttoned-up approach will cast a wider net. That said, you can still plan to share steamy torso et cetera images through private photo albums, which can display up to six photos at once; users can grant and revoke that access easily at any time through the app.

Detractors of yet another app from Big Dating might criticize these decisions for sanitizing the gay dating experience. That criticism is probably fair depending on what you’re looking for, but Archer does seem to be designed with the unique needs of gay, bi and queer men in mind. (And to its credit Archer lacks the ongoing ownership concerns plaguing Grindr, though Match Group isn’t without its own privacy missteps.) That design process involved surveying, interviewing and product testing with more than 1,000 men from the community it’s looking to serve — something you’d certainly hope a slick new gay dating app hoping to build goodwill with a marginalized community would do, of course.

“There are a lot of men within the community that will turn to dating apps when actually moving to a new city,” Kaye said. “I know so many people who are in monogamous relationships, but have downloaded gay dating apps when moving to New York for the first time to find their chosen family in the area. So we figured why not create a platform that makes that really easy for you.”

Archer might be serving a specific cross-section of the dating world, but it’s still trying to be many different things to many different people — an approach that may pay off in the end, particularly in light of its parent company’s total dominance of the online dating scene. Archer is betting that gay men might want to hook up, find love, build community and connect around shared interests in a one-stop shop of an app. And they might be right, particularly for anyone tired of the headless torso dating scene.

Match Group debuts Archer, a new face-first gay dating app for men by Taylor Hatmaker originally published on TechCrunch

BeReal is adding a messaging feature called RealChat

BeReal is working on a chat feature, which will begin with a test among users in Ireland.

At launch, users will be able to message one on one with friends, send them a private BeReal (no time limit, just a front-back photo) and react with RealMoji (BeReal’s custom emojis).

BeReal users can only message each other if they’re already friends on the platform. The chat system will also launch with blocking and reporting features. While users can delete their own messages, that doesn’t mean a message will disappear from their friends’ apps — but if both people in a chat delete a message, BeReal says it will be deleted entirely from its systems within 30 days… and if you’re having such top secret conversations on BeReal, you might be better off using an encrypted messaging app anyway.

The company said in an email to TechCrunch that private messaging is one of the most commonly requested features from users. Sure, it’s easier to directly chat with a friend in the app if you want to talk to them about their BeReal without making a comment, which all of their other friends can see. But this is also a way for BeReal to keep users in the app longer, and to open it more than just once per day when they make their post.

BeReal spiked in popularity last year, despite being founded in 2020. The app wooed Gen Z and millennial users with its less-than-polished approach to social media.

Each day at a random time, every user is simultaneously prompted with a notification that it is “time to BeReal.” That means that within two minutes, you must take a front-and-back camera photo (RIP Frontback) to share with your friends. While Instagram can sometimes feel like a highlight reel, BeReal has a lot more photos of people’s TVs or computer screens, because we are all too often watching Netflix or working. But that authenticity has its downsides, because the computer screen selfies get boring after a while.

Even as the app faced competition in the form of copycat features from TikTok, Instagram and Snapchat, it didn’t iterate much until this spring. In April, BeReal launched an integration with Spotify, which shows what you’re listening to when you post your BeReal. Then, the app rolled out the Bonus BeReal feature, which lets users post more than one BeReal per day, so long as they post their first BeReal on time.

According to BeReal, the app has 20 million daily active users. Though the young company has a Series B round of $60 million to back it up, there are no ads or paid features in the app yet, so the potential for income is limited.

BeReal is adding a messaging feature called RealChat by Amanda Silberling originally published on TechCrunch

GetYourGuide books $194M at a $2B valuation with travel experiences back in business

The tourist industry is bouncing back after a rough couple of pandemic-stressed years, and in the case of some tourism and travel startups, the momentum is coming at a pace that’s defying even the current market climate.

Today, GetYourGuide — a Berlin-based startup that has built a marketplace for finding and booking tourist, travel and other experiences, with some 75,000 experiences from 16,000 providers listed at any given time — is announcing that it has raised $194 million.

It will be using the funds in three basic areas: first, to continue expanding into new markets. Second, to add in more hikes, tours and other experiential events like meeting Gianni, the key holder for the whole of the Vatican, at 6am and turning on all the lights as you walk through the halls with him. And third, to bring in more AI and other technology to improve discovery and personalization on the platform.

The money is coming in the form of an $85 million Series F and a revolving credit facility of $109 million. Blue Pool Capital led the equity round with KKR and Temasek also participating, while UniCredit led the credit facility with participation from BNP Paribas, Citibank and KfW.

The round values GetYourGuide at $2 billion — double the startup’s valuation compared to its previous round, a monster $484 million Series E in 2019.

The funding, and valuation boost, stand out in the current market because consumer-oriented startups have been finding it extremely challenging to raise money; and all startups, not just consumer, are seeing a lot of pressure on their valuations — two trends that GYG has just bucked. (And it’s not the only travel startup making those waves: just yesterday Hostaway announced a $175 million raise.)

But the news also represents a pretty major turnaround for GetYourGuide itself.

Pre-Covid, Berlin’s GetYourGuide was one of the hottest startups in Europe, built on a very simple idea: it took one of the most old-fashioned and outdated aspects of tourism — guided tours — and reinvented them as “experiences” to meet the needs, interests, and Instagrammable requirements of a new wave of younger consumers, all discoverable and bookable using everyone’s favorite device, the smartphone.

The idea took off like a rocket — a successfully launched one. Bookings went up, investors flocked to the startup, it moved into very impressive digs in the east of Berlin, and people started to think that maybe it wasn’t just Airbnb that could, within a decade, upend how we think about travel.

Then Covid-19 happened.

“We went from high-flying to zero revenues — zero revenues for multiple quarters,” CEO and co-founder Johannes Reck recalled. “A lot of startups struggled at that time, but we were one of the worst hit. Of course, no one wanted to go on tours with other people” — which was effectively the only product that GYG offered. “It was really bad.”

Reck took a bold bet at that time: he decided that consumer behavior, the interest in experiences that had been driving business for the startup, wouldn’t change; it would likely just pause under pandemic circumstances.

“I was always convinced that we would go back and that our market would come out better than it was even pre-pandemic. First it’s because people crave experiences. Covid was a big setback but not a fork in the road where consumer behavior would be different,” he said. “Second, I was very sure travel would return, and travelers would not want to sit in hotel rooms for the next century.”

The company’s $484 million round led by SoftBank closed just months before Covid-19 hit, so GYG had plenty of cash. But on top of that it secured a $133 million convertible note, in case things got really hairy. It also laid off 20% of its staff, all told, but then it held tight. “We didn’t cut deep,” Reck said. “We stopped and waited for 8-10 months to pass.”

It took a bit longer — around two years in fact — but eventually things started to pick back up. GYG never did exercise the convertible note, Reck said.

The end of 2022, with the Omicron wave of Covid-19 subsiding, was the turning point, he recalled, with everything “just starting to fall back in place.” By Q1 of 2023, the startup was seeing booking volumes that were four times bigger than its volumes in Q1 2019 (the last comparable year of non-Covid normalcy). The company isn’t sharing specific volume numbers, but it had about 25-30 million tickets sold on its app between 2019 and 2020; four times that would be 100-120 million.

Reck added that now it’s looking like the company is “on the route to profitability” in many of its major markets.

Of course, that route, unlike a GYG tour, doesn’t have a well-defined starting or end point, nor estimated time of arrival. But it seems to be one that investors are happy to book and follow regardless.

“There is immense opportunity in the digitization of the experiences industry, and we believe GetYourGuide’s global leadership and market-leading customer-centricity in the category stems from its deep expertise in this complex space,” said Oliver Weisberg, CEO of Blue Pool Capital, in a statement. “We believe GetYourGuide is uniquely positioned to be the global leader in the category; we are pleased to lead the equity financing given the strength of the business.”

Meanwhile, the future for GYG has a couple of interesting technology and business variables at play.

Reck said that GYG remains very committed to the idea of selling human-led group tours. That means no self-guided tours, no virtual tours and no generative AI-created tours are on its roadmap today.

Reck calls the group tour, devised and led by an actual person, “the core product” of GYG. “Our mission just doesn’t happen if you are glued to your smartphone,” he said. He speaks not just from opinion but experience: “We’ve tested so many alternative formats, including virtual experiences,” Reck said. “They all flopped.”

But that’s not to say that there aren’t some big opportunities for using AI in the business, Reck said.

About a week ago, the company launched a ChatGPT integration that lets users initiate a search of GYG’s catalogue by way of natural-language queries. That addresses a longstanding pain point for the company: basic keyword searches are not good enough to produce useful results.

Over time, there could be further extensions of this: GYG could build a more accurate picture of what individual users like to do and see, producing even more relevant search results, and aggregate that data to better understand what its customer base wants more or less of — analytics that it could in turn feed back to its suppliers to build better future tours.

“I don’t see AI as an end in itself but a tool to help suppliers and users,” he said. “There are so many different types of experiences, and AI will help figure out what is for you.”

GetYourGuide books $194M at a $2B valuation with travel experiences back in business by Ingrid Lunden originally published on TechCrunch