Google blocks Truth Social from the Play Store — Will Apple be next?

Google’s decision to block the Truth Social app’s launch on the Play Store over content moderation issues raises the question of why Apple hasn’t taken similar action against the iOS version of the app, which has been live on the App Store since February. According to a report by Axios, Google found numerous posts that violated its Play Store content policies, blocking the app’s path to going live on its platform. But some of these same types of posts appear to be available on the iOS app, TechCrunch found.

This could trigger a re-review of Truth Social’s iOS app at some point, as both Apple’s and Google’s policies are largely aligned in terms of how apps with user-generated content must moderate their content.

Axios this week first reported Google’s decision to block the distribution of the Truth Social app on its platform, following an interview given by the app’s CEO, Devin Nunes. The former Congressman and member of Trump’s transition team, now social media CEO, suggested that the holdup with the app’s Android release was on Google’s side, saying, “We’re waiting on them to approve us, and I don’t know what’s taking so long.”

But this was a mischaracterization of the situation, Google said. After Google reviewed Truth Social’s latest submission to the Play Store, it found multiple policy violations, which it informed Truth Social about on August 19. Google also told Truth Social how those problems could be addressed in order to gain entry to the Play Store, the company noted.

“Last week, Truth Social wrote back acknowledging our feedback and saying that they are working on addressing these issues,” a Google spokesperson shared in a statement. This communication between the parties was a week ahead of Nunes’s interview where he implied the ball was now in Google’s court. (The subtext to his comments, of course, was that conservative media was being censored by Big Tech once again.)

The issue at hand here stems from Google’s policy for apps that feature user-generated content, or UGC. According to this policy, apps of this nature must implement “robust, effective and ongoing UGC moderation, as is reasonable and consistent with the type of UGC hosted by the app.” Truth Social’s moderation, however, is not robust. The company has publicly said it relies on an automated AI moderation system, Hive, which is used to detect and censor content that violates its own policies. On its website, Truth Social notes that human moderators “oversee” the moderation process, suggesting that it uses an industry-standard blend of AI and human moderation. (Of note, the app store intelligence firm Apptopia told TechCrunch the Truth Social mobile app is not using the Hive AI. But it says the implementation could be server-side, which would be beyond the scope of what it can see.)
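In pseudocode, the kind of hybrid pipeline this describes, where an automated classifier handles the clear cases and humans review the rest, might look something like the sketch below. The thresholds and the toy classifier are invented for illustration; this is not Hive’s actual system or API.

```python
# A minimal sketch of a hybrid AI-plus-human moderation flow.
# The thresholds and keyword classifier are hypothetical stand-ins.

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violations are removed outright
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous posts are escalated to a human

def classify_post(text: str) -> float:
    """Placeholder for an automated classifier that returns the
    probability that a post violates the platform's content rules."""
    banned_phrases = ("kill", "hang", "shoot")  # toy heuristic only
    hits = sum(phrase in text.lower() for phrase in banned_phrases)
    return min(1.0, hits * 0.35)

def moderate(post: str) -> str:
    """Route a post: publish, queue for human review, or remove."""
    score = classify_post(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "queued_for_human_review"
    return "published"
```

The point of the hybrid design is the middle band: posts the classifier is unsure about get escalated to humans rather than being auto-published or auto-removed.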

Truth Social’s use of AI-powered moderation doesn’t necessarily mean the system is sufficient to bring it into compliance with Google’s policies. The quality of AI detection systems varies, and those systems ultimately enforce a set of rules that a company itself decides to implement. According to Google, several Truth Social posts it encountered contained physical threats and incitements to violence, both of which the Play Store policy prohibits.

Image Credits: Truth Social’s Play Store listing

We understand Google specifically pointed to the language in its User Generated Content policy and Inappropriate Content policy when making its determination about Truth Social. These policies include the following requirements:

Apps that contain or feature UGC must:

  • require that users accept the app’s terms of use and/or user policy before users can create or upload UGC;
  • define objectionable content and behaviors (in a way that complies with Play’s Developer Program Policies), and prohibit them in the app’s terms of use or user policies;
  • implement robust, effective and ongoing UGC moderation, as is reasonable and consistent with the type of UGC hosted by the app.

And under the Inappropriate Content policy:

  • Hate Speech – We don’t allow apps that promote violence, or incite hatred against individuals or groups based on race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, caste, immigration status, or any other characteristic that is associated with systemic discrimination or marginalization.
  • Violence – We don’t allow apps that depict or facilitate gratuitous violence or other dangerous activities.
  • Terrorist Content – We don’t allow apps with content related to terrorism, such as content that promotes terrorist acts, incites violence, or celebrates terrorist attacks.

And while users may be able to initially post such content — no system is perfect — an app with user-generated content like Truth Social (or Facebook or Twitter, for that matter) would need to be able to take down those posts in a timely fashion in order to be considered in compliance.

In the interim, the Truth Social app is not technically “banned” from Google Play — in fact, Truth Social is still listed for pre-order today, as Nunes also pointed out. It could still make changes to come into compliance, or it could choose another means of distribution.

Unlike on iOS devices, Android apps can be sideloaded or submitted to third-party app stores like those run by Amazon, Samsung, and others. Or, Truth Social could opt to do what the conservative social media app Parler did after its suspensions from the app stores last year. While Parler chose to make adjustments in order to return to Apple’s App Store, it now distributes the Android version of its app directly from its website — not the Play Store.

While Truth Social decides its course for Android, an examination of posts on Truth Social’s iOS version revealed a range of antisemitic content, including Holocaust denial, as well as posts promoting the hanging of public officials and others (including those in the LGBTQ+ community), posts advocating for civil war, posts in support of white supremacy, and many other categories that would seem to be in violation of Apple’s own policies around objectionable content and UGC apps. Few were behind a moderation screen.

It’s not clear why Apple has not taken action against Truth Social, as the company hasn’t commented. One possibility is that, at the time of Truth Social’s original submission to Apple’s App Store, the brand-new app had very little content for an App Review team to parse, and so had no violative content to flag. Truth Social does use content filtering screens on iOS to hide some posts behind a click-through warning, but TechCrunch found the use of those screens to be haphazard. While the content screens obscured some posts that appeared to break the app’s rules, the screens also obscured many posts that did not contain objectionable content.

Assuming Apple takes no action, Truth Social would not be the first app to grow out of the pro-Trump online ecosystem and find a home on the App Store. A number of other apps designed to lure the political right with lofty promises about an absence of censorship have also obtained a green light from Apple.

Social networks Gettr and Parler and video sharing app Rumble all court roughly the same audience with similar claims of “hands off” moderation and are available for download on the App Store. Gettr and Rumble are both available on the Google Play Store, but Google removed Parler in January 2021 for inciting violence related to the Capitol attack and has not reinstated it since.

All three apps have ties to Trump. Gettr was created by former Trump advisor Jason Miller, while Parler launched with the financial blessing of major Trump donor Rebekah Mercer, who took a more active role in steering the company after the January 6 attack on the U.S. Capitol. Late last year, Rumble struck a content deal with former President Trump’s media company, Trump Media & Technology Group (TMTG), to provide video content for Truth Social.

Many social networks were implicated in the Jan. 6 attack — both mainstream social networks and apps explicitly catering to Trump supporters. On Facebook, election conspiracy theorists flocked to popular groups and organized openly around hashtags including #RiggedElection and #ElectionFraud. Parler users featured prominently among the rioters who rushed into the U.S. Capitol, and Gizmodo identified some of those users through GPS metadata attached to their video posts.

Today, Truth Social is a haven for political groups and individuals that were ousted from mainstream platforms over concerns that they might incite violence. Former President Trump, who founded the app, is the most prominent deplatformed figure to set up shop there, but Truth Social also offers a refuge to QAnon, a cult-like political conspiracy theory that has been explicitly barred from mainstream social networks like Twitter, YouTube and Facebook due to its association with acts of violence.

Over the last few years alone, those acts include a California father who said he shot his two children with a speargun due to his belief in QAnon delusions, a New York man who killed a mob boss and appeared in court with a “Q” written on his palm, and various incidents of domestic terrorism that preceded the Capitol attack. In late 2020, Facebook and YouTube both tightened their platform rules to clean up QAnon content after years of allowing it to flourish. In January 2021, Twitter cracked down on a network of more than 70,000 accounts sharing QAnon-related content, with other social networks following suit and taking the threat seriously in light of the Capitol attack.

A report released this week by media watchdog NewsGuard details how the QAnon movement is alive and well on Truth Social, where a number of verified accounts continue to promote the conspiracy theory. Former President Trump; Truth Social CEO and former House representative Devin Nunes; and Patrick Orlando, CEO of Truth Social’s financial backer Digital World Acquisition Corporation (DWAC), have all promoted QAnon content in recent months.

Earlier this week, former President Trump launched a blitz of posts explicitly promoting QAnon, openly citing the conspiracy theory linked to violence and domestic terrorism rather than relying on coded language to speak to its supporters as he has in the past. That escalation, paired with the ongoing federal investigation into Trump’s alleged mishandling of highly sensitive classified information (a situation that has already inspired real-world violence), raises the stakes on a social app where the former president can communicate openly with his followers in real time.

That Google would take preemptive action to keep Truth Social from the Play Store while Apple is, so far, allowing it to operate is an interesting shift in the two tech giants’ policies on app store moderation and policing. Historically, Apple has taken a heavier hand in App Store moderation, culling apps that weren’t up to standards, were poorly designed, too adult or too spammy, or were operating in a gray area that Apple later decided needed enforcement. Why Apple is hands-off in this particular instance isn’t clear, but the company has come under intense federal scrutiny in recent months over its interventionist approach to the lucrative app marketplace.


Gen Z social app Yubo rolls out age ‘estimating’ technology to better identify minors using its service

Yubo, a social livestreaming app popular with a Gen Z audience, announced today it’s becoming one of the first major social platforms to adopt a new age verification technique that uses live image capture technology to identify minors using its app, in order to keep them separated from adult users. While other companies serving a younger crowd typically rely on traditional age gating techniques, these are easily bypassed as all that’s generally required is for a user to enter a birthdate in an online form.

Many kids know they can lie about their age to gain access to platforms designed for older users, which is how they end up in online spaces that aren’t kid-friendly or that have greater risks associated with their use.

Yubo, on the other hand, has been thinking about what the future of social networking should look like for the next generation of users — and not just from a product standpoint, but also from a product safety perspective.

Founded in 2015, Yubo lets users hang out in live-streaming rooms where they can socialize, play games, and make new friends. There aren’t creators on the platform broadcasting to fans, and Yubo has no plans to move in that direction, the way nearly all other major social platforms have today. Instead, the app’s focus is on helping users socialize naturally, the way they’re already comfortable with after having grown up using services like FaceTime and hanging out with friends in other live video apps.

According to Yubo co-founder and CEO Sacha Lazimi, Generation Z sees “no difference between online and offline life.”

“They have exactly the same needs of socializing offline as online, but there were no solutions [for this],” Lazimi explains. This led Yubo to introduce a live video feature, which launched to the app’s user base in February 2018.

“We are taking the best of offline interaction and adding to that the power of technology to make sure that you will connect to the right group of people anywhere in the world, at any time, in a safe environment,” he adds.

The company today has seen 60 million sign-ups, which is up from the 40 million it reported in 2020 when it closed on its $47.5 million Series C funding round. 99% of them are Gen Z users, ages 13 to 25.

While Yubo doesn’t share its monthly active users, it notes that it’s seeing increasing revenue via its a la carte premium features and subscriptions, which grew from 7 million euros in 2019 to 25 million euros as of last year. The app doesn’t run ads.

But with this younger audience and growth, comes the need for increased safety. Previously, Yubo had partnered with the digital identity provider Yoti to help it vet potentially suspicious users. If people were using different phone numbers or devices, for example, or if they had been reported by others, Yubo would ask them to verify themselves by submitting their IDs. The process of managing the ID verification was handled by Yoti.

On average, Yubo processed 6,500 verifications per day in 2021. Following this verification, 67,000 accounts per month were suspended due to discrepancies in age, the company says.

But there was one challenge with this system — minors often don’t have an ID.

“A lot of teenagers — especially under 18 years old — do not have any identity documents,” notes Lazimi. “So we could not ask everyone to verify their identity.”

Image Credits: Yubo

That led the company to adopt another Yoti product, this one for age estimation. The system directs new and existing users to an age verification and agreement screen, either during sign-up or as a pop-up upon launching the app. When they accept, their camera activates and they’re prompted to place their face within an oval that appears on the screen. A “liveness algorithm” also takes a short video and analyzes movement to confirm the image is not fake or pulled from a search engine.

When the face has been detected, the user either receives confirmation that they’ve been verified, or is told that their age doesn’t match the age they entered upon sign-up or that they’re not using a legitimate picture.

If the user’s age is confirmed, they’ll be directed to the homepage and can use Yubo as before. If verification fails, they’ll need to go through a full ID check instead.
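The decision flow Yubo describes (liveness check, then a comparison of the estimated age against the stated one, with a full ID check as the fallback) can be sketched roughly as follows. The function, field names and tolerance value are assumptions for illustration, not Yoti’s actual API.

```python
# A sketch of an age-verification decision flow. The 1.5-year default
# tolerance mirrors the accuracy figure Yoti cites for teens, but the
# whole function is a hypothetical illustration.

def verify_user(stated_age: int, estimated_age: float, is_live: bool,
                tolerance: float = 1.5) -> str:
    """Route a user through the age check: verified, fall back to a
    document check, or reject a non-live image."""
    if not is_live:
        return "rejected_not_live"      # image failed the liveness check
    if abs(estimated_age - stated_age) <= tolerance:
        return "verified"               # estimate matches the stated age
    return "full_id_check_required"     # fall back to an ID document check
```

Note how the fallback mirrors Yubo’s actual behavior: a failed estimate doesn’t lock the user out, it escalates to the heavier ID-based check.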

Image Credits: Yubo

The new technology, as you may imagine, is not perfect.

Lazimi admits that it works better with younger people’s faces than with adults. Currently, the Yoti age estimation system can effectively identify the ages of 6 to 12-year-old users within 1.3 years, and those between 13 and 19 within 1.5 years, Yoti claims. After that, accuracy decreases. For 20 to 25-year-olds, it’s accurate within a range of 2.5 years. For 26 to 30-year-olds, it’s within an average of 3 years. But this accuracy could improve over time, as more analysis is performed.

“It’s actually very accurate for young users, and especially users under 15…I believe for 13 to 14-year-old users, it’s around 99%,” he says. (It’s 98.9% accurate across all ages, genders, and skin tones, says Yoti.) To date, Yoti has run the technology across some 500 million faces, and is certified by software testing service iBeta.

“It’s less accurate for older users — that’s why we’ve launched with the youngest users, because those are the ones we want to protect more and also because it’s more accurate and more precise,” Lazimi says.

The company will initially roll out the technology to 13 and 14-year-old users, with the goal of age-verifying 100% of users by the end of 2022.

The age estimation tech is not the first tool that Yubo has adopted to keep younger users safe on live streams, the company points out.

It’s also using AI technology and human moderation to monitor livestreams by taking second-by-second screenshots, then flagging inappropriate content to human moderators in real time, including nudity, partial nudity (including underwear), suggestive content, drug use, weapons, blood, and violence. (You can see some complaints about this in Yubo’s App Store reviews, where teens complain it flagged boys for streaming with their shirts off.)

Yubo also includes educational safety features. For example, the app pops up reminders about personalized moderation options (like Muted Words) and sends alerts to users if they are engaging in harmful or inappropriate behaviors or sharing sensitive personal information. The company has a Safety Advisory Board with international online safety experts, as well.

“We are also working closely with government and NGOs because we believe that social networks need to have stronger regulation from the top,” says Lazimi. But, he adds, “we are not waiting for regulation to do safety features. We are doing it proactively.”


Pulse, the maker of an automatic Slack status updater, acquires team communication startup Lounge

Lounge, a team communication startup that was looking to reimagine the future of work with features designed for remote and distributed workforces, has found an exit after Slack entered the same market last year with competitive voice and video tools. Launched by former Life360 employees last year, Lounge confirmed it’s been acquired for an undisclosed sum by Pulse — another company building technology aimed at improving remote worker productivity, but specifically through features that automate updates to users’ Slack status via a combination of AI and custom rules.

Pulse was founded by Raj Singh, who has already had multiple exits, including to companies like Salesforce. In particular, Pulse was attracted to Lounge’s ideas around team-building activities for remote workers.

That was only a subset of what Lounge had offered, however. Its service was a more expansive platform with features like message boards, audio chat, group audio and visual representations of typical corporate office spaces — like employee desks, conference rooms and even break rooms. On top of this, it had built features designed to help remote workers connect and get to know one another, like photo-sharing tools and support for company-wide events — like steps or meditation challenges, for example.

According to Lounge CEO Alex Kwon, despite his startup’s unique features, it struggled against the lock-in and network effects of mainstream remote work platforms, like Slack and Microsoft Teams. But the final blow was Slack’s introduction of Huddles, its own lightweight audio feature that effectively took aim at one of Lounge’s biggest selling points: drop-in voice chats.

Initially, Lounge decided to pivot to focus more on asynchronous team-building activities with deeper Slack integration.

Kwon says Singh later reached out to him and explained how Pulse aimed to become a platform for displaying the status of each team member right inside Slack, where they’re already working — as opposed to having employees use a whole new platform like Lounge. Pulse was also looking to expand its solution to other workplace apps, like Microsoft Teams, Google Workspace and even Discord. Both parties agreed that establishing team-building activities as a social layer for this existing offering could make sense. It also served Lounge’s original vision to make remote teams feel closer.

“Raj is an experienced entrepreneur in the B2B space with a previous exit to Salesforce, so the acquisition conversation became a natural next step for us,” notes Kwon.

Pulse aims to use Lounge’s asynchronous team-building activities like its Steps challenge and Team photos feature as the social roadmap for Pulse’s ambient status updater right inside Slack, Kwon says. You could imagine, then, how companywide team-building challenges could be turned into Slack status updates the same way Pulse today updates your status based on the apps you use — like Zoom, Google Meet, Skype, Slack Huddles, etc. — and your calendar, working hours and more.
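As a rough illustration of how rule-based status derivation of this kind can work, the sketch below maps app activity, meetings and working hours to a status string. The rules, names and emoji are hypothetical; Pulse’s actual product combines such rules with AI and is more sophisticated.

```python
# A toy sketch of deriving an ambient Slack status from activity
# signals. All rules and labels here are illustrative assumptions.

from datetime import datetime
from typing import Optional

def derive_status(active_app: Optional[str], in_meeting: bool,
                  working_hours: range, now: datetime) -> str:
    """Map the current activity signals to a status string."""
    if now.hour not in working_hours:
        return "🌙 Off hours"            # outside configured working hours
    if in_meeting or active_app in {"Zoom", "Google Meet", "Skype"}:
        return "📅 In a meeting"          # calendar or meeting app active
    if active_app == "Slack Huddle":
        return "🎧 In a huddle"
    return "✅ Available"
```

One could imagine Lounge-style team challenges feeding into the same rule table, e.g. a steps challenge adding a temporary status the way a meeting app does.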

As a result of the acquisition, Kwon is the only Lounge member joining Pulse, and only as a part-time product adviser — a signal the acquisition was more about the tech than talent.

Deal terms weren’t revealed but we understand it to be more stock than cash, implying a smaller exit.

Lounge had raised $1.2 million in funding from investors including Unusual Ventures, Hustle Fund, Translink, Unpopular Ventures and other angels.

Snap further invests in AR Shopping with dedicated in-app feature, new tools for retailers

At Snap’s Partner Summit on Thursday, the Snapchat maker announced a number of new initiatives focused on using its AR technology to aid with online shopping. Most notably, the company is introducing a new in-app destination within Snapchat called “Dress Up” that will feature AR fashion and virtual try-on experiences, and it’s launching tools that will allow retailers to integrate with Snapchat’s AR shopping technology within their own websites and apps, among other updates designed to ease the process of AR asset creation.

The company has been making strides with AR-powered e-commerce over the past year, having given its computer vision-based “Scan” feature a more prominent placement inside the Camera section of the app and upgrading it with commerce capabilities. Earlier in 2022, Snap also rolled out support for real-time pricing and product details to enhance its AR shopping listings.

These improvements have yielded increased consumer engagement with AR commerce, Snap says. Since January 2021, more than 250 million Snapchat users have engaged with AR shopping Lenses more than 5 billion times, the company notes.

Today, Snap announced it will put AR technology more directly into retailers’ own hands by allowing them to use Snap’s AR try-on technology within their own mobile apps and websites, with Camera Kit for AR Shopping.

This AR SDK (software development kit) will bring catalog-powered shopping lenses into the retailer’s own product pages to allow their customers to virtually try on their clothing, accessories, shoes and more. At launch, the feature works on iOS and Android apps, but Snap says it will work “soon” on websites, as well.

Its first global partner to use the technology is Puma, which will allow shoppers to virtually try on its sneakers using the Camera Kit integration. Shoppers would simply point their phone at their feet to see the sneakers they’re considering appear in an AR view.

Retailers will also gain access to a new AR Image Processing technology in Snap’s 3D asset manager, which Snap says will make it easier and faster to build augmented reality shopping experiences. Through a web interface, brands will be able to select their product SKUs and then turn them into Shopping Lenses, allowing them to create new Lenses in seconds, and for no additional cost, Snap claims.

To do so, partners will upload their existing product photography for the SKUs they sell, which Snap’s tech will then process using a deep-learning module that turns them into AR Image assets. This process uses AI to segment the garment from the brand’s model photography, essentially turning standard photos into AR assets.

These assets can then be used to create new try-on Lenses which can be used by shoppers at home who take a full-body selfie photo.
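Conceptually, the pipeline Snap describes (product photo in, segmented garment out, wrapped as an AR-ready asset) reduces to something like the sketch below. All function names here are invented stand-ins; Snap’s real 3D asset manager and segmentation model are not public APIs.

```python
# A high-level sketch of a photo-to-AR-asset pipeline. The segmentation
# step is stubbed out; in the real system a deep-learning model isolates
# the garment from the brand's model photography.

def segment_garment(photo_bytes: bytes) -> bytes:
    """Placeholder for the deep-learning step that cuts the garment
    out of the product photo."""
    return photo_bytes  # stand-in: a real model would return a cutout

def build_ar_asset(cutout: bytes, sku: str) -> dict:
    """Wrap a segmented image as an AR-ready asset record."""
    return {"sku": sku, "asset": cutout, "type": "ar_image"}

def create_shopping_lens(skus, photos):
    """Turn a batch of product SKUs and photos into Lens-ready assets."""
    return [build_ar_asset(segment_garment(photo), sku)
            for sku, photo in zip(skus, photos)]
```

The batch shape matters here: because brands select SKUs in bulk through a web interface, the per-item cost of Lens creation drops to roughly zero, which is the “seconds, no additional cost” claim above.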

Virtual try-on using full-body images. Image Credits: Snap

The company is also adding new AR Shopping templates in its Lens Web Builder to turn those assets into Lenses more quickly, without the need to understand AR development. Select partners in apparel, eyewear and footwear can try this out in beta today, and Snap will later expand the feature to include furniture and handbags.

Related to this, Snap is giving AR shopping a bigger spot within its own app for consumers.

Snapchat will introduce a new in-app destination called “Dress Up,” where users can browse and discover new try-on experiences from creators, retailers and fashion brands in one place. “Dress Up” will first be available in Lens Explorer, but will soon be accessible one tap away from the Camera, via the AR Bar.

Snap’s Dress Up feature. Image Credits: Snap

Users will be able to return to outfits and other products they liked by navigating to a new shopping section from within their Profile, where they can view the items they’ve favorited, recently viewed and added to a cart.

Snap also says that Zenni Optical’s AR Lenses have been tried on over 60 million times by users, and Lenses that used Snap’s “true size” technology were shown to have driven a 42% higher return on ad spend compared to Lenses without the feature.

Finally, in the realm of virtual fashion, Snap’s Bitmoji is getting an update, too. There are now over 1 billion of these mini avatars created to date, which people like to dress up in virtual fashion items. Snap says fashion brand partners will now be able to drop “Limited Edition” fashion items for Bitmoji exclusively for Snapchat users.

Amazon’s cashierless Just Walk Out technology comes to Houston Astros’ Major League Baseball Stadium

Amazon announced today it will bring its cashier-free checkout technology, Just Walk Out, to a Major League Baseball stadium for the first time. The Houston Astros will introduce the Just Walk Out system to two stores at Minute Maid Park, allowing fans to purchase food and beverages.

19th Hole and Market, two food and beverage stands at the Astros’ home stadium, will use Just Walk Out, which relies on AI, sensors and computer vision to let customers shop without checking out, avoiding long checkout lines. Here’s how it works: customers insert their credit cards into the gates when entering the store. Picking up an item adds it to their virtual cart; placing it back on the shelf removes it. Customers don’t need to check out; when they leave the store, their credit cards are automatically charged for any items they took. Attendants will be available at the stores to help with shopping, and customers do have to show ID to an attendant in order to purchase alcohol.
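In simplified terms, the virtual-cart bookkeeping described above behaves like the sketch below, where sensor events add and remove items and the exit event triggers the charge. The class and method names are illustrative; Amazon’s actual system is driven by cameras and shelf sensors, not explicit calls.

```python
# A simplified sketch of Just Walk Out-style virtual-cart bookkeeping.
# Prices are in cents to avoid floating-point money arithmetic.

class VirtualCart:
    def __init__(self, card_token: str):
        self.card_token = card_token  # captured at the entry gate
        self.items = {}               # sku -> price in cents

    def pick_up(self, sku: str, price: int) -> None:
        """Sensor event: shopper takes an item off the shelf."""
        self.items[sku] = price

    def put_back(self, sku: str) -> None:
        """Sensor event: shopper returns an item to the shelf."""
        self.items.pop(sku, None)

    def exit_store(self) -> int:
        """Shopper leaves; return the total to charge to the card."""
        total = sum(self.items.values())
        # a real system would now charge self.card_token for `total`
        return total
```

For example, a shopper who grabs a soda and a candy bar but puts the candy back is charged only for the soda on exit.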

“We’re excited to work with the Houston Astros to offer fans a fast and convenient way to shop for their gameday essentials using Just Walk Out technology at the 19th Hole and Market stores,” Dilip Kumar, Vice President of Physical Retail & Technology at Amazon, said in a statement. “Our technology is designed to deliver a fast and frictionless shopping experience, so we’re thrilled to help eliminate checkout lines for fans when they need to refuel during games and between innings.”

“The Astros are proud to collaborate with Amazon to bring their Just Walk Out Shopping experience to Minute Maid Park in 2022,” Marcel Braithwaite, Senior Vice President, Business Operations for the Astros said in a statement. “We wanted to provide this state-of-the-art technology to our fans, giving them a more streamlined and convenient shopping experience so they can spend more time enjoying baseball.”

The ’19th Hole’ store is located on the Concourse level behind Section 156, and the Just Walk Out technology-enabled ‘Market’ store is located on the Honda Club level behind Section 211. Both stores offer a selection of snacks, soda, candy, and ready-to-drink alcoholic beverages, MLB said.

With the addition of these two baseball stadium stores, Amazon continues to expand its cashierless technology, which is available in Amazon Go and Fresh locations. Amazon began licensing the technology to third-party retailers in 2020. It has since been licensed by U.K. grocery store chain Sainsbury’s and is used in airport shops. Last year, select Whole Foods and Starbucks locations began using Just Walk Out as well.

Only days ago, Amazon announced it opened another Fresh grocery store in greater Washington, D.C. that came equipped with Just Walk Out technology. This followed other Fresh store openings this year including those in Moorpark, California; Naperville, Illinois; and Seattle, Washington, which also featured the technology.

Microsoft launches its AI-powered notetaking app Journal as an official Windows app

A little over a year after its initial release, a digital note-taking app called Journal is making the leap from being an experimental project housed within Microsoft’s internal incubator, Microsoft Garage, to becoming a full-fledged Microsoft Windows application. The company this week announced the note-taking app will now be available as “Microsoft Journal,” allowing users to capture their thoughts and create drawings using their digital pen on Windows tablets, 2-in-1s and other pen-capable devices.

The original idea behind Journal was to offer users an alternative to grabbing a pen and paper when inspiration strikes, while still allowing them to express themselves through writing. The concept was familiar to the company, which had first launched an ink-focused application called Journal back on its Tablet PC in 2002 and continued to release “ink” capabilities across apps like Whiteboard, OneNote, PowerPoint and more, the company explained at the time.

Journal, however, wanted to push the concept forward by combining the digital ink input with AI technologies.

The team trained the app’s AI to automatically recognize and categorize the things users write, including headings, starred items, keywords and even drawings. For some of the drawings and headings, the app puts a cue on the side of the page that users can tap to select the content and then take other actions like “move” or “copy.”

The AI also helped to improve the app’s search capabilities so you could pull up your old notes, lists, sketches and more, based on its understanding of your inked notes and content. And the AI helped to power new gestures, like scratch out and instant lasso — tools you could move between more easily, without mode switches.

Image Credits: Microsoft

Beyond its AI focus, Journal included drag-and-drop support for moving content to other pages or different applications; the ability to mark up PDFs; keyword search with filters; Microsoft 365 integration for meeting notes; using touch to scroll through pages or tap ink to select text; and more.

“We are entering an age of computer-aided reasoning, where AI accelerates the tasks that people do, and makes us all more productive,” said Stevie Bathiche, technical fellow and leader of Microsoft’s Applied Sciences, speaking about the app’s exit from Garage. “Journal shows just how powerful an experience can be when software anticipates your intentions. This is just the beginning.”

During its time as a Garage project, the team learned that users have their own individual preferences for how they interact with content using touch and a digital pen, but there wasn’t a clear winner as to the most preferred method. They also found that annotating documents was one of Journal’s biggest use cases, with PDF imports accounting for over half the pages created in the app.

With the app’s official launch, Journal has been updated with a Windows 11 look and feel, including new colors and materials. The team says its near-term focus is on addressing user feedback and a backlog of new features. The app is rolling out to users from April 5 through April 8, but can also be downloaded directly from the Microsoft Store. It works on both Windows 10 and 11 devices.

Google rolls out AI improvements to aid with Search safety and ‘personal crisis’ queries

Google today announced it will be rolling out improvements to its AI model to make Google Search a safer experience and one that’s better at handling sensitive queries, including those around topics like suicide, sexual assault, substance abuse and domestic violence. It’s also using other AI technologies to improve its ability to remove unwanted explicit or suggestive content from Search results when people aren’t specifically seeking it out.

Currently, when people search for sensitive information — like suicide, abuse or other topics — Google will display the contact information for the relevant national hotlines above its search results. But the company explains that people who are in crisis situations may search in all kinds of ways, and it’s not always obvious to a search engine that they’re in need, even if it would raise flags if a human saw their search queries. With machine learning and the latest improvements to Google’s AI model called MUM (Multitask Unified Model), Google says it will be able to automatically and more accurately detect a wider range of personal crisis searches because of how MUM is able to better understand the intent behind people’s questions and queries.

The company last year introduced its plan to redesign Search using AI technologies at its Search On event, but it hadn’t addressed this specific use case. Instead, Google then had focused on how MUM’s better understanding of user intent could be leveraged to help web searchers unlock deeper insights into the topic they’re researching, and lead users down new search paths. For example, if a user had searched for “acrylic painting,” Google could suggest “things to know” about acrylic painting, like different techniques and styles, tips on how to paint, cleaning tips and more. It also could point users to other queries they may not have thought to search for, like “how to make acrylic paintings with household items.” In this one example, Google said it could identify more than 350 different topics related to acrylic paintings.

In a somewhat similar way, MUM will now be used to help better understand the sort of topics that someone in crisis might search for, which aren’t always as obvious as typing in a direct cry for help.

“…if we can’t accurately recognize that, we can’t code our systems to show the most helpful search results. That’s why using machine learning to understand language is so important,” explained Google in a blog post.

For example, if a user searched for “Sydney suicide hot spots,” Google’s previous systems would understand the query to be one of information-seeking, because that’s how the term “hot spots” is often used, including in travel search queries. But MUM understands the query is related to people trying to find a jumping spot for suicide in Sydney and would identify this search as potentially being from someone in crisis, allowing it to show actionable information like suicide hotlines. Another suicide query that could see improvements from MUM is “most common ways suicide is completed,” which, again, Google would previously have understood only as an information-seeking search.

MUM also better understands longer search queries where the context is obvious to humans, but not necessarily to machines. For instance, a query like “why did he attack me when i said i dont love him” implies a domestic violence situation. But long, natural-language queries have been difficult for Google’s systems without the use of advanced AI.
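
To make the intent problem concrete, here is a toy sketch of why scoring a whole query against example phrasings can catch intent that simple keyword rules miss. This is in no way Google’s actual MUM system, which is a large multitask transformer model; the seed phrases and the Jaccard-overlap “model” below are illustrative stand-ins only.

```python
# Toy illustration only -- not Google's MUM. The point: comparing a whole
# query against example phrasings catches intent that keyword rules miss.
# Here the "model" is faked with token overlap (Jaccard similarity)
# against a handful of made-up seed queries.

CRISIS_SEEDS = [
    "i want to end my life",
    "places to jump from",
    "why did he attack me when i said no",
]

def jaccard(a: str, b: str) -> float:
    """Fraction of shared tokens between two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def looks_like_crisis(query: str, threshold: float = 0.2) -> bool:
    """Flag a query if it is close enough to any crisis seed phrase."""
    return max(jaccard(query, seed) for seed in CRISIS_SEEDS) >= threshold

# The long domestic-violence query from above is flagged even though it
# contains no explicit "crisis" keyword; an unrelated query is not.
print(looks_like_crisis("why did he attack me when i said i dont love him"))  # True
print(looks_like_crisis("best pizza in sydney"))  # False
```

A real system replaces the overlap score with learned representations of the query, but the shape of the decision (score the whole query against known intents, then act on a threshold) is the same.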

In addition, Google notes that MUM can transfer its knowledge across the 75 languages it’s been trained on, which helps it to more quickly scale AI improvements like this to worldwide users. That means it will be able to display the actionable information from trusted partners, like local hotlines, for these types of personal crisis searches to a broader audience.

This isn’t the first time MUM has been put to work to help direct Google Searches. MUM was previously used to improve searches for COVID-19 vaccine information, the company said. In the coming months, Google says it will use MUM to improve its spam protection features and will expand those to languages where it has little training data. Other MUM improvements will roll out soon, as well.

Another area getting a boost from AI technology is Google’s ability to filter explicit content from search results. Even when Google’s SafeSearch filtering technology is turned off, Google still attempts to reduce unwanted explicit content in searches where finding racy content wasn’t the goal, and its algorithms keep improving at this as users conduct hundreds of millions of searches globally.

But the AI technology known as BERT now works to help Google better understand whether people are seeking explicit content. The company says that, over the past year, BERT has reduced unwanted shocking search results by 30%, based on an analysis conducted by “Search Raters” who measured oversexualized results across random samples of queries for web and image search. The technology has also been particularly effective in reducing explicit content for searches related to “ethnicity, sexual orientation and gender,” where the analysis found unwanted results disproportionately impact women, and especially women of color.

Google says the MUM AI improvements will begin to roll out to Search in the coming weeks.

A.I. creative platform D-ID, the tech behind those viral videos of animated family photos, raises $25M

D-ID, the Israeli company leveraging A.I. to create unique, viral experiences like “Deep Nostalgia,” which animates the faces of long-lost relatives in your old photos, announced today it has raised a $25 million Series B round of funding led by Macquarie Capital. The new funding closely follows D-ID’s recent launch of its Deep Nostalgia follow-up project, LiveStory, which adds audio to the animated photos, allowing the people in the photos to narrate their own life histories.

The company also recently announced plans to debut its own A.I.-powered video greeting mobile app, Wishful.

Deep Nostalgia and LiveStory, both collaborations with MyHeritage, helped put D-ID’s name on the map, as the resulting animations went viral across social media platforms like TikTok, thanks to users sharing their emotional reactions to seeing their loved ones again in this new format. The MyHeritage mobile app is a testament to D-ID’s success here, sporting a 4.8-star rating across some 42.3K user reviews — a large number of them praising the A.I.-powered animations. To date, Deep Nostalgia alone has created nearly 100 million animations, the company says.

Earlier this month, D-ID expanded on the Deep Nostalgia technology to allow the people in the animated photos to “speak” by training a neural network using videos of people talking, in order to get the lips in the photos matched to the words the users provided. Though this technology is not quite advanced enough to enter deepfake territory, it’s akin to really good lip-syncing — or that’s how the company put it at the time.

However, D-ID’s broader business extends beyond consumer tech partnerships, like the one with MyHeritage or, in India, with short-form video app Josh, which used D-ID’s facial animation tech as a creative tool. Elsewhere, D-ID’s APIs have been used by a range of licensees across media, education, marketing, and more. It has worked with Warner Bros. to allow users to personalize a movie trailer with animated photos, and on a Harry Potter exhibition. Mondelēz International, advertising agency Publicis, and Digitas Vietnam partnered with D-ID on marketing efforts for a local festival. The company has also worked with nonprofit organizations and governments on public awareness campaigns covering topics such as domestic violence and HIV awareness.

As a private company, D-ID doesn’t discuss its revenue figures, but it says it’s growing quickly and has many more customers in the pipeline. This year, in particular, the company is expecting to see significant growth, notes D-ID co-founder and CEO Gil Perry. The funds will help on that front by allowing the currently 33-person company to staff up its sales and marketing teams in the U.S., APAC and EMEA regions, he says. Plus, D-ID is also looking to double the number of experts on its deep-learning and computer vision teams.

The main drivers of the company’s growth come from enterprise customers, specifically in the U.S., Perry told TechCrunch. This market is where D-ID now sees a large number of its API calls, related to the creation of the A.I.-powered videos made using its technology.

“Over the past year we have witnessed the skyrocketing success of our technology across so many applications and industries, with the power to generate so much good,” said Perry. “We are incredibly grateful for this new round of funding and strong partnership with Macquarie, which will enable us to scale up our business and our technology to the next level. We’re excited for what the future holds.”

Beyond A.I.-powered video, D-ID is also working on developing technologies for the metaverse. It has partnered with VR/AR platform maker The Glimpse Group to help develop A.I., AR and VR applications for the metaverse.

“We’re always looking 5 to 10 years ahead and we were much more aware about the metaverse — and we understand our capabilities [to be ahead of] other startups and the tech giants in this specific domain,” Perry said, in a recent interview. “What we’re planning to do is, first, we’re building full-body re-enactment in 3D…Two, we’re doing it fast.”

Perry believes that for the metaverse to take off, avatars will need to be further developed to become more realistic. They need to “look, behave, move and interact with each other in a super realistic way,” he said. In the future, he imagines, users could enter the metaverse to chat with not only their family members from the past, but could also meet and interact with famous figures — like school children asking Albert Einstein questions, and having him respond in their own language. “This is where D-ID is heading.”

Beyond its existing partnerships, Perry said D-ID is also in conversations with many “tech giants,” but couldn’t provide exact names, as discussions are ongoing. “We’re in advanced stages with strategic players — leading tech giants from the phone manufacturers, cameras, social networks, music industry video conferences, and such,” Perry hinted.

Joining Macquarie in the Series B round are Pitango, AXA, OurCrowd, OIF, and Maverick, bringing D-ID’s total raised to date to $48 million.

“Our purpose is about empowering people to innovate and invest for a better future. D-ID helps us imagine that future,” said David Standen, Global Co-Head of Macquarie Capital Venture Capital Group, in a statement. “We’re delighted to gain exposure to this rapidly growing sector. This technology offers so many uses across such a wide variety of industries. They’re true pioneers in a pioneering market. We know this is just the beginning for D-ID and Macquarie.”

Siri gains a new gender-neutral voice option in latest iOS update

Apple has developed a new Siri voice, now available in the beta versions of its iOS 15.4 software, that doesn’t sound obviously male or female. The decision to introduce a gender-neutral voice is one that sees the tech giant taking yet another step away from the criticism that, historically, digital assistants have reinforced unfair gender stereotypes.

Over the years, industry observers and experts have argued that the creation of voice assistants with female-sounding names — like Alexa, Siri and Cortana — that also speak with female-sounding voices implied that women should be the ones to do your bidding at any time, and even take your abuse. A U.N. study additionally called out the female-voiced assistants for their submissive and sometimes even flirty and coy styles.

More problematically, the decision to make so many of the virtual assistants female by default was likely driven by a lack of diversity in the teams responsible for building our everyday technology. That issue doesn’t just lead to thoughtless choices with A.I. voices, it has also delayed the advance of useful tools for women. For example, it took years for Apple to realize that its Health app should probably include a period-tracking feature, considering it’s a health measure relevant to roughly half the human population.

Apple, to its credit, did address concerns with the Siri voice last year when it issued an update that added more diverse voices and, notably, also made it so Siri’s voice would no longer default to being female.

But what if you didn’t have to think about the gender of your A.I. voice assistant at all?

That’s clearly the intention here with the addition of the new and now fifth Siri voice, though Apple hasn’t yet explicitly said that’s the case.

However, the iOS software’s code provides some hints towards Apple’s thinking.

Developer Steve Moser found a reference to a gender-neutral Siri voice in earlier versions of the iOS 15.4 beta, and this week he noted the fifth American Siri voice was added to Beta 4 with the filename of “Quinn.”

Quinn, a name with Irish origins, is a well-known gender-neutral name that has been used over the years for both boys and girls. It’s not a coincidence that it happens to also be the name for the new Siri voice. (Apple doesn’t display the voices’ filenames to end users, though — they’re identified in the user interface as just Voice 1, Voice 2, Voice 3, and so on.)

You may end up hearing Quinn’s voice and decide it sounds a bit more female or male to your ears. Though if you set your mind to hear it one way or the other, your interpretation may change to reflect your thinking.

What’s more, the new voice comes across as gender-neutral without reverting to some sort of robotic cadence. The voice still sounds human, with the same natural inflection and smooth transitions heard in the other Siri voices, both new and old.

Apple tells TechCrunch the new voice was recorded by a member of the LGBTQ+ community. It leverages Neural Text to Speech (Neural TTS) technology to achieve its natural sound. All the English-speaking voices use Neural TTS, as do the voices in six other languages (French, German, Spanish, Chinese, Japanese, and Korean). In total, Siri users can choose from 16 different languages when setting up their device and choosing their preferred Siri voice.

When it comes to inclusion, Apple hasn’t just focused on Siri’s voice but also on what the digital assistant says. Over the past several years, Apple added Siri responses about Black Lives Matter and Stop Asian Hate, and introduced strong responses to abusive gender or sexuality-based utterances. Apple also rolled out more accessible voice features like Speak Screen, Dictation, and Voice Control.

“We’re excited to introduce a new Siri voice for English speakers, giving users more options to choose a voice that speaks to them,” an Apple spokesperson said, in response to our inquiries about the new Siri voice. “Last year we introduced two new voices and removed the set voice default as part of Apple’s long-standing commitment to develop products and services that better reflect the diversity of the world we live in. Millions of people around the world rely on Siri every day to help get things done, so we work to make the experience feel as personalized as possible,” they said.

The new voice option will roll out to English speakers with iOS 15.4, which is expected to arrive sometime in March.

Google’s Area 120 debuts Checks, an AI-powered privacy compliance solution for mobile apps

A team at Google is today launching a new product for mobile app developers called Checks, which leverages A.I. technology to identify possible privacy and compliance issues within apps amid a rapidly changing regulatory and policy landscape. The freemium solution will be offered to Android and iOS app developers of all sizes, who will be able to have their apps analyzed and then receive a report with actionable insights about how to address the problems that are found.

Checks was co-founded by Fergus Hurley (GM) and Nia Castelly (Legal Lead), who developed the project over the past two years as a part of Google’s in-house incubator, Area 120. The Checks team had previously built tools like Android Vitals to address developers’ technical challenges, and had the idea to use A.I. to now address privacy compliance challenges, as well.

Today’s app developers have to keep up with a number of newer regulations and policies, from Europe’s GDPR requirements to new rules implemented by the app stores themselves. Meanwhile, consumers have become savvier about the trade-offs involved in using free software — they now often want to know to what extent an app respects their privacy, how their data is accessed, stored, or shared, and more. And even if a developer’s app plays by all the rules, an SDK the developer uses may not — or the SDK’s data-sharing behavior may change over time — presenting another compliance challenge.

Image Credits: Google

With Checks, the idea is to make achieving compliance an easier process than it is today. To use Checks, developers submit their app for a privacy compliance analysis, which involves an automated review and, on some tiers of service, a human review.

To get started, Android app developers can log in using their Google account, then provide their Google Play app ID. They’ll then answer a few questions and verify their access. Checks will scan across multiple sources of information, including the app’s privacy policy, SDK information, and network traffic, to generate its report. The solution also takes advantage of advances the team made with using Natural Language Processing to scan an app’s privacy disclosures. After the scan completes, developers are presented with a report that provides clear, actionable insights about the problems found and lists of resources.
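
As a rough sketch of what a disclosure-mismatch check of this kind might look like, consider comparing the data types an app is observed collecting against what its privacy policy mentions. This is purely illustrative — Google hasn’t published Checks’ internals — and the policy text and detected data types below are hypothetical.

```python
# Purely illustrative sketch -- Checks' actual analysis is not public.
# Idea: compare data types an app is observed collecting (e.g. from SDK
# metadata or network traffic) against what its privacy policy mentions,
# and flag anything undisclosed. All inputs below are made up.

POLICY_TEXT = "We collect your email address and approximate location."

# Pretend these came from an SDK / network-traffic scan.
DETECTED_DATA_TYPES = {"email address", "precise location", "advertising id"}

def undisclosed(policy: str, detected: set) -> set:
    """Return detected data types the policy text never mentions."""
    text = policy.lower()
    return {d for d in detected if d not in text}

print(sorted(undisclosed(POLICY_TEXT, DETECTED_DATA_TYPES)))
# ['advertising id', 'precise location']
```

A production system would use natural language processing rather than naive substring matching — which is in fact what the report above says Checks does when scanning privacy disclosures — but the core comparison between observed behavior and disclosed behavior is the same.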

The free tier can be used for completing Google Play’s new Data safety section, while paid tiers — Core, Premium, and Enterprise — are designed to meet the needs of professional developers and larger businesses, including those who develop on iOS.

There are no technical requirements or prerequisites for using Checks, which runs its analysis on both physical and virtual devices.

The $249/month Core offering adds compliance monitoring for regulations like GDPR and the California Consumer Privacy Act (CCPA), and proactive notifications about upcoming compliance requirements. Premium users ($499/month) can automate the monitoring of their app’s data-sharing practices and gain an understanding of SDKs, permissions, and where app data-sharing is taking place, among other things. Enterprise users (5+ apps and custom pricing) receive more frequent, advanced, and in-depth privacy checks, which include access to a compliance review team, plus custom analysis and testing flows, and more.

Checks says the data and reports it generates are not shared with the Google Play team.

The team gathered feedback from hundreds of app developers to build Checks, then worked with 40 early adopters to test the product ahead of its launch. Testers included Headspace, Sesame Workshop, StoryToys, Carb Manager, Homer, and Lose It, among others.

Now, Checks is opening to a wider audience — interested developers can fill out the online form to register their interest on the Checks website.