Picsart acquires R&D company DeepCraft in seven-figure deal to aid video push

SoftBank-backed digital creation platform Picsart, which recently hit unicorn status, announced today that it’s acquiring the research and development company DeepCraft. The deal is a combination of cash and stock in the seven-figure range, but the exact terms aren’t being disclosed.

Picsart today offers a range of digital creation and editing tools, aimed at consumers and professionals alike, that make photo and video editing more fun and approachable. The company believes DeepCraft’s A.I. technical talent and its breakthroughs in computer vision and machine learning will enhance Picsart’s own A.I. technology and help the company better support the recent growth of video creation on its service. The team will also complement Picsart’s A.I. research and development arm, PAIR (Picsart AI Research), with additional senior resources, the company says.

Founded in 2017, Armenia-based DeepCraft specialized in video and image processing. Its co-founders, Armen Abroyan (CEO) and Vardges Hovhannisyan (CTO), have spent more than 20 years in A.I. and machine learning and are well known in their local community for their expertise. Abroyan previously served as Deputy Minister of High-Tech Industry for the Republic of Armenia, Lead AI Architect at RedKite, and Senior Software Developer at Synopsys. Hovhannisyan, meanwhile, spent 13 years as a senior R&D engineer at Synopsys.

At DeepCraft, the team worked with a number of clients on a contract basis, including Krisp, PatriotOne, and even the Armenian government. That work has now wrapped up, and the team will begin working from Picsart’s offices in Yerevan. In total, DeepCraft’s full team of eight senior machine learning and video engineers will join Picsart as full-time employees as a result of the deal.

Picsart first entered the video market in 2018 with the acquisition of EFEKT (previously D’efekt) and has seen usage surge in recent years — particularly as its app has been adopted by social media creators and e-commerce shops that use video. So far in 2021, Picsart has seen more than 180 million videos edited in its app — a 70% increase year-over-year. It now offers thousands of effects and dozens of video editing tools, and plans to grow this lineup as A.I. and cloud technology evolve, it says.

With DeepCraft, Picsart is particularly interested in how the team’s skillset and technology expertise can help it move forward with its support of video, which the company says will be a significant focus in 2022.

However, Picsart is not acquiring specific IP from DeepCraft as part of this deal, the company told TechCrunch.

Picsart already had a relationship with DeepCraft ahead of the deal, as the two had been collaborating on various technology developments.

“DeepCraft is a unique team of deep technology engineers, and we’ve already been working with them to build our core technologies for over a year,” said Picsart co-founder and CTO Artavazd Mehrabyan. “As we invest even further into advancing our video capabilities, we are confident the DeepCraft team will play a significant role in building the future of video,” he added.

The DeepCraft deal is the first acquisition from Picsart since raising its $130 million Series C round in August led by SoftBank’s Vision Fund 2. The round lifted the company to unicorn status, up from its prior valuation of around $600 million in 2019.

Alphabet CEO Sundar Pichai calls for federal tech regulation, investments in cybersecurity

In a wide-ranging interview at the WSJ Tech Live conference that touched on topics like the future of remote work, A.I. innovation, employee activism, and even misinformation on YouTube, Alphabet CEO Sundar Pichai also shared his thoughts on the state of tech innovation in the U.S. and the need for new regulations. Specifically, Pichai argued for the creation of a federal privacy standard in the U.S., similar to the GDPR in Europe. He also suggested it was important for the U.S. to stay ahead in areas like A.I., quantum computing, and cybersecurity, particularly as China’s tech ecosystem further separates itself from Western markets.

In recent months, China has been undergoing a tech crackdown which has included a number of new regulations designed to combat tech monopolies, limit customer data collection, and create new rules around data security, among other things. Although many major U.S. tech companies, Google included, don’t provide their core services in China, some who did are now exiting — like Microsoft, which just this month announced its plan to pull LinkedIn from the Chinese market.

Pichai said this sort of decoupling of Western tech from China may become more common.

He also said it would be important to stay ahead in areas where the U.S. and China compete, like A.I., quantum computing, and cybersecurity, noting that Google’s investments in these areas come at a time when governments are slightly pulling back on “basic R&D funding.”

“The government has limited resources and it needs to focus,” noted Pichai, “but all of us are benefiting from foundational investments from 20 to 30 years ago — which is what a lot of the modern tech innovation is based on, and we take it for granted a bit,” he said. “So when I look at, be it semiconductor supply chain [and] quantum…the government can play a key role, both in terms of policies and allowing us to bring in the best talent from anywhere in the world, or participating with universities and creating some of the longer-term research areas,” Pichai added. These are areas that private companies may not focus on from day one, but that play out over 10 to 20 years, he said.

In the wake of increased cyberattacks across borders, Pichai said that the time had come for a sort of “Geneva Convention equivalent” for the cyber world, adding that governments should put security and regulation higher on their agendas.

He more directly argued in favor of new federal privacy regulations in the U.S. — something Google has pushed for many times in the past — suggesting that something like the GDPR in Europe is warranted.

“I think the GDPR has been a great foundation,” said Pichai. “I would really like to see a federal privacy standard in the U.S., and [I’m] worried about a patchwork of regulations in states. That adds a lot of complexity,” he continued, noting that “larger companies can cope with more regulations and entrench themselves, whereas for a smaller company to start, it can be a real tax.”

That’s a point that’s been consistently brought up when Facebook’s CEO Mark Zuckerberg calls for regulation, too. A more regulated U.S. tech industry could work in favor of larger companies like Facebook and Google which have the resources to address the regulatory hurdles. But a single federal standard could also give big tech only one law to battle against, instead of many scattered across the U.S. states.

Pichai additionally tied consumer privacy to security, even noting that “one of the biggest risks to privacy is the data getting compromised” — an interesting statement coming only days after Amazon, a top Google rival, saw its game streaming site Twitch hacked.

As for where to draw the line in regulating tech, Pichai said the law shouldn’t encroach on the open internet.

“I think the internet works well because it’s interoperable, it’s open, it works across borders, promotes trade across borders…and so, as we evolve and regulate the internet, I think it’s important to preserve those attributes,” he noted.

The exec also responded to many other questions about ongoing issues Alphabet and Google are facing, like the pandemic impacts to corporate culture, employee activism, misinformation on YouTube, and more.

On the latter, Pichai expressed a commitment to freedom of expression but noted that, at the end of the day, the company was trying to balance content creators, users, and advertisers. He said many brand advertisers would not want their ads to appear next to some types of content. Essentially, he suggested that the nature of YouTube’s ads-based economy could help to solve the misinformation problem.

“You can look at it from a free-market basis and say, [advertisers] don’t want their ads next to content because they think it’s brand-negative. So, in some ways, the incentives of the ecosystem actually help get to the right decision over time.”

He sidestepped the interviewer’s question as to whether YouTube was basically acting as a publisher as it made its content decisions, however.

Pichai also talked about Alphabet’s corporate culture in the pandemic era and going back to the office, saying that a three-two model (meaning three days of in-person vs. two days remote) can offer better balance. The in-person days allow for collaboration and community, while the remote days help employees better manage the issues that traditionally came with in-person work, like longer commutes. However, in another part of the interview, he spoke of missing his own commute, now that he does it less, saying it was time where he had the space for “deeper thinking.”

As for employee activism — which is seeing more activity as of late as tech companies grapple with large and diverse staffs who often share contradictory opinions on the decisions made at the executive level — Pichai says this is the “new normal” for business. But it’s also nothing new for Google, he pointed out. (Years ago, Google employees were protesting the company’s work on a censored search engine for the Chinese market, for instance.)

“If anything, we’ve been used to it for a while,” said Pichai, noting that the best the company could do is to try to explain its decisions.

“I view it as a strength of the company, at a high level, having employees be so engaged they deeply care about what the company does,” he said.

Marc Lore-backed ‘conversational commerce’ startup Wizard raises $50M Series A from NEA

Marc Lore, who earlier this year stepped down from his role as Walmart’s head of U.S. e-commerce, is now backing a new startup in the e-commerce space called Wizard. Lore has taken on the roles of co-founder, chairman of the board, and investor in Wizard, a B2B startup in the “conversational commerce” space that believes the future of mobile commerce will take place over text. Ahead of its official launch, Wizard today is announcing its $50 million Series A, led by NEA’s Tony Florence.

Lore and Accel also participated in the round. Florence, Lore, and Accel’s Sameer Gandhi have board seats alongside Wizard co-founder and CEO Melissa Bridgeford.

The startup has an interesting founding story, as it’s not quite as new as it would have you believe.

Bridgeford, who left a finance career in New York, founded and ran Austin-based Stylelust, a text-based shopping platform that aimed to offer a shopping assistant for consumers. Its users could text screenshots and photos and be served recommendations of products they could then buy over text, without visiting a website. Stylelust took advantage of AI and image recognition capabilities to help provide consumers with options of what to buy. There was also a B2B component to Stylelust, which promised brands a “one-text checkout” experience. According to a cached version of its website, the company touted a 35% conversion rate — 10x higher performance than web-based commerce.

Wizard says it “acquired” Stylelust, but the entire team (minus a few new C-suite hires made in September) consists of prior Stylelust employees. Wizard did not have a product in the market at the time of the acquisition.

Technically speaking, it’s a brand-new company — and one that now has the ability to lean on Lore’s experience in e-commerce as well as that of top-tier investors.

Bridgeford described Wizard as an opportunity “to build our vision on a much larger scale and to partner with Marc, who’s really a tremendous visionary in retail tech and really a proven founder and a proven operator.”

“We really share the vision that conversational commerce is the future of retail,” Bridgeford adds.

The company isn’t yet willing to talk in detail about its product, however. Instead, it describes the B2B service as one that will enable brands and retailers to transact with consumers over text. The service is positioned as “an end-to-end shopping experience” on mobile from opt-in to search to payments and shipping and even reorders.

These text-based chats won’t feel like the annoying interactions you may have had with messaging app chatbots in the past, Bridgeford claims.

“What we’ve found is a combination of automation and human touch really provides the optimal experience for users, while also building a powerful technology on the backend that’s built to scale. That’s really where the Holy Grail is,” she explains. “And that’s really what we see for the future of conversational commerce…we’re incorporating chat abilities, natural language processing — all of those technologies are moving very quickly.”

In other words, the frustrating experience you may have had with a chatbot a year or two ago, may not be the experience you would have today.

“The goal of the technology is to make it seem like you are speaking with a human, when it’s really technology-enabled,” Bridgeford adds.

Stylelust also brought its brand relationships to Wizard as part of the deal.

An earlier version of the Stylelust website listed clients including Laughing Glass Cocktails, Desolas Mezcal, Pinhook Bourbon, Marsh House Rum and Neft Vodkas. A focus on wine and sports retail was also mentioned in an Austin Biz Journal feature. However, a write-up about Florida Funders’ backing of Stylelust in 2020 noted relationships with top-tier retailers like Neiman Marcus, Walmart, Sephora and Allbirds.

It’s unclear which relationships will continue with Wizard or whether it will continue to focus on the alcohol brands or other retailers, as the company declined to discuss any details related to its business beyond the funding.

The startup plans to use the funds to hire in areas like AI, machine learning and natural language processing, as well as in non-tech roles, like sales, finance and operations. One of the key hires it’s still looking to make is a chief people officer. Though the current team is working in offices based in both New York and Austin, Wizard is hiring nationwide to fill roles on its remote tech team, it says.

Wizard already has some competitors whose services address certain aspects of its business, particularly in the text marketing space. But more broadly, there are other ways that consumers interact with brands over messaging which could evolve into more fully formed products over time, too. Today, consumers often discover products on social media, like Facebook and Instagram, then turn to Messenger or DMs for product questions. WhatsApp is building out a product catalog for businesses that enables consumers to discover products and services directly in the app. Even Apple entered the market with Business Chat, which already allows for purchases made through iMessage chats.

Wizard’s focus on SMS, instead of requiring a dedicated messaging app or, say, an iPhone with iMessage, could help it differentiate from competitors. Still, betting on SMS — increasingly a home to text-based spam and scams — is riskier. But it’s a bet Lore is willing to make.

“Having spent most of my career so far in e-commerce, it’s been clear that conversational commerce is the future of retail,” said Lore. “With deep learning becoming more pervasive, the ability to create a hyper-personalized, conversational shopping experience is going to transform how people shop — and I’m confident that what Melissa and the team at Wizard are building will lead that transformation.”

Google is redesigning Search using A.I. technologies and new features

Google announced today it will be applying A.I. advancements, including a new technology called Multitask Unified Model (MUM), to improve Google Search. At the company’s Search On event, the company demonstrated new features, including those that leverage MUM, to better connect web searchers to the content they’re looking for, while also making web search feel more natural and intuitive.

One of the features being launched is called “Things to know,” which will focus on making it easier for people to understand new topics they’re searching for. This feature understands how people typically explore various topics and then shows web searchers the aspects of the topic people are most likely to look at first.

Image Credits: Google

For example, Google explained, if you were searching for “acrylic painting,” it may suggest “Things to know” like how to get started with painting, step-by-step, or the different styles of acrylic painting, tips about acrylic painting, how to clean acrylic paint, and more. In this example, Google is able to identify over 350 different topics related to acrylic painting, it notes.

This feature will launch in the coming months, but Google notes it will also be expanded in the future by using MUM to help web users unlock even deeper insights into the topic beyond what they may have thought to look for — like “how to make acrylic paintings with household items.”

Image Credits: Google

The company is also developing new ways to help web users both refine and broaden their searches without having to start over with a new query.

To continue the acrylic painting example, Google may offer to connect you to information about specific painting techniques, like puddle pouring, or art classes you could take. You could then zoom into one of those other topics in order to see a visually rich page of search results and ideas from across the web, including articles, images, videos, and more.

These pages are meant to better compete with Pinterest, it seems, as they’re also able to help people become inspired by searches — similar to how Pinterest’s image-heavy pinboard aims to turn people’s visual inspiration into action — like visiting a website or making an online purchase.

Google says the pages will be useful for searches where users are “looking for inspiration,” like “Halloween decorating ideas” or “indoor vertical garden ideas” or other ideas to try. This feature can be tried out today on mobile devices.

Google is also upgrading video search. Already, the company uses A.I. to identify key moments inside a video. Now, it will take things further with the launch of a feature that will identify the topics in a video — even if the topic isn’t explicitly mentioned in the video — then provide links that allow users to dig deeper and learn more.

Image Credits: Google

That means when you’re watching a YouTube video, MUM will be used to understand what the video is about and make suggestions. In one example, a video about macaroni penguins might point users to a range of related videos, like those that talk about how macaroni penguins find their family members and navigate predators. MUM can identify these terms to search for even if they’re not explicitly said in the video.

This feature will roll out in an initial version on YouTube Search in the weeks ahead, and will be updated to include more visual enhancements in the coming months, says Google.

This change could also help to drive increased search traffic to Google by leveraging YouTube’s sizable reach. Many Gen Z users already search for online content differently than older generations, studies have found. They tend to use multiple social media channels, have a mobile-first mindset, and are engaged with video content. A “Think with Google” study, for instance, found that 85% of Gen Z teenagers use YouTube regularly to find content, while 80% said YouTube videos had successfully taught them something. Other data has demonstrated that Gen Z prefers to learn about new ideas and products through video as well, not text, native ads, or other content formats.

For Google, this sort of addition may be necessary because the shift to mobile is impacting its search dominance. Today, many mobile shopping searches start directly on Amazon. Plus, when iPhone users need to do something specific on their phone, they often turn to Siri, Spotlight, the App Store, or a native app to get help.

Google also today unveiled how it’s using MUM technology to improve visual searches using Google Lens.

Alexa’s new features will let users personalize the A.I. to their own needs

Amazon is preparing to roll out a trio of new features that will allow consumers to further personalize their Alexa experience by helping train the Alexa A.I. using simple tools. In a few months’ time, consumers will be able to teach Alexa to identify specific sounds in their household, such as a ringing doorbell or an Instant Pot’s chime, for example. Or, for Ring users, the A.I. could notice when something has visually changed — like when a door that’s meant to be closed is now standing open. Plus, consumers will be able to more explicitly direct Alexa to adjust to their personal preferences around things like favorite sports teams, preferred weather app, or food preferences.

The features were introduced today at Amazon’s fall event, where the company is announcing its latest Echo devices and other new hardware.

The new sound identifying feature builds on something Alexa already offers, called Alexa Guard. This feature can identify certain sounds — like glass breaking or a fire or carbon monoxide alarm — which can be helpful for people who are away from home or for those who are hard of hearing or Deaf, as it helps them to know there is a potential emergency taking place. With an upgraded subscription, consumers can even play the sound of a barking dog when a smart camera detects motion outside.

Now, Amazon is thinking of how Alexa’s sound detection capability could be used for things that aren’t necessarily emergencies.

Image Credits: Amazon

With a new feature, consumers will be able to train Alexa to hear a certain type of sound that matters to them. This could be a crock pot’s beeping, the oven timer, a refrigerator that beeps when left open, a garage door opening, a doorbell’s ring, the sound of water running, or anything else that makes a noise that’s easy to identify because it generally sounds the same from time to time.

By providing 6 to 10 samples, customers can have Alexa “learn” what a noise is — a big reduction from the thousands of samples Amazon has used in the past to train Alexa on other sounds. Customers will be able to teach Alexa a new custom sound directly from their Echo device or through the Alexa mobile app, Amazon says.

The enrollment and training process will take place in the cloud, but detection of the sound going forward will happen on the device itself, and Amazon will not send the audio to the cloud after enrollment.
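Amazon hasn’t published how this works under the hood, but few-shot sound detection is commonly implemented by averaging embeddings of the enrollment samples into a single “prototype,” then comparing live audio against it on-device. A minimal illustrative sketch — the toy embedding function, threshold, and synthetic “doorbell” signal below are all stand-ins, not Amazon’s actual system:

```python
import numpy as np

def embed(audio: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Toy stand-in for an on-device audio embedding model:
    summarizes the magnitude spectrum into a few frequency bands."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, n_bands)
    vec = np.array([b.mean() for b in bands])
    return vec / (np.linalg.norm(vec) + 1e-9)

def enroll(samples) -> np.ndarray:
    """'Enrollment': average the embeddings of the 6-10 user-provided
    samples into one prototype (the step that would run in the cloud)."""
    proto = np.stack([embed(s) for s in samples]).mean(axis=0)
    return proto / (np.linalg.norm(proto) + 1e-9)

def detect(audio: np.ndarray, prototype: np.ndarray, threshold: float = 0.95) -> bool:
    """On-device detection: cosine similarity of live audio vs. the prototype."""
    return float(embed(audio) @ prototype) >= threshold

# Synthetic demo: a 440 Hz "doorbell" tone vs. broadband noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
doorbell = lambda: np.sin(2 * np.pi * 440 * t) + 0.05 * rng.standard_normal(t.size)

proto = enroll([doorbell() for _ in range(8)])
matched = detect(doorbell(), proto)                   # similar sound matches
unmatched = detect(rng.standard_normal(t.size), proto)  # noise does not
```

A detection like `matched` above is what would then fire the user’s notification or routine.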

Once trained, users can then choose to kick off their own notifications or routines whenever Alexa hears that noise. Again, this could help from an accessibility standpoint or with elder care, as Alexa could display a doorbell notification on their Fire TV, for instance. But it could also just serve as another way to start everyday routines — like when the garage door sounds, Alexa could trigger a personalized “I’m Home” routine that turns on the lights and starts your favorite music.

Amazon says Custom Sound Event Detection will be available next year.

Along similar lines, consumers will also be able to train the A.I. in their Ring cameras to identify a region of interest in the camera feed, then determine if that area has changed. This change has to be fairly binary for now — like a shed door that’s either open or closed. It may not be able to handle something more specific where there is a lot of variation.

This functionality, called “Custom Event Alerts,” will start rolling out to Ring Spotlight Cam Battery customers in the coming months.

Finally, another Alexa feature will allow the smart assistant to learn a user’s preferences related to food, sports, or skill providers. (Skills are the third-party voice apps that run on Alexa devices.) Consumers will be able to say something like, “Alexa, learn my preferences,” to start teaching Alexa. But the learning can be done in subtler ways, too. For instance, if you ask Alexa for nearby restaurants, you could then say something like, “Alexa, some of us are vegetarian” to have steakhouses removed from the suggestions.

Meanwhile, after Alexa learns about your favorite sports teams, the A.I. will include more highlights from the teams you’ve indicated you care about when you ask for sports highlights.

And after you tell Alexa which third-party skill you’d like to use, the A.I. assistant will default to using that skill in the future instead of its own native responses.

For now, though, only third-party weather skills are supported, but Amazon wants to expand this to more skills over time. This could help address skills’ low usage, as people often can’t remember which skills they want to launch. It would allow for a more “set it and forget it” type of customization: you find a good skill, set it as your default, then just speak using natural language (e.g., “What’s the weather?”) without having to remember the skill by name going forward.

Amazon says that this preference data is only associated with the customer’s anonymized customer ID, and it can be adjusted. For example, if a vegetarian goes back to meat, they could say “Alexa, I’m not a vegetarian” the next time Alexa returns their restaurant suggestions. The data is not being used to customize Amazon.com shopping suggestions, the company said.

This preference teaching will be available before the end of the year.

Amazon says these features represent further steps towards its goal of bringing what it calls “ambient intelligence” to more people.

Ambient A.I., noted Rohit Prasad, SVP and head scientist for Alexa, “can learn about you and adapt to your needs, instead of you having to conform to it.”

“Alexa, to me, is not just a spoken language service. Instead, it is an ambient intelligence service that is available on many devices around you to understand the state of the environment, and even acts proactively on your behalf,” he said.


Tape It launches an AI-powered music recording app for iPhone

Earlier this year, Apple officially discontinued Music Memos, an iPhone app that allowed musicians to quickly record audio and develop new song ideas. Now, a new startup called Tape It is stepping in to fill the void with an app that improves audio recordings by offering a variety of features, including higher-quality sound, automatic instrument detection, support for markers, notes and images, and more.

The idea for Tape It comes from two friends and musicians, Thomas Walther and Jan Nash.

Walther had previously spent three and a half years at Spotify, following its 2017 acquisition of the audio detection startup Sonalytic, which he had co-founded. Nash, meanwhile, is a classically trained opera singer, who also plays bass and is an engineer.

They’re joined by designer and musician Christian Crusius, previously of the design consultancy Fjord, which was acquired by Accenture.

The founders, who had played in a band together for many years, were inspired to build Tape It because it was something they wanted for themselves, Walther says. After ending his stint at Spotify working in its Soundtrap division (an online music startup Spotify also bought in 2017), he knew he wanted to work on a project more focused on the music-making side of things. But while Soundtrap worked for some, it wasn’t what Walther or his friends needed. Instead, they wanted a simple tool that would allow them to record their music with their phone — something musicians often do today using Apple’s Voice Memos app and, briefly, did with Music Memos until its demise.

Image Credits: Tape It

“Regardless of whether you’re an amateur or even like a touring professional…you will record your ideas with your phone, just because that’s what you have with you,” Walther explains. “It’s the exact same thing with cameras — the best camera is the one you have with you. And the best audio recording tool is the one you have with you.”

That is, when you want to record, the easiest thing to do is not to get out your laptop and connect a bunch of cables to it, then load up your studio software — it’s to hit the record button on your iPhone.

The Tape It app allows you to do just that, but adds other features that make it more competitive with its built-in competition, Voice Memos.

When you record using Tape It, the app leverages AI to automatically detect the instrument, then annotates the recording with a visual indication, making recordings easier to find by looking for the colorful icon. Musicians can also add their own markers to the files as they record them, then add notes and photos to remind themselves of other details. This can be useful when reviewing the recordings later on, Walther says.

Image Credits: Tape It

“If I have a nice guitar sound, I can just take a picture of the settings on my amplifier, and I have them. This is something musicians do all the time,” he notes. “It’s the easiest way to re-create that sound.”
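Tape It’s internals aren’t public, but the annotations described above — an auto-detected instrument tag, plus user-added markers with notes and photos — map naturally onto a small data model. A hypothetical sketch, with all names invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Marker:
    position_sec: float          # where in the recording the marker sits
    note: str = ""               # free-text reminder
    photo_path: Optional[str] = None  # e.g. a photo of amp settings

@dataclass
class Recording:
    title: str
    duration_sec: float
    instrument: Optional[str] = None  # filled in by the auto-detector
    favorite: bool = False
    markers: List[Marker] = field(default_factory=list)

    def add_marker(self, position_sec: float, note: str = "",
                   photo_path: Optional[str] = None) -> None:
        self.markers.append(Marker(position_sec, note, photo_path))

# Usage: annotate a riff while recording it.
rec = Recording(title="Riff idea", duration_sec=94.0)
rec.instrument = "electric guitar"  # would come from the AI classifier
rec.add_marker(12.5, note="chorus starts here")
rec.add_marker(40.0, note="amp settings", photo_path="amp.jpg")
```

Keeping the photo and note attached to a timestamped marker is what makes it easy to jump back to the right spot and re-create a sound later.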

Another novel but simple change in Tape It is that it breaks longer recordings into multiple lines, similar to a paragraph of text. The team calls this the “Time Paragraph,” and believes it will make listening to longer sessions easier than the default — which is typically a single, horizontally scrollable recording.

Image Credits: Tape It

The app has also been designed so it’s easier to go back to the right part of recordings, thanks to its smart waveforms, in addition to the optional markers and photos. And you can mark recordings as favorites so you can quickly pull up a list of your best ideas and sounds. The app offers full media center integration as well, so you can play back your music whenever you have time.

However, the standout feature is Tape It’s support for “Stereo HD” quality. Here, the app takes advantage of the two microphones on devices like the iPhone XS, XR, and other newer models, then improves the sound using AI technology and other noise reduction techniques, which it’s developed in-house. This feature is part of its $20 per year premium subscription.

Over time, Tape It intends to broaden its use of AI and other IP to improve the sound quality further. It also plans to introduce collaborative features and support for importing and exporting recordings into professional studio software. This could eventually place Tape It into the same market that SoundCloud had initially chased before it shifted its focus to becoming more of a consumer-facing service.

But first, Tape It wants to nail the single-user workflow before adding on more sharing features.

“We decided that it’s so important to make sure it’s useful, even just for you. The stuff that you can collaborate on — if you don’t like using it yourself, you’re not going to use it,” Walther says.

Tape It’s team of three is based in Stockholm and Berlin and is currently bootstrapping.

The app itself is a free download on iOS and will later support desktop users on Mac and Windows. An Android version is not planned.

Microsoft launches a personalized news service, Microsoft Start

Microsoft today is introducing its own personalized news reading experience called Microsoft Start, available as both a website and mobile app, in addition to being integrated with other Microsoft products, including Windows 10 and 11 and its Microsoft Edge web browser. The feed will combine content from news publishers, but in a way that’s tailored to users’ individual interests, the company says — a customization system that could help Microsoft to better compete with the news reading experiences offered by rivals like Apple or Google, as well as popular third-party apps like Flipboard or SmartNews.

Microsoft says the product builds on the company’s legacy with online and mobile consumer services like MSN and Microsoft News. However, it won’t replace MSN. That service will remain available, despite the launch of this new, in-house competitor.

To use Microsoft Start, consumers can visit the standalone website MicrosoftStart.com, which works on both Google Chrome and Microsoft Edge (but not Safari), or they can download the Microsoft Start mobile app for iOS or Android.

The service will also power the News and Interests experience on the Windows 10 taskbar and the Widgets experience on Windows 11. In Microsoft Edge, it will be available from the New Tab page, too.

Image Credits: Microsoft

At first glance, the Microsoft Start website is very much like any other online portal offering a collection of news from a variety of publishers, alongside widgets for things like weather, stocks, sports scores and traffic. When you click to read an article, you’re taken to a syndicated version hosted on Microsoft’s domain, which includes the Microsoft Start navigation bar at the top and emoji reaction buttons below the headline.

Users can also react to stories with emojis while browsing the home page itself.

This emoji set is similar to the one being offered today by Facebook, except that Microsoft has replaced Facebook’s controversial laughing face emoji with a thinking face. (It’s worth noting that the Facebook laughing face has been increasingly criticized for being used to openly ridicule posts and mock people — even on stories depicting tragic events, like Covid deaths.)

Microsoft has made another change with its emoji reactions as well: after you react to a story, you see only your own emoji rather than the top three reactions and a total reaction count.

Image Credits: Microsoft

But while online web portals tend to be static aggregators of news content, Microsoft Start’s feed will adjust to users’ interests in several different ways.

Users can click a “Personalize” button to be taken to a page where they can manually add and remove interests from across a number of high-level categories like news, entertainment, sports, technology, money, finance, travel, health, shopping, and more. Or they can search for categories and interests that could be more specific or more niche. (Instead of “parenting,” for instance, “parenting teenagers.”) This recalls the recent update Flipboard made to its own main page, the For You feed, which lets users make similar choices.

As users then begin to browse their Microsoft Start feed, they can also click a button to thumbs up or thumbs down an article to better adjust the feed to their preferences. Over time, the more the user engages with the content, the better refined the feed becomes, says Microsoft. This customization will leverage A.I. and machine learning, as well as human moderation, the company notes.

The feed, like other online portals, is supported by advertising. As you scroll down, you’ll notice every few rows will feature one ad unit, where the URL is flagged with a green “Ad” badge. Initially, these mostly appear to be product ads, making them distinct from the news content. Since Microsoft isn’t shutting down MSN and is integrating this news service into a number of other products, it’s expanding the available advertising real estate it can offer with this launch.

The website, app and integrations are rolling out starting today. (If you aren’t able to find the app yet, you can try scanning the QR code from your mobile device.)

LOVE unveils a modern video messaging app with a business model that puts users in control

A London-headquartered startup called LOVE, valued at $17 million following its pre-seed funding, aims to redefine how people stay in touch with close family and friends. The company is launching a messaging app that offers a combination of video calling as well as asynchronous video and audio messaging, in an ad-free, privacy-focused experience with a number of bells and whistles, including artistic filters and real-time transcription and translation features.

But LOVE’s bigger differentiator may not be its product alone, but rather the company’s mission.

LOVE aims for its product direction to be guided by its user base in a democratic fashion as opposed to having the decisions made about its future determined by an elite few at the top of some corporate hierarchy. In addition, the company’s longer-term goal is ultimately to hand over ownership of the app and its governance to its users, the company says.

These concepts have emerged as part of bigger trends towards a sort of “web 3.0,” or next phase of internet development, where services are decentralized, user privacy is elevated, data is protected, and transactions take place on digital ledgers, like a blockchain, in a more distributed fashion.

LOVE’s founders are proponents of this new model, including serial entrepreneur Samantha Radocchia, who previously founded three companies and was an early advocate for the blockchain as the co-founder of Chronicled, an enterprise blockchain company focused on the pharmaceutical supply chain.

As someone who’s been interested in emerging technology since her days of writing her anthropology thesis on currency exchanges in “Second Life’s” virtual world, she’s now faculty at Singularity University, where she’s given talks about blockchain, A.I., Internet of Things, Future of Work, and other topics. She’s also authored an introductory guide to the blockchain with her book “Bitcoin Pizza.”

Co-founder Christopher Schlaeffer, meanwhile, held a number of roles at Deutsche Telekom, including Chief Product & Innovation Officer, Corporate Development Officer, and Chief Strategy Officer, where he along with Google execs introduced the first mobile phone to run Android. He was also Chief Digital Officer at the telecommunication services company VEON.

The two crossed paths after Schlaeffer had already begun organizing a team to bring LOVE to the public, including co-founders Jim Reeves (Chief Technologist), also previously of VEON, and Timm Kekeritz (Chief Designer), previously an interaction designer at international design firm IDEO in San Francisco, design director at IXDS, and founder of design consultancy Raureif in Berlin, among other roles.

Radocchia explained that what attracted her to join as CEO was the potential to create a new company that upholds more positive values than what’s often seen today — in fact, the brand name “LOVE” is a reference to this aim. She was also interested in the potential to think through what she describes as “new business models that are not reliant on advertising or harvesting the data of our users.”

To that end, LOVE plans to monetize without any advertising. While the company isn’t ready to explain its business model in full, it would involve users opting in to services through granular permissions and membership, we’re told.

“We believe our users will much rather be willing to pay for services they consciously use and grant permissions to in a given context than have their data used for an advertising model which is simply not transparent,” says Radocchia.

LOVE expects to share more about the model next year.

As for the LOVE app itself, it’s a fairly polished mobile messenger offering an interesting combination of features. Like any other video chat app, you can video call with friends and family, either in one-on-one calls or in groups. Currently, LOVE supports up to 5 call participants, but expects to expand that as it scales. The app also supports video and audio messaging for asynchronous conversations. There are already tools that offer this sort of functionality on the market, of course — like WhatsApp, with its support for audio messages, or video messenger Marco Polo. But they don’t offer quite the same expanded feature set.

Image Credits: LOVE

For starters, LOVE limits its video messages to 60 seconds for brevity’s sake. (As anyone who’s used Marco Polo knows, videos can become a bit rambling, which makes it harder to catch up when you’re behind on group chats.) In addition, LOVE lets you both watch the video content and read a real-time transcription of what’s being said — the latter of which comes in handy not only for accessibility’s sake, but also for those times when you want to hear someone’s messages but aren’t in a private place to listen or don’t have headphones. Conversations can also be translated into 50 different languages.

“A lot of the traditional communication or messenger products are coming from a paradigm that has always been text-based,” explains Radocchia. “We’re approaching it completely differently. So while other platforms have a lot of the features that we do, I think that…the perspective that we’ve approached it has completely flipped it on its head,” she continues. “As opposed to bolting video messages on to a primarily text-based interface, [LOVE is] actually doing it in the opposite way and adding text as a sort of a magically transcribed add-on — and something that you never, hopefully, need to be typing out on your keyboard again,” she adds.

The app’s user interface, meanwhile, has been designed to encourage eye-to-eye contact with the speaker to make conversations feel more natural. It does this by way of design elements where bubbles float around as you’re speaking and the bubble with the current speaker grows to pull your focus away from looking at yourself. The company is also working with the curator of Serpentine Gallery in London, Hans Ulrich-Obrist, to create new filters that aren’t about beautification or gimmicks, but are instead focused on introducing a new form of visual expression that makes people feel more comfortable on camera.

For the time being, this has resulted in a filter that slightly abstracts your appearance, almost in the style of animation or some other form of visual arts.

The app claims to use end-to-end encryption and automatically deletes content after seven days — except for messages you recorded yourself, if you’ve chosen to save them as “memorable moments.”

“One of our commitments is to privacy and the right-to-forget,” says Radocchia. “We don’t want to be or need to be storing any of this information.”

LOVE has been soft-launched on the App Store, where it’s been used by a number of testers, and is working to organically grow its user base through an onboarding mechanism that asks new users to invite at least three people to join them. This same onboarding process also carefully explains why LOVE asks for certain permissions, like using speech recognition to create subtitles.

LOVE says its valuation is around $17 million USD following pre-seed investments from a combination of traditional startup investors and strategic angel investors across a variety of industries, including tech, film, media, TV, and financial services. The company plans to raise a seed round this fall.

The app is currently available on iOS, but an Android version will arrive later in the year. (Note that LOVE does not currently support the iOS 15 beta software, where it has issues with speech transcription and in other areas. That should be resolved next week, following an app update now in the works.)

Match Group to add audio and video chat, including group live video, to its dating app portfolio

Dating app maker and Tinder parent Match Group said during its Q2 earnings that it will bring audio and video chat, including group live video, and other livestreaming technologies to several of the company’s brands over the next 12 to 24 months. The developments will be powered by innovations from Hyperconnect, the social networking company that this year became Match’s biggest acquisition to date when it bought the Korean app maker for a sizable $1.73 billion.

Since then, Match Group has been relatively quiet about its specific plans for Hyperconnect’s tech or its longer-term strategy with the operation, although Tinder was briefly spotted testing a group video chat feature called Tinder Mixer earlier this summer. The move had seemed to signal some exploration of social discovery features in the wake of the Hyperconnect deal. However, Tinder told us at the time the company had no plans to bring that specific product to market in the year ahead.

On Tuesday’s earnings call, Match Group offered a little more insight into the future of Hyperconnect, following the acquisition’s official close in mid-June.

According to Match Group CEO Shar Dubey, who stepped into the top job last January, the company is excited about the potential to integrate technologies Hyperconnect has developed into existing Match-owned dating apps.

This includes, she said, “AR features, self-expression tools, conversational A.I., and a number of what we would consider metaverse elements, which have the potential to transform the online meeting and getting-to-know-each-other process.” Dubey didn’t offer further specifics about how the products would work or which apps would receive these enhancements.

Many of these technologies emerged from Hyperconnect’s lab, Hyper X — the same in-house incubator whose first product is now one of the company’s flagship apps, Azar, which joined Match Group with the acquisition.

Dubey also noted that the work to begin these tech integrations was already underway at the company.

By year-end, Match Group said it expects to have at least two of its brands integrated with technologies from Hyperconnect. A number of other brands will implement Hyperconnect capabilities by year-end 2022.

In doing so, Match aims to transform what people think of when it comes to online dating.

To date, online dating has been a fairly static experience across the industry: apps focus largely on profiles and photos, and then offer some sort of matching technique — whether swipes, quizzes, or something else. Tinder, in more recent years, began to break out of that mold as it innovated with an array of different experiences, like its choose-your-own-adventure in-app video series, “Swipe Night,” video profiles, and instant chat features (via Tinder’s product Hot Takes), among others. But it still lacked some of the real-time elements people have when meeting one another in the real world.

This is an area where Match believes Hyperconnect can help to improve the online dating experience.

“One of the holy grails for us in online dating has always been to bridge the disconnect that happens between people chatting online and then meeting someone in person,” Dubey said. “These technologies will eventually allow us to build experiences that will help people determine if they have that much-elusive chemistry or not… Our ultimate vision here is for people to never have to go on a bad first date again,” she added.

Of course, Match Group’s positioning of the Hyperconnect deal as interesting for the innovation it brings — and not just the standalone apps it operates — also comes at a time when those apps have not met the company’s expectations on revenue.

In the second half of 2021, Match Group said it expects Hyperconnect to contribute $125 million to $135 million in revenue — a financial outlook the company admits reflects some pullback. It attributed this largely to Covid impacts, particularly in the Asia-Pacific region where Hyperconnect’s apps operate. Other drags on Hyperconnect’s growth included a more crowded marketplace and Apple’s changes to IDFA (Identifier for Advertisers), which have impacted a number of apps — including other social networking apps, like Facebook.

While Match still believes Hyperconnect will post “solid revenue growth” in 2021, it said that these new technology integrations into the Match Group portfolio are now “a higher priority” for the company.

Match Group posted mixed Q2 earnings, with revenue of $707.8 million, above analyst estimates, but earnings per share of 46 cents, below projections of 49 cents a share. Paying customers grew 15% to 15 million, up from 13 million in the year-ago quarter. Shares declined by 7% on Wednesday morning, following the earnings announcement.

As clinical guidelines shift, heart disease screening startup pulls in $43M Series B

Cleerly Coronary, a company that uses A.I.-powered imaging to analyze heart scans, announced a $43 million Series B funding round this week. The funding comes at a moment when a new way of screening for heart disease appears to be on its way.

Cleerly was started in 2017 by James K. Min, a cardiologist and the director of the Dalio Institute for Cardiac Imaging at New York Presbyterian Hospital/Weill Cornell Medical College. The company, which uses A.I. to analyze detailed CT scans of the heart, has 60 employees and has raised $54 million in total funding.

The Series B round was led by Vensana Capital, but also included LVR Health, New Leaf Venture Partners, DigiTx Partners, and Cigna Ventures. 

The startup’s aim is to provide analysis of detailed pictures of the human heart that have been examined by artificial intelligence. This analysis is based on images taken via cardiac computed tomography angiography (CTA), a newer but rapidly growing way of scanning for plaques.

“We focus on the entire heart, so every artery, and its branches, and then atherosclerosis characterization and quantification,” says Min. “We look at all of the plaque buildup in the artery, [and] the walls of the artery, which historical and traditional methods that we’ve used in cardiology have never been able to do.”

Cleerly is a web application, and it requires that a CTA image specifically (the type of scan its A.I. is trained to analyze) actually be taken when patients go in for a checkup.

When a patient goes in for a heart exam after experiencing a symptom like chest pain, there are a few ways they can be screened. They might undergo a stress test, an echocardiogram, or a coronary angiogram — a catheter- and X-ray-based test. CTA is a newer form of imaging in which a scanner takes detailed images of the heart, which is illuminated with an injected dye.

Cleerly’s platform is designed to analyze those CTA images in detail, but CTA has only recently become a first-line test (a go-to, in essence) when patients come in with suspected heart problems. The European Society of Cardiology updated its guidelines to make CTA a first-line test in evaluating patients with chronic coronary disease. In the UK, it became a first-line test in the evaluation of patients with chest pain in 2016.

CTA is already used in the US, but guidelines may expand how often it’s actually used. A review on CTA published on the American College of Cardiology website notes that it shows “extraordinary potential.” 

There’s movement on the insurance side, too. In 2020, United Healthcare announced it would reimburse for CTA scans when they’re ordered to examine low- to medium-risk patients with chest pain. Reimbursement qualification is obviously a huge boon to broader adoption.

CTA imaging might not be great for people who already have stents in their hearts, or, says Min, those who are just in for a routine checkup (there is low-dose radiation associated with a CTA scan). Rather, Cleerly will focus on patients who have shown symptoms or are already at high risk for heart disease. 

The CDC estimates that 18.2 million adults currently have coronary artery heart disease (the most common kind), and that 47 percent of Americans have at least one of the three most prominent risk factors for the disease: high blood pressure, high cholesterol, or a smoking habit.

These shifts (and anticipated shifts) in guidelines suggest that a lot more of these high-risk patients may be getting CTA scans in the future, and Cleerly has been working on mining additional information from them in several large-scale clinical trials.

There are plenty of different risk factors that contribute to heart disease, but the most basic understanding is that heart attacks happen when plaques build up in the arteries, narrowing them and constricting the flow of blood. Clinical trials have suggested that the types of plaques inside the body may contain information about how risky certain blockages are compared to others, beyond just how much of the artery they block.

A trial of 25,251 patients found that, indeed, the percentage of constriction in the arteries increases the risk of heart attack. But the type of plaque in those arteries identified high-risk patients better than other measures. Patients who went on to have sudden heart attacks, for example, tended to have higher levels of fibrofatty or necrotic core plaque in their hearts.

These results do suggest that it’s worth knowing a bit more detail about plaque in the heart. Note that Min is an author of this study, but it was also conducted at 13 different medical centers. 

As with all A.I.-based diagnostic tools, the big question is: How well does it actually recognize features within a scan?

At the moment, FDA documents emphasize that the tool is not meant to supplant a trained medical professional who can interpret the results of a scan. But tests have suggested it fares pretty well.

A June 2021 study compared Cleerly’s A.I. analysis of CTA scans to that of three expert readers and found that the A.I. had a diagnostic accuracy of about 99.7 percent when evaluating patients who had severe narrowing in their arteries. (Three of the nine study authors hold equity in Cleerly.)

With this most recent round of funding, Min says he aims to pursue more commercial partnerships and scale up to meet the existing demand. “We have sort of stayed under the radar, but we came above the radar because now I think we’re prepared to fulfill demand,” he says. 

Still, the product itself will continue to be tested and refined. Cleerly is in the midst of seven performance indication studies that will evaluate just how well the software can spot the litany of plaques that can build up in the heart.