AWS Braket gets improved support for hybrid quantum-classical workloads

In 2019, AWS launched Braket, its quantum computing service that makes hardware and software tools from its partners Rigetti, IonQ and D-Wave available in its cloud. Given how quickly quantum computing is moving ahead, it’s maybe no surprise that a lot has changed since then. Among other things, hybrid algorithms that use classical computers to optimize quantum algorithms — a process similar to training machine learning models — have become a standard tool for developers. Today, AWS announced improved support for running these hybrid algorithms on Braket.

Previously, to run these algorithms, developers would have to set up and manage the infrastructure to run the optimization algorithms on classical machines and then manage the integration with the quantum computing hardware, in addition to the monitoring and visualization tools for analyzing the results.

Image Credits: AWS

But that’s not all. “Another big challenge is that [Quantum Processing Units] are shared, inelastic resources, and you compete with others for access,” AWS’s Danilo Poccia explains in today’s announcement. “This can slow down the execution of your algorithm. A single large workload from another customer can bring the algorithm to a halt, potentially extending your total runtime for hours. This is not only inconvenient but also impacts the quality of the results because today’s QPUs need periodic re-calibration, which can invalidate the progress of a hybrid algorithm. In the worst case, the algorithm fails, wasting budget and time.”

With the new Amazon Braket Hybrid Jobs feature, developers get a fully managed service that handles the hardware and software interactions between the classical and quantum machines — and developers will get priority access to quantum processing units to provide them with more predictability. Braket will automatically spin up the necessary resources (and shut them down once a job is completed). Developers can set custom metrics for their algorithms and, using Amazon CloudWatch, they can visualize the results in near real time.
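
For developers, the workflow boils down to a single API call. The snippet below is a minimal sketch of what submitting a hybrid job looks like with the Braket Python SDK; the device ARN, script names and hyperparameters are placeholder assumptions for illustration, not an AWS-endorsed recipe:

```python
# pip install amazon-braket-sdk
from braket.aws import AwsQuantumJob

# Submit the hybrid workload as a managed job: Braket provisions the
# classical instance, handles QPU access, and tears everything down
# when the job completes. The ARN, script and hyperparameters below
# are placeholders, not a tested configuration.
job = AwsQuantumJob.create(
    device="arn:aws:braket:::device/qpu/rigetti/Aspen-11",  # example QPU
    source_module="vqe_algorithm.py",      # hypothetical algorithm script
    entry_point="vqe_algorithm:main",
    hyperparameters={"n_qubits": "4", "iterations": "50"},
    wait_until_complete=False,
)
print(job.arn, job.state())

# Inside vqe_algorithm.py, custom metrics logged with log_metric() are
# what surfaces in Amazon CloudWatch in near real time, e.g.:
#   from braket.jobs.metrics import log_metric
#   log_metric(metric_name="energy", value=cost, iteration_number=i)
```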

“As application developers, Braket Hybrid Jobs gives us the opportunity to explore the potential of hybrid variational algorithms with our customers,” said Vic Putz, head of engineering at QCWare. “We are excited to extend our integration with Amazon Braket and the ability to run our own proprietary algorithms libraries in custom containers means we can innovate quickly in a secure environment. The operational maturity of Amazon Braket and the convenience of priority access to different types of quantum hardware means we can build this new capability into our stack with confidence.”

Particular Audience takes in $7.5M to give retailers way to take on Amazon

Being in control of customer data is one of the ways companies like Amazon, Spotify and Netflix are able to tap into consumer behavior and create customized experiences whenever a user logs in.

Those are some of the reasons Amazon, in particular, is poised to grab 50% of the U.S. e-commerce market this year, and why Sydney-based Particular Audience wants to break down the data silos within e-commerce to give any retailer a chance to gather similar data on their customers and personalize experiences.

Particular Audience provides product discovery tools for retailers that are powered by artificial intelligence and machine learning. In fact, the company wants to go further and offer personalization based on anonymity and without compromising personal data, CEO James Taylor told TechCrunch.

Taylor launched Particular Audience in 2019 after taking a few years to work out the technology. The global pandemic threw a wrench in some plans, with Taylor and a handful of executives taking a pay cut so as to not have to let any employees go. However, with the e-commerce industry growing over the past 18 months, the company was able to get back to where it was, he said.

The company has now amassed a real-time data set on product search, sales, pricing and availability from across the internet via its browser plugin, SimilarInc.com, which gathers the data from its online shopper community without tracking or cookies. Retailers can analyze that data to learn, for example, how better to promote high-margin or overstocked items.

“Data IP is the current frontier,” he said. “It is data that is going to improve predictions to personalize inventory and reduce waste while also helping with supply chain management. The goal is to create website data visibility that would benefit all of the other merchants other than Amazon.”

To continue developing its technology, the company secured $7.5 million in Series A funding in a round led by Equity Venture Partners and that included existing investors Carthona Capital and a group of angel investors. This latest investment gives the company $9.5 million in total funding raised to date, which includes $1.3 million in seed funding raised in 2019.

How Particular Audience works on a website. Image Credits: Particular Audience

Particular Audience is working with approximately 100 websites currently. In addition to Sydney, the company also has an office in London. Europe makes up more than 50% of Particular Audience’s global revenue, and the new funding enables the company to open a new office in Amsterdam next year.

North America is also a growth territory for the company, where it has already opened an office in Vancouver, with plans to open a New York office in 2022 as well. The company has 60 employees, up from 20 last year, and Taylor expects to add 40 more in the next year, including rounding out its leadership team with a head of product.

The funding will also be invested in building out an API-first product suite and a retail media platform so retailers can gain a cost-per-click revenue stream. Meanwhile, the company saw 460% year-over-year revenue growth and expects to hit $100 million in gross merchandise value through its products this year, up 19 times over the last two years, Taylor said.

As part of the investment, Daniel Szekely, partner at Equity Venture Partners, will join the board.

“Personalization of the internet is a critical frontier for e-commerce retailers, and in a world of growing online shopping options and diminishing consumer attention spans, delivering an experience that meets individual consumers’ needs is absolutely critical,” he said in a written statement. “James and his outstanding team have tackled this issue in a novel way, and the important need for their solution has been made obvious as the business gets pulled into multiple geographies. We’re thrilled to back them in their Series A and know this is just the beginning of the journey.”

Investors: Up your ante at the iMerit ML DataOps Summit 2021

The “oil bidness,” as they say in Texas, is so 20th century. Data, artificial intelligence and machine learning are the power triad fueling the future. If you’re an investor placing bets on the data operations market, you can’t afford to miss the iMerit ML DataOps Summit on December 2, 2021.

This free, one-day virtual conference will explore the AI and ML landscape as it exists today and what it holds for tech industries across the spectrum, including autonomous mobility, healthcare AI and geospatial technology.

Pro Tip: Attending iMerit ML DataOps Summit is free, but you must register here to attend.

The summit is sponsored by iMerit, a leading AI data solutions company providing high-quality data across computer vision, natural language processing and content that powers machine learning and artificial intelligence applications.

Here are just two presentations that savvy investors won’t want to miss.

Radha Basu, iMerit’s founder and CEO, opens the conference with 2022: The Year of ML DataOps – The Ground Truth of AI. She’ll share why machine learning data operations play a critical role in bringing artificial intelligence to market at scale and unveil why 2022 is shaping up to be the “Year of ML DataOps.”

State of the Industry: Exploring the AI and ML DataOps Market — Join this discussion with Gartner’s Sumit Agarwal, Bessemer Venture Partners’ Ethan Kurzweil and iMerit’s CRO Jeff Mills as they take a deep dive into the current and future state of the artificial intelligence and machine learning data operations market.

Explore the full event agenda, and just look at some of the VC firms that will attend the iMerit ML DataOps Summit. Talk about a prime networking opportunity.

  • Insight Partners
  • Accel
  • Bessemer Venture Partners
  • J.P. Morgan
  • Xerox Ventures
  • DNX Ventures
  • Ridge Ventures
  • Sutter Hill Ventures
  • BMW i Ventures
  • Red Ventures
  • First Ascent Ventures

The iMerit ML DataOps Summit 2021 takes place on December 2, 2021. Investors, take this opportunity to expand your knowledge of these rapidly evolving technologies, place more-informed bets on the AI and ML data ops market and move your business forward. Register today for this free, virtual event.

Flowrite is an AI writing productivity tool that wants to help you hit inbox zero

When TechCrunch asks Flowrite if it’s ‘Grammarly on steroids’, CEO and co-founder Aaro Isosaari laughs, saying that’s the comment they always get for the AI writing productivity tool they’ve been building since late summer 2020 — drawing on early access to OpenAI’s GPT-3 API, and attracting a waitlist of some 30,000 email-efficiency-seeking prosumers keen to get their typing fingers on its beta.

The quest for ‘Inbox zero’ — via lightning speed email composition — could be rather easier with this AI-powered sidekick. At least if you’re the sort of person who fires off a bunch of fairly formulaic emails each and every day.

What does Flowrite do exactly? It turns a few instructions (yes, you do have to type these) into a fully fledged, nice-to-read email. So where Grammarly helps improve a piece of (existing) writing by suggesting tweaks to grammar/syntax/style etc., Flowrite helps you write the thing in the first place — so long as the thing is an email or some other form of professional messaging.

Email is what Flowrite’s AI models have been trained on, per Isosaari. And frustration with how much time he was having to spend composing emails was the inspiration for the startup. So its focus is firmly professional comms — rather than broader use cases for AI-generated words, such as copy writing etc (which GPT-3 is also being used for).

“In my previous work I knew that this is a problem that I had — I’d spend several hours every day communicating with different stakeholders on email and other messaging platforms,” he says. “We also knew that there are a lot more people — it’s not just our problem as co-founders; there’s millions of people who could benefit from communicating more effectively and efficiently in their day to day work.”

Here’s how Flowrite works: The user provides a set of basic (bullet pointed) instructions covering the key points of what they want to say and the AI-powered tool does the rest — generating a full email text that conveys the required info in a way that, well, flows.

Automation is thus doing the wordy legwork of filling in courteous greetings/sign-offs and figuring out appropriate phrasing to convey the sought-after tone and impression.

Compared to email templates (an existing tech for email productivity), Isosaari says the advantage is the AI-powered tool adapts to context and “isn’t static”.

One obvious but important point is that the user does also, of course, get the chance to check over — and edit/tweak — the AI’s suggested text before hitting send, so the human remains firmly the agent in the loop.

Isosaari gives an example use-case of a sales email where the instructions might boil down to typing something like “sounds amazing • let’s talk more in a call • next week, Monday PM” — in order to get a Flowrite-generated email that includes the essential details plus “all the greetings” and “added formalities” the extended email format requires.
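
Flowrite hasn’t published how its pipeline works under the hood, but the general pattern — turning terse bullet instructions plus thread context into a polished reply via GPT-3 — can be sketched with OpenAI’s 2021-era completions API. Everything here (prompt wording, model choice, parameters) is illustrative, not Flowrite’s actual implementation:

```python
import openai  # pip install openai (2021-era SDK)

openai.api_key = "sk-..."  # your OpenAI API key

def draft_reply(instructions: str, thread_context: str) -> str:
    """Expand terse bullet-style instructions into a full, polite reply."""
    prompt = (
        "Write a friendly, professional email reply.\n\n"
        f"Previous email:\n{thread_context}\n\n"
        f"Key points to convey: {instructions}\n\n"
        "Reply:\n"
    )
    completion = openai.Completion.create(
        engine="davinci",           # a GPT-3 base model available in 2021
        prompt=prompt,
        max_tokens=200,
        temperature=0.7,
        stop=["Previous email:"],   # stop before the model invents a new thread
    )
    return completion.choices[0].text.strip()

print(draft_reply(
    "sounds amazing; let's talk more in a call; next week, Monday PM",
    "Hi! Would you be open to discussing a partnership?",
))
```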

(Sidenote: Flowrite’s initial pitch to TechCrunch was via email — but did not apparently involve the use of its tool. At least that email did not include the disclosure “This email is Flowritten”, as a later missive from Isosaari (to send the PR as requested) did. Which, perhaps, gives an indication of the sorts of email comms you might want to speed-write (with AI) and those you maybe want to dedicate more of your human brain to composing (or at least look like you wrote it all yourself).)

“We’ve built an AI-powered writing tool that helps professionals of all kinds to write and communicate faster as part of their daily workflow,” Isosaari tells TechCrunch. “We know that there’s millions of people who spend hours every day on emails and messages in a professional context — so communicating with different stakeholders, internally and externally, takes a lot of work, daily working hours. And Flowrite helps people to do that faster.”

The AI tool could also be a great help to people who find writing difficult for specific reasons such as dyslexia or because English is not their native language, he further suggests.

One obvious limitation is that Flowrite is only able to turn out emails in English. And while GPT-3 does have models for some other common languages, Isosaari suggests the quality of its ‘human-like’ responses there “might not be as good” as they are in English — hence he says they’ll remain focused there for now.

They’re using GPT-3’s language model as the core AI tech — but have also, recently, begun to use their own accumulated data to “fine tune it”, with Isosaari noting: “Already we’ve built a lot of things on top of GPT-3 so we’re building a wrapper on it.”
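
Isosaari doesn’t detail that setup, but for context, here is roughly what GPT-3 fine-tuning looked like with OpenAI’s 2021-era API — the file name and data format below are hypothetical examples, not Flowrite’s pipeline:

```python
import openai  # same 2021-era SDK as above

# Hypothetical training data: JSONL of prompt/completion pairs, one per
# (instructions + thread context) -> (human-approved email) example, e.g.
# {"prompt": "Key points: decline politely; suggest next quarter\n\nReply:",
#  "completion": " Hi Anna, thanks so much for thinking of us..."}
upload = openai.File.create(
    file=open("email_examples.jsonl", "rb"), purpose="fine-tune"
)

# Fine-tune a GPT-3 base model on the uploaded examples; the finished
# job yields a custom model name you can pass to Completion.create().
fine_tune = openai.FineTune.create(training_file=upload.id, model="davinci")
print(fine_tune.id)
```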

The startup’s promise for the email productivity tool is also that the AI will adapt to the user’s writing style — so that faster emails won’t also mean curt, out-of-character emails (which could lead to fresh emails asking if you’re okay).

Isosaari says the tech is not mining your entire email history to do this — rather, it only looks at the directly preceding context in an email thread (if there is one).

Flowrite currently relies on cloud processing, since it’s calling GPT-3’s tech, but when we ask about confidentiality, Isosaari says they want to move to on-device processing, which would obviously help address any such concerns.

For now the tool is browser-based and integrates with web email. Currently it only works with Chrome and Gmail, but Isosaari confirms the team’s plan is to expand integrations — such as to messaging platforms like Slack (though still, initially at least, only for the web app version).

While the tech tool is still in a closed beta, the startup has just announced a $4.4 million seed raise.

The seed is led by Project A, along with Moonfire Ventures and angel investors Ilkka Paananen (CEO & Co-founder of Supercell), Sven Ahrens (director of global growth at Spotify), and Johannes Schildt (CEO & Co-Founder of Kry). Existing investors Lifeline Ventures and Seedcamp also joined in the round.

What types of emails and professionals is Flowrite best suited for? On the content side, Isosaari says it’s “typically replies where there’s some kind of existing context that you are responding to”.

“It’s able to understand the situation really well and adapt to it in a really natural way,” he suggests. “And also for outreaches — things like pitches and proposals… What it doesn’t work that well for is if you want to write something that is really, really complex — because then in order to do that you would need to have all that information in the instructions. And then obviously if you need to spend a lot of time writing the instruction that could be even close to the final email — and there’s not much value that Flowrite can provide at that point.”

It’s also obviously not going to offer great utility if you’re firing off “really, really short emails” — since if you’re just answering with a couple of words it’s likely quicker to type that yourself.

In terms of who’s likely to use Flowrite, Isosaari says they’ve had a broad range of early adopters seeking to tap into the beta. But he describes the main user profile as “executives, managers, entrepreneurs who communicate a lot on a daily basis” — aka, people who “need to give a good impression about themselves and communicate very thoughtfully”.

On the business model front, Flowrite’s initial focus is on prosumers/individual users — although Isosaari says it may look to expand out from there, perhaps first supporting teams. And he also says he could envisage some kind of SaaS offering for businesses down the line.

Currently, it’s not charging for the beta — but does plan to add pricing early next year.

“Once we move out of the beta then we’ll be starting to monetize,” he adds, suggesting that a full launch out of beta (so no more waitlist) could happen by mid 2022. 

The seed funding will primarily be spent on growing the team, according to Isosaari, especially on the engineering side — with the main goal at this early stage being to tool up around AI and core product.

Expanding features is another priority — including adding a “horizontal way” of using the tool across the browser, such as with different email clients.

Don’t miss the product demos at iMerit ML DataOps Summit 2021

The iMerit ML DataOps Summit 2021 kicks off on December 2 — that’s just eight days away. Are you ready to improve your dataops chops? More than 1,400 senior leaders in AI and ML will be in the virtual building, and this is one data deployment you can’t afford to miss.

More good news: This day-long data download is 100 percent free, but you have to register here to reserve your seat.

This summit is in partnership with iMerit, a leading AI data solutions company providing high-quality data across computer vision, natural language processing and content that powers machine learning and artificial intelligence applications.

This day is positively packed with presentations — from solving edge cases and scaling data pipelines to improve deployment speed, to the latest research in motion planning behaviors and the value of humans-in-the-loop. Oh, right — and more!

Can a tech conference really be a tech conference without product demos? Never fear, we’ve got ’em. Take a gander at the demo descriptions below, and then check out the event agenda for exact times for all the programming. Note: Product demos in the agenda have a handy “iMerit Unveils” prefix.

Reporting, Analytics and Insights for Scaling your ML Data Pipeline — iMerit’s VP of Product, Glen Ford, shares the challenges companies face when moving from proof-of-concept to production-ready ML deployments. During this phase, workflows within the data pipeline can quickly move from cumbersome to unmanageable. With a single point of management for reporting, analytics and insights, you can scale your ML data pipeline more efficiently and effectively, allowing you to reach your goals faster.

AI Data Solutions for Solving Edge Cases with Greater Precision — Edge cases are a major challenge for ML models and, if addressed successfully, they can become the greatest competitive differentiator in your AI. In this quick session, iMerit’s VP of Engineering, Sudeep George, shares a solution that tackles edge cases by creating proprietary data sets with greater precision, turning them into massive opportunities for companies and their AI.

The First End-to-End AI Data Solutions Platform — Companies today face the challenge of piecing together an ML data pipeline solution that allows them to create the high-quality data needed for their ML, while accomplishing it in a scalable, cost-efficient and timely manner. Brett Hallinan, iMerit’s Director of Marketing, unveils the first end-to-end AI data solution platform that ensures you receive the structured proprietary data you need to advance your AI.

The iMerit ML DataOps Summit 2021 takes place on December 2. Connect with your global AI and ML community and don’t forget to take a hefty helping of product demos. Register for a free event pass here.

Zenity raises $5M to help secure low-code/no-code applications

As more companies adopt low-code/no-code tools to build their line-of-business applications, it’s maybe no surprise that we are now seeing a new crop of services in this ecosystem that focus on keeping these tools secure. These include Tel Aviv-based Zenity, which is coming out of stealth today and announcing a $5 million seed funding round. The round was led by Vertex Ventures and UpWest. A number of angel investors, including the former CISO of Google, Gerhard Eschelbeck, and the former CIO of SuccessFactors, Tom Fisher, also participated in this round.

Zenity argues that as employees start building their own applications and adopt tools like robotic process automation (RPA), this new class of applications also opens up new avenues for potential breaches and ransomware attacks.

Image Credits: Zenity

“Companies are heavily adopting Low-Code/No-Code, without realizing the risks it employs nor their part in the shared responsibility model,” said Zenity co-founder and CEO Ben Kliger. “We empower CIOs and CISOs to seamlessly govern their Low-Code/No-Code applications and prevent unintentional data leaks, disturbance to business continuity, compliance risks or malicious breaches.”

The Zenity platform helps businesses build a catalog of the low-code/no-code apps in their organization, mitigate potential issues and set up a governance policy that can then be automatically enforced. The company argues that the methods of traditional security services don’t transfer to low-code/no-code applications, yet the need for a tool like this is only growing, especially given that most of the developers who use these tools don’t have a security background (or maybe any software development background at all).

The company was founded by CEO Kliger and CTO Michael Bargury, who both previously worked on the Azure and cloud security teams at Microsoft.

“The challenge is mitigating the risks and security threats associated with Low-Code/No-Code solutions without disturbing the business,” said Tom Fisher, Zenity advisor and former CIO of Oracle and Qualcomm and former CTO of eBay. “Zenity provides the perfect combination of governance and security tools with a pro-business approach that helps business developers build with confidence.”

Jina.ai raises $30M for its neural search platform

Berlin-based Jina.ai, an open-source startup that uses neural search to help its users find information in their unstructured data (including videos and images), today announced that it has raised a $30 million Series A funding round led by Canaan Partners. New investor Mango Capital, as well as existing investors GGV Capital, SAP.iO and Yunqi Partners also participated in this round, which brings the company’s total funding to $39 million to date.

Jina.ai CEO and co-founder Han Xiao, who co-founded the company together with Nan Wang and Bing He, explained that the idea behind neural search is to use deep learning neural networks to go beyond traditional keyword-based search tools. Making use of relatively new machine learning technologies like transfer learning and representation learning, the company’s core Jina framework can help developers quickly build search tools for their specific use cases.

“Given an image, audio, video or whatever — we first use deep neural networks to translate this data format into a universal representation,” Xiao explained. “In this case, it’s mostly a mathematic vector — 100-dimensional vectors. And then, the matching [algorithm] does not count how many letters match but counts the mathematical distance, the vector distance between these two vectors. In this way, you can basically use this kind of methodology to solve all kinds of data search problems or relevance problems.”
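
Stripped of the framework plumbing, the core idea Xiao describes fits in a few lines of NumPy — embed everything into vectors, then rank by vector distance rather than keyword overlap. This is a generic illustration of neural search matching, not code from the Jina framework:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend a neural network has already mapped each item (image, audio
# clip, sentence...) into the "universal representation" Xiao mentions:
# here, a 100-dimensional vector per item.
corpus = rng.normal(size=(1000, 100))   # 1,000 indexed items
query = rng.normal(size=(100,))         # the item we are searching with

# Match by vector geometry (cosine similarity), not by shared keywords.
def cosine_sim(matrix: np.ndarray, vec: np.ndarray) -> np.ndarray:
    return (matrix @ vec) / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(vec)
    )

scores = cosine_sim(corpus, query)
top5 = np.argsort(-scores)[:5]          # indices of the 5 closest items
print(top5, scores[top5])
```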

Xiao described Jina as akin to TensorFlow for search (with TensorFlow being Google’s open-source machine learning framework). Just like TensorFlow or PyTorch defined the design pattern of how people design AI systems, Jina wants to define how people build neural search systems — and become the de-facto standard for doing so in the process.

But Jina is only one of the company’s current set of products. It also offers the Jina Hub, a marketplace that allows developers to share and discover the building blocks for Jina-based neural search applications, as well as the recently launched Finetuner, a tool for fine-tuning any deep neural network.

“Over the last 18 months, we spent a lot of effort on building the core infrastructure, on building the foundation of this big neural search tower — and that part is already done,” Xiao said. “And now we are slowly building the first floor, the second floor of this big building — and we try to provide an end-to-end development experience.”

Image Credits: Jina.ai

The company says the Jina AI developer community currently counts about 1,000 users, with applications that range from a video game developer that uses it to auto-fill relevant game assets in the right-click menu of its game editor, to a legal-tech startup that uses it to enable its chatbot to provide a Q&A experience drawing on data from PDF documents.

The open-source Jina framework has attracted almost 200 external contributors since its launch in May 2020, and the company also hosts an active Slack community around the project.

“The reason we are doing open source is mostly the velocity of open source — and I believe the velocity of development is a key factor in the success of a software project. A lot of software just dies because this velocity goes to zero,” Xiao said. “We are building the community and we are leveraging the community to gather feedback to iterate fast. And this is super important for infrastructure software like us. You need all these top-tier developers to give you feedback about the usability, accessibility and so on in order to improve it quickly.”

Jina.ai plans to use the new funding to double its team and especially to expand its operations in North America. With this expanded team, the company plans to invest in R&D to expand the overall Jina ecosystem and launch new tools and services around it.

“Traditional search systems built for textual data don’t work in a world brimming with images, video, and other multimedia. Jina AI is moving companies from black and white into color, unlocking unstructured data in a way that’s fast, scalable, and data-agnostic,” said Canaan Partners’ Joydeep Bhattacharyya. “The early applications of its open-source framework already show glimmers of the future, with neural search underpinning opportunities to improve decision-making, refine operations and even create new revenue streams.”

TechCrunch+ roundup: 5 pitch deck slides to fix, initial viable product, MLOps acceleration

This is a fantastic time to found a startup, but unless you plan to bootstrap it, you will still need to go through the laborious exercise of crafting a pitch deck.

With so much riding on the outcome, this can be an extremely stressful process — a convincing deck requires you to come up with data-driven answers for existential questions:

Can you lay out your plan for tripling revenue YoY? What’s your ideal product use case?


Full TechCrunch+ articles are only available to members.
Use discount code TCPLUSROUNDUP to save 20% off a one- or two-year subscription.


It’s tempting to make overly sunny projections or copy what’s worked for others, but your pitch isn’t meant to impress — it’s supposed to show how well you understand the business you’re building and the space in which you’re operating.

According to Jose Cayasso, CEO and co-founder of pitch deck design agency Slidebean, there are five slides where pretty much all founders miss the mark:

  • Go-to-market
  • Use case/audience
  • TAM
  • Possible outcomes
  • Team

Using examples from Airbnb, Uber and others, he shares several strategies for avoiding the most common pitfalls, along with the pitch deck framework Slidebean uses with most of its clients.

“Remember, a pitch deck needs to achieve two things: tell your company story and convince the investor that they can make money with this,” says Cayasso.

Thanks for reading; I hope you have an excellent weekend.

Walter Thompson
Senior Editor, TechCrunch+
@yourprotagonist

Dear Sophie: Any advice on visa issues for new hires?

Image Credits: Bryce Durbin/TechCrunch

Dear Sophie,

I run operations at an early-stage startup, and I’ve been tasked with hiring and other HR responsibilities. I’m feeling out of my depth with hiring and trying to figure out visa issues for prospective hires.

Do you have any advice?

— Doubling Down in Daly City

Making the case for IVP: Initial viable product

Image Credits: Flashpop / Getty Images

As a concept, minimum viable product (MVP) has given founders maximum flexibility.

The goal is to keep shipping until you reach product-market fit, but there’s a catch: “Minimal is a sliding scale that will always slide onto you,” according to Aron Solomon, head of strategy at Esquire Digital.

Instead of putting MVP on a pedestal, he proposes adding an initial viable product (IVP) to the roadmap.

“If your IVP is your presentation of an unbaked pepperoni pizza, your MVP is when you present a can of sauce, a package of cheese, a Slim Jim, and a pencil sketch of an oven.”

Here’s where MLOps is accelerating enterprise AI adoption

Image Credits: donvictorio / Getty Images

The concept of MLOps gained traction as a few specific best practices for working with machine learning (ML) models, but it is maturing into a standalone approach for managing the ML lifecycle.

This evolution has played a key role in helping companies adopt and employ ML and AI, according to Ashish Kakran, principal at Thomvest Ventures.

In a TechCrunch+ post, Kakran lays out several challenges companies can address using MLOps:

  • Cross-team collaboration to deploy ML
  • Integration with ML tools
  • Model lifecycle management
  • Bringing ML models to production
  • Regulation and compliance
  • Accelerating AI adoption

Investors bet that Sweetgreen will make sweet amounts of green

Despite the fact that Americans are famous for failing to eat their vegetables, salad chain Sweetgreen is seeing a lot of success on the public markets.

The company priced its IPO above its planned range at $28 per share and was trading at nearly double that price soon after it debuted.

That pricing and the resulting ~9x multiple reflect the fact that the market now values tech-enabled businesses like Sweetgreen, Allbirds and Rent the Runway at around the level of software businesses in 2015, writes Alex Wilhelm.

“How can we call it anything but a win?”

4 strategies for setting marketplace take rates

Image Credits: Image Source / Getty Images

E-commerce platform founders may be tempted to set transaction fees just a little higher than they initially planned, but greed isn’t always good.

Boosting take rates by a point or two could raise early revenue when it’s needed most, but there’s an opportunity cost, since “a higher take rate typically leads to lower transaction volume,” according to angel investor and product manager Tanay Jaipuria.

Take rates should directly reflect the stage of your business, he advises, since platforms with higher rates see lower transaction volumes.
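
A toy model makes the tension concrete. The numbers below (baseline GMV, sensitivity of volume to fees) are invented purely for illustration:

```python
# Hypothetical numbers, purely for illustration: each extra point of
# take rate above 10% is assumed to shave 25% off transaction volume.
def revenue(take_rate: float, base_gmv: float = 1_000_000,
            sensitivity: float = 25.0) -> float:
    gmv = base_gmv * max(0.0, 1 - sensitivity * (take_rate - 0.10))
    return take_rate * gmv

for rate in (0.08, 0.10, 0.12, 0.15):
    print(f"{rate:.0%} take rate -> ${revenue(rate):,.0f} revenue")
```

Under these made-up assumptions, the 8% platform out-earns the 15% one — volume, not rate, does the work.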

To learn how different companies use this lever, Jaipuria studied take rates for more than 25 marketplaces, including Apple, Shutterstock and OpenSea.

“It’s important for founders to remember that maximizing the take rate of the platform is not the goal,” he says.

Are rivals snacking on Instacart’s core grocery delivery market?

San Francisco is an outlier, but a walk through its residential neighborhoods shows how successful Instacart became during the pandemic.

Nearly every restaurant has a hand-lettered “Instacart pickup here” sign, and its drivers now deliver everything from Safeway groceries to Walgreens prescriptions. On more than one occasion, I’ve seen neighbors accepting boba tea deliveries.

But after The Information reported that the delivery platform’s growth plateaued in 2021, Alex Wilhelm gathered data from competitors Amazon, Walmart, DoorDash and Uber to see if they are “snacking on Instacart’s core business.”

Is that weed you’re smoking green enough?

Image Credits: Anuj singh / 500px / Getty Images

North America’s legal cannabis industry is about a decade old, but many stakeholders are developing a framework “to make sure that for once, an industry starts off on the right path,” reports Jesse Klein.

Twenty companies have formed a coalition to promote sustainability practices aimed at reducing energy and water usage, along with emissions. To “green it up,” they’re studying new tech like LEDs, as well as traditional agricultural practices.

“They’re making pretty good margins and they’ve kind of got a PR problem,” said Stephen Doig, senior research and strategy adviser at Dartmouth’s Arthur L. Irving Institute for Energy and Society.

“Getting it right, right now [will] make a big difference.”

Unicorns Braze and UserTesting begin public life in diverging ways

In the software space, even the tiniest difference in metrics can affect a company’s fortunes once it goes public.

Braze and UserTesting both provide ways to centralize and optimally use customer data, and their growth metrics are quite evenly matched. Yet, when they went public earlier this week, Braze priced above its price range and UserTesting priced below.

“It appears that those metrics — software TAM is so large these days that we’re not going to compare vanity metrics for the sake of being kind to S-1 scribblers — are enough to give it the revenue multiple differential that we see, and thus explain the difference in the two companies’ IPO pricing runs,” writes Alex Wilhelm.

Kettle books $25M for its reinsurance platform against fire and other catastrophes

One of the most noticeable — and noted — effects of climate change has been its impact on how other events in the environment, be they natural or man-made, play out: forest fires burn more violently and for longer, floods happen more often and are more severe when they do, and so on, with climate change often cited as the main culprit. Today, an insurance startup called Kettle — which believes it has built a better product, specifically a reinsurance underwriting product to insure insurers, that accounts for catastrophic events like these by way of better data science — is announcing some funding on the heels of (sadly) more need for its services.

It has closed a Series A of $25 million, money that it will be using to build out tools and services for a specific set of catastrophes in one specific market: fires in California. Acrew Capital is leading the round, with Homebrew, True Ventures, Anthemis, Valor, DCVC, and LowerCarbon Capital also participating.

Kettle’s longer-term plan is to expand to more disaster types, and more states, in the coming years, but for now, fires in California present a particularly acute set of problems.

Events like the Caldor and Dixie fires have contributed to an overall rise in the rate and size of wildfires in California, Kettle says: 2020 saw over 4% of the state burn. On average there are some 10,000 fires every year in California, but the outsized nature of some of them appears to be growing, with 14 fires causing 98% of the state’s wildfire damage.

Nathaniel Manning, Kettle’s CEO who co-founded the company with Andrew Engler, said that these forces have created a gap in the market for insurance: in short, those who might want to insure their homes against these kinds of wildfires are either unable to, or end up having to pay exorbitant premiums.

Manning said that this is primarily because insurance companies — while ironically being the trailblazers in data science decades ago to determine risk for unexpected events — have failed to keep up with how to use that technology to account for recent developments like climate change, subsequent catastrophic environmental events, and their impact on the things that typically get insured like property, life, automobiles and so on.

“The industry hasn’t updated,” he said. “It’s the classic innovator’s dilemma.” Typically, insurance companies are using the same modeling that they have always used to try to understand what are new kinds of risks, “but you can’t look at the last five years and determine the next ten years anymore.” Communication, and making it more accurate and reflective of the situation at hand, is something of a fixation for Manning: prior to Kettle, he had been the CEO of Ushahidi, the crowdsourced information startup.

Kettle mostly presents itself as a reinsurance technology provider to customer-facing insurance companies (it also currently resells insurance that it underwrites via one channel, aimed at the most expensive properties and their owners, starting at anything over $3 million and up to $10 million).

This is a huge business, typified by incumbent behemoths like Lloyd’s of London, which in theory mitigate the risk insurance companies face when they get the formula wrong. Manning’s belief is that reinsurance companies, too, are not using enough data, or accurate enough data science and technology overall, to do their jobs in a way that matches today’s circumstances.

Reinsurance is currently a $400 billion-a-year industry, but cracks are just starting to emerge. There has been, Kettle said, a 68% drop in return on equity because catastrophes, and their unintended consequences, have caused more than $1 billion in damage over the past 15 years. This presents an opportunity to provide a different spin on the service. Kettle’s approach is to pinpoint specific situations — in this case wildfires in California — and provide reinsurance specifically for policies, or parts of policies, that cover just that.

Using machine learning that combines weather data, satellite imagery and other data sets, Kettle applies a lot of what has helped AI stand out from non-AI processes in other fields: the ability for machines to simply make more calculations than any human, or even group of humans, can.

“Normally, an insurance company will run between 10,000 and 100,000 simulations to predict outcomes,” Manning said. “We run over 500 billion.” This means it can better account for eventualities and create pricing that reflects them. Kettle claims its predictions have been accurate 89% of the time so far. In August, Kettle said that some 26 insurance carriers had been in contact with it to help model their risk, and Manning told me that the company expects three to four commercial deals to close by the end of this year.
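
Kettle’s models are proprietary, but the basic shape of simulation-based pricing is easy to illustrate: simulate many possible fire seasons, read the loss distribution off the results, and set premiums and capital against it. The toy Monte Carlo below uses invented numbers and a deliberately crude loss model — nothing here reflects Kettle’s actual methodology:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy loss model with invented numbers: each simulated year draws a
# count of nearby ignitions, and each ignition independently reaches
# and destroys the insured home with a small probability.
N_SIMS = 1_000_000            # a far cry from 500 billion, but same idea
HOME_VALUE = 1_500_000        # insured value, dollars
ignitions = rng.poisson(lam=0.8, size=N_SIMS)      # fires per simulated year
p_destroy = 0.002                                  # chance one fire destroys the home
p_loss = 1 - (1 - p_destroy) ** ignitions          # per-year destruction probability
losses = HOME_VALUE * (rng.random(N_SIMS) < p_loss)

expected_loss = losses.mean()              # the "pure premium"
tail_loss = np.quantile(losses, 0.999)     # a 1-in-1,000-year outcome
premium = expected_loss * 1.4              # add a 40% loading for costs/profit
print(f"expected annual loss ${expected_loss:,.0f}; "
      f"1-in-1,000-year loss ${tail_loss:,.0f}; premium ${premium:,.0f}")
```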

There is often something a little weird about technology that is essentially built around the idea of bad events happening, and that potentially profits from things going wrong. Insurance often falls into that category, not least because a lot of insurance hasn’t really been built that well, or to fit modern times, and often feels exploitative, arbitrary, or there by the grace of lobbyists making sure it is mandated rather than answering any actual need. (And insurance fraud speaks to the other side of that inefficiency coin.)

Manning accepts this, but also sees it very differently.

“I think the industry itself is very poorly managed,” he admitted. “The incentives are not in the right direction, and creating a system where the customer and company have different incentive structures is not great.

“But I do think it’s important,” he continued. “As a homeowner, if my home burns down I’ll get its value back. That can be a truly life changing thing.”

For investors, the disruptiveness Kettle is bringing is what attracted them, although longer term you have to imagine that the big incumbents can’t not be considering how to update their data models, too. And that could mean more business for Kettle, or an acquisition, or… death, which is perhaps fitting for an insurtech. For now, though, there’s a lot of potential still for this young startup.

“When you take a minute to think about it, it becomes very obvious why traditional reinsurers can’t accurately underwrite climate risk — their methodologies look to the past,” says Lauren Kolodny, partner at Acrew Capital, in a statement. “And our climate is changing in ways that can’t be predicted on the basis of historical data. Kettle is solving a massive, global problem. And we’re so thrilled to deepen our partnership with this incredible team.”

Solving entertainment’s globalization problem with AI and ML

The recent controversy surrounding the mistranslations found in the Netflix hit “Squid Game” and other films highlights technology’s challenges when releasing content that bridges languages and cultures internationally.

Every year across the global media and entertainment industry, tens of thousands of movies and TV episodes exhibited on hundreds of streaming platforms are released with the hope of finding an audience among the 7.2 billion people living in nearly 200 countries. No audience is fluent in all of the roughly 7,000 recognized languages, so if the goal is to release content internationally, subtitles and audio dubs must be prepared for global distribution.

Known in the industry as “localization,” creating “subs and dubs” has, for decades, been a human-centered process, where someone with a thorough understanding of another language sits in a room, reads a transcript of the screen dialogue, watches the original language content (if available) and translates it into an audio dub script. It is not uncommon for this step to take several weeks per language from start to finish.

Once the translations are complete, the script is then performed by voice actors who make every effort to match the action and lip movements as closely as possible. Audio dubs follow the final cut dialogue, and then subtitles are generated from each audio dub. Any compromise made in the language translation may, then, be subjected to further compromise in the production of subtitles. It’s easy to see where mistranslations or changes in a story can occur.

The most conscientious localization process does include some level of cultural awareness because some words, actions or contexts are not universally translatable. For this purpose, the director of the 2019 Oscar-winning film “Parasite,” Bong Joon-ho, sent detailed notes to his translation team before they began work. Bong and others have pointed out that limitations of time, available screen space for subtitles, and the need for cultural understanding further complicate the process. Still, when done well, they contribute to higher levels of enjoyment of the film.

The exponential growth of distribution platforms and the increasing and continuous flow of fresh content are pushing those involved in the localization process to seek new ways to speed production and increase translation accuracy. Artificial intelligence (AI) and machine learning (ML) are highly anticipated answers to this problem, but neither has reached the point of replacing the human localization component. Directors of titles such as “Squid Game” or “Parasite” are not yet ready to make that leap. Here’s why.

Culture matters

First, literal translation is incapable of catching 100% of the linguistic, cultural or contextual nuance carried in a story’s script, inflection or action. AI companies themselves admit to these limitations, commonly referring to machine-based translations as “more like dictionaries than translators,” and remind us that computers are only capable of doing what we teach them — and that they lack real understanding.

For example, the English title of the first episode of “Squid Game” is “Red Light, Green Light.” This refers to the name of the children’s game played in the first episode. The original Korean title is “무궁화 꽃이 피던 날” (“Mugunghwa Kkoch-I Pideon Nal”), which directly translates as “The Day the Mugunghwa Bloomed,” which has nothing to do with the game they’re playing.

In Korean culture, the title symbolizes new beginnings, which is the promise the game holds for its winner. “Red Light, Green Light” is related to the episode, but it misses the broader cultural reference of a promised fresh start for people down on their luck — a significant theme of the series. Naming the episode after the game because the original title’s cultural metaphor was unknown to the translators may seem like no big deal, but it is.

How can we expect to train machines to recognize these differences and apply them autonomously when humans don’t make the connection and apply them themselves?

Knowing versus knowledge

It’s one thing for a computer to translate Korean into English. It is another thing altogether for it to have knowledge of relationship differences like those in “Squid Game” — between immigrants and natives, strangers and family members, employees and bosses — and how those relationships impact the story. Programming cultural understanding and emotional recognition into AI is challenging enough, especially when those emotions are displayed without words, such as through a look on someone’s face. Even then, it is hard to predict emotional facial responses, which may change with culture.

AI is still a work in progress as it relates to explainability, interpretability and algorithmic bias. The idea that machines will train themselves is far-fetched given where the industry stands in executing AI/ML. For a content-heavy, creative industry like media and entertainment, context is everything; there is the content creator’s expression of context, and then there is the audience’s perception of it.

Moreover, with respect to global distribution, context equals culture. A digital nirvana is achieved when a system can orchestrate and predict the audio, video and text in addition to the multiple layers of cultural nuance that are at play at any given frame, scene, theme and genre level. At the core, it all starts with good-quality training data — essentially, taking a data-centric approach versus a model-centric one.

Recent reports indicate Facebook catches only 3% to 5% of problematic content on its platform. Even with millions of dollars available for development, programming AI to understand context and intent is very hard to do. Fully autonomous translation solutions are some ways off, but that doesn’t mean AI/ML cannot reduce the workload today. It can.

Through analysis of millions of films and TV shows combined with the cultural knowledge of individuals from nearly 200 countries, a two-step human and AI/ML process can provide the detailed insights needed to identify content that any country or culture may find objectionable. In “culturalization,” this cultural roadmap is then used in the localization process to ensure story continuity, avoid cultural missteps and obtain global age ratings — all of which reduce post-production time and costs without regulatory risk.

Audiences today have more content choices than ever before. Winning in the global marketplace means content creators have to pay more attention to their audience, not just at home but in international markets.

The fastest path to success for content creators and streaming platforms is working with companies that understand local audiences and what matters to them so their content is not lost in translation.