With new owner Naver, Wattpad looks to supercharge its user-generated IP factory

Toronto-based Wattpad is officially part of South Korean internet giant Naver as of today, with the official close of the $600 million cash-and-stock acquisition deal. Under the terms of the acquisition, Wattpad will continue to be headquartered in, and operate from, Canada, with co-founder Allen Lau remaining CEO of the social storytelling company and reporting to Jun Koo Kim, CEO of Naver’s Webtoon.

I spoke to Lau about what will change, and what won’t, now that Wattpad is part of Naver and Webtoon. As mentioned, Wattpad will remain headquartered in Toronto — and in fact, the company will be growing its headcount in Canada under its new owners with significant new hiring.

“For Wattpad itself, last year was one of our fastest growing years in terms of both revenue and company size,” Lau said. “This year will be even faster; we’re planning to hire over 100 people, primarily in Toronto and Halifax. So in terms of the number of jobs, and the number of opportunities, this puts us on another level.”

While the company is remaining in Canada, expanding its local talent pool and maintaining its focus on delivering socially collaborative fiction, Lau says that the union with Naver and Webtoon is about more than just increasing the rate at which it can grow. The two companies share unique “synergies,” he says, that can help each better capitalize on their respective opportunities.

“Naver is one of the world’s largest internet companies,” Lau told me. “But the number one reason that this merger is happening is because of Webtoon. Webtoon is the largest digital publisher in the world, and they have over 76 million monthly users. Combined with our 90 million, that adds up to 166 million total monthly users — the reach is enormous. We are now by far the leader in this space, in the storytelling space, in both comics and fiction: By far the largest one in the world.”

The other way in which the two companies complement each other is around IP. Wattpad has demonstrated its ability to take its user-generated fiction, and turn that into successful IP upon which original series and movies are based. The company has both a Books and a Studios publishing division, and has generated hits like Netflix’s The Kissing Booth out of the work of the authors on its platform. Increasingly, competing streaming services are looking around for new properties that will resonate with younger audiences, in order to win and maintain subscriptions.

“Wattpad is the IP factory for user generated content,” Lau said. “And Webtoons also have a lot of amazing IP that are proven to build audience, along with all the data and analytics and insight around those. So the combined library of the top IPs that are blockbusters literally double overnight [with the merger]. And not just the size, but the capability. Because before the acquisition, we had our online fiction, we have both publishing business, and we have TV shows and movies, as well; but with the combination, now we also have comics, we also have animation and potentially other capabilities, as well.”

The key to Wattpad’s success with developing IP in partnership with the creators on its platform isn’t just that it’s user-generated and crowd-friendly; Wattpad also has unique insight into the data behind what’s working about successful IP with its fans and readers. The company’s analytics platform can then provide collaborators in TV and movies with unparalleled, data-backed perspective into what should strike a chord with fans when translated into a new medium, and what might not be so important to include in the adaptation. This is what provides Wattpad with a unique edge when going head-to-head with legacy franchises, including those from Disney and other megawatt brands.

“Not only do we have the fan bases — it’s data driven,” Lau said. “When we adapt from the fiction on our platform to a movie, we can tell the screenwriter, ‘Keep chapter one, chapter five and chapter seven, but in seven only the first two paragraphs,’ because that’s what the 200,000 comments are telling us. That’s what our machine learning Story DNA technology can tell you — this is the insight: where are they excited? This is something unprecedented.”

With Naver and Webtoon, Wattpad gains the ability to leverage its insight-gathering IP generation in a truly cross-media context, spanning basically every means a fan might choose to engage with a property. For would-be Disney competitors, that’s likely to be an in-demand value proposition.

Longevity startup Gero AI has a mobile API for quantifying health changes

Sensor data from smartphones and wearables can meaningfully predict an individual’s ‘biological age’ and resilience to stress, according to Gero AI.

The ‘longevity’ startup — which condenses its mission to the pithy goal of “hacking complex diseases and aging with Gero AI” — has developed an AI model to predict morbidity risk using ‘digital biomarkers’ that are based on identifying patterns in step-counter sensor data which tracks mobile users’ physical activity.

A simple measure of ‘steps’ isn’t nuanced enough on its own to predict individual health, is the contention. Gero’s AI has been trained on large amounts of biological data to spot patterns that can be linked to morbidity risk. It also measures how quickly a person recovers from a biological stress — another biomarker that’s been linked to lifespan; i.e. the faster the body recovers from stress, the better the individual’s overall health prognosis.
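The recovery-rate idea can be illustrated with a toy calculation — this is a generic sketch of the concept, not Gero’s actual model: fit an exponential decay to how an activity-derived signal returns to its baseline after a stress event, and read the recovery time off the fitted time constant.

```python
import numpy as np

def recovery_time(timestamps, signal, baseline):
    """Estimate a recovery time constant (in the units of `timestamps`)
    by fitting an exponential decay of the deviation from baseline:
    deviation(t) ~ deviation(0) * exp(-t / tau).
    A smaller tau means faster recovery, i.e. higher resilience."""
    t = np.asarray(timestamps, dtype=float)
    deviation = np.asarray(signal, dtype=float) - baseline
    # Log-linear least-squares fit: log(deviation) = log(d0) - t / tau
    slope, _ = np.polyfit(t, np.log(deviation), 1)
    return -1.0 / slope

# Synthetic example: a signal relaxing back to a baseline of 60
# with a true time constant of 2 days.
t = np.linspace(0, 10, 50)
signal = 60 + 20 * np.exp(-t / 2.0)
tau = recovery_time(t, signal, baseline=60)
print(round(tau, 2))  # prints 2.0
```

The function names and the exponential-decay model here are illustrative assumptions; a production system would have to cope with noisy, irregularly sampled sensor data rather than a clean synthetic curve.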

A research paper Gero has had published in the peer-reviewed biomedical journal Aging explains how it trained deep neural networks to predict morbidity risk from mobile device sensor data — and was able to demonstrate that its biological age acceleration model was comparable to models based on blood test results.

Another paper, due to be published in the journal Nature Communications later this month, will go into detail on its device-derived measurement of biological resilience.

The Singapore-based startup, which has research roots in Russia — founded back in 2015 by a Russian scientist with a background in theoretical physics — has raised a total of $5 million in seed funding to date (in two tranches).

Backers come from both the biotech and the AI fields, per co-founder Peter Fedichev. Its investors include Belarus-based AI-focused early stage fund, Bulba Ventures (Yury Melnichek). On the pharma side, it has backing from some (unnamed) private individuals with links to Russian drug development firm, Valenta. (The pharma company itself is not an investor).

Fedichev is a theoretical physicist by training who, after his PhD and some ten years in academia, moved into biotech to work on molecular modelling and machine learning for drug discovery — where he got interested in the problem of ageing and decided to start the company.

As well as conducting its own biological research into longevity (studying mice and nematodes), it’s focused on developing an AI model for predicting the biological age and resilience to stress of humans — via sensor data captured by mobile devices.

“Health of course is much more than one number,” emphasizes Fedichev. “We should not have illusions about that. But if you are going to condense human health to one number then, for a lot of people, the biological age is the best number. It tells you — essentially — how toxic is your lifestyle… The more biological age you have relative to your chronological age years — that’s called biological acceleration — the more are your chances to get chronic disease, to get seasonal infectious diseases or also develop complications from those seasonal diseases.”

Gero has recently launched a (paid, for now) API, called GeroSense, that’s aimed at health and fitness apps so they can tap up its AI modelling to offer their users an individual assessment of biological age and resilience (aka recovery rate from stress back to that individual’s baseline).

Early partners are other longevity-focused companies, AgelessRx and Humanity Inc. But the idea is to get the model widely embedded into fitness apps where it will be able to send a steady stream of longitudinal activity data back to Gero, to further feed its AI’s predictive capabilities and support the wider research mission — where it hopes to progress anti-ageing drug discovery, working in partnerships with pharmaceutical companies.

The carrot for the fitness providers to embed the API is to offer their users a fun and potentially valuable feature: A personalized health measurement so they can track positive (or negative) biological changes — helping them quantify the value of whatever fitness service they’re using.

“Every health and wellness provider — maybe even a gym — can put [this] into their app, for example… and this thing can rank all their classes in the gym, all their systems in the gym, for their value for different kinds of users,” explains Fedichev.

“We developed these capabilities because we need to understand how ageing works in humans, not in mice. Once we developed it we’re using it in our sophisticated genetic research in order to find genes — we are testing them in the laboratory — but, this technology, the measurement of ageing from continuous signals like wearable devices, is a good trick on its own. So that’s why we announced this GeroSense project,” he goes on.

“Ageing is this gradual decline of your functional abilities which is bad but you can go to the gym and potentially improve them. But the problem is you’re losing this resilience. Which means that when you’re [biologically] stressed you cannot get back to the norm as quickly as possible. So we report this resilience. So when people start losing this resilience it means that they’re not robust anymore and the same level of stress as in their 20s would get them [knocked off] the rails.

“We believe this loss of resilience is one of the key ageing phenotypes because it tells you that you’re vulnerable for future diseases even before those diseases set in.”

“In-house everything is ageing. We are totally committed to ageing: measurement and intervention,” adds Fedichev. “We want to build something like an operating system for longevity and wellness.”

Gero is also generating some revenue from two pilots with “top range” insurance companies — which Fedichev says it’s essentially running as a proof of business model at this stage. He also mentions an early pilot with Pepsi Co.

He sketches a link between how it hopes to work with insurance companies in the area of health outcomes and how Tesla offers insurance products to owners of its sensor-laden cars, based on what it knows about how they drive — because both put sensor data in the driving seat, if you’ll pardon the pun. (“Essentially we are trying to do to humans what Elon Musk is trying to do to cars,” is how Fedichev puts it.)

But the nearer term plan is to raise more funding — and potentially switch to offering the API for free to really scale up the data capture potential.

Zooming out for a little context, it’s been almost a decade since Google-backed Calico launched with the moonshot mission of ‘fixing death’. Since then a small but growing field of ‘longevity’ startups has sprung up, conducting research into extending (in the first instance) human lifespan. (Ending death is, clearly, the moonshot atop the moonshot.) 

Death is still with us, of course, but the business of identifying possible drugs and therapeutics to stave off the grim reaper’s knock continues picking up pace — attracting a growing volume of investor dollars.

The trend is being fuelled by health and biological data becoming ever more plentiful and accessible, thanks to open research data initiatives and the proliferation of digital devices and services for tracking health, set alongside promising developments in the fast-evolving field of machine learning in areas like predictive healthcare and drug discovery.

Longevity has also seen a bit of an upsurge in interest in recent times as the coronavirus pandemic has concentrated minds on health and wellness, generally — and, well, mortality specifically.

Nonetheless, it remains a complex, multi-disciplinary business. Some of these biotech moonshots are focused on bioengineering and gene-editing — pushing for disease diagnosis and/or drug discovery.

Plenty are also — like Gero — trying to use AI and big data analysis to better understand and counteract biological ageing, bringing together experts in physics, maths and biological science to hunt for biomarkers to further research aimed at combating age-related disease and deterioration.

Another recent example is AI startup Deep Longevity, which came out of stealth last summer — as a spinout from AI drug discovery startup Insilico Medicine — touting an AI ‘longevity as a service’ system which it claims can predict an individual’s biological age “significantly more accurately than conventional methods” (and which it also hopes will help scientists to unpick which “biological culprits drive aging-related diseases”, as it put it).

Gero AI is taking a different tack toward the same overarching goal — homing in on data generated by activity sensors embedded in the everyday mobile devices people carry with them (or wear) as a proxy signal for studying their biology.

The advantage being that it doesn’t require a person to undergo regular (invasive) blood tests to get an ongoing measure of their own health. Instead, personal devices can generate proxy signals for biological study passively — at vast scale and low cost. So the promise of Gero’s ‘digital biomarkers’ is that they could democratize access to individual health prediction.

And while billionaires like Peter Thiel can afford to shell out for bespoke medical monitoring and interventions to try to stay one step ahead of death, such high end services simply won’t scale to the rest of us.

If its digital biomarkers live up to Gero’s claims, its approach could, at the least, help steer millions towards healthier lifestyles, while also generating rich data for longevity R&D — and to support the development of drugs that could extend human lifespan (albeit what such life-extending pills might cost is a whole other matter).

The insurance industry is naturally interested — with the potential for such tools to be used to nudge individuals towards healthier lifestyles and thereby reduce payout costs.

For individuals who are motivated to improve their health themselves, Fedichev says the issue now is it’s extremely hard for people to know exactly which lifestyle changes or interventions are best suited to their particular biology.

For example fasting has been shown in some studies to help combat biological ageing. But he notes that the approach may not be effective for everyone. The same may be true of other activities that are accepted to be generally beneficial for health (like exercise or eating or avoiding certain foods).

Again those rules of thumb may have a lot of nuance, depending on an individual’s particular biology. And scientific research is, inevitably, limited by access to funding. (Research can thus tend to focus on certain groups to the exclusion of others — e.g. men rather than women; or the young rather than middle aged.)

This is why Fedichev believes there’s a lot of value in creating a measure that can address health-related knowledge gaps at essentially no individual cost.

Gero has used longitudinal data from the UK Biobank, one of its research partners, to verify its model’s measurements of biological age and resilience. But of course it hopes to go further — as it ingests more data.

“Technically it’s not properly different what we are doing — it just happens that we can do it now because there are such efforts like UK biobank. Government money and also some industry sponsors money, maybe for the first time in the history of humanity, we have this situation where we have electronic medical records, genetics, wearable devices from hundreds of thousands of people, so it just became possible. It’s the convergence of several developments — technological but also what I would call ‘social technologies’ [like the UK biobank],” he tells TechCrunch.

“Imagine that for every diet, for every training routine, meditation… in order to make sure that we can actually optimize lifestyles — understand which things work, which do not [for each person] or maybe some experimental drugs which are already proved [to] extend lifespan in animals are working, maybe we can do something different.”

“When we will have 1M tracks [half a year’s worth of data on 1M individuals] we will combine that with genetics and solve ageing,” he adds, with entrepreneurial flourish. “The ambitious version of this plan is we’ll get this million tracks by the end of the year.”

Fitness and health apps are an obvious target partner for data-loving longevity researchers — but you can imagine it’ll be a mutual attraction. One side can bring the users, the other a halo of credibility comprised of deep tech and hard science.

“We expect that these [apps] will get lots of people and we will be able to analyze those people for them as a fun feature first, for their users. But in the background we will build the best model of human ageing,” Fedichev continues, predicting that scoring the effect of different fitness and wellness treatments will be “the next frontier” for wellness and health. (Or, more pithily: “Wellness and health has to become digital and quantitative.”)

“What we are doing is we are bringing physicists into the analysis of human data. Since recently we have lots of biobanks, we have lots of signals — including from available devices which produce something like a few years’ long windows on the human ageing process. So it’s a dynamical system — like weather prediction or financial market predictions,” he also tells us.

“We cannot own the treatments because we cannot patent them but maybe we can own the personalization — the AI that personalized those treatments for you.”

From a startup perspective, one thing looks crystal clear: Personalization is here for the long haul.

 

AI is ready to take on a massive healthcare challenge

Which disease results in the highest total economic burden per annum? If you guessed diabetes, cancer, heart disease or even obesity, you guessed wrong. Reaching a mammoth financial burden of $966 billion in 2019, the cost of rare diseases far outpaced diabetes ($327 billion), cancer ($174 billion), heart disease ($214 billion) and other chronic diseases.

Cognitive intelligence, or cognitive computing solutions, blend artificial intelligence technologies like neural networks, machine learning, and natural language processing, and are able to mimic human intelligence.

It’s not surprising that rare diseases didn’t come to mind. By definition, a rare disease affects fewer than 200,000 people. However, collectively, there are thousands of rare diseases and those affect around 400 million people worldwide. About half of rare disease patients are children, and the typical patient, young or old, weathers a diagnostic odyssey lasting five years or more, during which they undergo countless tests and see numerous specialists before ultimately receiving a diagnosis.

No longer a moonshot challenge

Shortening that diagnostic odyssey and reducing the associated costs was, until recently, a moonshot challenge, but is now within reach. About 80% of rare diseases are genetic, and technology and AI advances are combining to make genetic testing widely accessible.

Whole-genome sequencing, an advanced genetic test that allows us to examine the entire human DNA, now costs under $1,000, and market leader Illumina is targeting a $100 genome in the near future.

The remaining challenge is interpreting that data in the context of human health — no trivial task. The typical human genome contains some 5 million unique genetic variants, and from those we need to identify a single disease-causing variant. Recent advances in cognitive AI allow us to interrogate a person’s whole genome sequence and identify disease-causing mechanisms automatically, augmenting human capacity.
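As a highly simplified sketch of what that interpretation step involves — the field names and thresholds here are illustrative, not any vendor’s actual pipeline — variant prioritization boils down to filtering millions of candidates against population frequency and predicted-impact annotations until only a handful of plausibly causal variants remain:

```python
# Toy variant prioritization: whittle a large set of variants down to
# rare, high-impact candidates. Fields and thresholds are illustrative.
variants = [
    {"id": "var1", "pop_frequency": 0.30,   "impact_score": 0.10},
    {"id": "var2", "pop_frequency": 0.0001, "impact_score": 0.95},
    {"id": "var3", "pop_frequency": 0.0002, "impact_score": 0.20},
    {"id": "var4", "pop_frequency": 0.05,   "impact_score": 0.99},
]

def prioritize(variants, max_frequency=0.001, min_impact=0.9):
    """Keep variants that are both rare in the population and predicted
    to be damaging, ranked by predicted impact (highest first)."""
    candidates = [
        v for v in variants
        if v["pop_frequency"] <= max_frequency
        and v["impact_score"] >= min_impact
    ]
    return sorted(candidates, key=lambda v: v["impact_score"], reverse=True)

print([v["id"] for v in prioritize(variants)])  # prints ['var2']
```

Real pipelines layer many more signals (inheritance pattern, phenotype match, conservation scores) on top of this, which is where the cognitive AI described above earns its keep.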

A shift from narrow to cognitive AI

The path to a broadly usable AI solution required a paradigm shift from narrow to broader machine learning models. Scientists interpreting genomic data review thousands of data points, collected from different sources, in different formats.

An analysis of a human genome can take as long as eight hours, and there are only a few thousand qualified scientists worldwide. When we reach the $100 genome, analysts are expecting 50 million-60 million people will have their DNA sequenced every year. How will we analyze the data generated in the context of their health? That’s where cognitive intelligence comes in.

Lightmatter’s photonic AI ambitions light up an $80M B round

AI is fundamental to many products and services today, but its hunger for data and computing cycles is bottomless. Lightmatter plans to leapfrog Moore’s law with its ultra-fast photonic chips specialized for AI work, and with a new $80M round the company is poised to take its light-powered computing to market.

We first covered Lightmatter in 2018, when the founders were fresh out of MIT and had raised $11M to prove that their idea of photonic computing was as valuable as they claimed. They spent the next three years and change building and refining the tech — and running into all the hurdles that hardware startups and technical founders tend to find.

For a full breakdown of what the company’s tech does, read that feature — the essentials haven’t changed.

In a nutshell, Lightmatter’s chips perform certain complex calculations fundamental to machine learning in a flash — literally. Instead of using charge, logic gates, and transistors to record and manipulate data, the chips use photonic circuits that perform the calculations by manipulating the path of light. The principle has been known for years, but until recently getting it to work at scale — and for a practical, indeed highly valuable, purpose — had not been possible.

Prototype to product

It wasn’t entirely clear in 2018 when Lightmatter was getting off the ground whether this tech would be something they could sell to replace more traditional compute clusters like the thousands of custom units companies like Google and Amazon use to train their AIs.

“We knew in principle the tech should be great, but there were a lot of details we needed to figure out,” CEO and co-founder Nick Harris told TechCrunch in an interview. “Lots of hard theoretical computer science and chip design challenges we needed to overcome… and COVID was a beast.”

With suppliers out of commission and many in the industry pausing partnerships, delaying projects, and other things, the pandemic put Lightmatter months behind schedule, but they came out the other side stronger. Harris said that the challenges of building a chip company from the ground up were substantial, if not unexpected.

A rack of Lightmatter servers.

Image Credits: Lightmatter

“In general what we’re doing is pretty crazy,” he admitted. “We’re building computers from nothing. We design the chip, the chip package, the card the chip package sits on, the system the cards go in, and the software that runs on it…. we’ve had to build a company that straddles all this expertise.”

That company has grown from its handful of founders to more than 70 employees in Mountain View and Boston, and the growth will continue as it brings its new product to market.

Where a few years ago Lightmatter’s product was more of a well-informed twinkle in the eye, now it has taken a more solid form in the Envise, which they call a “general-purpose photonic AI accelerator.” It’s a server unit designed to fit into normal datacenter racks but equipped with multiple photonic computing units, which can perform neural network inference processes at mind-boggling speeds. (It’s limited to certain types of calculations, namely linear algebra for now, and not complex logic, but this type of math happens to be a major component of machine learning processes.)

Harris was reticent to provide exact numbers on performance improvements, but more because those improvements are still increasing than because they’re not impressive enough. The website suggests it’s 5x faster than an NVIDIA A100 unit on a large transformer model like BERT, while using about 15 percent of the energy. That makes the platform doubly attractive to deep-pocketed AI giants like Google and Amazon, which constantly require more computing power and pay through the nose for the energy required to use it. Either better performance or lower energy cost would be great — both together is irresistible.

It’s Lightmatter’s initial plan to test these units with its most likely customers by the end of 2021, refining them and bringing them up to production levels so the product can be sold widely. But Harris emphasized this was essentially the Model T of their new approach.

“If we’re right, we just invented the next transistor,” he said, and for the purposes of large-scale computing, the claim is not without merit. You’re not going to have a miniature photonic computer in your hand any time soon, but in datacenters, where as much as 10 percent of the world’s power is predicted to go by 2030, “they really have unlimited appetite.”

The color of math

A Lightmatter chip with its logo on the side.

Image Credits: Lightmatter

There are two main ways by which Lightmatter plans to improve the capabilities of its photonic computers. The first, and most insane sounding, is processing in different colors.

It’s not so wild when you think about how these computers actually work. Transistors, which have been at the heart of computing for decades, use electricity to perform logic operations, opening and closing gates and so on. At a macro scale you can have different frequencies of electricity that can be manipulated like waveforms, but at this smaller scale it doesn’t work like that. You just have one form of currency, electrons, and gates are either open or closed.

In Lightmatter’s devices, however, light passes through waveguides that perform the calculations as it goes, simplifying (in some ways) and speeding up the process. And light, as we all learned in science class, comes in a variety of wavelengths — all of which can be used independently and simultaneously on the same hardware.

The same optical magic that lets a signal sent from a blue laser be processed at the speed of light works for a red or a green laser with minimal modification. And if the light waves don’t interfere with one another, they can travel through the same optical components at the same time without losing any coherence.

That means that if a Lightmatter chip can do, say, a million calculations a second using a red laser source, adding another color doubles that to two million, adding another makes three — with very little in the way of modification needed. The chief obstacle is getting lasers that are up to the task, Harris said. Being able to take roughly the same hardware and near-instantly double, triple, or 20x the performance makes for a nice roadmap.
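The scaling Harris describes is essentially linear in the number of usable wavelengths, which can be sketched as a back-of-the-envelope calculation (the numbers are purely illustrative, not Lightmatter’s actual figures):

```python
def photonic_throughput(per_wavelength_ops_per_sec, n_wavelengths):
    """Wavelength-division parallelism: each laser color runs its own
    calculation independently on the same photonic hardware, so total
    throughput scales linearly with the number of non-interfering
    wavelengths available."""
    return per_wavelength_ops_per_sec * n_wavelengths

base = 1_000_000  # hypothetical ops/sec with a single (red) laser source
print(photonic_throughput(base, 2))   # prints 2000000 -- one extra color doubles it
print(photonic_throughput(base, 20))  # prints 20000000 -- the "20x" case
```

The point of the sketch is that the hardware stays roughly the same; the multiplier comes almost entirely from sourcing lasers at additional wavelengths.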

It also leads to the second challenge the company is working on clearing away, namely interconnect. Any supercomputer is composed of many small individual computers, thousands and thousands of them, working in perfect synchrony. In order for them to do so, they need to communicate constantly to make sure each core knows what other cores are doing, and otherwise coordinate the immensely complex computing problems supercomputing is designed to take on. (Intel talks about this “concurrency” problem building an exa-scale supercomputer here.)

“One of the things we’ve learned along the way is, how do you get these chips to talk to each other when they get to the point where they’re so fast that they’re just sitting there waiting most of the time?” said Harris. The Lightmatter chips are doing work so quickly that they can’t rely on traditional computing cores to coordinate between them.

A photonic problem, it seems, requires a photonic solution: a wafer-scale interconnect board that uses waveguides instead of fiber optics to transfer data between the different cores. Fiber connections aren’t exactly slow, of course, but they aren’t infinitely fast, and the fibers themselves are actually fairly bulky at the scales chips are designed, limiting the number of channels you can have between cores.

“We built the optics, the waveguides, into the chip itself; we can fit 40 waveguides into the space of a single optical fiber,” said Harris. “That means you have way more lanes operating in parallel — it gets you to absurdly high interconnect speeds.” (Chip and server fiends can find the specs here.)

The optical interconnect board is called Passage, and it will be part of a future generation of the Envise products — like multi-color computation, it’s further down the roadmap. 5-10x performance at a fraction of the power will have to satisfy their potential customers for the present.

Putting that $80M to work

Those customers, initially the “hyper-scale” data handlers that already own datacenters and supercomputers that they’re maxing out, will be getting the first test chips later this year. That’s where the B round is primarily going, Harris said: “We’re funding our early access program.”

That means both building hardware to ship (very expensive per unit before economies of scale kick in, not to mention the present difficulties with suppliers) and building the go-to-market team. Servicing, support, and the immense amount of software that goes along with something like this — there’s a lot of hiring going on.

The round itself was led by Viking Global Investors, with participation from HP Enterprise, Lockheed Martin, SIP Global Partners, and previous investors GV, Matrix Partners and Spark Capital. It brings the total raised to about $113 million: there was the initial $11M A round, then GV hopping on with a $22M A-1, then this $80M.

Although there are other companies pursuing photonic computing and its potential applications in neural networks especially, Harris didn’t seem to feel that they were nipping at Lightmatter’s heels. Few if any seem close to shipping a product, and at any rate this is a market that is in the middle of its hockey stick moment. He pointed to an OpenAI study indicating that the demand for AI-related computing is increasing far faster than existing technology can provide it, except with ever larger datacenters.

The next decade will bring economic and political pressure to rein in that power consumption, just as we’ve seen with the cryptocurrency world, and Lightmatter is poised and ready to provide an efficient, powerful alternative to the usual GPU-based fare.

As Harris suggested hopefully earlier, what his company has made is potentially transformative in the industry and if so there’s no hurry — if there’s a gold rush, they’ve already staked their claim.

Microsoft’s Reading Progress makes assessing reading levels easier for kids and teachers

Among the many, many tasks required of grade school teachers is that of gauging each student’s reading level, usually by a time-consuming and high-pressure one-on-one examination. Microsoft’s new Reading Progress application takes some of the load off the teacher’s shoulders, allowing kids to do their reading at home and using natural language understanding to help highlight obstacles and progress.

The last year threw most educational plans into disarray, and reading levels did not advance the way they would have if kids were in school. Companies like Amira are emerging to fill the gap with AI-monitored reading, and Microsoft aims to provide teachers with more tools on their side.

Reading Progress is an add-on for Microsoft Teams that helps teachers administer reading tests in a more flexible way, taking pressure off students who might stumble in a command performance, and identifying and tracking important reading events like skipped words and self-corrections.

Teachers pick reading assignments for each student (or the whole class), and the kids complete them on their own time, more like doing homework than taking a test. They record a video directly in the app, and its audio is analyzed by algorithms watching for the usual stumbles.

As you can see in this video testimony by 4th grader Brielle, this may be preferable to many kids:

If a bright and confident kid like Brielle feels better doing it this way (and is now reading two years ahead of her grade, nice work Brielle!), what about the kids who are having trouble reading due to dyslexia, or are worried about their accent, or are simply shy? Being able to just talk to their own camera, by themselves in their own home, could make for a much better reading — and therefore a more accurate assessment.

It’s not meant to replace the teacher altogether, of course — it’s a tool that allows overloaded educators to prioritize and focus better and track things more objectively. It’s similar to how Amira is not meant to replace in-person reading groups — impossible during the pandemic — but provides a similarly helpful process of quickly correcting common mistakes and encouraging the reader.

Microsoft published about half a dozen things pertaining to Reading Progress today. Here’s its origin story, a basic summary, its product hub, a walkthrough video, and citations supporting its approach. There’s more, too, in this omnibus post about new education-related products out soon or now.

Personalized nutrition startup Zoe closes out Series B at $53M total raise

Personalized nutrition startup Zoe — named not for a person but after the Greek word for ‘life’ — has topped up its Series B round with $20M, bringing the total raised to $53M.

The latest close of the B round was led by Ahren Innovation Capital, which the startup notes counts two Nobel laureates as science partners. Also participating are two former American football players, Eli Manning and Ositadimma “Osi” Umenyiora; Boston, US-based seed fund Accomplice; healthcare-focused VC firm THVC and early stage European VC, Daphni.

The U.K.- and U.S.-based startup was founded back in 2017 but operated in stealth mode for three years, while it was conducting research into the microbiome — working with scientists from Massachusetts General Hospital, Stanford Medicine, Harvard T.H. Chan School of Public Health, and King’s College London.

One of the founders, professor Tim Spector of King’s College — who is also the author of a number of popular science books focused on food — became interested in the role of food (generally) and the microbiome (in particular) on overall health after spending decades researching twins to try to understand the role of genetics (nature) vs nurture (environmental and lifestyle factors) on human health.

Zoe used data from two large-scale microbiome studies to build its first algorithm which it began commercializing last September — launching its first product into the U.S. market: A home testing kit that enables program participants to learn how their body responds to different foods and get personalized nutrition advice.

The program costs around $360 (which Zoe takes in six instalments) and requires participants to (self) administer a number of tests so that it can analyze their biology, gleaning information about their metabolic and gut health by looking at changes in blood lipids, blood sugar levels and the types of bacteria in their gut.

Zoe uses big data and machine learning to come up with predictive insights on how people will respond to different foods so that it can offer individuals guided advice on what and how to eat, with the goal of improving gut health and reducing inflammatory responses caused by diet.

The combination of biological responses it analyzes sets it apart from other personalized nutrition startups whose products focus on measuring a single element (such as blood sugar) — or so the claim goes.

But, to be clear, Zoe’s first product is not a regulated medical device — and its FAQ clearly states that it does not offer medical diagnosis or treatment for specific conditions. Instead it says only that it’s “a tool that is meant for general wellness purposes only”. So — for now — users have to take it on trust that the nutrition advice it dishes up is actually helpful for them.

The field of scientific research into the microbiome is undoubtedly young — Zoe’s co-founder states that very clearly when we talk. So, as is often the case when startups seek to use data and AI to generate valuable personalized predictions, there’s a strong component here of early adopters furthering Zoe’s research by contributing their data — potentially ahead of the individual efficacy they’re seeking, given how much is still unknown about how what we eat affects our health.

Those willing to take a punt (and pay up) get an individual report detailing their biological responses to specific foods that compares them to thousands of others. The startup also provides them with individualized ‘Zoe’ scores for specific foods in order to support meal planning that’s touted as healthier for them.

“Reduce your dietary inflammation and improve gut health with a 4 week plan tailored to your unique biology and life,” runs the blurb on Zoe’s website. “Built around your food scores, our app will teach you how to make smart swaps, week by week.”

The marketing also claims no food is “off limits” — implying there’s a difference between Zoe’s custom food scores and (weight-loss focused) diets that perhaps require people to cut out a food group (or groups) entirely.

“Our aim is to empower you with the information and tools you need to make the best decisions for your body,” is Zoe’s smooth claim.

The underlying premise is that each person’s biology responds differently to different foods. Or, to put it another way, while we all most likely know at least one person who stays rake-thin and (seemingly) healthy regardless of what (or even how much) they eat, if we ate the same diet we’d probably expect much less pleasing results.

“What we’re able to start scientifically putting some evidence behind is something that people have talked about for a long time,” says co-founder George Hadjigeorgiou. “It’s early [for scientific research into the microbiome] but we have shown now to the world that even twins have different gut microbiomes, we can change our gut microbiomes through diet, lifestyle and how we live — and also that there are associations around particular [gut] bacteria and foods and a way to improve them which people can actually do through our product.”

Users of Zoe’s first product need to be willing (and able) to get pretty involved with their own biology — collecting stool samples, performing finger prick tests and wearing a blood glucose monitor to feed in data so it can analyze how their body responds to different foods and offer up personalized nutrition advice.

Another component of its study of biological responses to food has involved thousands of people eating “special scientific muffins”, which it makes to standardized recipes, so it can benchmark and compare nutritional responses to a particular blend of calories, carbohydrate, fat, and protein.

While eating muffins for science sounds pretty fine, the level of intervention required to make use of Zoe’s first at-home test kit product is unlikely to appeal to those with only a casual interest in improving their nutrition.

Hadjigeorgiou readily agrees the program, as it is now, is for those with a particular problem to solve that can be linked to diet/nutrition (whether obesity, high cholesterol or a disease like type 2 diabetes, and so on). But he says Zoe’s goal is to be able to open up access to personalized nutrition advice much more widely as it keeps gathering more data and insights.

“The idea is, as always, we start with a focused set of people with problems to solve who we believe will have a life-changing experience,” he tells TechCrunch. “At this point we are not trying to create a product for everyone — and we understand that that has limitations in terms of how much we scale in the beginning. Although even still within this focused group of people I can assure you there’s tonnes of people!

“But absolutely the whole idea is that after we get a first [set of users]… then with more data and with more experience we can simplify and start making this simpler and more accessible — both in terms of its simplicity and also its price. So more and more people. Because at the end of the day everyone has this right to be able to optimize and understand and be in control — and we want to make that available to everyone.

“Regardless of background and regardless of socio-economic status. And, in fact, many of the people who have the biggest problems around health etc are the ones who have maybe less means and ability to do that.”

Zoe isn’t disclosing how many early users it’s onboarded so far but Hadjigeorgiou says demand is high (it’s currently operating a wait-list for new sign ups).

He also touts promising early results from an interim trial with its first users — saying participants experienced more energy (90%), felt less hunger (80%) and lost an average of 11 pounds after three months of following their AI-aided, personalized nutrition plan. Albeit, without data on how many people were involved in the trial, it’s not possible to gauge the weight of those metrics.

The extra Series B funding will be used to accelerate the rollout of availability of the program, with a U.K. launch planned for this year — and other geographies on the cards for 2022. Spending will also go on continued recruitment in engineering and science, it says.

Zoe already grabbed some eyeballs last year, as the coronavirus pandemic hit the West, when it launched a COVID-19 symptom self-reporting app. It has used that data to help scientists and policy makers understand how the virus affects people.

The Zoe COVID-19 app has had some 5M users over the last year, per Hadjigeorgiou — who points to that (not-for-profit) effort as an example of the kind of transformative intervention the company hopes to drive in the nutrition space down the line.

“Overnight we got millions and millions of people contributing to help uncover new insights around science around COVID-19,” he says, highlighting that it’s been able to publish a number of research papers based on data contributed by app users. “For example the lack of smell and taste… was something that we first [were able to prove] scientifically, and then it became — because of that — an official symptom in the list of the government in the U.K.

“So that was a great example how through the participation of people — in a very, very fast way, which we couldn’t predict when we launched it — we managed to have a big impact.”

Returning to diet, aren’t there some pretty simple ‘rules of thumb’ that anyone can apply to eat more healthily — i.e. without the need to shell out for a bespoke nutrition plan? Basic stuff like eat your greens, avoid processed foods and cut down (or out) sugar?

“There are definitely rules of thumb,” Hadjigeorgiou agrees. “We’ll be crazy to say they’re not. I think it all comes back to the point that although there are rules of thumb and over time — and also through our research, for example — they can become better, the fact of the matter is that most people are becoming less and less healthy. And the fact of the matter is that life is messy and people do not eat even according to these rules of thumb so I think part of the challenge is… [to] educate and empower people for their messy lives and their lifestyle to actually make better choices and apply them in a way that’s sustainable and motivating so they can be healthier.

“And that’s what we’re finding with our customers. We are helping them to make these choices in an empowering way — they don’t need to count calories, they don’t need to restrict themselves through a Keto [diet] regime or something like that. We basically empower them to understand this is the impact food has on your body — real time, how your blood sugar levels change, how your bacteria change, how your blood fat levels change. And through that empowerment through insight then we say hey, now we’ll give you this course, it’s very simple, it’s like a game — and we’ll give you all these tools to combine different foods, make foods work for you. No food is off limits — but try to eat most days a 75 score [based on the food points Zoe’s app assigns].

“In that very empowering way we see people get very excited, they see a fun game that is also impacting their gut and metabolism and they start feeling these amazing effects — in terms of less hunger, more energy, losing weight and over time as well evolving their health. That’s why they say it’s life changing as well.”

Gamifying research for the goal of a greater good? To the average person that surely sounds more appetizing than ‘eat your greens’.

Though, as Hadjigeorgiou concedes, research in the field of microbiome — where Zoe’s commercial interests and research USP lie — is “early”. Which means that gathering more data to do more research will remain a key component of the business for the foreseeable future. And with so much still to be understood about the complex interactions between food, exercise and other lifestyle factors and human health, the mission is indeed massive.

In the meanwhile, Zoe will be taking it one suggestive nudge at a time.

“Sugar is bad, kale’s great but the whole kind of magic happens in the middle,” Hadjigeorgiou goes on. “Is oatmeal good for you? Is rice good for you? Is wholewheat pasta good for you? How do you combine wholewheat pasta and butter? How much do you have? This is where basically most of our life happens.

“Because people don’t eat ice-cream the whole day and people don’t eat kale the whole day. They eat all these other foods in the middle and that’s where the magic is — knowing how much to have, how to combine them to make it better, how to combine it with exercise to make it better? How to eat a food that doesn’t dip your sugar levels three hours after you eat it, which causes hunger for you. These are all the things we’re able to predict and present in a simple and compelling way through a score system to people — and in turn help them [understand their] metabolic response to food.”

Persona lands $50M for identity verification after seeing 10x YoY revenue growth

The identity verification space has been heating up for a while and the COVID-19 pandemic has only accelerated demand with more people transacting online.

Persona, a startup focused on creating a personalized identity verification experience “for any use case,” aims to differentiate itself in an increasingly crowded space. And investors are banking on the San Francisco-based company’s ability to help businesses customize the identity verification process — and beyond — via its no-code platform in the form of a $50 million Series B funding round. 

Index Ventures led the financing, which also included participation from existing backer Coatue Management. In late January 2020, Persona raised $17.5 million in a Series A round. The company declined to reveal at which valuation this latest round was raised.

Businesses and organizations can access Persona’s platform by way of an API, which lets them use a variety of documents, from government-issued IDs through to biometrics, to verify that customers are who they say they are. The company wants to make it easier for organizations to implement more watertight methods based on third-party documentation, real-time evaluation such as live selfie checks and AI to verify users.

Persona’s platform also collects passive signals such as a user’s device, location, and behavioral signals to provide a more holistic view of a user’s risk profile. It offers a low code and no code option depending on the needs of the customer.

The company’s momentum is reflected in its growth numbers. The startup’s revenue has surged by “more than 10 times” while its customer base has climbed by five times over the past year, according to co-founder and CEO Rick Song, who did not provide hard revenue numbers. Meanwhile, Persona’s headcount has more than tripled to just over 50 people.

“When we look back at the space five to 10 years ago, AI was the next differentiator and every identity verification company is doing AI and machine learning,” Song told TechCrunch. “We believe the next big differentiator is more about tailoring and personalizing the experience for individuals.”

As such, Song believes that growth can be directly tied to Persona’s ability to help companies with “unique” use cases with a SaaS platform that requires little to no code and not as much heavy lifting from their engineering teams. Its end goal, ultimately, is to help businesses deter fraud, stay compliant and build trust and safety while making it easier for them to customize the verification process to their needs. Customers span a variety of industries, and include Square, Robinhood, Sonder, Brex, Udemy, Gusto, BlockFi and AngelList, among others.

“The strategy your business needs for identity verification and management is going to be completely different if you’re a travel company verifying guests versus a delivery service onboarding new couriers versus a crypto company granting access to user funds,” Song added. “Even businesses within the same industry should tailor the identity verification experience to each customer if they want to stand out.”

Image Credits: Persona

For Song, another thing that helps Persona stand out is its ability to help customers beyond the sign-on and verification process. 

“We’ve built an identity infrastructure because we don’t just help businesses at a single point in time, but rather throughout the entire lifecycle of a relationship,” he told TechCrunch.

In fact, much of the company’s growth last year came in the form of existing customers finding new use cases within the platform in addition to new customers signing on, Song said.

“We’ve been watching existing customers discover more ways to use Persona. For example, we were working with some of our customer base on a single use case and now we might be working with them on 10 different problems — anywhere from account opening to a bad actor investigation to account recovery and anything in between,” he added. “So that has probably been the biggest driver of our growth.”

Index Ventures Partner Mark Goldberg, who is taking a seat on Persona’s board as part of the financing, said he was impressed by the number of companies in Index’s own portfolio that raved about Persona.

“We’ve had our antennas up for a long time in this space,” he told TechCrunch. “We started to see really rapid adoption of Persona within the Index portfolio and there was the sense of a very powerful and very user friendly tool, which hadn’t really existed in the category before.”

Its personalization capabilities and building-block-based approach, too, Goldberg said, make it appealing to a broader pool of users.

“The reality is there’s so many ways to verify a user is who they say they are or not on the internet, and if you give people the flexibility to design the right path to get to a yes or no, you can just get to a much better outcome,” he said. “That was one of the things we heard — that the use cases were not like off the rack, and I think that has really resonated in a time where people want and expect the ability to customize.”

Persona plans to use its new capital to grow its team another twofold by year’s end to support its growth and continue scaling the business.

In recent months, other companies in the space that have raised big rounds include Socure and Sift.

Analytics as a service: Why more enterprises should consider outsourcing

With an increasing number of enterprise systems, growing teams, a proliferating web presence and multiple digital initiatives, companies of all sizes are creating loads of data every day. This data contains excellent business insights and immense opportunities, but its sheer volume has made it impossible for companies to derive actionable insights from it consistently.

According to Verified Market Research, the analytics-as-a-service (AaaS) market is expected to grow to $101.29 billion by 2026. Organizations that have not started on their analytics journey or are spending scarce data engineer resources to resolve issues with analytics implementations are not identifying actionable data insights. Through AaaS, managed services providers (MSPs) can help organizations get started on their analytics journey immediately without extravagant capital investment.

MSPs can take ownership of the company’s immediate data analytics needs, resolve ongoing challenges and integrate new data sources to manage dashboard visualizations, reporting and predictive modeling — enabling companies to make data-driven decisions every day.

AaaS could come bundled with multiple business-intelligence-related services. Primarily, the service includes (1) services for data warehouses; (2) services for visualizations and reports; and (3) services for predictive analytics, artificial intelligence (AI) and machine learning (ML). When a company partners with an MSP for analytics as a service, it is able to tap into business intelligence easily, instantly and at a lower cost of ownership than doing it in-house. This empowers the enterprise to focus on delivering better customer experiences, make unencumbered decisions and build data-driven strategies.


In today’s world, where customers value experiences over transactions, AaaS helps businesses dig deeper into their psyche and tap insights to build long-term winning strategies. It also enables enterprises to forecast and predict business trends by looking at their data and allows employees at every level to make informed decisions.

Heirlume raises $1.38M to remove the barriers of trademark registration for small businesses

Platforms like Shopify, Stripe and WordPress have done a lot to make essential business-building tools, like running storefronts, accepting payments and building websites, accessible to businesses with even the most modest budgets. But some very key aspects of setting up a company remain expensive, time-consuming affairs that can be cost-prohibitive for small businesses — and that, if ignored, can sink a business before it even really gets started.

Trademark registration is one such concern, and Toronto-based startup Heirlume just raised $1.7 million CAD (~$1.38 million) to address the problem with a machine-powered trademark registration platform that turns the process into a self-serve affair that won’t break the budget. Its AI-based trademark search will flag terms that might run afoul of existing trademarks in the U.S. and Canada, even when official government trademark search tools and top-tier legal firms might miss them.

Heirlume’s core focus is on levelling the playing field for small business owners, who have typically been significantly out-matched when it comes to any trademark conflicts.

“I’m a senior level IP lawyer focused in trademarks, and had practiced in a traditional model, boutique firm of my own for over a decade serving big clients, and small clients,” explained Heirlume co-founder Julie MacDonnell in an interview. “So providing big multinationals with a lot of brand strategy, and in-house legal, and then mainly serving small business clients when they were dealing with a cease-and-desist, or an infringement issue. It’s really those clients that have my heart: It’s incredibly difficult to have a small business owner literally crying tears on the phone with you, because they just lost their brand or their business overnight. And there was nothing I could do to help because the law just simply wasn’t on their side, because they had neglected to register their trademarks to own them.”

In part, there’s a lack of awareness around what it takes to actually register and own a trademark, MacDonnell says. Many entrepreneurs just starting out seek out a domain name as a first step, for instance, and some will fork over significant sums to register these domains. What they don’t realize, however, is that this is essentially a rental, and if you don’t have the trademark to protect that domain, the actual trademark owner can potentially take it away down the road. But even if business owners do realize that a trademark should be their first stop, the barriers to actually securing one are steep.

“There was an enormous, insurmountable barrier when it came to brand protection for those business owners,” she said. “And it just isn’t fair. Every other business service, generally a small business owner can access. Incorporating a company or even insurance, for example, owning and buying insurance for your business is somewhat affordable and accessible. But brand ownership is not.”

Heirlume brings the cost of trademark registration down from many thousands of dollars to just under $600 for the first, and only $200 for each additional trademark after that. The startup is also offering a very small business-friendly ‘buy now, pay later’ option supported by Clearbanc, which means that even businesses starting on a shoestring can take the step of protecting their brand at the outset.

In its early days, Heirlume is also offering its core trademark search feature for free. That provides a trademark search engine that works across both U.S. and Canadian government databases, which can not only tell you if your desired trademark is available or already held, but also reveal whether it’s likely to be successfully obtained, given other conflicts that might arise that are totally ignored by native trademark database search portals.

Heirlume search tool comparison

Image Credits: Heirlume

Heirlume uses machine learning to identify these potential conflicts, which not only helps users searching for their trademarks, but also greatly decreases the workload behind the scenes, lowering the company’s costs and letting it pass the benefit of those improved margins on to its clients. That’s how it can achieve better results than even hand-tailored applications from traditional firms, while doing so at scale and at reduced cost.

Another advantage of using machine-powered data processing and filing is that on the government trademark office side, the systems are looking for highly organized, curated data sets that are difficult for even trained people to get consistently right. Human error in just data entry can cause massive backlogs, MacDonnell notes, even resulting in entire applications having to be tossed and started over from scratch.

“There are all sorts of datasets for those [trademark requirement] parameters,” she said. “Essentially, we synthesize all of that, and the goal through machine learning is to make sure that applications are utterly compliant with government rules. We actually have a senior-level trademark examiner who came to work for us, very excited that we were solving the problems causing backlogs within the government. She said that if Heirlume can get to a point where the applications submitted are perfect, there will be no backlog with the government.”

Improving efficiency within the trademark registration bodies means one less point of friction for small business owners when they set out to establish their company, which means more economic activity and upside overall. MacDonnell ultimately hopes that Heirlume can help reduce friction to the point where trademark ownership is at the forefront of the business process, even before domain registration. Heirlume has a partnership with Google Domains to that end, which will eventually see indication of whether a domain name is likely to be trademarkable included in Google Domain search results.

This initial seed funding includes participation from Backbone Angels, as well as the Future Capital collective, Angels of Many and MaRS IAF, along with angel investors including Daniel Debow, Sid Lee’s Bertrand Cesvet and more. MacDonnell notes that just as their goal was to bring more access and equity to small business owners when it comes to trademark protection, the startup was also very intentional in building its team and its cap table. MacDonnell, along with co-founders CTO Sarah Guest and Dave McDonnell, aim to build the largest tech company with a majority female-identifying technology team. Its investor make-up includes 65% female-identifying or underrepresented investors, and MacDonnell says that was a very intentional choice that extended the time of the raise, and even led to turning down interest from some leading Silicon Valley firms.

“We want underrepresented founders to be funded, and the best way to ensure that change is to empower underrepresented investors,” she said. “I think that we all have a responsibility to actually do something. We’re all using hashtags right now, and hashtags are not enough […] Our CTO is female, and she’s often been the only female person in the room. We’ve committed to ensuring that women in tech are no longer the only person in the room.”

Computer vision inches towards ‘common sense’ with Facebook’s latest research

Machine learning is capable of doing all sorts of things as long as you have the data to teach it how. That’s not always easy, and researchers are always looking for a way to add a bit of “common sense” to AI so you don’t have to show it 500 pictures of a cat before it gets it. Facebook’s newest research takes a big step towards reducing the data bottleneck.

The company’s formidable AI research division has been working on how to advance and scale things like advanced computer vision algorithms for years now, and has made steady progress, generally shared with the rest of the research community. One interesting development Facebook has pursued in particular is what’s called “semi-supervised learning.”

Generally when you think of training an AI, you think of something like the aforementioned 500 pictures of cats — images that have been selected and labeled (which can mean outlining the cat, putting a box around the cat, or just saying there’s a cat in there somewhere) so that the machine learning system can put together an algorithm to automate the process of cat recognition. Naturally if you want to do dogs or horses, you need 500 dog pictures, 500 horse pictures, etc — it scales linearly, which is a word you never want to see in tech.

Semi-supervised learning, related to “unsupervised” learning, involves figuring out important parts of a dataset without any labeled data at all. It doesn’t just go wild; there’s still structure. For instance, imagine you give the system a thousand sentences to study, then show it ten more that each have several words missing. The system could probably do a decent job filling in the blanks just based on what it’s seen in the previous thousand. But that’s not so easy to do with images and video — they aren’t as straightforward or predictable.
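That fill-in-the-blank idea can be illustrated with a toy sketch (hypothetical code, not anything Facebook actually uses): a program that has only ever seen unlabeled sentences can still make a reasonable guess at a missing word, purely from co-occurrence statistics.

```python
from collections import Counter, defaultdict

# Unlabeled "training" text: no annotations, just raw sentences.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat slept on the mat",
]

# Learn, for each word, which words tend to follow it.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def fill_blank(prefix_word):
    """Guess the missing word that comes after `prefix_word`."""
    candidates = following[prefix_word]
    return candidates.most_common(1)[0][0] if candidates else None

# "the cat sat ___ the mat": the model has seen "sat on" before.
print(fill_blank("sat"))
```

No one ever labeled these sentences; the structure of the data itself was the supervision signal. That is the trick that images and video make much harder.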

But Facebook researchers have shown that while it may not be easy, it’s possible and in fact very effective. The DINO system (which stands rather unconvincingly for “DIstillation of knowledge with NO labels”) is capable of learning to find objects of interest in videos of people, animals, and objects quite well without any labeled data whatsoever.

Animation showing four videos and the AI interpretation of the objects in them.

Image Credits: Facebook

It does this by considering the video not as a sequence of images to be analyzed one by one in order, but as a complex, interrelated set, like the difference between “a series of words” and “a sentence.” By attending to the middle and the end of the video as well as the beginning, the agent can get a sense of things like “an object with this general shape goes from left to right.” That information feeds into other knowledge, like when an object on the right overlaps with the first one, the system knows they’re not the same thing, just touching in those frames. And that knowledge in turn can be applied to other situations. In other words, it develops a basic sense of visual meaning, and does so with remarkably little training on new objects.

This results in a computer vision system that’s not only effective — it performs well compared with traditionally trained systems — but also more relatable and explainable. For instance, while an AI that has been trained with 500 dog pictures and 500 cat pictures will recognize both, it won’t really have any idea that they’re similar in any way. But DINO — although it couldn’t be specific — gets that they’re visually similar to one another, more so anyway than they are to cars, and that metadata and context is visible in its memory. Dogs and cats are “closer” in its sort of digital cognitive space than dogs and mountains. You can see those concepts as little blobs here — see how those of a type stick together:

Animated diagram showing how concepts in the machine learning model stay close together.

Image Credits: Facebook
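The “closeness” those blobs depict is just distance between learned embedding vectors. A toy sketch shows the idea — the 3-D vectors below are made up for illustration, standing in for the high-dimensional embeddings a system like DINO actually learns:

```python
import math

# Hand-made stand-in "embeddings"; real models learn vectors with
# hundreds of dimensions, but the geometry works the same way.
embeddings = {
    "dog":      [0.90, 0.80, 0.10],
    "cat":      [0.85, 0.75, 0.15],
    "mountain": [0.10, 0.20, 0.95],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, near 0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Dogs and cats end up closer together than dogs and mountains.
print(cosine(embeddings["dog"], embeddings["cat"]))       # high, near 1.0
print(cosine(embeddings["dog"], embeddings["mountain"]))  # much lower
```

Nothing in such a space says “dog” or “cat” explicitly; similarity simply falls out of where the vectors land, which is why related concepts cluster into those blobs.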

This has its own benefits, of a technical sort we won’t get into here. If you’re curious, there’s more detail in the papers linked in Facebook’s blog post.

There’s also an adjacent research project, a training method called PAWS, which further reduces the need for labeled data. PAWS combines some of the ideas of semi-supervised learning with the more traditional supervised method, essentially giving the training a boost by letting it learn from both the labeled and unlabeled data.

Facebook of course needs good and fast image analysis for its many user-facing (and secret) image-related products, but these general advances to the computer vision world will no doubt be welcomed by the developer community for other purposes.