Reid Hoffman on the evolution of ‘blitzscaling’ amid the pandemic

When LinkedIn co-founder and Greylock partner Reid Hoffman first coined the term “blitzscaling,” he kept it simple: It’s a concept that encourages entrepreneurs to prioritize speed over efficiency during a period of uncertainty. Years later, founders are navigating a pandemic, perhaps the most uncertain period of their lives, and Hoffman has a clarification to make.

“Blitzscaling itself isn’t the goal,” Hoffman said during TechCrunch Disrupt 2021. “Blitzscaling is being inefficient; it’s spending capital inefficiently and hiring inefficiently; it’s being uncertain about your business model; and those are not good things.” Instead, he said, blitzscaling is a choice companies may have to make for a set period of time to outpace a competitor or react to a pandemic, rather than a route to take from idea to IPO.

That doesn’t mean startups should avoid prioritizing breakneck speed, especially in industries like fintech and edtech, where the pandemic spotlighted a lot of potential. Instead, Hoffman thinks the pandemic’s real impact on his definition is that “the benchmark for what you may need to do in order to outpace your competitors to scale in an ecosystem may have changed.”

A lot of new money

Hoffman’s broadened view of blitzscaling blends well with his firm’s recent announcement of a $500 million seed fund. The close came weeks after Andreessen Horowitz closed its own $400 million seed fund.

Greylock claims that its new fund is “the largest pool of venture capital dedicated to backing founders at day one,” and explicitly said that it is “willing to write large seed checks at lean-in valuations, which gives companies more runway to hit milestones without taking on additional dilution.” It’s fair to say that Greylock’s checks could help seed-stage startups afford to blitzscale while still prioritizing runway and other business-oriented resources.

D-ID launches ‘Speaking Portrait,’ a way to turn photos into custom, photo-realistic videos

The company whose tech powered the sensational MyHeritage app that turned classic family photos into lifelike moving portraits is back with a new implementation of its technology: transforming still photographs into ultra-realistic videos in which the subject can say whatever you want.

D-ID’s Speaking Portraits may look like the notorious “deepfakes” that have made headlines over the past couple of years, but the underlying tech is actually quite different, and there’s no training required for basic functionality.

D-ID, which actually made its debut at TechCrunch’s Startup Battlefield in 2018 with a very different focus (scrambling facial recognition tech), launched its new Speaking Portraits product live at TechCrunch Disrupt 2021. The company showed off a number of use cases, including a multilingual TV anchor capable of expressing various emotions; virtual chatbot personas for customer support interactions; training courses for professional development; and interactive conversational video ad kiosks.

Both this new product and D-ID’s partnership with MyHeritage, which saw the latter company’s app briefly take over the top of Apple’s App Store charts, are obviously major departures from the company’s initial focus. As recently as May of last year, D-ID was still raising funding based on its earlier approach, but its partnership with MyHeritage debuted in February, followed by a similar deal with GoodTrust and a splashy tie-up with Warner Bros. on the Hugh Jackman film “Reminiscence” that allowed fans to insert themselves into its trailer.

D-ID’s pivot might seem more dramatic than most, but from a technical perspective, its new focus on bringing photos to life is not so far off from its de-identification software. D-ID CEO and co-founder Gil Perry told me that the company chose the new direction because it was apparent there’s a very large addressable market for this kind of application.

Big-name clients like Warner Bros., as well as an App Store-dominating app from a relatively unknown brand, would seem to support that assessment. Speaking Portraits, however, is aimed at clients both big and small, and allows anyone to generate a full HD video from a source image plus either recorded speech or typed text. D-ID is launching the product with support for English, Spanish and Japanese, and plans to add other languages as customers request them.

D-ID offers two basic categories of Speaking Portrait. The first is a “Single Portrait,” which can be made from just one still image: the head is animated while the rest of the image stays static. This option also works only with the photo’s existing background.

For a more convincing result, there’s a “Trained Character” option that requires submitting a 10-minute training video of the desired character, following guidelines supplied by the company. This has the advantage of working against a custom, swappable background, and includes preset animation options for the character’s body and hands.

Check out an example of a Speaking Portrait newscaster generated using the trained character method below to get a sense of how realistic it can be:

The demo that Perry showed us live at Disrupt today was created from a still photo of himself as a child. The photo was mapped to facial expressions performed by a human puppeteer of sorts, who also voiced the script that the Speaking Portrait version of Gil delivered in the exchange between his current and younger selves. You can see a video of how the speaker’s expressions were mirrored by the animated photo below:

Obviously, the ability to create photo-realistic videos from a single photo that can convincingly deliver any lines you want is a bit of a hair-raising prospect. We’ve already seen far-ranging debates about the ethics of deepfakes, as well as industry efforts to fingerprint and identify realistic but artificial AI-generated results.

Perry said at Disrupt that D-ID is “keen to make sure it’s used for good, not bad,” and that to achieve this, the company will issue a pledge at the end of October, alongside partners, that outlines their commitment to “transparency and consent” when it comes to using tech like Speaking Portraits. The purpose of that commitment is to ensure that “users aren’t confused about what they’re seeing and that people involved give their consent.”

While D-ID wants to make assurances in its terms of use and in its public position on misuse of this kind of tech, Perry says it “can’t do it alone,” which is why he’s calling on others in the ecosystem to join forces to prevent abuse.

Submit your pitch deck now for live feedback at TechCrunch Disrupt 2021 next week

The art of pitching is perhaps the most important skill a founder learns on the journey to unicorn status and beyond. And like any skill, it improves with critical feedback along the way from the investors on the other side of the table.

That’s why at every Disrupt we host the Pitch Deck Teardown, a panel of VCs who read and critique a series of pitch decks, offering feedback on everything from the overarching narrative to the mundane details of format, typography and color.

At TechCrunch Disrupt 2021 next week, I’m excited to be hosting Maren Bannon of January Ventures, Bling Capital’s Ben Ling and Vanessa Larco of NEA for our next iteration of this popular workshopping panel.

If you’re a founder and want to submit your deck for consideration, head on over to this trusty Google Form and upload a copy of your pitch deck in PDF format. Remember that it will be presented publicly, so make sure it’s appropriate for a live studio audience. We’ll select roughly six decks for inclusion in the event and notify the selected founders by email.

Come join us next week! And if you need tickets to Disrupt, we still have some available for all the virtual excitement across two stages and dozens of fireside chats and panels.