You may need more than one pitch deck

In a lot of the pitch deck teardowns, I get stumped by a certain slide, even though I am 99% sure that when a founder uses it to pitch a VC, they will give a little bit of additional context that “unlocks” the slide and helps it make sense.

The specific slide that inspired this post was from Forethought’s Series C deck, which helped the TechCrunch Battlefield-winning company raise $65 million, but I’d be lying if I said that this was an uncommon problem.

Slide 6 from Forethought’s pitch deck. Image Credits: Forethought

The issue with this slide is that while it looks very pretty, it doesn’t explain itself. In addition, it is wedged between two slides about the company’s customers and its value proposition. That left me confused because now there were two ways of interpreting this slide: It’s possible that Forethought was named one of Forbes’ next billion-dollar startups. Or it has all of the companies on said list as customers and is flexing in the same way that some companies will say “200 of the S&P 500 companies use our product.”

A quick search later confirmed that the former was true — Forethought is on the Forbes list — but if I have to conduct a search to make sense of a slide, that’s not a good sign.

It’s such a great example, though, of a slide that looks good and could work great with just a few words of voice-over. The thing is, while most VC pitch decks are presentation decks, that is far from the only context in which you’ll be using your deck. In fact, there are at least four:

  1. The teaser deck.
  2. The send-ahead deck.
  3. The presentation deck.
  4. The leave-behind deck.

Let’s talk through the differences and similarities between your pitch decks and take a closer look at what each deck needs to do in each context. I’ll also show how you can avoid having to build and maintain 9,000 different decks.

Adobe launches Creative Cloud Express

Adobe today launched Creative Cloud Express, a new mobile and web app that brings some of the best features of the company’s sprawling Creative Cloud Suite and Acrobat PDF tools into a single application to help users quickly create anything from social media posts to promotional posters and videos.

Using a template-first approach with built-in access to stock images and other assets, Creative Cloud Express is meant to be far more accessible than the individual Creative Cloud apps. The app will come in both a free version and a paid $9.99/month edition with additional capabilities and a library of more complex templates. Access to the new application will also be included in Adobe’s Creative Cloud All Apps and flagship single-app plans.

Besides the web app, a free app is now also available in Apple’s App Store, Google Play and the Microsoft Store.

The general idea behind Creative Cloud Express is to give non-professionals the tools they need to bring their vision to life. As Adobe’s Ashley Still noted, the company has seen a lot of growth from non-professional users in recent years. But while a lot of these users may initially think they want the precision and control of a full Creative Cloud app, the reality is that what they often really want is a fast and easy way to accomplish the same tasks.

“What we’re doing with Creative Cloud Express is we’re taking all the learnings from our breadth of web and mobile apps, as well as our core Creative Cloud technology — whether it’s Photoshop and imaging or video — and we’re bringing it into a unified offering called Creative Cloud Express,” Still said. “This is really for people who are focused on outcome and not process. They don’t want to start with a blank page, they want to start with an image from the 175 million strong Adobe Stock library. They don’t want to create a font, they want to get access to one of our 20,000 amazing fonts from the Adobe Font library. They don’t want to go to multiple applications to create a flyer and then create a PDF so they can print it. They want to be able to do it all in one place.”

In practice, this means you get access to a vast library of templates to get started, but also tools to quickly remove the background from an image or apply Photoshop-style filters and effects to these images. Thanks to an integration with Creative Cloud Libraries, users will also be able to take assets from Photoshop and Illustrator that a colleague may have created for them and reuse them in the Creative Cloud Express app.

There are also tools to convert videos to GIFs, convert documents to PDFs and more. One interesting aspect here is the integration of Adobe Stock, which doesn’t feature a free plan but which is integrated (with some limitations) into the free version of Creative Cloud Express. Free users will get access to about 1 million images and other assets. Premium plan users will get access to 175 million Adobe Stock photos, 20,000 fonts, and Photoshop Express and Premiere Rush.

“Less is more in Creative Cloud Express,” she said. “You don’t need to be able to do everything in Photoshop. Like you don’t need neural filters, but you do need to be able to do a few simple things like remove a background or make simple edits to an image. With Acrobat, you don’t need to password protect a PDF, you need to be able to create a PDF, edit a PDF. A lot of the simplicity comes with our own editing of what are the few most important things that people need to achieve? What are their intentions?”

In some ways, Creative Cloud Express feels a bit like Spark, Adobe’s tool for building social graphics, short videos and websites, on steroids. Adobe wouldn’t say so, but the comparison isn’t far off: Creative Cloud Express shares a lot of Spark’s user interface design and overall philosophy. Indeed, if you went to adobe.com/express before the launch, it would redirect you to the Spark homepage.

But as Still noted, the company isn’t seeing this as a replacement for applications like Spark, Photoshop Express or Premiere Rush, all of which are also meant to take some of Adobe’s core features and AI tools and make them more accessible.

The target audience is also a bit wider for Creative Cloud Express. Adobe wants it to become the go-to content creation tool for anybody from students to small business owners.

“Everyone has a story to tell and it’s our mission to empower everyone to express their ideas,” said Scott Belsky, chief product officer and executive vice president, Creative Cloud, Adobe. “In this unique time, where millions of people are building a personal and professional brand, we’re excited to launch Creative Cloud Express as a simple, template-based tool that unifies the creation, collaboration and sharing process so anyone can create with ease.”

Adobe expands Acrobat Web, adds PDF text and image editing

For the longest time, Acrobat was Adobe’s flagship desktop app for working with — and especially editing — PDFs. In recent years, the company launched Acrobat on the web, but it was never quite as fully featured as the desktop version, and one capability a lot of users were looking for, editing text and images in PDFs, remained a desktop-only feature. That’s changing. With its latest update to Acrobat on the web, Adobe is bringing exactly this ability to its online service.

“[Acrobat Web] is strategically important to us because we have more and more people working in the browser,” Todd Gerber, Adobe’s VP for Document Cloud, told me. “Their day begins by logging into whether it’s G Suite or Microsoft Office 365. And so we want to be in all the surfaces where people are doing their work.” The team first launched the ability to create and convert PDFs, but as Gerber noted, it took a while to get to the point where being able to edit PDFs in a performant and real-time way was possible. “We could have done it earlier, but it wouldn’t have been up to the standards of being fast, nimble and quality.” He specifically noted that working with fonts was one of the more difficult problems the team faced in bringing this capability online.

He also noted that even though we tend to think of PDF as an Adobe format, it is an open standard and lots of third-party tools can create PDFs. That large ecosystem, with the potential for variations between implementations, also makes it more difficult for Adobe to offer editing capabilities.

With today’s launch, Adobe is also introducing a couple of additional browser-based features: protecting PDFs, splitting them in two and merging multiple PDFs. In addition, after working with Google last year to offer a handful of Acrobat shortcuts using the .new domain, Adobe is now launching a set of new shortcuts like EditPDF.new. The company plans to roll out more of these over the course of the next year.

In total, Adobe says, the company saw about 10 million clicks on its existing shortcuts, which just goes to show how many people try to convert or sign PDFs every day.

As Gerber noted, a lot of potential users don’t necessarily think of Acrobat first. Instead, what they want to do is compress a PDF or convert it. Acrobat Web and the .new domains help the company bring a new audience to the platform, he believes. “It’s unlocking a new audience for us that didn’t initially think of Adobe. They think about PDFs, they think about what they need to do with them,” he said. “So it’s allowing us to expand our customer base by being relevant in the way that they’re looking to discover and ultimately transact. Our journey with Acrobat web actually started with that notion: let’s go after the non-branded searches.”

Adobe, of course, funnels branded searches, where users are explicitly looking for Acrobat, to the Acrobat desktop app. But for the more casual user, it brings them to Acrobat Web, where they can easily perform whatever action they came for without even signing up for the service.

Adobe’s Document Services make PDFs easier to work with for developers

Over the course of the last year, Adobe has quietly continued to expand its tools for helping developers use PDFs in their applications. In April, for example, the company launched a couple of SDKs, which are now known as the PDF Embed API and PDF Tools API, and with that update, the company also launched its Adobe Document Services platform. The idea here is to provide developers with easy-to-use tools to build PDFs into their applications and workflows. Today, the company is announcing a new partnership with Microsoft that brings Document Services to Power Automate, Microsoft’s low-code workflow automation platform.

“We had this vision about a year and a half back where we said, ‘how about bringing the best of what we provide in our own apps to third-party apps as well?’ ” Vibhor Kapoor, Adobe’s SVP for its Document Cloud business, told me. “That’s kind of the simple mindset where we said: let’s decompose the capabilities of Acrobat as microservices [and] as APIs and give it to developers and publishers because frankly, a PDF for developers and publishers has been a pain for lack of a better word. So we brought these services to life.”

The team worked to make embedding PDFs into web experiences better, for example (and Kapoor frankly noted that previously, the developer experience had always been “very suboptimal” and that the user experience, too, was not always intuitive). Now, with Document Services and the Embed API, it’s just a matter of a few lines of JavaScript to embed a PDF.
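To get a sense of what that looks like in practice, here is a minimal sketch modeled on Adobe’s published PDF Embed API examples; the client ID and the PDF URL are placeholders you would swap for your own.

```html
<!-- Container the viewer renders into -->
<div id="adobe-dc-view"></div>

<!-- Adobe's View SDK script -->
<script src="https://documentcloud.adobe.com/view-sdk/main.js"></script>
<script>
  // The SDK fires this event once it has finished loading.
  document.addEventListener("adobe_dc_view_sdk.ready", function () {
    // clientId is a placeholder; Adobe issues a real one per domain.
    var adobeDCView = new AdobeDC.View({
      clientId: "YOUR_CLIENT_ID",
      divId: "adobe-dc-view",
    });

    // Point the viewer at a publicly reachable PDF (placeholder URL).
    adobeDCView.previewFile({
      content: { location: { url: "https://example.com/sample.pdf" } },
      metaData: { fileName: "sample.pdf" },
    }, { embedMode: "SIZED_CONTAINER" });
  });
</script>
```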

Kapoor acknowledged that exposing these features in SDKs and APIs was a bit of a challenge, simply because the teams didn’t originally have to worry about this use case. But on top of the technical challenges, this was also a question of changing the overall mindset. “We never had a very developer-oriented offering in the past and that means that we need to build a team that understands developers, and figure out how we package these APIs and make them available,” he noted.

The new Power Automate integration brings over 20 new PDF-centric actions from the PDF Tools API to Microsoft’s platform. These will allow users to do things like create PDFs from documents in a OneDrive folder, for example, convert images to PDFs or apply optical character recognition to PDFs.
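To give a flavor of what those actions wrap, here is a rough sketch of that same OCR operation called directly through the PDF Tools API’s Node.js SDK, based on Adobe’s published samples; the credentials file and file paths are placeholders, and exact names can vary between SDK versions.

```js
// Sketch of an OCR call against the PDF Tools API, modeled on Adobe's
// Node.js SDK samples. The credentials file and paths are placeholders.
const PDFToolsSdk = require("@adobe/documentservices-pdftools-node-sdk");

// Build credentials from the JSON file Adobe issues for the API.
const credentials = PDFToolsSdk.Credentials.serviceAccountCredentialsBuilder()
  .fromFile("pdftools-api-credentials.json")
  .build();

// An execution context carries those credentials into each operation.
const executionContext = PDFToolsSdk.ExecutionContext.create(credentials);

// Create the OCR operation and feed it a scanned, image-only PDF.
const ocrOperation = PDFToolsSdk.OCR.Operation.createNew();
ocrOperation.setInput(PDFToolsSdk.FileRef.createFromLocalFile("scanned.pdf"));

// Run the operation in the cloud and save the searchable result locally.
ocrOperation
  .execute(executionContext)
  .then((result) => result.saveAsFile("scanned-ocr.pdf"))
  .catch((err) => console.error(err));
```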

Since Adobe launched the platform, about 6,000 developers have started using it, and Kapoor tells me that he is seeing “significant growth” in the number of API calls being made. From a business perspective, adding Power Automate will also likely function as a funnel for getting new developers on board.

Disney Imagineering has created autonomous robot stunt doubles

For over 50 years, Disneyland and its sister parks have been a showcase for increasingly technically proficient versions of its “animatronic” characters. First pneumatic and hydraulic and more recently fully electronic — these figures create a feeling of life and emotion inside rides and attractions, in shows and, increasingly, in interactive ways throughout the parks.

The machines they’re creating are becoming more active and mobile in order to better represent the wildly physical nature of the characters they portray within the expanding Disney universe. And a recent addition to the pantheon could change the way that characters move throughout the parks and influence how we think about mobile robots at large.

I wrote recently about the new tack Disney was taking with self-contained characters that felt more flexible, interactive and, well, alive than ‘static’, pre-programmed animatronics. That has done a lot to add to the convincing nature of what is essentially a very limited robot.

Traditionally, most animatronic figures cannot move from where they sit or stand, and are pre-built to exacting show specifications. The design and programming phases of the show are closely related, so that the hero characters are efficient and durable enough to run hundreds of times a day, every day, for years.

The Na’vi Shaman from Pandora: The World of Avatar, at Walt Disney World, represents the state of the art of this kind of figure.

However, with the expanded universe of Disney properties including more and more dynamic and heroic figures by the year, it makes sense that they’d want to explore ways of making the robots that represent those properties in the parks more believable and active.

That’s where the Stuntronics project comes in. Built out of a research experiment called Stickman, which we covered a few months ago, Stuntronics are autonomous, self-correcting aerial performers that make on-the-go corrections to nail high-flying stunts every time. Basically robotic stuntpeople, hence the name.

I spoke to Tony Dohi, Principal R&D Imagineer, and Morgan Pope, Associate Research Scientist at Disney, about the project.

“So what this is about is the realization we came to after seeing where our characters are going on screen,” says Dohi, “whether they be Star Wars characters, or Pixar characters, or Marvel characters or our own animation characters, is that they’re doing all these things that are really, really active. And so that becomes the expectation our park guests have that our characters are doing all these things on screen — but when it comes to our attractions, what are our animatronic figures doing? We realized we have kind of a disconnect here.”

So they came up with the concept of a stunt double for the ‘hero’ animatronic figures that could take their place within a show or scene to perform more aggressive maneuvering, much in the same way a double replaces a valuable and delicate actor in a dangerous scene.

The Stuntronics robot features on-board accelerometer and gyroscope arrays supported by laser range finding. In its current form, it’s humanoid, taking on the size and shape of a performer that could easily be imagined clothed in the costume of, say, one of The Incredibles, or someone on the Marvel roster. The bot is able to be slung from the end of a wire to fly through the air, controlling its pose, rotation and center of mass to not only land aerial tricks correctly but to do them on target while holding heroic poses in midair.

One use of this could be mid-show in an attraction. For relatively static shots, hero animatronics like the Shaman or new figures Imagineering is constantly working on could provide nuanced performances of face and figure. Then comes a transition to a scene that requires dramatic, unfettered action, and boom, a Stuntronics double could fly across the space on its own, calculating trajectories and striking poses with its on-board hardware, hitting a target dead on every time. Cue the reset for the next audience.

This focus on creating scenarios where animatronics feel more ‘real’ and dynamic is at work in other areas of Imagineering as well, with autonomous rolling robots and — some day — the holy grail of bipedal walking robots. But Stuntronics fills one specific gap in the repertoire of a standard Animatronic figure — the ability to convince you it can be a being of action and dynamism.

“So often our robots are in the uncanny valley where you got a lot of function, but it still doesn’t look quite right. And I think here the opposite is true,” says Pope. “When you’re flying through the air, you can have a little bit of function and you can produce a lot of stuff that looks pretty good, because of this really neat physics opportunity — you’ve got these beautiful kinds of parabolas and sine waves that just kind of fall out of rotating and spinning through the air in ways that are hard for people to predict, but that look fantastic.”

The original BRICK

Like many of the solutions Imagineering comes up with for its problems, Stuntronics started out as a research project without a real purpose. In this case, it was called BRICK (Binary Robotic Inertially Controlled bricK). Basically, a metal brick with sensors and the ability to change its center of mass to control its spin to hit a precise orientation at a precise height – to ‘stick the landing’ every time.
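The physics at work here is conservation of angular momentum: in flight, the product of moment of inertia and spin rate stays fixed, so pulling mass inward speeds the spin up while extending it slows the spin down. As a toy illustration of the idea, and emphatically not Disney’s actual controller, here is a sketch of a tumbling body that switches between a tucked and an extended pose to hit a target number of flips by the time it lands; all the constants are made up for the example.

```js
// Toy illustration of BRICK-style spin control (not Disney's controller):
// a body in flight conserves angular momentum L = I * omega, so switching
// between a tucked (small I) and extended (large I) pose changes spin rate.
const G = 9.81;                 // gravity, m/s^2
const L = 8.0;                  // angular momentum, kg*m^2/s (fixed in flight)
const I_TUCK = 1.0;             // moment of inertia when tucked, kg*m^2
const I_EXTEND = 4.0;           // moment of inertia when extended, kg*m^2
const TARGET = 2 * 2 * Math.PI; // land after exactly two full flips, radians

function simulate(dropHeight, dt = 0.001) {
  let h = dropHeight, v = 0, angle = 0;
  while (h > 0) {
    // Time left until impact, from h = v*t + G*t^2/2.
    const tLeft = (-v + Math.sqrt(v * v + 2 * G * h)) / G;
    // Bang-bang control: tuck (spin fast) while we are behind the pace the
    // extended (slow) spin could still make up; otherwise extend.
    const needRate = (TARGET - angle) / tLeft;
    const I = needRate > L / I_EXTEND ? I_TUCK : I_EXTEND;
    angle += (L / I) * dt; // integrate spin
    v += G * dt;           // integrate fall
    h -= v * dt;
  }
  return angle;
}

console.log(`landed at ${(simulate(20) / (2 * Math.PI)).toFixed(3)} flips (target: 2)`);
```

The real robot closes the same kind of loop, only with live gyroscope and laser rangefinder readings instead of a perfect physics model.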

From the initial BRICK, Disney moved on to Stickman, an articulated version of the device that could now more aggressively control the rotation and orientation of the device. Combined with some laser rangefinders you had the bones of something that, if you squint, could emulate a ‘human’ acrobat.

“Morgan and I got together and said, maybe there’s something here, we’re not really sure. But let’s poke at it in a bunch of different directions and see what comes out of it,” says Dohi.

But the Stickman didn’t stick for long.

“When we did the BRICK, I thought that was pretty cool,” says Pope. “And then by the time I was presenting the BRICK at a conference, Tony [Dohi] had helped us make Stickman. And I was like, well, this isn’t cool anymore. The Stickman is what’s really cool. And then I was down in Australia presenting Stickman and I knew we were doing the full Stuntronic back at R&D. And I was like, well, this isn’t cool anymore,” he jokes.

“But it has been so much fun. Every step of the way I think, oh, this is blowing my mind. But they just keep pushing… so it’s nice to have that challenge.”

This process has always been one of the fascinating things to me about the way that Imagineering works as a whole. You have people that are enabled by management and internal structure to spool out the threads of a problem, even though you’re not really sure what’s going to come out of it. The biggest companies on the planet have similar R&D departments in place — though the ones that make a habit of disconnecting them from a balance sheet, like Apple, are few and far between, in my experience. Typically, so much of R&D is tied to a profit/loss spreadsheet so tightly that it’s really, really difficult to incubate something long enough to see what comes of it.

It’s the ability to have vastly different specialties, like math, physics, art and design, put ideas on the table, sift through them and say, hey, we have this storytelling problem on one hand and this research project on the other: if we drill down on this a bit more, would it serve the purpose? As long as the storytelling always remains the North Star, you end up having a guiding light to drag you through the pile, and you come out the other end holding a couple of things that could be coupled to solve a problem.

“We’re set up to do the really high risk stuff that you don’t know is going to be successful or not, because you don’t know if there’s going to be a direct application of what you’re doing,” says Dohi. “But you just have a hunch that there might be something there, and they give us a long leash, and they let us explore the possibilities and the space around just an idea, which is really quite a privilege. It’s one of the reasons why I love this place.”

This process of play and iteration and pursuit of a goal of storytelling pops up again and again with Imagineering. It’s really a cluster of very smart people across a broad spectrum of disciplines that are governed by a central nervous system of leaders like Jon Snoddy, the head of R&D at the studios, who help to connect the dots between the research side and the other areas of Imagineering that deal with the Parks or interactive projects or the digital division.

There’s an economy and lack of ego to the organization that enables exploration without wastefulness and organically curtails the pursuit of things not in service to the story. In my time exploring the workings of Imagineering I’ve often found that there is a significant disconnect between how fascinating the process is and how well the organization communicates the cleverness of its solutions.

The Disney Research white papers are certainly infinitely fascinating to people interested in emerging tech, but the points of integration between the research and the practical applications in the parks often remain unexplored. Still, they’re getting better at understanding when they’ve really got something they feel is killer and thinking about better ways to communicate that to the world.

Indeed, near the end of our conversation, Dohi says he’s come up with a solid sound bite and I have him give me his best pitch.

“One of our goals of Stuntronics is to see if we can leap across the uncanny valley.”

Not bad.