Singapore’s PixCap draws $2.8M to power web-based 3D design

A clutch of startups is trying to topple Adobe’s dominance in three-dimensional modeling and to do for 3D what Canva did for 2D. A freshly funded player is PixCap, which is entering the fray with a no-code, web-based 3D design tool.

Founded in 2020, Singapore-based PixCap just secured $2.8 million in a seed funding round. It was part of the seventh cohort of Surge, Sequoia Capital India and Southeast Asia’s accelerator, which led the round. Cocoon Capital, Entrepreneur First and angel investor Michael Gryseels also participated.

CJ Looi, CEO of PixCap, is building the company at a time when the web experience is undergoing what he called an “evolution from 2D to 3D.” Tech firms from Foodpanda and Alibaba to Shopee, TikTok, Meituan and Lazada have all started to incorporate 3D elements into their logos, ads and landing pages over the past two years, the founder pointed out in an interview.

These aren’t unique, sophisticated 3D assets developed for movies or video games; rather, they are simple designs like a brand mascot that are reusable across a firm’s marketing campaigns to “enhance user engagement,” suggested Looi, who previously worked on 3D vision and deep learning at robotics startup Dorabot, which is backed by Kai-Fu Lee’s Sinovation Ventures and Jack Ma-founded YF Capital.

“The trend is moving toward interaction,” the founder continued. “The benefit of 3D that 2D doesn’t provide is, in 3D, if you look at most TV ads, a lot of the content uses 3D animation. So something that can be used in your advertisement and your landing pages and apps is far more advantageous to a brand than having 2D somewhere and 3D elsewhere.”

But designers with 3D animation skills are “some of the rarest talents you can find,” observed Cyril Nie, co-founder and CTO at PixCap. Even when creators want to step up their careers by acquiring 3D skills, many are daunted by the complexity of legacy software like Adobe’s. Gojek spent “close to $200,000 on a branding agency just to create 3D icons for their apps, landing pages and social media,” Looi said, recalling a recent conversation with an executive from the Southeast Asian ride-hailing giant.

Image: 3D templates from PixCap

The costs of adopting 3D are prohibitive for most startups. PixCap’s vision is to make the transition to 3D cheaper in the way Canva made 2D designs more accessible. Instead of spending tens of thousands of dollars on hiring a designer for a one-off campaign, marketers can quickly put together a 3D social media graphic on PixCap using its library of templates. With a few clicks, those with no prior 3D knowledge can adjust the lighting and after-effects of objects, rotate them and change the colors to match their brands’ palettes.

The platform has over 30,000 users so far with around a third in North America, followed by top markets like India, Indonesia, and the U.K.

PixCap’s main differentiation from legacy players is its web-based and drag-and-drop interface; compared to younger online solutions, such as Y Combinator-backed Spline, it boasts a greater number of editable templates and “robust” 3D animation capabilities, which Looi argued is the natural next step after static 3D images.

To enhance its moat in templates, PixCap is working on a contributor marketplace that will eventually allow creators to easily sell their work, keeping the platform replenished with new 3D assets.

Typical of many SaaS startups today, PixCap’s team of 15 members is located across the world — India, Pakistan, the U.K., France and Russia. It plans to spend the proceeds from its new round on global expansion, hiring for its engineering and marketing teams, product development and community building.

Singapore’s PixCap draws $2.8M to power web-based 3D design by Rita Liao originally published on TechCrunch

Amazon SageMaker Ground Truth can now create virtual objects for AI model training

It takes massive amounts of data to train AI models. But sometimes, that data simply isn’t available from real-world sources, so data scientists use synthetic data to make up for that. In machine vision applications, that means creating different environments and objects to train robots or self-driving cars, for example. But while there are quite a few tools out there to create virtual environments, there aren’t a lot of tools for creating virtual objects.

At its re:Mars conference, Amazon today announced synthetics in SageMaker Ground Truth, a new feature for creating a virtually unlimited number of images of a given object in different positions and under different lighting conditions, as well as with different proportions and other variations.

With WorldForge, the company already offers a tool to create synthetic scenes. “Instead of generating whole worlds for the robot to move around, this is specific to items or individual components,”  AWS VP of Engineering Bill Vass told me. He noted that the company itself needed a tool like this because even with the millions of packages that Amazon itself ships, it still didn’t have enough images to train a robot.

“What Ground Truth Synthetics does is you start with the 3D model in a number of different formats that you can pull it in and it’ll synthetically generate photorealistic images that match the resolution of the sensors you have,” he explained. And while some customers today purposely distress or break the physical parts of a machine, for example, to take pictures of them to train their models — which can quickly become quite expensive — they can now distress the virtual parts instead and do that millions of times if needed.
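Amazon hasn’t published the internals of the feature, but the core idea, sweeping pose, lighting and scale parameters over a single 3D model to render many distinct training images, can be sketched as follows (the function, parameter names and value ranges here are all invented for illustration, not part of the SageMaker API):

```python
import random

def sample_render_params(n, seed=0):
    """Illustrative sketch: sample random pose, lighting and scale
    variations, each of which would drive one render of the 3D model."""
    rng = random.Random(seed)  # seeded so a run is reproducible
    params = []
    for _ in range(n):
        params.append({
            "yaw_deg": rng.uniform(0, 360),            # spin around vertical axis
            "pitch_deg": rng.uniform(-30, 30),         # tilt toward/away from camera
            "light_intensity": rng.uniform(0.2, 1.5),  # dim to bright
            "scale": rng.uniform(0.8, 1.2),            # proportion variation
        })
    return params

# One virtual object can yield an effectively unlimited image set:
variations = sample_render_params(1000)
```

The same loop is how “distressing the virtual parts millions of times” would work in principle: add a damage parameter to the dictionary and sample it like the others.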

He cited the example of a customer who makes chicken nuggets. That customer used the tool to simulate lots of malformed chicken nuggets to train their model. 

Vass noted that Amazon is also partnering with 3D artists to help companies that may not have access to that kind of in-house talent to get started with this service, which uses the Unreal Engine by default, though it also supports Unity and the open-source Open 3D Engine. Using those engines, users can then also start simulating the physics of how those objects would behave in the real world, too.

Tripolygon, a 3D modeling software developer, lets metaverse creators make their own 3D assets

There are a number of ways to make money in the metaverse, from participating in play-to-earn games and buying virtual real estate to creating 3D assets on metaverse platforms like Roblox, AltspaceVR and VRChat. A South Korean 3D modeling software developer called Tripolygon enables metaverse creators to make their own 3D assets using its Unity plugins.

“Creating 3D modeling is no longer a field for experts or software developers only,” Tripolygon CEO Jae-sik Hwang said in an interview with TechCrunch.

The Seoul-based startup recently raised a $5.2 million (6.5 billion KRW) round to accelerate its growth. The Series A funding brings its total raised to $5.7 million and values the company at approximately $25 million, Hwang said.

Tripolygon was founded in 2018 by Hwang, who previously worked as a senior engineer at Crytek, a German video game developer.

The 3D modeling software is like a set of tools for making 3D assets, Hwang explained, adding that users can make any kind of 3D asset, such as buildings, props, items and avatars, with those tools. Tripolygon offers 3D modeling software (UModeler), plugins (UModeler X) and more as a subscription service for metaverse creators.

If users install Tripolygon’s UModeler in Unity, the 3D modeling tools are added to the Unity editor, Hwang noted.

Most Unity engine users, such as programmers, game designers and visual effects (VFX) artists, need to make 3D assets. Still, it’s not easy for them to learn 3D modeling programs like 3ds Max, Maya and Blender, which were developed for 3D modeling experts and professional software developers. According to Hwang, with Tripolygon’s service they can create 3D assets through a familiar UX/UI in Unity without learning complicated software.

The startup’s main clients are Unity developers who use UModeler from the Unity Asset Store, Hwang said. To date, about 17,000 Unity developers use UModeler, and its monthly active users number close to 2,000, he noted.

Last October, Tripolygon became a Unity verified solutions partner, with Unity confirming that the software development kit behind UModeler, its real-time 3D model production plugin, is optimized for the Unity editor and developers.

The latest funding was led by KB Investment and We Ventures, with participation from Naver Z and Strong Ventures.

Gravity Sketch draws $33M for a platform to design, collaborate on and produce 3D objects

Platforms like Figma have changed the game when it comes to how creatives and other stakeholders in the production and product team conceive and iterate around two-dimensional designs. Now, a company called Gravity Sketch has taken that concept into 3D, leveraging tools like virtual reality headsets to let designers and others dive into and better visualize a product’s design as it’s being made; and the London-based startup is today announcing $33 million in funding to take its own business to the next dimension.

The Series A is coming as Gravity Sketch passes 100,000 users, including product design teams at firms like Adidas, Reebok, Volkswagen and Ford.

The funding will be used to continue expanding the functionality of its platform, with special attention going to expanding LandingPad, a collaboration feature it has built to support “non-designer” stakeholders to be able to see and provide feedback on the design process earlier in the development cycle of a product.

The round is being led by Accel, with GV (formerly known as Google Ventures) and previous backers Kindred Capital, Point Nine and Forward Partners (all from its seed round in 2020) also participating, along with unnamed individual investors. The company has now raised over $40 million.

Co-founded by Oluwaseyi Sosanya (CEO), Daniela Paredes Fuentes (CXO) and Daniel Thomas (CTO), the company grew out of a joint design/engineering degree that Sosanya and Paredes Fuentes took across the Royal College of Art and Imperial College in London, where the two met. They went on to work together in industrial design at Jaguar Land Rover. Across those and other experiences, the two found they kept encountering the same problems in the course of doing their jobs.

Much design in its earliest stages is often still sketched by hand, Sosanya noted, “but machines for tooling run on digital files.” That is just one of the steps where something is lost or complicated in translation: “From sketches to digital files is a very arduous process,” he said, involving perhaps seven or eight versions of the same drawing. Then technical drawings need to be produced, and then modeling for production, all complicated by the fact that the object is three-dimensional.

“There were so many inefficiencies, that the end result was never that close to the original intent,” he said. It wasn’t just design teams being involved, either, but marketing and manufacturing and finance and executive teams as well.

One issue is that we think in 3D, but sketching is a skill that has to be learned, and most digital drawing tools are designed around translating ideas onto a 2D surface. “People sketch to bring ideas into the world, but the problem is that people need to learn to sketch, and that leads to a lot of miscommunication,” Paredes Fuentes added.

Even sketches that a designer makes may not be true to the original idea. “Communications and conversations were happening too late in the process,” she said. The idea, she noted, is to bring in collaboration earlier so that input and potential changes can be snagged earlier, too, making the whole design and manufacturing process less expensive overall.

Gravity Sketch’s solution is a platform that taps into innovations in computer vision and augmented and virtual reality to let teams of people collaborate and work together in 3D from day one.

The approach that Gravity Sketch takes is to be “agnostic” in its approach, Sosanya said, meaning that it can be used from first sketch through to manufacturing; or files can be imported from it into whatever tooling software a company happens to be using; or designs might not go into a physical realm at any point at all: more recently, designers have been building NFT objects on Gravity Sketch.

One thing that it’s not doing is providing stress tests or engineering calculations, instead making the platform as limitless as possible as an engine for creativity. Bringing that too soon into the process would be “forcing boundaries,” Sosanya said. “We want to be as unrestricted as a piece of paper, but in the third dimension. We feed in engineering tools but that comes after you’ve proposed a solution.”

Although there are plenty of design software makers in the market today, there’s been relatively little built to address what Paredes Fuentes described as “spatial thinkers,” and so although companies like Adobe have made acquisitions like Allegorithmic to bring in 3D expertise, Adobe has yet to bring out a 3D design engine.

“It’s highly difficult to build a geometry engine from the ground up,” Sosanya said. “A lot haven’t dared to step in because it’s a very complex space because of the 3D aspect. The tech enables a lot of things but taking the approach we have is what has brought us success.” That approach is not just to make it possible to “step into” the design process from the start through a 3D virtual reality environment (it provides apps for iOS, Steam and Oculus Quest and Rift), but also to use computers and smartphones to collaborate together as well.

While a lot of the target is to bring tools to the commercial world, Gravity Sketch has also found traction in education, with around 170 schools and universities also using the platform to complement their own programs. It said that revenues in the last year have grown four-fold, although it doesn’t disclose actual revenue numbers. Some 70% of its customers are in the U.S.

The investment will be used to continue developing Gravity Sketch’s LandingPad collaboration features to better support the non-designer stakeholders essential to the design process, a reflection of Gravity Sketch’s belief that greater diversity in the design industry and more voices in the development process will result in better-performing products on the market. Companies including the likes of Miro and Figma have already disrupted the 2D space, enabling teams to co-create and collaborate quickly and inclusively in online workspaces, and now Gravity Sketch’s inclusive features are set to shake up the 3D environment. The funds will also be used to enhance the platform’s creative tools and scale the company’s sales, customer success and onboarding teams.

“In today’s climate, online collaboration tools have emerged as a necessity for businesses that want to stay agile and connect their teams in the most interactive, authentic and productive way possible,” said Harry Nelis, a partner at Accel, in a statement. “Design is no different, and we’ve been blown away by Gravity Sketch’s innovative, forward-looking suite of collaboration design tools that are already revolutionising workflows across numerous industries. Moreover, we expect that 3D design – coupled with the advent of virtual reality – will only grow in importance as brands race to build the emerging metaverse. The early organic traction and tier one brands that Oluwaseyi, Daniela and the Gravity Sketch team have already secured as customers are extremely impressive. We’re excited to partner with them and help them realise their dream of a more efficient, sustainable and democratic design world.”

Sneaker fit startup Neatsy.ai gets $1M seed after b2b pivot

Sneaker fit startup Neatsy.ai has snagged $1 million in seed funding after a b2b pivot. Investors in the round include Cabra VC, Flyer One VC and some unnamed business angels.

The US startup, which was founded back in March 2019 by Artem Semyanov (the former head of the machine learning team at Prism Labs), is now fully focused on selling its fit-tech to e-tailers via an SDK.

Neatsy’s approach relies on the 3D scanning technology found in the iPhone X or later (aka the TrueDepth camera used for Face ID), so currently it only works for a subset of iOS users.

The AI has also only been trained to provide personalized fit recommendations for a handful of sneaker brands at this stage: namely Nike, Jordan, Converse, Adidas, Reebok, Yeezy, Puma, New Balance, Asics, Under Armour and Vans.

But it’s eyeing expanding its fit recommendations with the seed funding.

For those who can tap into its tool, Neatsy claims the tech measures the foot with an accuracy of 1-2 millimetres (it also flags that all scanning takes place on the user’s device itself to maintain privacy).

For its b2b model, Neatsy is monetizing SDK usage based on the number of users, setting a price per monthly active user.

And for e-tailers it touts an average drop of 39% in shoe returns, as online shoppers are able to get a virtual gauge of the best fit and so can pick a better-sized pair of kicks for their feet, which it also claims translates into a 20% increase in ARPU (or an additional $0.5/month per user).
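Taken together, the two quoted figures pin down what baseline ARPU the claim implies (this back-of-the-envelope division is ours, not a number the company has stated):

```python
uplift = 0.5      # claimed extra revenue, $/month per user
increase = 0.20   # claimed relative ARPU increase (20%)

# If $0.50 represents a 20% lift, the implied pre-adoption ARPU is:
implied_baseline = uplift / increase
print(implied_baseline)  # 2.5 ($/month per user)
```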

So far it has two ecommerce marketplaces/fashion industry enterprises signed up to use its tech, driving its annual recurring revenue (ARR) to $120k over the past year.

It also cites a number of ongoing pilots that it hopes will convert into paying customers.

“All the paying clients are currently in Europe, we also see great potential in the further rollout to European markets… as Germany, the UK, France are home for great sportswear and fashion companies, as well as for large online fashion marketplaces. As well as thousands of SMB shops and brands for whom we are developing an entry-tier product,” says Semyanov.

“We are also looking at the US market, as we currently have several brands testing our tech (yet for free) but we are optimistic here. Probably you remember Nike was announcing the launch of the Nike Fit app a couple of years ago but somehow it never saw the light of day. We would be happy to help them with our tech (kidding, or not).”

As well as pushing to sign up more e-tailers, Neatsy plans to use the seed funding to dial up its product development. On that front, Semyanov says it’s looking to diversify the range of supported footwear to include formal shoes and children’s footwear.

He says it’s also trying to hone its AI to be able to pick up on specific foot conditions — such as detecting flat feet and hyperpronation.

Launching versions of the tech for the web and Android is also on the cards, as 3D depth cameras have proliferated over the past few years, with a number of Android smartphones also packing this kind of camera hardware (such as Samsung’s DepthVision tech).

The seed fund will also go on simplifying the product integration process and building a customer success team, per Semyanov.

Asked about potential orthopaedics use cases, Semyanov says Neatsy got interested in how the 3D measurements could help with conditions like pronation and supination after being contacted by the NFL’s San Francisco 49ers last year.

“[The 49ers were] interested in whether we could test the newcomers of pronation and supination, because it’s extremely important for future players. Turns out that Pronation is the main factor in injuries in running sports. We were unable to help SF49ers at the time, and never thought of this feature before. We started looking into the topic and after some testing and research we concluded that we actually could make it work,” he says. 

“If you look at the feet from the side, you could distinguish whether it is pronated or supinated, and we ‘look’ at the feet with the camera measuring depth and distance therefore we could teach the algorithm to identify that.”

“This is for sure going to be one of a few next developments for Neatsy’s product,” Semyanov adds. “We are currently consulting with the orthopediologist and it seems that this is a huge problem not only for professional sportsmen but for everybody.”

Smartphone cameras are already powering plenty of activity (and investor activity) in the digital health space — with a number of startups focused on using camera-based tracking to help with musculoskeletal disorders for example (e.g. Kaia Health).

Returning to retail use-cases, Semyanov says Neatsy.ai is also considering expanding to support fit for specialist kit such as ski and snowboard equipment — given that ski shoes tend to be pricey (“and it’s always painful for people to misunderstand the size”).

But in the near term he says the focus will be on adding fit recommendations for kids’ shoes, given the size/fit issue can be especially tough for little growing feet.

It does also have ecommerce-related ambitions beyond feet too — if it can get 3D-scanning to work for other retail use-cases.

“I always say that in the future all the sizes or size charts will become obsolete, and only ‘Your Size’ will exist for each customer. With the help of our algorithm, it is gonna happen with the shoes and we’ll see if it could be applied to clothes and eyewear as well,” he adds.

Apple and Snap partner JigSpace, the ‘Canva for 3D,’ raises a $4.7M Series A

When former Art Director Zac Duff started teaching a game development course online in 2015, he faced the same challenges that teachers around the globe have become all too familiar with after a pandemic-induced lockdown. So, he used his experience in 3D design to build a virtual reality classroom to make remote learning more engaging for his students. Instead of entering yet another Zoom lecture, the school gave students VR headsets to transport themselves to the Ancient Greek-inspired classroom that Duff built.

Still, Duff knew that this learning model couldn’t be easily scaled — most schools don’t have VR headsets to send out, and most teachers don’t have over a decade of game design experience to whip up a classroom with green fields and butterflies (yes, Duff made that). But he saw that there was potential for a user-friendly program that lets anyone create 3D presentations and share information in AR.

“Right at the center of it is knowledge transfer. It’s about one person giving knowledge to another person in a really effective way,” Duff told TechCrunch. He referenced products like Microsoft PowerPoint and Canva, which make it easy for the average user to create presentations and graphics that communicate their ideas. “We have those systems in 2D, but in 3D, we just didn’t have it, and it was a really complex, expensive technical process that you had to go through to build anything, and that stuck with me.”

Image Credits: JigSpace

Soon after, Duff took a Friday off from work to outline the company that would become JigSpace, which is poised to set the standard for knowledge-sharing in 3D. After launching in 2017, the JigSpace platform now has over 4 million users with a 4.8 average rating on the App Store. When you download the JigSpace app, you can interact in AR with 3D models that show how to fix a leaky sink, repair drywall, or even build a Lego Star Wars spacecraft. There are also educational models, or Jigs, that show how a piano works, the anatomy of the human eye, and even how the coronavirus spreads. The potential use cases for JigSpace are expansive. Duff says he hopes to work with manufacturing companies to have them make Jigs of their products. That way, if you want to replace your AC filter, say, you can look at a 3D model in AR rather than a black-and-white 2D drawing in an instruction booklet.

Today, JigSpace announced that it raised $4.7 million in Series A funding led by Rampersand, with participation from Investible and new investors including Vulpes and Roger Allen AM. The JigSpace app is free to use, and anyone can combine presets and templates of 3D modeled objects to create their own Jigs; the more tech-savvy among us can upload up to 30 MB of files to make more customized Jigs on the free version. But the money-maker for JigSpace is its Jig Pro platform, which is designed for commercial businesses and manufacturers. Jig Pro’s subscription for individuals is $49 per month, while the price of the enterprise offering isn’t listed online.

Image Credits: JigSpace

“The best area for us has been in durable manufacturing, because almost all manufacturing products have CAD files, so the 3D already exists,” said Duff. “Then, we’re able to work with those companies to give them the tools to create knowledge material around their products.”

Right after JigSpace launched its Pro version, it was featured in Apple’s iPhone 12 Keynote, demonstrating how the iPhone 12’s LiDAR scanner and 5G capabilities could be used to save time and money in manufacturing. JigSpace also partnered with Snapchat to create a Lens that allows you to scan kitchen items to reveal 3D Jigs that show how stuff works, from your microwave to your coffee maker.

Jig Pro’s customer base has grown 40% month-on-month since it launched in mid-2020, with the average user logging into the app at least once per day. Companies like Verizon, Volkswagen, Medtronic, and Thermo Fisher Scientific use JigSpace to develop 3D models to present to stakeholders, customers, and remote colleagues. Especially as products like Apple’s Capture emerge, it will become even easier for people to import their own 3D models into JigSpace.

Despite its commercial potential, it’s important to Duff that JigSpace always retains a free version that makes learning through AR easy.

“We want to make sure that all of the people with information they want to share, those are the people we serve, not just the technical people at the top,” Duff says. “From the beginning, my co-founder Numa Bertron and I always wanted to have a free version. Knowledge should be accessible to people in the best way possible, and there’s no reason why it shouldn’t be.”

Epic Games buys photogrammetry software maker Capturing Reality

Epic Games is quickly becoming a more dominant force in gaming infrastructure M&A after a string of recent purchases made to bulk up their Unreal Engine developer suite. Today, the company announced that they’ve brought on the team from photogrammetry studio Capturing Reality to help the company improve how it handles 3D scans of environments and objects.

Terms of the deal weren’t disclosed.

Photogrammetry involves stitching together multiple photos or laser scans to create 3D models of objects that can subsequently be exported as singular files. As computer vision techniques have evolved to minimize manual fine-tuning and adjustments, designers have begun to lean more heavily on photogrammetry to import real-world environments into their games.

Using photogrammetry can help studio developers create photorealistic assets in a fraction of the time it would take to create a similar 3D asset from scratch. It can be used to quickly create 3D assets of everything from an item of clothing, to a car, to a mountain. Anything that exists in 3D space can be captured and as game consoles and GPUs grow more capable in terms of output, the level of detail that can be rendered increases as does the need to utilize more detailed 3D assets.
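Production photogrammetry tools like Capturing Reality’s solve a much harder multi-view version of this problem, but the underlying geometry starts from a simple two-view relation: the apparent shift (disparity) of the same point between two camera positions encodes its depth. A toy stereo example, with all numbers invented for illustration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic two-view relation: depth = f * B / d.
    focal_px: camera focal length, in pixels
    baseline_m: distance between the two camera positions, in metres
    disparity_px: horizontal shift of the same point between the two images
    """
    return focal_px * baseline_m / disparity_px

# A point that shifts 50 px between two shots taken 0.5 m apart,
# with a 1000 px focal length, sits 10 m from the cameras:
print(depth_from_disparity(1000, 0.5, 50))  # 10.0
```

Nearby points shift more than distant ones, which is why repeating this over thousands of matched features across many photos recovers a dense 3D model.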

The Bratislava-based studio will continue operating independently even as its capabilities are integrated into Unreal. Epic announced some reductions to Capturing Reality’s pricing, dropping the cost of a perpetual license from €15,000 to $3,750. In FAQs on the studio’s site, the company notes that it will continue to support non-gaming clients moving forward.

In 2019, Epic Games acquired Quixel which hosted a library of photogrammetry “mega scans” that developers could access.


3D model provider CGTrader raises $9.5M Series B led by Evli Growth Partners

3D model provider CGTrader has raised $9.5M in a Series B round led by Finnish VC fund Evli Growth Partners, alongside previous investors Karma Ventures and LVV Group. Ex-Rovio CEO Mikael Hed also invested and joins as board chairman. We first covered the Vilnius-based company when it raised €200,000 from Practica Capital.

Founded in 2011 by 3D designer Marius Kalytis (now COO), CGTrader has become a significant 3D content provider; it even claims to be the world’s largest. Its marketplace hosts 1.1M 3D models from 3.5M 3D designers and serves 370,000 businesses, including Nike, Microsoft, Made.com, Crate & Barrel and Staples.

Unlike photos, 3D models can be used to create both static images and AR experiences, so that users can see how a product might fit in their home. The company is also looking to invest in automating 3D modeling, QA and asset management processes with AI.

Dalia Lasaite, CEO and co-founder of CGTrader said in a statement: “3D models are not only widely used in professional 3D industries, but have become a more convenient and cost-effective way of generating amazing product visuals for e-commerce as well. With our ARsenal enterprise platform, it is up to ten times cheaper to produce photorealistic 3D visuals that are indistinguishable from photographs.”

CGTrader now plans to consolidate its position and further develop its platform.

The company competes with TurboSquid (which was recently acquired for $75 million by Shutterstock) and Threekit.

Google shutting down Poly 3D content platform

Google is almost running out of AR/VR projects to kill off.

The company announced today in an email to Poly users that it will be shutting down the 3D-object creation and library platform “forever” next year. The service will shut down on June 30, 2021, and users will no longer be able to upload 3D models to the site after April 30, 2021.

Poly was introduced as a 3D creation tool optimized for virtual reality. Users could easily create low-poly objects with in-VR tools. The software was designed to serve as a lightweight way to create and view 3D assets that could in turn end up in games and experiences, compared to more art and sculpting-focused VR tools like Google’s Tilt Brush and Facebook’s (now Adobe’s) Medium software.

Google has already discontinued most of its AR/VR plays, most notably the Daydream mobile VR platform.

The AR/VR industry’s initial rise prompted plenty of 3D-centric startups to bet big on creating or hosting a library of digital objects. As investor enthusiasm has largely faded and tech platforms hosting AR/VR content have shuttered those products, it’s less clear where the market is for this 3D content for the time being.

Users that have uploaded objects to Poly will be able to download their data and models ahead of the shutdown.

DroneDeploy teams with Boston Dynamics to deliver inside-outside view of job site

DroneDeploy, a cloud software company that uses drone footage to help industries like agriculture, oil and gas, and construction get a bird’s-eye view of a site and build a 3D picture, announced a new initiative today that combines drone photos with cameras on the ground, or even ground robots from a company like Boston Dynamics, for what it is calling a 360 Walkthrough.

Up until today’s announcement, DroneDeploy could use footage from any drone to get a picture of what a site looked like from the outside, uploading those photos and stitching them together into a 3D model that is accurate to within an inch, according to DroneDeploy CEO Mike Winn.

Winn says that while there is great value in getting this type of view of the outside of a job site, customers were hungry for a total picture that included inside and out, and the platform, which simply processes photos transmitted from drones, could be adapted fairly easily to accommodate photos coming from cameras on other devices.

“Our customers are also looking to get data from the interiors, and they’re looking for one digital twin, one digital reconstruction of their entire site to understand what’s going on to share across their company with the safety team and with executives that this is the status of the job site today,” Winn explained.

He adds that this is even more important during COVID when access to job sites has been limited, making it even more important to understand the state of the site on a regular basis.

“They want fewer people on those job sites, only the essential workers doing the work. So for anyone who needs information about the site, if they can get that information from a desktop or the 3D model or a kind of street view of the job site, it can really help in this COVID environment, but it also makes it much more efficient,” Winn said.

He said that while companies could combine this capability with fixed cameras on the inside of a site, they don’t give the kind of coverage a ground robot could, and the Boston Dynamics robot is capable of moving around a rough job site with debris scattered around.

DroneDeploy bird's eye view of job site showing path taken through the site.

Image Credits: DroneDeploy

While Winn sees the use of the Boston Dynamics robot as more of an end goal, he says that more likely for the immediate future, you will have a human walking through the job site with a camera to capture the footage to complete the inside-outside picture for the DroneDeploy software.

“All customers already want to adopt robots to collect this data, and you can imagine a Boston Dynamics robot [doing this], but that’s the end state of course. Today we’re supporting the human walk-through as well, a person with a 360 camera walking through the job site, probably doing it once a week to document the status of the job site,” he said.

DroneDeploy launched in 2013 and has raised over $100 million, according to Winn. He reports that his company has over 5,000 customers, with drone flight time increasing 2.5x year over year as more companies adopt drones as a way to cope with COVID.