Hexa raises $20.5M to turn images into 3D objects for VR, AR and more

Hexa, a 3D asset visualization and management platform, today announced that it closed a $20.5 million Series A round from Point72 Ventures, Samurai Incubate, Sarona Partners and HTC. CEO and co-founder Yehiel Atias said that the cash will be put toward product development and expanded customer acquisition efforts well into 2023.

HTC’s participation in the round might seem curious. After all, the company was once one of the world’s largest smartphone manufacturers — not exactly entrenched in the 3D modeling space. But HTC’s focus has increasingly shifted over the years from mobile to VR, and it evidently sees Hexa as aligned with its current — and perhaps even future — lines of business.

“The new funding will be used to support our existing customer expansion and keep up with the flow of new customers that are being onboarded. We conducted an early round due to tripling our customer base in 2023,” Atias told TechCrunch in an email interview.

Hexa’s roots can be traced back to 2015, when Atias was working in the retail industry for brands like Walmart and H&M. He — like most people — quickly came to realize that the dressing room experience translated poorly to e-commerce. Atias co-launched Hexa with Ran Buchnik and Jonathan Clark first as a virtual dressing room platform aimed at bridging the massive disconnect. But he later pivoted the business into a general-purpose tech stack for VR, AR and 3D-model-viewing experiences.

“With a combination of AI-powered technology and human artistry, Hexa can help brands and retailers to create, manage and distribute 3D models that can be used for a variety of use cases, including 3D models, AR experiences, lifestyle photos, 360-degree views and promotional videos,” Atias said. “The major value for our client is that they gain the ability to scale quality 3D projects in a short amount of time. They also can manage and assess the impact of their 3D content through our platform.”


Image Credits: Hexa

Lest you think it’s a new idea, there’s an entire cohort of companies out there developing platforms for 3D asset management. Mark Cuban and former Oculus CEO Brendan Iribe recently backed VNTANA, whose product allows users to view shoppable objects in AR and try on items virtually. South Korea’s RECON Labs helps shoppers visualize products by creating 3D models in AR. Emperia helps brands like Bloomingdale’s build shopping experiences in VR. Even Snap’s gotten in the game recently, launching an AR toolkit to turn photos into 3D assets.

So what differentiates Hexa? Atias says it’s the expertise on — and robustness of — its service. Hexa customers can upload an image or have Hexa’s API automatically fetch images from a website. Then the company’s engineers, using AI-assisted tools, create 3D assets and models from the images.
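Hexa hasn't published its API, so purely as a hypothetical sketch, the "fetch images from a website" step might amount to collecting image sources from a product page's HTML:

```python
from html.parser import HTMLParser

# Purely hypothetical illustration of the image-fetching step described
# above -- Hexa's real API is proprietary and not public.
class ImageCollector(HTMLParser):
    """Collects the src attribute of every <img> tag in an HTML document."""

    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

# A stand-in product page; a real crawler would download this HTML.
page = '<html><body><img src="/chair-front.jpg"><img src="/chair-side.jpg"></body></html>'
collector = ImageCollector()
collector.feed(page)
```

From there, the collected sources would presumably be handed to Hexa's artists and AI-assisted tools to build the 3D asset.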

Throughout the process, customers can provide feedback directly on the models, ask questions of Hexa’s engineers and prep the models for use on the web or in AR and VR experiences. Hexa also provides a range of 3D viewer apps for customers to use, including ones for the web and AR, plus code that can be used to insert models into social media posts and video games.

“Since we need to comply with the clients’ server requirements and verify our 3D assets are identical to the source imagery we’ve been provided, a lot of manpower needs to be invested to answer the scale of Hexa’s production,” Clark said via email. “A lot of effort has been done to solve this aspect, as well, and today, Hexa is able to align the 3D asset with the source imagery and thus ensure the asset complies at a pixel and voxel level.”

AR and VR shopping experiences might not have reached most people (at least according to one survey), but Atias believes there’s a large market to be won. Already, he says, 60-employee Hexa has managed to win the business of over 40 brands, including Amazon, Macy’s, Logitech and Crate & Barrel — and raise $27.2 million in total capital.

There might indeed be a growing interest in virtual retail venues, particularly those of the AR variety. Some 48% of respondents to a McKinsey survey said they’re interested in using “metaverse” technology (i.e., AR and VR) to shop in the next five years. In turn, 38% of marketer respondents said they are using AR in 2022, up 15 percentage points from 2017’s 23%.

“Our main competition is animation and graphics studios that use a manual and outdated tech stack,” Atias said. “Much like the gaming industry, the 3D and e-commerce space enjoyed a strong tailwind, becoming a must-have for any organization … Hundreds of millions of users use our technology and engage with our content on a daily basis.”

Hexa raises $20.5M to turn images into 3D objects for VR, AR and more by Kyle Wiggers originally published on TechCrunch

OpenAI releases Point-E, an AI that generates 3D models

The next breakthrough to take the AI world by storm might be 3D model generators. This week, OpenAI open sourced Point-E, a machine learning system that creates a 3D object given a text prompt. According to a paper published alongside the code base, Point-E can produce 3D models in one to two minutes on a single Nvidia V100 GPU.

Point-E doesn’t create 3D objects in the traditional sense. Rather, it generates point clouds, or discrete sets of data points in space that represent a 3D shape — hence the cheeky abbreviation. (The “E” in Point-E is short for “efficiency,” because it’s ostensibly faster than previous 3D object generation approaches.) Point clouds are easier to synthesize from a computational standpoint, but they don’t capture an object’s fine-grained shape or texture — a key limitation of Point-E currently.
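The distinction is easy to see in code. As a minimal sketch (not Point-E's actual representation, which also attaches RGB color to each point), a point cloud is just an unordered bag of coordinates with no surface connectivity at all:

```python
import math
import random

def sample_sphere_point_cloud(n_points: int, radius: float = 1.0, seed: int = 0):
    """Sample a point cloud from the surface of a sphere.

    A point cloud is an unordered set of (x, y, z) samples. It carries no
    edges or faces, which is why raw point clouds cannot capture an
    object's fine-grained shape or texture.
    """
    rng = random.Random(seed)
    cloud = []
    for _ in range(n_points):
        # Uniform direction on the sphere via normalized Gaussian samples.
        x, y, z = (rng.gauss(0, 1) for _ in range(3))
        norm = math.sqrt(x * x + y * y + z * z) or 1.0
        cloud.append((radius * x / norm, radius * y / norm, radius * z / norm))
    return cloud

cloud = sample_sphere_point_cloud(1024)
```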

To get around this limitation, the Point-E team trained an additional AI system to convert Point-E’s point clouds to meshes. (Meshes — the collections of vertices, edges and faces that define an object — are commonly used in 3D modeling and design.) But they note in the paper that the model can sometimes miss certain parts of objects, resulting in blocky or distorted shapes.
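By contrast, a mesh adds connectivity on top of the points. A minimal sketch of an indexed triangle mesh (here a tetrahedron, with the closed-surface sanity check V − E + F = 2) shows the kind of structure the conversion model has to produce:

```python
# Minimal indexed triangle mesh: explicit vertices plus faces that
# reference vertex indices. The tetrahedron is the simplest closed mesh.
vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

def edge_set(faces):
    """Derive the undirected edge set from the face list."""
    edges = set()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))
    return edges

edges = edge_set(faces)
# Euler's formula for a closed, genus-0 surface: V - E + F == 2.
euler = len(vertices) - len(edges) + len(faces)
```

A region the model misses, leaving a hole in the surface, would break this invariant, which is one way the blocky or distorted failures show up.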

Image Credits: OpenAI

Outside of the mesh-generating model, which stands alone, Point-E consists of two models: a text-to-image model and an image-to-3D model. The text-to-image model, similar to generative art systems like OpenAI’s own DALL-E 2 and Stable Diffusion, was trained on labeled images to understand the associations between words and visual concepts. The image-to-3D model, on the other hand, was fed a set of images paired with 3D objects so that it learned to effectively translate between the two.

When given a text prompt — for example, “a 3D printable gear, a single gear 3 inches in diameter and half inch thick” — Point-E’s text-to-image model generates a synthetic rendered object that’s fed to the image-to-3D model, which then generates a point cloud.
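In code, that two-stage design reduces to simple function composition. The stubs below are hypothetical stand-ins (the real stages are diffusion models from OpenAI's open-sourced point-e code base), but the data flow is the same:

```python
# Hypothetical stand-ins for Point-E's two stages; the real stages are
# diffusion models, not the trivial functions sketched here.
def text_to_image(prompt: str) -> str:
    """Stage 1: a text-conditional model renders a single synthetic view."""
    return f"<rendered view of: {prompt}>"

def image_to_point_cloud(image: str, n_points: int = 4096) -> list:
    """Stage 2: an image-conditional model emits XYZ + RGB points."""
    # Placeholder output: one gray point at the origin, repeated.
    return [(0.0, 0.0, 0.0, 128, 128, 128)] * n_points

def point_e_pipeline(prompt: str) -> list:
    """Chain the stages: text -> synthetic image -> point cloud."""
    return image_to_point_cloud(text_to_image(prompt))

cloud = point_e_pipeline("a 3D printable gear")
```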

After training the models on a data set of “several million” 3D objects and associated metadata, Point-E could produce colored point clouds that frequently matched text prompts, the OpenAI researchers say. It’s not perfect — Point-E’s image-to-3D model sometimes fails to understand the image from the text-to-image model, resulting in a shape that doesn’t match the text prompt. Still, it’s orders of magnitude faster than the previous state-of-the-art — at least according to the OpenAI team.

Converting the Point-E point clouds into meshes. Image Credits: OpenAI

“While our method performs worse on this evaluation than state-of-the-art techniques, it produces samples in a small fraction of the time,” they wrote in the paper. “This could make it more practical for certain applications, or could allow for the discovery of higher-quality 3D objects.”

What are the applications, exactly? Well, the OpenAI researchers point out that Point-E’s point clouds could be used to fabricate real-world objects, for example through 3D printing. With the additional mesh-converting model, the system could — once it’s a little more polished — also find its way into game and animation development workflows.

OpenAI might be the latest company to jump into the 3D object generator fray, but — as alluded to earlier — it certainly isn’t the first. Earlier this year, Google released DreamFusion, an expanded version of Dream Fields, a generative 3D system that the company unveiled back in 2021. Unlike Dream Fields, DreamFusion requires no 3D training data: it leverages a pretrained 2D text-to-image model, meaning it can generate 3D representations of objects without ever seeing 3D examples.

While all eyes are on 2D art generators at present, model-synthesizing AI could be the next big industry disruptor. 3D models are widely used in film and TV, interior design, architecture and various science fields. Architectural firms use them to demo proposed buildings and landscapes, for example, while engineers leverage models as designs of new devices, vehicles and structures.

Point-E failure cases. Image Credits: OpenAI

3D models usually take a while to craft, though — anywhere from several hours to several days. AI like Point-E could change that if the kinks are someday worked out, and make OpenAI a respectable profit doing so.

The question is what sort of intellectual property disputes might arise in time. There’s a large market for 3D models, with several online marketplaces including CGStudio and CreativeMarket allowing artists to sell content they’ve created. If Point-E catches on and its models make their way onto the marketplaces, model artists might protest, pointing to evidence that modern generative AI models borrow heavily from their training data — existing 3D models, in Point-E’s case. Like DALL-E 2, Point-E doesn’t credit or cite any of the artists who might’ve influenced its generations.

But OpenAI’s leaving that issue for another day. Neither the Point-E paper nor its GitHub page makes any mention of copyright.

To their credit, the researchers do mention that they expect Point-E to suffer from other problems, like biases inherited from the training data and a lack of safeguards around models that might be used to create “dangerous objects.” That’s perhaps why they’re careful to characterize Point-E as a “starting point” that they hope will inspire “further work” in the field of text-to-3D synthesis.

OpenAI releases Point-E, an AI that generates 3D models by Kyle Wiggers originally published on TechCrunch

Epic Games acquires Sketchfab, a 3D model sharing platform

New York-based startup Sketchfab has been acquired by Epic Games, the company behind Fortnite and Unreal Engine. Sketchfab has been building a platform to upload, download, view, share, sell and buy 3D assets. Essentially, it is the leading repository for 3D files on the web.

Epic Games isn’t disclosing the terms of the deal. Sketchfab will still operate as a separate brand and offering. Epic Games also says that all integrations with third-party tools will remain available, including with Unity.

The deal makes a ton of sense as Epic Games has been developing — and acquiring — some of the most popular creation tools. Unreal Engine has been one of the most popular video game engines of the past couple of decades.

More recently, Unreal Engine has been used for different use cases beyond video games, such as special effects, 3D explorations of virtual worlds, mixed reality projects and more.

But an engine without assets is pretty useless. That’s why creators either design their own 2D and 3D assets, outsource this process or buy assets directly. This demand has led to the creation of an entire ecosystem of assets and creators.

Epic Games has its own Unreal Engine marketplace, but Sketchfab has been working on building the definitive 3D marketplace for many years with three important pillars — technology, reach and collaboration.

On the technology front, Sketchfab lets you view 3D models on any platform. The Sketchfab viewer works with all major browsers on both desktop and mobile — you can see an example on Sketchfab. It also works with VR headsets. You can upload 3D models from your favorite 3D modeling app, such as Blender, 3ds Max, Maya, Cinema 4D and Substance Painter.

Sketchfab can also convert any format into glTF and USDZ file formats. Those formats work particularly well on Android and iOS.
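glTF itself is plain JSON (plus binary buffers), which is part of why it travels well across platforms. As a rough sketch, the required top-level shape of a glTF 2.0 document looks like this; real exports also carry meshes, buffers, accessors and materials:

```python
import json

# Skeleton of a glTF 2.0 document. Only asset.version is strictly
# required by the spec; the scene/node entries show the usual layout.
gltf = {
    "asset": {"version": "2.0"},
    "scene": 0,                        # index of the default scene
    "scenes": [{"nodes": [0]}],        # each scene lists root node indices
    "nodes": [{"name": "model-root"}],
}

doc = json.dumps(gltf)
```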

When it comes to reach, Sketchfab has grown tremendously over the years. In 2018, the company shared some metrics — 1 billion views, 2 million members and 3 million 3D models. Around the same time, the company launched a store so that creators can buy and sell assets directly on the platform.

Finally, Sketchfab launched an interesting feature for companies that work with 3D models all the time — Sketchfab for Teams. It’s a software-as-a-service play that lets you share a Sketchfab account with the rest of the team. Essentially, it works a bit like a shared Google Drive folder — but for 3D models.

With today’s acquisition, Epic Games is making some immediate changes. Starting today, store fees have been reduced from 30% to 12% — just like on the Epic Games Store. The company similarly lowered commissions on ArtStation immediately after acquiring it.
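The fee change is easy to put in concrete terms. Using integer cents to sidestep floating-point rounding, a $100 sale now nets a creator $88 instead of $70:

```python
def creator_payout_cents(sale_cents: int, fee_percent: int) -> int:
    """Cents a creator keeps after the marketplace takes its percentage cut."""
    return sale_cents * (100 - fee_percent) // 100

old_payout = creator_payout_cents(10_000, 30)  # $100 sale at the old 30% fee
new_payout = creator_payout_cents(10_000, 12)  # same sale at the new 12% fee
```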

As for Sketchfab users paying a monthly subscription fee, everything is a bit cheaper now. All features in the Plus plan are now available for free, all features in the Pro plan are available to Plus subscribers, etc.

“We built Sketchfab with a mission to empower a new era of creativity and provide a service for creators to showcase their work online and make 3D content accessible,” Sketchfab co-founder and CEO Alban Denoyel said in the announcement. “Joining Epic will enable us to accelerate the development of Sketchfab and our powerful online toolset, all while providing an even greater experience for creators. We are proud to work alongside Epic to build the Metaverse and enable creators to take their work even further.”

Between ArtStation, Capturing Reality and now Sketchfab, Epic Games has been on an acquisition spree. It’s clear that the company wants to build an end-to-end developer suite for the gaming industry.