Hidden forces and sliding screens trick the senses to make VR feel more real

The next big thing in VR might not be higher resolution or more immersive sound, but an experience augmented by physical sensations or moving parts that fool your senses into mistaking virtual for reality. Researchers at SIGGRAPH, from Meta to international student groups, flaunted their latest attempts to make VR and AR more convincing.

The conference on computer graphics and associated domains is taking place this week in Los Angeles, and exhibitors from Meta to Epic to universities and movie studios were demonstrating their wares.

It’s the 50th SIGGRAPH, so a disproportionate amount of the event was dedicated to retrospectives and the like, though the expo hall was full of the latest VFX, virtual production, and motion capture hardware and software.

In the “emerging technologies” hall (more of a cave, given the darkened, black-draped room), dozens of experimental approaches at the frontiers of VR seemed to describe the state of the art: visually impressive, but with immersion relying almost entirely on the visuals. What could be done to make the illusion more complete? For many, the answer lies not in the virtual world, with better sound or graphics, but in the physical one.

Meta’s varifocal VR headset shifts your perspective, literally

Meta was a large presence in the room, with its first demonstration of two experimental headsets, dubbed Butterscotch and Flamera. Flamera takes an interesting approach to “passthrough” video, but it’s Butterscotch’s “varifocal” approach that really changes things in the virtual world.

VR headsets generally comprise a pair of tiny, high-resolution displays fixed to a stack of lenses that make them appear to fill the wearer’s field of vision. This works fairly well, as anyone who has tried a recent headset can attest. But there’s a shortcoming in the simple fact that moving things closer doesn’t really allow you to see them better. They remain at the same resolution, and while you might be able to make out a little more, it’s not like picking up an object and inspecting it closely in real life.

Meta’s Butterscotch prototype headset, in pieces.

Meta’s Butterscotch prototype, which I tested and grilled the researchers about, replicates that experience by tracking your gaze within the headset, and when your gaze falls on something closer, physically sliding the displays closer to your eyes. The result is shocking to anyone who has gotten used to the poor approximation of “looking up close” at something in VR.

The displays move over a span of only about 14 millimeters, a researcher at the Meta booth told me, and that’s more than enough at that range not just to create a clearer image of the up-close item — remarkably clear, I must say — but to let the eyes change their “accommodation” and “convergence” more naturally, the ways they focus on and align toward objects at different distances.
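For a sense of what such a system has to do, here is a minimal sketch (my own illustration, not Meta’s code) of how a gaze-derived fixation distance might be mapped to a display offset within that roughly 14-millimeter travel; the millimeters-per-diopter gain is an invented constant.

```python
# Minimal sketch, not Meta's implementation: map the wearer's fixation
# distance (from eye tracking) to how far the displays should slide.
# The 4 mm-per-diopter gain is an assumed, illustrative constant.

def display_offset_mm(fixation_distance_m: float,
                      mm_per_diopter: float = 4.0,
                      max_travel_mm: float = 14.0) -> float:
    """Return how far (in mm) to slide the displays toward the eyes.

    Treats display travel as roughly linear in focal power (diopters)
    over this small range; real varifocal optics are more involved.
    """
    fixation_distance_m = max(fixation_distance_m, 0.1)  # clamp to 10 cm
    target_diopters = 1.0 / fixation_distance_m          # 1/meters = diopters
    offset = target_diopters * mm_per_diopter             # 0 D (far focus) = 0 mm
    return min(offset, max_travel_mm)

# Fixating on something 30 cm away (~3.3 D) calls for ~13 mm of travel,
# close to the limit of the stated ~14 mm range under these assumptions.
print(round(display_offset_mm(0.3), 1))
```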

While the process worked extremely well for me, it totally failed for one attendee (who I suspect was a higher-up at Sony’s VR division, but his experience seemed genuine) who said that the optical approach was at odds with his own vision impairment, and turning the feature on actually made everything look worse. It’s an experiment, after all, and others I spoke to found it more compelling. Sadly, the shifting displays may be somewhat impractical on a consumer model, making the feature quite unlikely to come to Quest any time soon.

Rumble (and tumble) packs

Elsewhere on the demo floor, others are testing far more outlandish physical methods of fooling your perception.

One from Sony researchers takes the concept of a rumble pack to extremes: a controller mounted to a sort of baton, inside which is a weight that can be driven up and down by motors to change the center of gravity or simulate motion.

In keeping with the other haptic experiments I tried, it doesn’t feel like much outside of the context of VR, but when paired with a visual stimulus it’s highly convincing. A rapid-fire set of demos first had me opening a virtual umbrella — not a game you would play for long, obviously, but an excellent way to show how a change in center of gravity can make a pretend item seem real. The motion of the umbrella opening felt right, and then the weight (at its farthest limit) made it feel like the mass had indeed moved to the end of the handle.

Next, a second baton was affixed to the first in perpendicular fashion, forming a gun-like shape, and indeed the demo had me blasting aliens with a shotgun and pistol, each of which had a distinct “feel” due to how they programmed the weights to move and simulate recoil and reloading. Last, I used a virtual light saber on a nearby monster, which provided tactile feedback when the beam made contact. The researcher I spoke to said there are no plans to commercialize it, but that the response has been very positive and they are working on refinements and new applications.
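To make the idea concrete, here is a hypothetical sketch of how game events might be translated into commands for a movable weight like the one in the baton; the event names, positions, and the WeightCommand structure are all invented for illustration, not the researchers’ actual software.

```python
# Hypothetical sketch: translating virtual events into moves of a motorized
# weight along a baton. Nothing here reflects Sony's actual API or firmware.

from dataclasses import dataclass
from typing import List

@dataclass
class WeightCommand:
    position: float  # 0.0 = at the grip, 1.0 = far end of the baton
    speed: float     # 0.0-1.0 fraction of the motor's maximum speed

def haptics_for_event(event: str) -> List[WeightCommand]:
    """Return the weight moves that accompany a virtual interaction."""
    if event == "umbrella_open":
        # Slide the mass toward the tip as the canopy opens, shifting the
        # center of gravity away from the hand.
        return [WeightCommand(position=1.0, speed=0.3)]
    if event == "shotgun_fire":
        # Snap the mass toward the grip to fake recoil, then ease it back
        # out to suggest the pump and reload.
        return [WeightCommand(position=0.0, speed=1.0),
                WeightCommand(position=0.6, speed=0.4)]
    return []

print(haptics_for_event("shotgun_fire"))
```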

An unusual and clever take on this idea of shifting weights was SomatoShift, on display at a booth from University of Tokyo researchers. There I was fitted with a powered wristband, on which two spinning gyros opposed one another, but could have their orientation changed in order to produce a force that either opposed or accelerated the movement of the hand.


The mechanism is a bit hard to picture, but a spinning wheel resists any change to the orientation of its spin axis, and forcing that axis to tilt relative to the object it’s mounted on turns that resistance into quite precise reaction torques. The principle has been used in satellites for decades, in the form of reaction wheels and control moment gyroscopes, and it worked here as well, retarding or aiding my hand’s motion as it moved between two buttons. The forces involved are small but perceptible, and one can imagine clever use of the gyros creating all manner of subtle but convincing pushes and pulls.
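For those who want the physics in a nutshell, a back-of-the-envelope sketch: tilting a spinning wheel’s axis at some rate produces a reaction torque equal to the cross product of its angular momentum and the tilt rate. The rotor size and speeds below are my assumptions, chosen to show the order of magnitude, not SomatoShift’s actual specifications.

```python
# Back-of-the-envelope sketch of the gyroscopic effect: forcing a spinning
# wheel's axis to tilt produces a reaction torque tau = L x omega_tilt.
# All numbers are assumptions chosen to show rough magnitudes only.

import numpy as np

I_wheel = 2.0e-6            # kg*m^2, a coin-sized rotor (assumed)
omega_spin = 2000.0         # rad/s, roughly 19,000 rpm (assumed)
L = I_wheel * omega_spin    # angular momentum magnitude along the spin axis

spin_axis = np.array([0.0, 0.0, 1.0])   # wheel spinning about z
tilt_rate = np.array([5.0, 0.0, 0.0])   # gimbal tilting the axis about x, rad/s

torque = np.cross(L * spin_axis, tilt_rate)  # N*m felt through the wristband
print(torque)  # ~[0, 0.02, 0] N*m: small, but perceptible at the wrist
```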

The concept was taken to a local extreme a few meters away at the University of Chicago’s booth, where attendees were fitted with a large powered backpack containing a motorized weight that could move up and down quickly. This was used to provide the illusion of a higher or lower jump: by shifting the weight at the proper moment, the wearer seems to be lightened and accelerated upward, or pushed downward if a mistake is made in the associated jumping game.
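The trick is simple Newtonian bookkeeping, and a rough sketch (with numbers I’ve assumed, not the Chicago team’s) shows why even a modest moving mass is enough to nudge the feeling of a jump.

```python
# Rough sketch of the jump illusion: accelerating an internal mass downward
# at takeoff pushes the wearer upward by reaction, and vice versa. The mass
# and acceleration below are assumptions for illustration.

def reaction_force_n(mass_kg: float, accel_m_s2: float) -> float:
    """Force on the wearer when the backpack's internal mass is accelerated.

    Driving the mass downward (positive accel here) feels like a boost;
    driving it upward feels like being pressed down.
    """
    return mass_kg * accel_m_s2

# A 2 kg slider driven at 20 m/s^2 yields ~40 N, a few percent of an adult's
# body weight, delivered right at the moment the brain is judging the jump.
print(reaction_force_n(2.0, 20.0))
```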

Our colleagues at Engadget wrote up the particulars of the tech ahead of its debut last week.

While the bulky mechanism and narrow use case mark it, like the others, as a proof of concept, it shows that the perception of bodily motion, not just of an object or one appendage, can be affected by judicious use of force.

String theory

When it comes to the sensation of holding things, current VR controllers also fall short. While the motion tracking capabilities of the latest Quest and PlayStation VR2 headsets are nothing short of amazing, one never feels as if one is truly interacting with the objects in a virtual environment. A Tokyo Institute of Technology team created an ingenious — and hilariously fiddly — method of simulating the feeling of touching or holding an object with your fingertips.

The user wears four tiny rings on each hand, one for each finger except the pinky. Each ring carries a little motor on top, and from each motor hangs a small loop of thread that fits around the pad of the fingertip. The positions of the hands and fingers are tracked with a depth sensor attached (just barely) to the headset.

In a VR simulation, a tabletop is covered in a variety of cubes and other shapes. When the tracker detects that your virtual hand intersects with the edge of a virtual block, the motor spins a bit and tugs on the loop — which feels quite a lot like something touching the pads of your fingers!
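In pseudocode terms, the whole loop is about as simple as haptics gets; the sketch below is my own illustration (the wind_motor callback and the block format are invented), not the team’s software.

```python
# Illustrative update loop for the thread-tug rings: if the depth sensor says
# a fingertip is inside a virtual block, wind that finger's motor slightly to
# tension the loop against the finger pad. All names here are invented.

from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float, float]
Box = Tuple[Point, Point]   # (min corner, max corner), axis-aligned

FINGERS = ("thumb", "index", "middle", "ring")  # the pinky sits this one out

def update_haptics(fingertips: Dict[str, Point],
                   blocks: List[Box],
                   wind_motor: Callable[[str, float], None]) -> None:
    for finger in FINGERS:
        x, y, z = fingertips[finger]
        touching = any(
            lo[0] <= x <= hi[0] and lo[1] <= y <= hi[1] and lo[2] <= z <= hi[2]
            for lo, hi in blocks
        )
        # A small, constant tug reads as "contact"; slackening the thread when
        # the finger leaves the block completes the illusion.
        wind_motor(finger, 0.3 if touching else 0.0)
```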


It all sounds very janky, and it definitely was — but the basic idea and sensation were worth experiencing, and the setup was clearly not too expensive. Haptic gloves that can simulate resistance are few and far between, and quite complicated to boot (in fact, another researcher present had worked on a more complex device based on a similar principle). A refined version of this system might be made for under $100 and provide a basic experience that is still transformative.


SIGGRAPH and this hall in particular were full of these and more experiences that rode the line between the physical and digital. While VR has yet to take off in the mainstream, many have taken that to mean that they should redouble efforts to improve and expand it, rather than give it up as a dead platform.

The conference also showcased a great deal of overlap between gaming, VFX, art, virtual production, and numerous other domains — the brains behind these experiments and the more established products on the expo floor clearly feel that the industry is converging while diversifying, and a multi-modal, multi-medium, multi-sensory experience is the future.

But it isn’t inevitable — someone has to make it. So they’re getting to work.

Nvidia CEO: We bet the farm on AI and no one knew it

Nvidia founder and CEO Jensen Huang said today that the company had made an existential business decision in 2018 that few realized would redefine its future, and help redefine an evolving industry. It’s paid off enormously, of course, but Huang said this is only the beginning of an AI-powered near future — a future powered primarily by Nvidia hardware. Was this successful gambit lucky or smart? The answer, it seems, is “yes.”

He made these remarks and reflections during a keynote at SIGGRAPH in Los Angeles. That watershed moment five years ago, Huang said, was the choice to embrace AI-powered image processing in the form of ray tracing and intelligent upscaling: RTX and DLSS, respectively. (Quotes are from my notes and may not be verbatim; some minor corrections may take place after checking the transcript.)

“We realized rasterization was reaching its limits,” he said, referring to the traditional, widely used method of rendering a 3D scene. “2018 was a ‘bet the company’ moment. It required that we reinvent the hardware, the software, the algorithms. And while we were reinventing CG with AI, we were reinventing the GPU for AI.”

While ray tracing and DLSS are still in the process of being adopted across the diverse and complex world of consumer GPUs and gaming, the architecture Nvidia created to enable them turned out to be a perfect partner for the growing machine learning development community.

The massive amount of calculation required to train larger and larger generative models was served best not by traditional datacenters with some GPU capability, but by systems like the H100, designed from the start to perform the necessary operations at scale. It would be fair to say that AI development was in some ways limited only by the availability of these computing resources. Nvidia found itself in possession of a Beanie Baby-scale boom and has sold about as many servers and workstations as it has been able to make.

But Huang asserted that this has just been the beginning. The new models not only need to be trained, but run in real time by millions, perhaps billions of users on a regular basis.

“The future is an LLM at the front of just about everything: ‘Human’ is the new programming language,” he said. Everything from visual effects to a rapidly digitizing manufacturing market, factory design, and heavy industry will adopt a natural language interface to some degree, Huang hazarded.

“Entire factories will be software defined, and robotic, and the cars they’ll be building will themselves be robotic. So it’s robotically designed robots building robots,” he said.

Some may not share his outlook, which while plausible also happens to be extremely friendly to Nvidia’s interests.

But while the degree of reliance on LLMs may be unknown, few would say they will not be adopted at all, and even a conservative estimate of who will use them and for what will necessitate a serious investment in new computing resources.

Investing millions of dollars in last-generation computing resources like CPU-focused racks is foolish, Huang argued, when something like the GH200, the newly revealed, datacenter-dedicated AI development hardware, can do the same job for less than a tenth of the cost and power requirements.

He gleefully presented a video showing a LEGO-like assembly of multiple Grace Hopper computing units into a blade, then a rack, then a row of GH200s all connected at such high speeds that they amounted to “the world’s largest single GPU,” comprising one full exaflop of ML-specialty computing power.
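The exaflop figure roughly holds up to a quick back-of-the-envelope check, if you assume the commonly cited numbers of about 4 petaflops of FP8 AI compute per Grace Hopper superchip and 256 superchips in the fully connected cluster (both figures are my assumptions here, not from the keynote).

```python
# Sanity check on "one full exaflop", using assumed, commonly cited specs:
# ~4 PFLOPS of FP8 AI math per Grace Hopper superchip, 256 superchips linked.
pflops_per_superchip = 4   # assumption: FP8 throughput per superchip
superchips = 256           # assumption: units in the fully connected cluster
print(pflops_per_superchip * superchips / 1000, "exaflops")  # ~1.0
```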


“This is real size, by the way,” he said, standing for dramatic effect at the center of the visualization. “And it probably even runs Crysis.”

These, he proposed, will be the basic units of the digital, AI-dominated industry of the future.

“I don’t know who said it, but… the more you buy, the more you save. If I could ask you to remember one thing from my talk today, that would be it,” he said, earning a laugh from the game audience here at SIGGRAPH.

There was no mention of AI’s many challenges, of regulation, or of the possibility that the entire concept of AI shifts again, as it already has multiple times in the last year. It’s a rose-tinted view of the world, to be sure, but when you’re selling pickaxes and shovels during a gold rush, you can afford to think that way.

Nvidia teams up with Hugging Face to offer cloud-based AI training

Nvidia is partnering with Hugging Face, the AI startup, to expand access to AI compute.

Timed to coincide with the annual SIGGRAPH conference this week, Nvidia announced that it’ll support a new Hugging Face service, called Training Cluster as a Service, to simplify the creation of new and custom generative AI models for the enterprise.

Set to roll out in the coming months, Training Cluster as a Service will be powered by DGX Cloud, Nvidia’s all-inclusive AI “supercomputer” in the cloud. DGX Cloud includes access to a cloud instance with eight Nvidia H100 or A100 GPUs and 640GB of GPU memory, as well as Nvidia’s AI Enterprise software to develop AI apps and large language models and consultations with Nvidia experts.

Companies can subscribe to DGX Cloud on its own — pricing starts at $36,999 per instance per month. But Training Cluster as a Service integrates DGX Cloud infrastructure with Hugging Face’s platform of more than 250,000 models and over 50,000 data sets — a helpful starting point for any AI project.

“People around the world are making new connections and discoveries with generative AI tools, and we’re still only in the early days of this technology shift,” Hugging Face co-founder and CEO Clément Delangue said. “Our collaboration will bring Nvidia’s most advanced AI supercomputing to Hugging Face to enable companies to take their AI destiny into their own hands with open source to help the open-source community easily access the software and speed they need to contribute to what’s coming next.”

Hugging Face’s tie-up with Nvidia comes as the startup reportedly looks to raise fresh funds at a $4 billion valuation. Founded in 2014 by Delangue, Julien Chaumond and Thomas Wolf, Hugging Face has expanded rapidly over the past nearly-decade, evolving from a consumer app to a repository for all things related to AI models. Delangue claims that more than 15,000 organizations are using the platform today. 

The collaboration makes sense for Nvidia, which in recent years has made bigger pushes into cloud services for training, experimenting with and running AI models as the demand for such services grows. Just in March, the company launched AI Foundations, a collection of components that developers can use to build custom generative AI models for particular use cases.

Tech market research firm Tractica forecasts that AI will account for as much as 50% of total public cloud services revenue by 2025. Demand is so high for AI cloud training infrastructure, in fact, that it’s causing hardware shortages, forcing cloud providers like Microsoft to curb investors’ expectations around growth.