Finally, VR for your mouth

Teams have tried a number of different ways to get the lower half of the face into the act with VR. That includes, and I’m quoting directly here, “a tiny robotic arm that could flick a feather across the lips or spray them with water.” They’ve also largely ruled out a piece that directly covers the mouth — VR users ultimately didn’t like the sensation of having both their eyes and mouths covered at the same time, which, fair enough.

Ultimately, a team at Carnegie Mellon University settled on a much more practical method of offering added tactility: ultrasound waves. A system developed by researchers at the school attaches a device to the bottom of a headset, sending the waves down toward the lips to create a kind of haptic sensation.

The technology operates similarly to the way hardware makers create virtual buttons on devices. The added twist here, however, is that ultrasound waves are capable of traveling through the air. They can only do so for short distances, but it’s enough to make the journey from the bottom of a VR headset to wearers’ mouths.

The device utilizes 64 transducers arrayed in a curved configuration, which mix constructive and destructive interference to vary the effect.
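The researchers’ exact parameters aren’t public, but the core focusing trick behind any ultrasonic phased array is simple enough to sketch: stagger each transducer’s firing time so every wavefront arrives at a chosen focal point in phase. Everything below besides the 64-element count — the arc radius, span and focal distance — is an illustrative assumption, not the CMU team’s actual design:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air
N_ELEMENTS = 64         # matches the 64-transducer array described above

def focus_delays(positions, focal_point, c=SPEED_OF_SOUND):
    """Per-element firing delays (seconds) so every wavefront arrives at
    the focal point in phase (classic delay-and-sum focusing)."""
    dists = np.linalg.norm(positions - focal_point, axis=1)
    # The farthest element fires first; all delays are relative to it.
    return (dists.max() - dists) / c

# Elements spread along a circular arc — an assumed stand-in for the
# curved configuration (7 cm radius and 90-degree span are illustrative).
theta = np.linspace(-np.pi / 4, np.pi / 4, N_ELEMENTS)
radius = 0.07
positions = np.stack([radius * np.sin(theta), radius * np.cos(theta)], axis=1)

# Focus 2 cm from the array's center of curvature, roughly "at the lips."
delays = focus_delays(positions, np.array([0.0, 0.02]))
```

Recomputing the delays for a new target each frame is how an effect like raindrops or a crawling bug could be swept across the lips.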

So, all of this presents the big question of why. Before you go getting any ideas, the end goal here is the same as with any VR peripheral: increasing immersion. The team cites ideas like raindrops, wind and, god forbid, a virtual bug crawling across your real lips. Among other things, the team notes that the mouth is a prime target because — like the hands — there’s a lot of sensation.

“You can’t really feel it elsewhere; our forearms, our torso — those areas lack enough of the nerve mechanoreceptors you need to feel the sensation,” says co-author Vivian Shen.

Results will vary, of course. Turns out things like cobwebs don’t work great, because it doesn’t make sense to have that feeling localized to the face. Ditto for a drinking fountain, because feeling the sensation without the actual water isn’t particularly convincing. Nevertheless, the team is continuing to iterate on the product, working to make it smaller and lighter.

“Our phased array strikes the balance between being really expressive and being affordable,” Shen says.

Oura’s new CEO discusses the future of the smart ring

Tom Hale gestures as he speaks, revealing an Apple Watch on one arm and an Oura Ring on the other. When I point out the combination, he takes a beat. “I think it varies based on the use case and why you come to Ring,” the executive answers. “One of my first questions of the company was, ‘how many of the Ring wearers have the Apple Watch?’ And the number was surprisingly high.”

Oura puts the figure at around 30% — or just under a third. It’s a surprising figure at first blush, running counter to the notion of the Oura Ring as solely a standalone activity tracker. That’s how I had initially contextualized the product, as something akin to a wrist-worn fitness device in a smaller, less intrusive form factor.

There’s often a stark contrast between the expectation and realities of user adoption. You really don’t know how the world is going to interact with your product until your product is out in the world. Oura is far from the first health-focused wearable — heck, it’s not even the first health-focused ring. It has, however, bucked expectation in a number of ways.

In an overcrowded market dominated by smartwatches (and, really, one specific brand), Oura managed to carve out its own niche. A little over a month ago, the firm announced that it had sold its one millionth ring. It’s an impressive figure for a relatively new product in an unproven form factor. Much of the company’s success has come from leaning into health studies, as well as partnerships with big-name sports leagues, from the NBA to NASCAR.

Much of this impressive growth happened under the purview of Harpreet Singh Rai. A former Wall Street hedge fund manager, Singh Rai became a true believer in the product, citing his own weight loss journey. He became an investor and board member before stepping into the top spot in 2018. After a three-year run, he announced his exit via LinkedIn late last year.

Rattling off some key milestones for the company, Singh Rai added, “While all those accomplishments are great, I’ve come to realize that’s not the point. I remember talking to another CEO that I admire – and he once described the point of any company is really to endure, and by that, for an idea to live in the world forever, beyond any one of us.”


COO Michael Chapp stepped into the interim role before Hale was announced earlier this week. The new CEO brings experience from a wide range of roles at companies including Adobe, HomeAway, Momentive AI and Second Life maker Linden Lab. As Oura pushes more deeply into data collection and app-based actionable insights, software has increasingly become a focus for the firm. But at its heart, it’s still a hardware company — something that has, thus far, been absent from his resume.

“Hardware is hard, and it also requires discipline,” says Hale. “It requires a rigor, which is powerful, particularly if you’re trying to do something difficult. The ambition of this company is broad, big and bold. We’re trying to put people in charge of their health and give them data and insights to make better choices — and maybe their healthcare over time. That’s a huge mission. Hardware is an enabler of that and software is the key. Data science is the key. Personalization is the key. That’s the opportunity that a person like me who comes from a software background can bring.”

The company’s evolution has not come without growing pains. In particular, a shift toward a subscription service rubbed some of Oura’s fanbase the wrong way. The company has promised deeper insights by way of its app, while moving some existing offerings behind a paywall — effectively asking users to pay a monthly fee for some of the data that had previously been included as part of the hardware’s upfront $300 cost. Hale says it was an issue he focused on after being asked to join the company.


“There is a clear value of ongoing continuous data and continuous investment,” he says. “In order to continue doing that and supporting the science that underlies it and expanding it to new adjacencies outside of sleep, I think there’s a cause for a subscription business model. Unfortunately, most people who buy wearables pay the price and want the things it does now. I think that was a miscalculation on the part of the company. The only thing I think we can do to make it different and better is to deliver the things we said would be a part of the gen-three lifecycle.”

Hale points to the company’s exemption of earlier Oura Ring adopters, as well as the warning it gave users ahead of the third-generation ring launch. He also notes that a lower upfront cost and a hardware-as-a-service approach are models likely in the company’s future. He cites reports around Apple’s and Peloton’s explorations in the space as evidence of HaaS becoming more accepted by the mainstream.

An IPO could certainly be in Oura’s future under Hale’s watch as well, though he cautions that such a move is probably still a ways away.

“The markets have been pretty choppy of late, and I think I’ve got some work to do to get there. I don’t think we’re building this company to IPO. We’re not building this company to IPO, we’re building this company to make an impact on the world of health and put preventative medicine into the hands of people who can improve their lives with it.”

Google’s Pixel Watch may finally be on its way

Everything old is new again. Someone leaves a prototype in a bar, the prototype makes its way into some blogger’s hands, chaos ensues. It’s hard to imagine another device coming along soon that’s shrouded in the same layer of secrecy as the iPhone 4. Both gadgets and tech journalism have evolved since those halcyon days, but there’s nothing like a high-profile leak to get the blood pumping in these old typing fingers.

Companies’ protect-the-IP-at-all-cost approach has shifted, too, over the past decades. Plenty of firms have come to understand the power of well-timed leaks. That’s not to say that any of the Pixel Watch news we’ve seen in recent weeks and months has been intentional — it’s just hard to imagine any of us saying much about an upcoming Google wearable if some information hadn’t found its way online.

The weekend’s restaurant leak comes a few days after a trademark application for “Pixel Watch” popped up online. The timing of everything seems to point to the imminent announcement of the moderately anticipated wearable. Given that I/O kicks off at the Shoreline Amphitheater in just over two weeks, it seems reasonable to infer that — at the very least — we’ll be getting a taste in mid-May. Google really needs to make a splash here.

Google’s history with wrist-worn wearables has been — in a word — spotty. It’s been 7.5 years since the company launched Android Wear. The product arrived with plenty of hardware partners — companies like Motorola, Samsung, LG and HTC. Honestly, it’s a pretty good snapshot of the Android ecosystem, and fittingly two of those four have either stopped making phones or at least gotten close.

Samsung drifted away from Android Wear fairly quickly — opting instead to develop its own unique flavor of Tizen. More recently, Motorola has moved to Moto Watch OS for its latest device — it’s a lightweight RTOS (real-time operating system) similar to what OnePlus uses on its watches.

Google’s wearable operating system continued to stagnate for a number of years, as Apple — and to a lesser extent Samsung — dominated the smartwatch category. In 2018, however, it got a rebrand. Wear OS represented a shot at a fresh start. “We’re just scratching the surface of what’s possible with wearables and there’s even more exciting work ahead,” the company wrote in a post.

The change didn’t result in a massive shift in market share — though a number of more recent moves have helped move the needle. Most notable thus far is a reunion with Samsung. “The new Wear OS Powered by Samsung” finds the firms joining forces in a bid to go toe to toe with Apple, starting with the Galaxy Watch 4. For Samsung, it means access to Play Store apps. Third-party support has long been a major sticking point for the industry. For Google, it means suddenly having its operating system on a LOT more devices.

Samsung was doing perfectly fine, thank you very much, on the wearable front without Google’s help. Granted, it wasn’t going to surpass the Apple Watch any time soon, but the company still sells a lot of devices. I suspect the onus was on Google to convince the hardware company that it was still all-in on Wear OS. One thing you can’t deny is that the software giant is more than willing to spend its way to the top here. You’d be forgiven for forgetting that Google purchased Fossil’s smartwatch technology in early 2019.

“Wearables, built for wellness, simplicity, personalization and helpfulness, have the opportunity to improve lives by bringing users the information and insights they need quickly, at a glance,” Wear OS VP Stacey Burr said at the time. “The addition of Fossil Group’s technology and team to Google demonstrates our commitment to the wearables industry by enabling a diverse portfolio of smartwatches and supporting the ever-evolving needs of the vitality-seeking, on-the-go consumer.”

I now realize I started that specific post with the words, “Rumors about a Pixel Watch have abounded for years.” That puts a fine enough point on precisely how long we’ve been talking about the damn thing.

That specific deal was soon eclipsed by Google’s $2.1 billion Fitbit acquisition. The deal passed regulatory scrutiny, in spite of some reasonable concern over what Fitbit’s new partner would be doing with the massive volumes of data these devices collect.

“This deal has always been about devices, not data, and we’ve been clear since the beginning that we will protect Fitbit users’ privacy,” Google SVP Rick Osterloh noted at the time. “We worked with global regulators on an approach which safeguards consumers’ privacy expectations, including a series of binding commitments that confirm Fitbit users’ health and wellness data won’t be used for Google ads and this data will be separated from other Google ads data.”

Now this technological turducken goes even deeper. Fossil acquired Misfit in 2015 for $260 million. Fitbit, struggling to expand into smartwatches, purchased Pebble’s assets for $20.3 million the following year. Honestly, it was a bargain, and it actually resulted in a great smartwatch with the Versa. Ultimately, however, it seems it was too little, too late as Fitbit opted to sell itself to Google.

Certainly, we’ve got the ingredients for something exceptional here. Given the recency of the Fitbit acquisition, it’s hard to know exactly how much of its tech would make it into a first-gen Pixel Watch. It stands to reason, however, that the subsidiary’s strong health focus will play a central role in Google wearables, going forward. The company’s still developing some important stuff, including a new always-on A-Fib detector.

All of this comes together as Google has revamped its first-party hardware. The Pixel division underwent a reckoning/restructure after years of middling sales. That gave us the truly excellent Pixel 6. Thing is, Pixel phones have always been pretty good, even if their sales haven’t reflected it. Google’s not starting from scratch here, exactly (not if you count all the work Fossil, Fitbit, et al. have done), but we are effectively talking about a new category for the company, so it’s worth tempering your expectations accordingly.

So we’re left with a prototype left at a bar. It’s round, with a glass back and a crown that appears to function similarly to the Apple Watch. The device sports a proprietary band and a pair of buttons around the edges. It’s likely a prototype, but all of that comports with earlier rumors and the tale of a bartender who was holding onto the product for a customer who apparently never came back for the thing.

Assuming we’ve got about two and a half weeks until the announcement, that gives us all enough time to craft some think pieces about whether the Pixel Watch is too little and/or too late. Google has already bought its way into a significant portion of wearable market share with the Fitbit acquisition, but it’s once again entering a mature and crowded market — a dynamic it struggled with through the first several generations of Pixel phones.

Mojo Vision takes another step toward AR contact lenses with new prototype

We’ve known Mojo Vision’s journey to market was going to be a long and deliberate one since we saw an early prototype in Las Vegas a number of CESes ago. You can multiply all of the talk of hardware being hard a few times over when attempting to execute something novel and tiny that’s designed to be worn on one of the more vulnerable parts of the human anatomy.

Today the Bay Area-based firm announced a new prototype of its augmented reality contact lens technology. The system is based around what Mojo calls “Invisible Computing,” its heads-up display technology that overlays information onto the lens. Essentially, it’s an effort to realize the technology you’ve seen in every science-fiction movie from the past 40+ years. The setup also features an updated version of the startup’s operating system, all designed to reduce user reliance on screens by — in a sense — moving the screen directly in front of their eyes.

The system is built around a 0.5 millimeter microLED display with a remarkably dense 14,000 pixels per inch. The text overlays are highlighted through micro-optics, while data is transferred back and forth via a 5GHz band. All of that is powered by an Arm Cortex-M0 processor. An eye-tracking system is on board, utilizing accelerometer, gyroscope and magnetometer readings to determine the motion of the wearer’s gaze. That, in turn, forms the foundation of the system’s hands-free control.
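Mojo hasn’t published its eye-tracking math, but a common way to fuse accelerometer and gyroscope readings into a stable orientation estimate is a complementary filter. The sketch below is that generic technique under assumed parameters, not Mojo’s actual implementation — the gyro is integrated for fast response while the accelerometer’s gravity reading slowly cancels the drift:

```python
import math

def accel_tilt(ax, az):
    """Tilt angle (radians) recovered from the gravity vector the
    accelerometer measures while the sensor is roughly stationary."""
    return math.atan2(ax, az)

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step: trust the gyro's integral short-term (fast but
    drifting) and blend in the accelerometer's absolute angle long-term."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulate a stationary sensor whose gyro has a small constant bias:
angle = 0.0
true_tilt = 0.5   # radians, held constant
gyro_bias = 0.05  # rad/s of drift the accelerometer correction must absorb
for _ in range(1000):
    angle = complementary_filter(angle, gyro_bias, true_tilt, dt=0.01)
```

A magnetometer would be blended in the same fashion to pin down heading, completing the three-sensor setup described above.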

The company writes:

Since we first revealed Mojo Lens to the world in January 2020, we’ve been innovating and building, and integrating systems that many people thought couldn’t be built, let alone operational in a contact lens form factor. The most common thing we hear as we share this latest prototype is, “I knew there would be smart contact lenses, but I thought they were 10 or 20 years out, not now.” This is happening and I’m excited about our next milestones and realizing the promise of Invisible Computing.

Of course, things are still in the prototype phase — so “now” isn’t now, exactly. The company continues to work with the FDA to help bring the tech to market as part of its Breakthrough Devices Program. It has also previously announced partnerships with fitness brands like Adidas Running to develop workout applications for the tech.

Dyson is betting you’ll want to strap an air purifier to your face

Air. I love it, you love it. We’re all out here walking around in it all day, filling our lungs and blood with the stuff. We can’t get enough of it. But that beautiful, wonderful, life-saving air that you, me and your pet chinchilla all need is bad sometimes. That’s right. The same air we rely on is sometimes filled with bad, tiny things. Things that would love nothing more than to fly into your nose and wreak havoc on your soft, unprotected insides.

Over the past two years, air purifiers have seen a massive spike in sales here in the U.S., starting with a 57% increase in 2020. The pandemic, coupled with phenomena like the California wildfires, has driven many to install filters in their homes and offices. All the while, the engineers at Dyson have been asking themselves one important question: What if we found a way to stick the purifier on people’s faces? In a world where mask wearing has become the status quo, maybe it’s not the most out-there question?

Maybe.


The Dyson Zone is a beast. It has many of the hallmarks of Dyson’s much-loved product design, with the decided (and not insignificant) difference that it’s designed to be strapped onto the wearer’s face. Or, more precisely, I suppose, strapped to a pair of headphones and worn in front of their mouth. Honestly, the basic form factor most closely resembles a football helmet.

The final product arrives after 500 prototypes over six years, according to the company. Says Dyson:

Originally a snorkel-like clean air mouthpiece paired with a backpack to hold the motor and inner workings, the Dyson Zone air-purifying headphones evolved dramatically over its six years in development. More than 500 prototypes saw one motor initially placed at the nape become two compressors, one in each ear-cup and the evolution of the snorkel mouthpiece into an effective, contact-free visor that delivers clean air without full-face contact – a brand-new clean air delivery mechanism.


The removable visor shoots a pair of filtered air streams at the user’s mouth and nose without coming in direct contact with the face. It’s designed to filter out allergens, pollutants and other particulates. Dyson isn’t making any claims here about the Zone’s ability to filter out contagions like COVID. Instead, the product comes with an attachment that allows a user to wear a face covering in addition to the product. The headphones feature three noise-canceling modes, and the front piece has four air purification settings.

Exact pricing and availability are not yet available — which is too bad, because I really want to know how much this thing is going to run. Broadly, it’s arriving in select markets at some point this fall. More information on all of that is promised in the coming months.

Brain.space remakes the EEG for our modern world (and soon, off-world)

Figuring out what’s going on in the brain is generally considered to be somewhere between extremely difficult and impossible. One major challenge is that the best ways to do so are room-sized machines relegated to hospitals — but brain.space is hoping that its portable, powerful, and most importantly user-friendly EEG helmet (plus $8.5M in funding) could power new applications and treatments at home and — as a sort of cork pop for its debut — in space.

Electroencephalography, or EEG, is an established method for monitoring certain signals the brain produces, and which can indicate which areas of the cortex are active, whether the user is concentrating, agitated, and so on. It’s not nearly as precise as an MRI, but on the other hand all you need for an EEG is a set of electrical contacts on the scalp, while an MRI machine is huge, loud, and incredibly expensive.

There’s been precious little advancement in EEG tech, though, and it’s often done more or less the same way it was done decades ago. Recently that’s begun to change with devices like Cognixion’s, which uses re-engineered EEG to interpret specific signals with a view to allowing people with motor impairments to communicate.

The Israel-based brain.space (styled in lowercase, with a period in it, specifically to vex reporters) has its own take on EEG that it claims not only provides superior readings to traditional ones, but is wireless and can be set up without expert help.

“It was designed to be the most effective, cheapest, easiest to use EEG acquisition headset in the world. One headset, for multiple people, that automatically configures itself perfectly to each one’s head,” said brain.space CEO and co-founder Yair Levy. In development for four years, the headset has 460 sensors and is “fully automated” in that it can be set up and run very simply.

A person wearing the brain.space headset working at a computer.

Not exactly stylish, but other EEG setups are even worse. The armband is an ISS-related power regulator. Image Credits: brain.space

As it is only just emerging from stealth, the company has no peer-reviewed documentation on the headset’s efficacy and resolution. “But we recently kicked off research activities with several academic institutes, including the Department of Cognitive and Brain Science of Ben Gurion University, as well as a medical center in Israel,” Levy said.

The fact is it would be hard not to improve on the EEG setups being used in many labs — if it did more or less what they did in a portable, user-friendly form, that would be enough to celebrate.

The science of EEG is well understood, but the company has improved on existing designs by including more densely packed electrodes, and ones that fortunately do not require any kind of conductive gel or oil on the skin — anyone who’s had their head oiled up to take part in an experiment can testify that this is not fun.

Because of the nature of EEG signals, these sensors will overlap somewhat, but Levy explained that their internal studies have found that these signal overlaps follow a power law, meaning they can be computationally disambiguated. That means a clean data output that can be interpreted by and used as training material for machine learning systems.
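brain.space hasn’t shared its method, but the gist of unmixing power-law overlaps can be shown with a toy one-dimensional model: if each electrode sees each source attenuated by a known power of distance, the observed readings are a linear mixture that can be inverted. The electrode positions, source positions and exponent `k` here are all assumptions for illustration:

```python
import numpy as np

def power_law_mixing(electrodes, sources, k=2.0):
    """Mixing matrix: each electrode sees each source attenuated as
    distance**-k (the power-law falloff; k=2 is an assumed exponent)."""
    d = np.abs(electrodes[:, None] - sources[None, :])
    return (d + 1e-3) ** -k  # small offset keeps zero distances finite

# Toy 1D scalp: five electrodes over five underlying cortical sources.
electrodes = np.linspace(0.0, 1.0, 5)
sources = np.array([0.1, 0.3, 0.55, 0.7, 0.9])
A = power_law_mixing(electrodes, sources)

true_activity = np.array([0.0, 1.0, 0.0, 0.5, 0.0])
observed = A @ true_activity              # overlapping, smeared readings
recovered = np.linalg.solve(A, observed)  # invert the known falloff model
```

With 460 real sensors and unknown source locations this becomes a much harder, regularized estimation problem — the point here is only the shape of the idea: a known falloff law makes the overlap computationally separable.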

Although the headset is obviously a big piece of the puzzle, the company won’t only be making and distributing it: “Our vision is to provide a comprehensive software end-to-end stack that makes working and integrating brain activity as easy as integrating GPS or fitness data,” said Levy.


Of course, wearing a helmet that makes you look like Marvin the Martian isn’t something you’ll do on your morning run, or even while riding your stationary bike or standing at your desk. It’s still very much a situational medical device. But like other advances in technology that have brought medical monitoring devices to the home, this can still be transformational.

“We see this as asking what putting a cheap GPS in an iPhone would be good for,” Levy explained. “The obvious answer was mapping, but the reality was that developers did far more innovative things with it than just road directions. That’s how we see our job, to allow innovation to occur around brain activity, not build out the use-cases ourselves.”

Of course, if they didn’t have any use cases in mind, they would never have been able to fund four years of R&D. But they’re looking into things like tracking learning disabilities, markers for cognitive decline from diseases like Alzheimer’s and also athletic performance. The cost of the headset will vary depending on the application and requirements, the company told me, though it would not provide further details. For reference, bargain-bin setups go for under a grand, while medical-research-grade ones run into the $10K range; brain.space would likely fall in between.

The first public demonstration of the tech is about as flashy as you could imagine: an experiment set on the International Space Station. Brain.space is taking part in Axiom-1, the first fully privately funded mission to the ISS, which will have a host of interesting experiments and projects on board.

Participants in the study will use the headset on the surface while performing a number of tasks, then repeat those tasks with variations while aboard the ISS. The company described the reasoning for the experiment as follows:

brain.space has set itself the goal to become the standard for monitoring neuro-wellness in space.

While there is data collection being carried out for various physiological measurements, such as heart rate, galvanic skin resistance, and muscle mass, there is currently no high-quality longitudinal data regarding the neural changes in prolonged space missions. Such information can be vital in assessing day-to-day plastic changes in the brain and predicting how the brain will adapt to long-term space travel.

Naturally they’re not the first to think of this — NASA and other space agencies have done similar experiments for years, but as brain.space points out, those were with pretty old-school gear. This is not only potentially a test of cognitive function in space, but a proof of the idea that cognitive function in space can be tested with relatively little trouble. No one wants to grease up their scalp for a weekly cognitive load test on a 3-month trip to Mars.

In addition to the headset and experiment, brain.space announced it has raised an $8.5 million seed round led by Mangrove Capital Partners (no other participants named). It isn’t cheap doing medical device R&D, but there’s almost certainly a market for this in and beyond telehealth and performance monitoring. We should hear more about the headset’s specific advantages as it enters more public testing.

Oura sells its millionth ring

We love big, round numbers here in hardware land. Hitting one million of anything is an impressive feat, let alone one million $299 smart rings. It’s too soon to suggest that Oura has permanently transformed the wearables space, but in a time when things have calcified around the smartwatch form factor – and one specific smartwatch in particular – it’s worth noting when a startup arrives on the scene to shake things up.

It’s safe to say that Oura was among those startups that managed to get a boost from the pandemic, in spite of a high price tag. The device is less an active fitness tracker than a health monitor. Its vitals tracking and unobtrusive form factor earned the company deals with a number of sports leagues, from the NBA to NASCAR. There are few higher-profile fingers you’d want your product on.

Those partnerships coupled well with some research studies around things like the product’s built-in temperature tracking. At the end of 2020, Nature published a study titled “Feasibility of continuous fever monitoring using wearable devices,” highlighting how the device might be used to detect changes in body temperature – and potentially spot a Covid infection early on.

The company also used today’s news to highlight the device’s sleep tracking – another thing many of us have no doubt spent a lot of time thinking about a little over two years after the pandemic changed the world and our ability to sleep through the night.

“Oura was the first wearable to focus on sleep because we knew from Day 1 how much it impacts other aspects of our health,” says COO Michael Chapp in a blog post. “And what we measure, we can improve. Research shows a direct correlation between chronic sleep deprivation and disease. Good sleep improves nearly all aspects of life, including immunity, performance, and mental health.”

Oura wasn’t the first smart ring, nor will it be the last. It was beaten to the punch by Motiv, though that company has been largely silent since shifting from health tracking to biometrics at the height of the pandemic (rough timing, to say the least). Since then, companies like Movano and Circular have emerged, in hopes of capturing some of the newfound interest in a form factor that had initially failed to gain traction (for the record, I’m still not a ring guy). Google-owned Fitbit is rumored to be working on its own ring, per the recent discovery of some published patents.

Oura has also courted some negative feedback. As I noted in a largely positive review of the Ring 3, the company recently moved some key metrics behind a monthly subscription paywall. It’s an understandably hard pill for some to swallow after paying the steep upfront fee for the hardware. Though – at least thus far – such pushback doesn’t appear to have impacted solid growth on the back of good reviews and general hype.

Snap buys mind-controlled headband maker NextMind

Snap this morning confirmed that it has acquired NextMind for an undisclosed sum. The Paris-based startup is best known for its self-titled controller, which utilizes brain signals to move images on a PC interface. After announcing a $399 dev kit at CES, the company began shipping in Q2 2020. We took it for a spin at the end of that year and said the hardware had a rare “wow” factor.

“NextMind has joined Snap to help drive long-term augmented reality research efforts within Snap Lab,” the company wrote in a blog post. “Spectacles are an evolving, iterative research and development project, and the latest generation is designed to support developers as they explore the technical bounds of augmented reality.”

The news finds the firm integrating into Snap Lab, the social media company’s hardware research wing. It also marks the end of NextMind’s dev kit as a standalone. Pieces of the technology will almost certainly make their way into future Snap products, including AR plays like Camera and Spectacles.

Founded in 2017 by a team of neuroscientists and hardware engineers, the company’s technology utilizes a wearable headband with a built-in electroencephalogram to detect and read neural activity in the cortex. As the wearer views an image on a display, the headset can determine that they want to move it. Mind-controlled interfaces like this make a lot of sense for augmented reality. Head-mounted displays, in particular, have long suffered from a controller problem, which such technologies could go a ways toward solving.

“This technology monitors neural activity to understand your intent when interacting with a computing interface, allowing you to push a virtual button simply by focusing on it,” Snap adds. “This technology does not ‘read’ thoughts or send any signals towards the brain.”

NextMind raised a $4.6 million seed round in mid-2018. The team will continue to work out of Paris, with 20 of its employees (largely technical) joining Snap Lab and focusing on longer-term research and development. Last May, Snap purchased WaveOptics, which makes components used in AR headsets. That same month, the company previewed its fourth-generation Spectacles, which it called the “first pair of glasses that bring augmented reality to life.”

Injectsense collects $1.7M grant for its eye implant smaller than a grain of rice

If you were to accidentally drop the eye sensor developed by Injectsense, you’d have little chance of finding it. Ariel Cao, the company’s founder and CEO, admits as much. But once it’s been implanted into the back of your eye, it can remain there, essentially immobile, for as long as 80 years – all the while transmitting data.

Injectsense, a startup founded in 2014, has developed an ocular implant smaller than a grain of rice. The device measures intraocular pressure – how much tension is building within your eyeball. Intraocular pressure is a significant risk factor for glaucoma, a disease that damages the optic nerve and eventually causes blindness.

You’ve probably had your intraocular pressure measured before, and it’s not particularly pleasant. The procedure involves your eye doctor giving you numbing drops, positioning your head in front of a bright microscope and touching your eye with a device called a tonometer.

Injectsense’s implant, by comparison, is designed to wirelessly transmit that data continuously once inserted.

“It will collect all the info so you have nothing to do,” Cao told TechCrunch. “You can sit around. You can skydive, hike, do whatever you want.”

Injectsense’s device would be delivered into the body using a short, non-surgical procedure. It’s something like an intravitreal injection, when a small needle is used to deliver medicine to the back of the eye – you feel pressure, but no puncture pain.

The device can be recharged by putting on a pair of accompanying glasses for 5 minutes each week, which also allows the device to download its intraocular pressure readings to the cloud where an ophthalmologist can review it. The battery, says Cao, can continue in this pattern for 80 years.
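As a back-of-envelope check on that regimen (my arithmetic, not a figure from the company), five minutes a week for 80 years works out to roughly 4,160 charge cycles and under 350 total hours of wearing the glasses:

```python
# Rough arithmetic on the claimed charging schedule:
# one 5-minute recharge session per week, for 80 years.
YEARS = 80
SESSIONS_PER_YEAR = 52       # one session per week
MINUTES_PER_SESSION = 5

total_sessions = YEARS * SESSIONS_PER_YEAR
total_hours = total_sessions * MINUTES_PER_SESSION / 60

print(total_sessions)        # 4160 lifetime charge cycles
print(round(total_hours))    # ~347 hours in the charging glasses, in total
```

Put another way, the battery only has to survive about one charge cycle per week, which is what makes the 80-year claim even conceivable.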

Based on animal studies and in-vitro data, Injectsense was awarded a two-year Small Business Innovation Research (SBIR) grant of $1.7 million by the National Eye Institute in March. That comes on the back of an FDA breakthrough device designation received in 2020. (Breakthrough device designations allow for a slightly faster review process.) The combination suggests regulators are at least interested in seeing more data on Injectsense’s device.

The Injectsense device has only been tested in rabbits so far. A study reviewed by TechCrunch suggested that the devices performed well, though the data hasn’t been peer reviewed. There were no ocular issues in the animals, and the devices were successfully implanted.

This new grant will pave the way for more animal and bench testing at the Johns Hopkins Wilmer Eye Institute this year.

Those tests will also inform a human pilot study in Chile also scheduled for this year. Cao said the team selected Chile for human trials for three reasons: lower overall cost, an experienced review board at the Centro de la Visión in Santiago, and specifically to work with Juan Jose Mura Castro, an ophthalmologist there.

Measuring intraocular pressure might not seem like an especially flashy application of injectable technology when the likes of Neuralink are in the headlines. But the motivation behind the device is both personal and pragmatic.

Cao’s inspiration for working in the glaucoma space comes from his own experience with his late father, who suffered from the disease. It’s not an uncommon story. Glaucoma is the second leading cause of blindness worldwide, and the leading cause of blindness in the US, where it affects about 3 million people. Worldwide, glaucoma cases are projected to increase from 57.5 million to over 111.8 million by 2040.

When it comes to combating glaucoma, measuring pressure is useful. Numerous scientific studies have shown that intraocular pressure is a major risk factor for glaucoma. It’s not the only risk factor, and not all people with glaucoma have elevated intraocular pressure, but it is still considered the most important one.

The Injectsense leadership team.

The big question with any implantable device is: What’s to be gained by actually putting a sensor in the body? If eye doctors can already measure intraocular pressure with the tools they have, why upgrade to something so technical?

Cao’s argument is that measuring intraocular pressure during clinic visits misses key fluctuations in pressure that scientists know happen within the eye. But because we don’t measure those fluctuations routinely in most people, we could be missing potential avenues for care.

That argument, to some extent, has been echoed by research. While it’s possible, though cumbersome, to measure intraocular pressure regularly during the day, measuring these changes at night is difficult. And, data has suggested that at night, intraocular pressure does fluctuate, perhaps even peak.

For instance, one study measured intraocular pressure every two hours in 24 patients with early-stage glaucoma. The study says that patients were “awakened if necessary,” but it’s hard to imagine not being awakened by someone opening and touching your eye. It found that the glaucoma patients showed different patterns of intraocular pressure over the course of the night compared to healthy controls: between 5:30 and 7 AM, their intraocular pressure increased, while the control group’s pressure declined.

The authors go on to state that this “phase delay” could be relevant to their glaucoma diagnoses, but they don’t expand on why that might be the case. And, they advise that intraocular pressure measurement in the clinician’s office “is probably not adequate for the optimal management of glaucoma.”

Cao argues that continuous sensing could provide a picture into how these changes affect glaucoma progression.

“We keep looking at clinical studies and research, and they keep telling everybody that the fluctuations [in pressure], or night pressure is important,” he said. “The night pressure is important because when you lower the blood pressure, the intraocular pressure goes up.

“So say you will have a heart condition and glaucoma, you never want to take your drug before going to bed because you lower your [blood] pressure and you spike your [intraocular pressure] in the middle of the night.”

Injectsense’s technology already exists in a viable form factor, but there’s still a lot of work to do. Remember, Injectsense is still in animal trials, so these big ideas still have a ways to go before they’re ready for FDA review.

The company has raised $15 million so far and is in the process of raising a Series C round. Investors include a large ophthalmic strategic investor and Revelation Partners, as well as several undisclosed investors.

Microsoft’s PeopleLens project helps blind kids learn social cues in conversation

Among the challenges of growing up with a visual impairment is learning and participating in the social and conversational body language used by sighted people. PeopleLens is a research project at Microsoft that helps the user stay aware of the locations and identities of the people around them, promoting richer and more spontaneous interactions.

A sighted person looking around a room can quickly tell who is where, who’s talking to whom and other basic information useful for lots of social cues and behaviors. A blind person, however, may not know who has just entered a room, or whether someone has just looked at them to prompt them to speak. This can lead to isolation and antisocial behaviors, like avoiding groups.

Researchers at Microsoft wanted to look into how technology could help a child who has been blind since birth access that information and use it in a way that makes sense for them. What they built was PeopleLens, a clever set of software tools that runs on a pair of AR glasses.

Using the glasses’ built-in sensors, the software can identify known faces and indicate their distance and position by providing audio cues like clicks, chimes and spoken names. For instance, a small bump noise will sound whenever the user’s head points in anyone’s direction, and if that person is within 10 feet or so, it will be followed by their name. Then a set of ascending tones helps the user direct their attention toward the person’s face. Another notification will sound if someone nearby looks at the user, and so on.
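The cue logic described above can be sketched as a small decision function. The structure follows the article’s description; the type names, the exact 10-foot cutoff and the cue labels are illustrative assumptions, not Microsoft’s actual API.

```python
from dataclasses import dataclass

NAME_RANGE_FEET = 10.0  # article: name is spoken within "10 feet or so"

@dataclass
class Person:
    name: str
    distance_feet: float
    in_gaze: bool          # is the wearer's head pointed at this person?
    looking_at_user: bool  # is this person looking at the wearer?

def cues_for(person: Person) -> list[str]:
    """Return the ordered audio cues for one detected person,
    following the behavior described in the article."""
    cues = []
    if person.in_gaze:
        cues.append("bump")                    # head points at someone
        if person.distance_feet <= NAME_RANGE_FEET:
            cues.append(f"say:{person.name}")  # close enough to announce
        cues.append("ascending-tones")         # guide attention to the face
    if person.looking_at_user:
        cues.append("gaze-notification")       # someone looked at the wearer
    return cues
```

For someone six feet away whom the wearer’s head is pointed at, the function returns the bump, the spoken name and the ascending tones, in that order; a person beyond the name range gets the bump and tones but no name.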

The PeopleLens software and its 3D view of the environment. Image Credits: Microsoft Research

The idea isn’t that someone would wear a device like this for life, but use it as a learning aid to improve their awareness of other cues and how to respond to them in a prosocial way. This helps a kid build the same kinds of non-verbal skills that others learn with the benefit of sight.

Right now, PeopleLens is very much an experiment, though the team has been working on it for quite a while. The next step is to assemble a cohort of learners in the U.K. between the ages of 5 and 11 who can test the device over a longer period. If you think your kid might be a good match, consider signing up at the study page run by Microsoft’s partner, the University of Bristol.