No one has done AR or VR well. Can Apple?

On Monday, Apple is more than likely going to reveal its long-awaited augmented or mixed reality ‘Reality Pro’ headset during the keynote of its annual WWDC developer conference in California. It’s an announcement that has been tipped or teased for years now, and reporting on the topic has suggested that at various times, the project has been subject to delays, internal skepticism and debate, technical challenges and more. Leaving anything within Apple’s sphere of influence aside, the world’s overall attitude towards AR and VR has shifted considerably — from optimism, to skepticism.

Part of that trajectory is just the natural progression of any major tech hype cycle, and you could easily argue that the time to make the most significant impact in any such cycle is after the spike of undue optimism and energy has subsided. But in the case of AR and VR, we’ve actually already seen some of the tech giants with the deepest pockets take their best shots and come up wanting — not for lack of trying, but because of limitations in terms of what’s possible even at the bleeding edge of available tech. Some of those limits might actually be endemic to AR and VR, too, because of variances in the human side of the equation required to make mixed reality magic happen.

The virtual elephant in the room is, of course, Meta. The name itself pretty much sums up the situation: Facebook founder Mark Zuckerberg read a bad book and decided that VR was the inevitable end state of human endeavor — the mobile moment he essentially missed out on, but even bigger and better. Zuckerberg grew enamored by his delusion, first acquiring crowdfunded VR darling Oculus, then eventually commandeering the sobriquet for a shared virtual universe from the dystopian predictions of a better book and renaming all of Facebook after it.

Meta has had its kick at the can — in fact it’s been kicking furiously for the past half-decade at least. The last two efforts of note were the Meta Quest 3, which it revealed earlier this week to mild applause, and the intensely overpriced Meta Quest Pro, which landed with a thud that was anything but virtual. The best that you can say for Mark’s metaversal ambitions is that the Meta Quest and Quest 2 lured in a decent number of VR-curious casuals — but not nearly enough to build a sustainable business on at the scale of Facebook or the iPhone.

Looking around for a second-place finisher to supplement Meta’s thin dossier in support of AR/VR being the platform of the future, we come up pretty short on candidates. HTC ended up going all-in on VR when it offloaded its smartphone division to Google, but that’s hardly made it a household name. Sony launched a second generation of its PSVR this year, but that seems, by most accounts, to have been less enthusiastically received than the first. Valve has a VR headset, the Index, which I mention mostly in case you forgot (for which you’d be forgiven).

But this is Apple. It’s the company that basically invented the MP3 player and the smartphone. Except that it didn’t actually invent either of those things; it just made them better. And the things it was working from were already pretty well-loved and widely adopted (any number of generic MP3 players in the former case, and the BlackBerry in the latter). Apple has never actually had to deal with a cold start problem — it’s always been a refiner, not an inventor, nor a rescuer.

AR and VR headsets are not analogs to early MP3 players or smartphones — no matter how much companies spend developing them, no matter how advanced the technologies they offer on board (or, conversely, how many concessions they make to comfort and convenience), consumers regularly stand up more or less in unison and say ‘neat, but no thanks.’

Apple’s entry seems unlikely to land any differently, despite what you may think of the company and its track record. AR and VR have fundamental accessibility problems: huge swaths of the population find them nausea-inducing regardless of what mitigation strategies are put in place. A huge chunk of people simply don’t like having to wear something on their face, period. In these cases, there’s probably no value threshold that can overcome that objection — and certainly not one demonstrated by any of the existing attempts that have made it into people’s hands, well-funded and varied though they may be.

The internet is littered with blog posts penned by authors who underestimated Apple at their peril, deriding the iPhone as a “toy” or claiming the Apple Watch would become a high-profile failure. It’d be dumb not to admit the possibility that, as in those other areas, Apple might be able to come through with a surprise success that does end up striking a chord with a mass-market audience. AR and VR, however, are a very different part of the technosphere, and the Apple of today is simply a different company from the Apple that introduced the iPhone — or even the one that brought us the Apple Watch.

There’s a tremendous amount of anticipation around this launch, to be sure, but it’s different from the anticipation around other Apple launches. This time, the big question is ‘why’ — and for once, Apple can’t look to other examples for answers.

No one has done AR or VR well. Can Apple? by Darrell Etherington originally published on TechCrunch

Lenovo’s Yoga Book 9i realizes the full potential of a dual-screen laptop

Lenovo’s Yoga Book 9i drew both appreciative and skeptical stares at CES earlier this year when it made its official debut: With two 13-inch OLED screens attached with a central hinge, it’s one of the most unusual laptop designs to ever make it into actual production. The Lenovo Yoga Book 9i ($2,099) is building on a long tradition of dual-screen notebook and portable device concepts (along with some shipping hardware), but it’s the first that proves the paradigm can work — and work well — for a lot of people.

Basics

The Lenovo Yoga Book 9i is defined by one feature in particular: Instead of having a hardware keyboard and trackpad for its lower half, it has a second 13.3-inch OLED screen to match the one on top. These are connected by a remarkable hinge that allows for use in a number of orientations, and which also packs in a speaker array powered by Bowers & Wilkins (more on this later, but spoiler: it works really well).

In the box, you also get a separate Bluetooth keyboard, a stylus (Lenovo’s Digital Pen 3), a Bluetooth mouse, and an origami-style folding stand that doubles as a case for the keyboard and stylus. The sticker price of $2,099 can definitely seem steep at first glance, but Lenovo has at least done right by customers by including everything they need in the box instead of making accessories like the keyboard, stylus and mouse available piecemeal as add-on purchases after the fact.

Lenovo Yoga Book 9i

Image Credits: Darrell Etherington / TechCrunch

Of course, the Yoga Book 9i is powered by Windows 11 under the hood, with a layer of surprisingly low-key and unobtrusive Lenovo software included to ensure that all the dual-screen magic just works. It mostly does, albeit with the software seams still showing a bit here and there — something that’s fully expected in a first-generation device running early software. None of these hiccups proves annoying or distracting enough to compromise the overall experience of using the Yoga Book 9i, however, which is excellent on balance.

Hardware and Design

The Lenovo Yoga Book 9i is, top to bottom, a very well-made piece of kit. Both of the screens are gorgeous, which makes sense given that they share the same panel and specs, and they’re enclosed in a very durable-feeling metal shell with rounded edges that are nice to touch and look great with shiny reflective finishes. The Yoga Book 9i only comes in one color, at least at launch, which is a teal that wouldn’t normally be my personal pick but that works very well for distinguishing the novel machine at a glance even when it’s closed. My one complaint here is that while the included keyboard is color-matched to the case, the stylus and mouse come in a plain grey that feels a little aesthetically incongruous when working with the entire combined setup.

Luckily, aside from some questionable accessory colorway choices, Lenovo gets everything else pretty much exactly right. The upper and lower halves of the Yoga Book 9i close with a satisfying sticky click, and the hinge holds them at whatever angle you choose to use them at. This is especially important because the Yoga Book 9i can be used in quite a few different ways: in a standard laptop orientation with or without the hardware keyboard magnetically attached to the bottom display; with the screen flipped all the way around in a single-screen tablet mode; with the displays stacked on top of one another and propped up on the stand in a horizontal dual-screen mode; with the displays side by side in a vertical dual-screen orientation; or with both screens active and facing different users on either side in tent mode.

The hinge that allows all of that flexibility is a wonder itself — it’s entirely covered in a grille and has speakers throughout, which provide sound in whatever direction it’s needed. The hardware and the tuning by Bowers & Wilkins result in surprisingly great sound for notebook speakers — it’s more than capable of providing a great movie- or video-watching experience, and it even works adequately well for playing back music if you’re stuck without headphones or an external speaker.

The built-in camera has a 5MP sensor and IR so that it works for Windows Hello facial recognition login, and there’s a hardware disable switch on the side of the laptop’s lower display for those who value extra privacy assurance. In use, the camera was more than adequate for video conferencing, and seemed to deal well with a range of different lighting conditions, both indoors and out.

One additional note here on the quality of the hardware — it’s very durable as well, a fact I can attest to because of two highly unusual but ultimately handy random accidents. First, the Yoga Book 9i fell from a standing desk I was using outside when my patio umbrella blew into it and toppled it off. It survived with no discernible damage or marks whatsoever. The second accident happened when the hook holding my massive dining room chandelier gave out overnight, allowing the all-metal light fixture to swing, wrecking ball-like, directly into the Yoga Book 9i’s top surface, knocking it from my dining room table to the ground. That resulted in very, very tiny (like you almost can’t see it) surface scratching to the paint finish, but had no other impact on the machine, either physically (i.e., no dents) or in terms of function. It’s built like a tank, which is actually a really useful feature for a laptop like this that you’ll want to handle a lot, flip around, travel with and reorient constantly.

Performance

The Yoga Book 9i is powered by a 13th-generation Intel Core i7, comes with 16GB of LPDDR5X RAM, has integrated Intel Iris Xe graphics and a 1TB SSD. There are three Thunderbolt 4 ports (one on the left and two on the right), and there’s Bluetooth 5.1 and Wi-Fi 6E in terms of connectivity. The two OLED screens have 2.8K resolution and are HDR-capable, with 400 nits max brightness and 60Hz refresh rates.

While it doesn’t have amazing specs on paper compared to some of the latest ultrabooks out there, performance in practice from the Yoga Book is more than sufficient for most people. It’s a speedy machine that feels fast and nimble, and it can handle Photoshop and Lightroom workflows with relative ease. I didn’t use it for video editing, so your mileage may vary there, but it’s an excellent workhorse for anyone who works mostly in Office or Google Workspace, email and light media.

As for media consumption, this is definitely one of its fortes. The displays are both fantastic for watching video, and the second screen means you have built-in options for multitasking, including doing things like browsing the web or using Twitter while you’re watching something, or doing digital drawing/painting on the lower screen while viewing a reference on the top.

Aside from the performance of its components, one key to the Lenovo Yoga Book 9i’s success is just how good it is at taking advantage of its unique physical design. As you’d expect, there are compromises when it comes to using a notebook with two screens instead of just one with a hardware keyboard and trackpad permanently installed in the other.

Lenovo has actually done an amazing job mitigating most of these, with multitouch gestures that instantly call up the software keyboard and trackpad whenever you need them, and a clever magnet-based docking mechanism for the included hardware keyboard that means you can easily plop it into place whenever you need to dive into a focused session of rapid WPM output. The stand, meanwhile, works so well — and doubles as a case for both the pen and the keyboard — that it makes the accessories and the computer an extremely portable package.

If I had one significant complaint about the Yoga Book 9i when it comes to performance, it’s battery life. The two displays obviously draw additional power versus the single screen in more traditional machines, and they’re very high-quality, fairly bright panels. In practice, I’ve been averaging about six hours of use per charge — and that can drop considerably if you’re doing taxing things like long video meetings. It’s a throwback to much earlier days amid a sea of very long-lived portable powerhouses, but given the extra screen real estate you’re working with, it does also make sense as a trade-off.

Bottom Line

The Lenovo Yoga Book 9i looked a bit like a stunt when it made its official debut earlier this year — ambitious, certainly, but practical? That didn’t seem likely. Now that it’s here, though, I can say that it is actually eminently practical, and in fact it ranks as my favorite notebook to use, period. That’s factoring in the relatively meager battery life I mentioned, and some other very minor concessions to the form factor, like not being able to close the lid over the hardware keyboard when it’s magnetically attached to the bottom display.

I’ve eschewed most form factor shifts in the PC world — hybrids, 2-in-1s, tablets with detachable keyboards, etc. The Lenovo Yoga Book 9i has broken through with its unique hinged dual-screen approach, which works well both at home and on the road, and which offers something no other competitor, regardless of brand, can claim to match.

Lenovo’s Yoga Book 9i realizes the full potential of a dual-screen laptop by Darrell Etherington originally published on TechCrunch

Arm launches new chips for faster smartphone performance during Computex

Arm CEO Rene Haas

Just ahead of CEO Rene Haas’ keynote at Computex in Taipei today, Arm launched two new products designed to increase smartphone performance. The first is the Arm Cortex-X4, its fourth-generation Cortex-X core. Arm said the Cortex-X4 is the fastest CPU it has made so far and will bring 15% more performance than its predecessor, the Cortex-X3, with a focus on enabling artificial intelligence and machine learning-based apps.

The second new product is the Arm Immortalis-G720, which is based on its fifth-generation GPU architecture. Its predecessor, the Immortalis-G715 GPU, is currently inside flagship devices from OPPO and vivo through a partnership with MediaTek. Arm’s fifth-generation GPU architecture was created with high geometry games and real-time 3D apps in mind, in order to replicate the feel of console gameplay on mobile devices.

Arm said the Cortex-X4’s microarchitecture consumes 40% less power than the Cortex-X3 on the same process, improving responsiveness and app launch times.

Arm also announced a new platform for mobile computing called Arm Total Compute Solutions 2023 (TCS23), which will include IP like the Immortalis GPU, Armv9 CPUs and software enhancements. With their packages of IP, the company’s Total Compute Solutions series was created for System on Chip (SoC) designers who are building their own compute subsystems. TCS23 is meant for premium smartphone models and builds on Arm’s new Armv9.2 architecture. Its GPUs are based on the fifth-generation architecture, including the newly launched Immortalis-G720, Mali-G720 and Mali-G620. The Armv9.2 compute cluster includes the new Cortex-X4, Cortex-A720 and Cortex-A520 CPUs, and the DSU-120, Arm’s latest DynamIQ Shared Unit.

In his keynote today, Haas said Arm has traditionally been an IP supplier, but then started to see how long it was taking for IP to integrate with other IP. So to help SoC designers, it started to build CPU, memory systems and compute blocks before integrating, configuring and validating them to deliver a full system.

Arm is continuing its partnership with TSMC by “taping out the Cortex-X4 on the TSMC N3E process,” which it calls an industry first.

Owned by the SoftBank Group Corp, Arm announced last month that it had filed in the U.S. for what will be this year’s largest initial public offering. It plans to raise between $8 billion and $10 billion in its IPO on Nasdaq.

Arm’s decision to make its stock debut comes as U.S. IPOs, excluding SPACs, are down about 22% to just $2.35 billion year-to-date, reports CNN.


Arm launches new chips for faster smartphone performance during Computex by Catherine Shu originally published on TechCrunch

Meet the tiny, wireless sleep apnea diagnostic wearable headed for the US

UK medtech startup Acurable has gained FDA clearance for a novel wireless diagnostic device for remote detection of obstructive sleep apnea (OSA). A formal launch into the US market is slated to follow this summer. Its wearable is already being used by a number of hospitals in the UK (where it launched in 2021) and in the European Union, after obtaining local regulatory clearances in the region.

The startup, which was founded back in 2016, is the brainchild of Imperial College professor Esther Rodriguez-Villegas, director of the university’s Wearable Technologies Lab, who spent some 1.5 decades conducting research into using acoustic sensing for tracking respiratory biomarkers to diagnose cardiorespiratory conditions — work that underpins the commercial hardware.

The London-based startup raised an €11 million Series A round (~$11.8M) back in October with its eye on the US launch. Prior to that, it received £1.8M (~$2.2M) across three different grants from Innovate UK, a national body which supports product commercialization. Private investors in the medtech startup include Madrid-based Alma Mundi Ventures, London’s Kindred Capital and KHP Ventures, a healthcare-focused venture fund also in the UK which is a collaboration between two NHS Hospital Trusts (King’s College and Guy’s and St Thomas’) and King’s College London.

OSA refers to a chronic respiratory condition characterized by pauses in breathing caused by the person’s upper airway being obstructed during sleep. It’s thought to affect a small percentage of adults — around 1.5 million adults in the UK and some 25M in the US (with many more people affected across the world) — and while not immediately life-threatening, it can be linked to serious health implications, since it can contribute to conditions such as cardiovascular disease, diabetes, dementia and even heart attacks, making treatment or management important.

Healthcare services often struggle to manage chronic conditions, given the expense of long term monitoring. But Rodriguez-Villegas explains that in the case of sleep apnea there is even a challenge for healthcare services to diagnose the condition — since traditional polysomnography tests are inconvenient and/or costly. (The patient is either asked to sleep overnight at a special center, where they’re fitted out with a bunch of wired sensors. Or else they are trained how to fit the various electrodes themselves at home, with the associated risk that the test will have to be repeated if sensors are incorrectly fitted or get detached during sleep.)

Acurable’s tiny, self-applied wearable has been designed to offer a far more patient-friendly (and cost-effective) way to diagnose the condition — allowing the testing to be both remote (in patients’ homes) and super simple, so patients can self-administer it.

One early adopter of Acurable’s product — Dr Michael Harrison, a professor of surgery and pediatrics at the Children’s Hospital at UCSF — offers strong praise, writing in a supporting statement that the device has been “game-changing for our patients, as it is a much simpler and comfortable experience”, as well as talking up how it “enables clinicians to conduct multiple night studies at a time, improving patient outcomes by giving them a much speedier diagnosis”.

For her part, Rodriguez-Villegas says she saw a role for developing technologies to solve problems with a significant social impact by addressing healthcare bottlenecks associated with chronic (and often under-diagnosed) respiratory conditions, starting with sleep apnea. So the plan is for her startup to bring more wearables to market in future, for other respiratory conditions, such as COPD and asthma — all based on the core acoustic sensing IP developed for the first device.

“What I realised early on was that [chronic cardiorespiratory conditions] will not be something that could be solved if we continue with [traditional healthcare] processes — that it’s not a matter of pumping money into the system. Because there is also human resources. So you need the clinicians, the nurses, you need the understanding. So that’s where my journey started with tech,” she tells TechCrunch. “Deciding how do we create techs that can solve the bottlenecks and make patients’ lives better?”

While the core research underpinning the product has taken well over a decade, designing, prototyping and building the actual product took around six or seven years, according to Rodriguez-Villegas — work that included miniaturizing the hardware and designing a highly accessible UX, so it’s easy for patients of all ages (and tech abilities) to use, which she says was a huge priority for her.

“The app is designed so that there is no room for stress or failure,” she says, explaining how she pushed her design team to avoid assuming users would know how to navigate traditional software menu structures. “I had to have lots of conversations with my UI people in the beginning because they couldn’t understand where it was coming from.”

A patient self administering the AcuPebble with guidance from the app (Photo credit: Acurable)

As for the hardware, the startup’s one-shot sensing device, which is called the AcuPebble, looks a bit like a coffee pod that’s been colored an Apple-esque shade of shiny white. So sleek and minimalist looking is it that it resembles some kind of consumer device, rather than a medical instrument, with no utility grey plastic or scary bundles of cables in sight.

This purist look is entirely by design — reflecting Acurable’s overarching mission to rethink a convoluted diagnostic bottleneck using sensor-driven automation.

Patients use the AcuPebble at home, where it’s worn overnight stuck to the skin of their neck (using a patented adhesive). It’s also a single-use medical device — gathering and uploading enough data across one night’s tracking of the sleeper’s breathing to produce a diagnosis. (So to borrow another piece of Apple lore, you could say it’s designed to ‘just work’.)

The kit works by using tiny, high-performance piezoelectric MEMS microphones to — in simple terms — listen to the patient’s breathing as they sleep. Rodriguez-Villegas is guarded about the exact details of how it works, however, saying the product is only partially patented, so protecting IP remains a concern.

Acoustic sensing as a diagnostic tool in healthcare is of course nothing new — just think of the stethoscope. But what’s novel here is the understanding of the sonic landscape associated with cardiorespiratory conditions that Acurable has been able to develop through years of research to isolate relevant biomarkers.

“The hardware is designed to detect particular biomarkers we are looking for and those biomarkers are very different to the conventional ones. And how do we know this? It’s again because it’s been almost two decades in the making,” says Rodriguez-Villegas.

The data the device captures is uploaded to the cloud, where it’s processed by Acurable, whose algorithms produce an automated diagnosis that is sent to (human) clinicians for review. Much of the research underpinning the hardware was focused on understanding the specific ‘signal in the noise’ of the human body — winnowing down noisy human biology into the respiratory biomarkers of interest for diagnosing the particular cardiorespiratory conditions it’s focusing on.

The algorithms it’s using for diagnosis of sleep apnea are not machine learning or any other form of artificial intelligence. Nor is its approach data-driven, per Rodriguez-Villegas, who emphasizes it’s using algorithms that are “fully traceable”. She does not entirely rule out using AI in the future — but is categorical that AI is unnecessary for this product and, indeed, that explainability in healthcare is an essential component; that there must be no black boxes for medical diagnostics.

“In this product — and the product in the market now — there is no AI. This is physiological signal processing based on very unique physiological modelling that we are experts on,” she says. “Everything in the algorithms happens for a known reason so the algorithms are fully traceable… Again, this is based on the research that we did in respiration for many years. That led us to that. It is not data driven. It’s really not data driven. I cannot really tell you exactly what it is. Because that’s part of the computational IP.

“But I do understand that everybody nowadays because AI is in everybody’s mind it’s almost like that is the default thought, right, that things are AI or they are data driven. No, no. We know why every single thing is happening. So in the same way, as you might know, you know, why your heart beats and [the steps in the cardiac cycle that take place around that] this is gonna be like that.”

Founder professor Esther Rodriguez-Villegas (Photo credit: Acurable)

Demonstrating the efficacy of its diagnostic algorithms was a core part of obtaining regulatory clearance for AcuPebble. And details of one clinical trial of the device, which was carried out at the Royal Free NHS Trust with a sample size of 150 patients — comparing usage to at-home multi-channel polygraphy followed by sleep specialist manual signal interpretation — can be found here.

Acurable says it was the first wearable medical device to obtain the CE mark in Europe for the automated testing of OSA at home. So, in its home region, it has regulatory clearance for fully automated diagnoses. But, in practice, the product has been set up so that the data (and diagnosis) are sent to a clinician — which helps keep these essential users comfortable with a novel tool — so there’s still a human in the loop.

Over in the US — where Acurable’s device will be officially launching at some point this summer — it’s obtained 510(k) clearance from the FDA for OSA evaluation in adults for two variants of the device customised for the American healthcare market. (Rodriguez-Villegas explains it did not file for de novo clearance since there is no existing device on the market that does automated diagnosis for OSA.)

The US versions of the product send data to a clinician to review and provide a diagnosis. So, in that market, it’s being strictly positioned as a clinical support tool. But that’s down to differences in the regulatory environment, rather than any technical difference in capability in the different per-market versions of the product.

It’s still early days for Acurable — with “tens” of hospitals using the AcuPebble at this stage. But it’s expecting usage to step up as it launches in the US and predicts it will be expanding its team by around 300%.

Rodriguez-Villegas also says it intends to expand into selling consumer products too “eventually” — but not before clinicians have been able to get comfortable with using the device and the data it provides.

She’s dismissive of current-gen consumer wearables — which can pack a range of health-tracking claims and even include sleep apnea detection-type features, such as tracking nighttime SpO2 — saying a lot of these consumer wearables generate data that’s “very, very misleading” and creates an “enormous amount of stress” for consumers. And indeed for the doctors faced with patients bringing in their own unreliable, non-medical-grade health data.

“So that’s a situation that we totally want to avoid,” she adds. “Anyone can check our results. It is very, very good. It’s very, very reliable. But there is a lot of scepticism in the medical community when it comes to wearables. And that’s why we decided to go down [this regulated medical device] route.”

Meet the tiny, wireless sleep apnea diagnostic wearable headed for the US by Natasha Lomas originally published on TechCrunch

Meet the tiny, wireless sleep apnea diagnostic wearable headed for the US

UK medtech startup Acurable has gained FDA clearance for a novel wireless diagnostic device for remote detection of obstructive sleep apnea (OSA). A formal launch into the US market is slated to follow this summer. Its wearable is already being used by a number of hospitals in the UK (where it launched in 2021) and in the European Union, after obtaining local regulatory clearances in the region.

The startup, which was founded back in 2016, is the brainchild of Imperial College professor Esther Rodriguez-Villegas, director of the university’s Wearable Technologies Lab, who spent some 1.5 decades conducting research into using acoustic sensing for tracking respiratory biomarkers to diagnose cardiorespiratory conditions — work that underpins the commercial hardware.

The London-based startup raised an €11 million Series A round (~$11.8M) back in October with its eye on the US launch. Prior to that it received £1.8M (~$2.2M) in across three different grants from Innovate UK, a national body which supports product commercialization. Private investors in the medtech startup include Madrid-based Alma Mundi Ventures, London’s Kindred Capital and KHP Ventures, a healthcare-focused venture fund also in the UK which is a collaboration between two NHS Hospital Trusts (King’s College and Guy’s and St Thomas’) and King’s College London.

OSA refers to a chronic respiratory condition characterized by pauses in breathing caused by the person’s upper airway being obstructed during sleep. It’s thought to affect a small percentage of adults — around 1.5 million adults in the UK; and some 25M in the US (with many more people affected across the world) — and while not immediately life-threatening it can be linked to serious health implications since it can contribute to conditions such as cardiovascular disease, diabetes, dementia and even heart attacks, making treatment or management important. 

Healthcare services often struggle to manage chronic conditions, given the expense of long term monitoring. But Rodriguez-Villegas explains that in the case of sleep apnea there is even a challenge for healthcare services to diagnose the condition — since traditional polysomnography tests are inconvenient and/or costly. (The patient is either asked to sleep overnight at a special center, where they’re fitted out with a bunch of wired sensors. Or else they are trained how to fit the various electrodes themselves at home, with the associated risk that the test will have to be repeated if sensors are incorrectly fitted or get detached during sleep.)

Acurable’s tiny, self-applied wearable has been designed to offer a far more patient-friendly (and cost-effective) way to diagnose the condition — allowing testing to be both remote (in patients’ homes) and simple enough for patients to self-administer.

One early adopter of Acurable’s product — Dr Michael Harrison, a professor of surgery and pediatrics at the Children’s Hospital at UCSF — offers strong praise, writing in a supporting statement that the device has been “game-changing for our patients, as it is a much simpler and comfortable experience”, as well as talking up how it “enables clinicians to conduct multiple night studies at a time, improving patient outcomes by giving them a much speedier diagnosis”.

For her part, Rodriguez-Villegas says she saw a role for developing technologies to solve problems with a significant social impact by addressing healthcare bottlenecks associated with chronic (and often under-diagnosed) respiratory conditions, starting with sleep apnea. So the plan is for her startup to bring more wearables to market in future, for other respiratory conditions, such as COPD and asthma — all based on the core acoustic sensing IP developed for the first device.

“What I realised early on was that [chronic cardiorespiratory conditions] will not be something that could be solved if we continue with [traditional healthcare] processes — that it’s not a matter of pumping money into the system. Because there is also human resources. So you need the clinicians, the nurses, you need the understanding. So that’s where my journey started with tech,” she tells TechCrunch. “Deciding how do we create tech that can solve the bottlenecks and make patients’ lives better?”

While the core research underpinning the product took well over a decade, designing, prototyping and building the actual product took around six or seven years, according to Rodriguez-Villegas — work that included miniaturizing the hardware and designing a highly accessible UX that’s easy for patients of all ages (and tech abilities) to use, which she says was a huge priority for her.

“The app is designed so that there is no room for stress or failure,” she says, explaining how she pushed her design team to avoid assuming users would know how to navigate traditional software menu structures. “I had to have lots of conversations with my UI people in the beginning because they couldn’t understand where it was coming from.”

A patient self administering the AcuPebble with guidance from the app (Photo credit: Acurable)

As for the hardware, the startup’s one-shot sensing device, which is called the AcuPebble, looks a bit like a coffee pod that’s been colored an Apple-esque shade of shiny white. So sleek and minimalist looking is it that it resembles some kind of consumer device, rather than a medical instrument, with no utility grey plastic or scary bundles of cables in sight.

This purist look is entirely by design — reflecting Acurable’s overarching mission to rethink a convoluted diagnostic bottleneck using sensor-driven automation.

Patients use the AcuPebble at home where it’s worn overnight stuck to the skin of their neck (using a patented adhesive). It’s also a single-use medical device — gathering and uploading enough data across one night’s tracking of the sleeper’s breathing to produce a diagnosis. (So to borrow another piece of Apple lore, you could say it’s designed to ‘just work’.)

The kit works by using tiny, high-performance piezoelectric MEMS microphones to — in simple terms — listen to the patient’s breathing as they sleep. Rodriguez-Villegas is guarded about the exact details of how it works, though, saying the product is only partially patented, so protecting IP remains a concern.

Acoustic sensing as a diagnostic tool in healthcare is of course nothing new — just think of the stethoscope. But what’s novel here is the understanding of the sonic landscape associated with cardiorespiratory conditions that Acurable has been able to develop through years of research to isolate relevant biomarkers.

“The hardware is designed to detect particular biomarkers we are looking for and those biomarkers are very different to the conventional ones. And how do we know this? It’s again because it’s been almost two decades in the making,” says Rodriguez-Villegas.

The data the device captures is uploaded to the cloud, where it’s processed by Acurable, whose algorithms produce an automated diagnosis that is sent to (human) clinicians for review. Much of the research underpinning the hardware focused on finding the specific ‘signal in the noise’ of the human body — winnowing noisy human biology down to the respiratory biomarkers of interest for diagnosing the particular cardiorespiratory conditions it’s focusing on.
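Acurable doesn’t disclose its algorithms, but the general shape of pulling a respiratory signal out of acoustic noise can be sketched with standard signal processing: compute a short-time energy envelope of an overnight recording and flag prolonged low-energy spans as candidate breathing pauses. Everything below (function name, thresholds, frame sizes, the detection rule) is an illustrative assumption, not Acurable’s method:

```python
import numpy as np

def detect_apnea_candidates(audio, sr, min_pause_s=10.0, frame_s=0.5):
    """Flag prolonged low-energy spans in a breathing recording.

    Illustrative only: a real diagnostic pipeline uses validated
    physiological models, not a simple energy threshold.
    """
    frame = int(sr * frame_s)
    n_frames = len(audio) // frame
    # Short-time RMS energy envelope, one value per frame
    env = np.array([
        np.sqrt(np.mean(audio[i * frame:(i + 1) * frame] ** 2))
        for i in range(n_frames)
    ])
    # Frames well below the typical energy level count as "no breathing sound"
    quiet = env < 0.2 * np.median(env)
    events, start = [], None
    for i, q in enumerate(quiet):
        if q and start is None:
            start = i
        elif not q and start is not None:
            if (i - start) * frame_s >= min_pause_s:
                events.append((start * frame_s, i * frame_s))
            start = None
    if start is not None and (n_frames - start) * frame_s >= min_pause_s:
        events.append((start * frame_s, n_frames * frame_s))
    return events  # list of (start_time_s, end_time_s) pauses
```

A real product would layer artifact rejection, validated physiological modelling and clinical review on top of anything this simple.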

The algorithms it’s using for diagnosis of sleep apnea are not machine learning or any other form of artificial intelligence. Nor is its approach data driven, per Rodriguez-Villegas, who emphasizes it’s using algorithms that are “fully traceable”. She does not entirely rule out using AI in the future, but is categorical that AI is unnecessary for this product and, indeed, that explainability is an essential component of healthcare; that there must be no black boxes in medical diagnostics.

“In this product — and the product in the market now — there is no AI. This is physiological signal processing based on very unique physiological modelling that we are experts on,” she says. “Everything in the algorithms happens for a known reason so the algorithms are fully traceable… Again, this is based on the research that we did in respiration for many years. That led us to that. It is not data driven. It’s really not data driven. I cannot really tell you exactly what it is. Because that’s part of the computational IP.

“But I do understand that everybody nowadays because AI is in everybody’s mind it’s almost like that is the default thought, right, that things are AI or they are data driven. No, no. We know why every single thing is happening. So in the same way, as you might know, you know, why your heart beats and [the steps in the cardiac cycle that take place around that] this is gonna be like that.”

Founder professor Esther Rodriguez-Villegas (Photo credit: Acurable)

Demonstrating the efficacy of its diagnostic algorithms was a core part of obtaining regulatory clearance for AcuPebble. Details of one clinical trial of the device, carried out at the Royal Free NHS Trust with a sample of 150 patients — comparing it against at-home multi-channel polygraphy followed by manual signal interpretation by a sleep specialist — can be found here.

Acurable says it was the first wearable medical device to obtain the CE mark in Europe for the automated testing of OSA at home. So, in its home region, it has regulatory clearance for fully automated diagnoses. But, in practice, the product has been set up so that the data (and diagnosis) are sent to a clinician — which helps keep these essential users comfortable with a novel tool — so there’s still a human in the loop.

Over in the US — where Acurable’s device will officially launch at some point this summer — it has obtained 510(k) clearance from the FDA for OSA evaluation in adults, covering two variants of the device customized for the American healthcare market. (Rodriguez-Villegas explains it did not file for de novo clearance since there is no existing device on the market that does automated diagnosis for OSA.)

The US versions of the product send data to a clinician to review and provide a diagnosis. So, in that market, it’s being strictly positioned as a clinical support tool. But that’s down to differences in the regulatory environment rather than any technical difference in capability between the per-market versions of the product.

It’s still early days for Acurable — with “tens” of hospitals using the AcuPebble at this stage. But it’s expecting usage to step up as it launches in the US and predicts it will be expanding its team by around 300%.

Rodriguez-Villegas also says it intends to expand into selling consumer products too “eventually” — but not before clinicians have been able to get comfortable with using the device and the data it provides.

She’s dismissive of current-gen consumer wearables — which can pack a range of health-tracking claims and even include sleep apnea-detection-style features, such as tracking nighttime SpO2 — saying a lot of these devices generate data that’s “very, very misleading” and creates an “enormous amount of stress” for consumers. And indeed for the doctors faced with patients bringing in their own unreliable, non-medical-grade health data.

“So that’s a situation that we totally want to avoid,” she adds. “Anyone can check our results. It is very, very good. It’s very, very reliable. But there is a lot of scepticism in the medical community when it comes to wearables. And that’s why we decided to go down [this regulated medical device] route.”

The Backbone One: PlayStation Edition mobile controller is now available for Android

The Backbone One – PlayStation Edition controller, a collaboration between Sony and Backbone, is now available for Android.

Last July, the two companies partnered to release an iOS version of the controller with a Lightning port. The idea of the Backbone controller is that you slide your phone into its extended grip and play games.

Both the iOS and Android versions of the Backbone One – PlayStation Edition look like Sony’s DualSense controller. This means users get PlayStation glyphs instead of traditional ABXY controls.

Image Credits: PlayStation

This controller lets you play some PlayStation titles via the PS Remote Play app. You can also play compatible games like “Call of Duty: Mobile,” “Fortnite” and “Diablo Immortal” from the Play Store (or the App Store).

Users can download the Backbone app for a customized PlayStation experience and updates. Plus, the app acts as a hub for all compatible games, including ones from streaming services like Xbox Game Pass, Nvidia GeForce Now and Amazon Luna.

The Backbone app acts as a hub for all controller-supported games. Image Credits: Backbone

The Backbone One – PlayStation Edition is available to purchase for $99 in the US, Canada, Latin America, Europe, the Middle East, Australia, and New Zealand. Sony said that it will soon make the controller available to users in Japan, Korea, Taiwan, Hong Kong, and Singapore as well.

The Backbone One: PlayStation Edition mobile controller is now available for Android by Ivan Mehta originally published on TechCrunch

The other DWI: Driving while immersed

On May 17, Meta and BMW released a video hailing a joint research breakthrough that will allow virtual reality headsets to work in moving cars.

Because the companies have figured out how to track a person’s body movement independently of the car’s motion, passengers and drivers will be able to wear VR headsets to simultaneously see the road and digital content or be totally immersed in a virtual world.

This is “the future we see coming down the road,” a Meta engineer says in the video.

I believe that putting virtual reality headsets in cars will kill people. VR is the most immersive medium ever invented — it covers your eyes and ears to replace the real world with a digital landscape. Meta — which sold 80% of all headsets worldwide last year, and about 20 million in total — is facing the economic reality that VR will not soon replace video games or Zoom meetings. So now it is turning to cars, pointing out in the video that, “Everyone spends time in cars every day.”

I believe that putting virtual reality headsets in cars will kill people. VR is the most immersive medium ever invented.

The notion that someone would drive an automobile while wearing a VR headset may sound outlandish, but twenty years ago, the notion that someone would type a memo while driving would have sounded just as improbable.

Every day, people lose loved ones because drivers choose texting over paying attention to the road. Approximately 5% of all car accidents are caused by distracted drivers, and texting has been proven to cause hundreds of deaths each year in the United States. In the Meta press release, while the narrative focuses on passengers, there is footage of a driver using the system. Moreover, their partner in this endeavor, BMW, is actively promoting VR for drivers.

The most relevant datapoint on this issue is Pokémon GO, an augmented reality video game where players see the world in real time but mediated through their smartphone or AR headset—watching a camera feed on the screen which is overlaid with video game content. The game has already contributed to many deaths. On the website PokemonGoDeathTracker.com, one can find specific news accounts of distracted drivers running over pedestrians while viewing a Pokémon-filled version of the road.

A Purdue University study quantified the phenomenon. Scholars analyzed just under 12,000 police reports of accidents in Tippecanoe County, Indiana, both before and after the release of the game in 2016, which was downloaded 100 million times during the brief study period. They found that in the months following the game’s release, crashes increased by an astonishing 48% in locations where there were virtual Pokémon objects nearby, compared to areas where there were no virtual objects.
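For readers who want the arithmetic behind a before/after comparison like the one the Purdue team describes, here is a toy calculation. The counts are made up for illustration (they are not the study’s data); the point is how the increase near virtual objects is set against the change at control locations:

```python
# Hypothetical crashes per month, NOT the Purdue study's data:
# locations near virtual Pokémon objects vs. control locations far away.
before_near, after_near = 100, 148
before_far, after_far = 100, 103

near_change = (after_near - before_near) / before_near  # relative increase near objects
far_change = (after_far - before_far) / before_far      # background trend at controls
excess = near_change - far_change                       # increase attributable to proximity

print(f"near: +{near_change:.0%}, far: +{far_change:.0%}, excess: +{excess:.0%}")
```

With these invented numbers the near-object locations show a 48% rise, of which 45 percentage points exceed the background trend.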

This game remains wildly popular; about a third of all people who regularly play video games in the US currently play this AR game. At a March ethics conference I attended, we were told that the entire team charged with safety at Niantic, the company that makes Pokémon GO, was just five people.

In the Meta video, the promotion is hedged with a caption: “Professional Driver on Closed Roadway — Do Not Attempt.” The company is asking drivers to resist the temptation of using the most engaging, immersive medium ever invented. Clearly, the same strategy of hoping that drivers resist the temptation of texting has failed miserably.

Most of us can recall a recent experience when we glanced at our phone while driving, and then immediately felt guilty because we lost track of the road for a moment. Now imagine the pull is not simply a typed sentence, but instead an incredibly immersive VR version of your favorite band, or a craps table in Vegas, or courtside at a Lakers game. Pedestrians won’t have a chance, and there is no reason to believe that driver education or safety settings will be more effective in VR than they have been with phones.

I spent a number of years as an advisor to Samsung, working on their AR/VR strategy. I once gave a talk to about half of their C-suite and went through a thought exercise to make them see the urgency of driving while immersed. Imagine you could go back in time and rebuild phones to have a speed switch that automatically turned off phones in moving cars. Would you do it? If you answer “no,” then you are basically killing people every day.

If you answer “yes,” then drivers no longer get to catch up with friends on the way to the office. It was a tense moment, but not an actionable one, because of course there are no time machines. Smartphones in cars are part of life now, and innocent people will continue to die every day because people feel the need to text and drive.

To the decision-makers at Meta, and to those at Apple who plan to release their own headset in June: You don’t need a time machine. VR is still in its infancy. Don’t do this.

Even better, take a leadership role here. In the video, Meta highlighted a feat of engineering — algorithmically separating body movement from car movement. So they actually can build headsets with a speed switch that automatically turns them off in moving cars!
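In code terms, the “speed switch” the author is asking for would be a small policy layered on top of that motion-separation work. A minimal sketch, with hypothetical names and thresholds (this is not Meta’s or any vendor’s API):

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    # Estimated vehicle speed after the headset has algorithmically
    # subtracted the wearer's own head/body motion, as Meta describes.
    vehicle_speed_mps: float

SPEED_CUTOFF_MPS = 2.0  # roughly walking pace; above this, assume a moving vehicle

def immersive_mode_allowed(recent: list[MotionSample]) -> bool:
    """Allow immersive rendering only if no recent sample shows vehicle motion."""
    return all(s.vehicle_speed_mps < SPEED_CUTOFF_MPS for s in recent)
```

A shipping headset would obviously need more than this (debouncing, distinguishing driver from passenger, GPS cross-checks), but the core gate is that simple.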

Just because you can make VR work in a car doesn’t mean you should. How many loved ones are going to be killed because someone wants to hit a block with a lightsaber while driving?

The other DWI: Driving while immersed by Walter Thompson originally published on TechCrunch

Apple invites media to WWDC 2023 keynote, where AR headset is expected to debut

Apple has sent out official invitations to select media to attend its WWDC 2023 keynote in person at the company’s Apple Park headquarters in Cupertino, California. The keynote is set to take place at 10 AM PT / 1 PM ET on June 5. It’ll also be streamed live at the same time for anyone to watch from home.

The WWDC keynote is the kick-off event for Apple’s annual worldwide developers conference, and typically includes a number of software announcements, including the reveal of major new updates to the iPhone’s iOS, iPadOS, macOS and more. This year, the headline rumored announcement is said to be the unveiling of Apple’s augmented reality headset — a long-awaited device (which Apple has not officially acknowledged, of course) that marks the company’s first major foray into the world of ‘mixed reality,’ the term many use to describe both AR and VR.

The AR headset is thought to run a new version of Apple’s mobile operating system potentially known as ‘xrOS’ and could carry ‘Reality One’ branding. Rumors suggest it’ll be relatively slim and light compared to most competitors on the market, use OLED displays inside and feature a display outside for communicating with others IRL, and run a variant of the M2 processor. It should support iPad apps to some extent out of the box, and cost around $3,000 when it eventually goes on sale.

There are still many question marks around the headset — including whether it will actually break cover at this event — but the graphics for this event are suggestive of some kind of mixed reality announcement, and developers will need time to build for the new platform if Apple wants it to be ready for consumers at its eventual launch.

Apple invites media to WWDC 2023 keynote, where AR headset is expected to debut by Darrell Etherington originally published on TechCrunch

Apple partners with Broadcom to build 5G components in the United States

Apple today announced a multi-billion dollar deal with Broadcom to build some wireless components in the United States. Under the terms of the agreement, Broadcom will design and build 5G components, including FBAR filters, in its America-based facilities.

In a statement, Apple CEO Tim Cook stated “[Apple] is thrilled to make commitments that harness the ingenuity, creativity, and innovative spirit of American manufacturing. All of Apple’s products depend on technology engineered and built here in the United States, and we’ll continue to deepen our investments in the U.S. economy because we have an unshakable belief in America’s future.”

This deal is part of the commitment to American manufacturing Apple announced in 2021, in which it pledged $430 billion to the U.S. economy over a five-year period. Apple says in today’s announcement that it’s on track to hit that goal.

Apple already partners with component manufacturers in the United States. Each year, the company publishes a supplier list detailing its suppliers and the locations where Apple components are manufactured. According to the 2022 report, the majority of the manufacturing facilities were located in Asia, but 32 were in America — including Broadcom’s, which already supplied Apple with components from its Colorado and Pennsylvania facilities.

Apple, like many American gadget companies, is always under pressure to source parts and assemble products in the United States. The Mac Pro desktop was the last product to receive its final assembly in America, thus earning the right to wear a badge that says “Made in the U.S.A.”

Apple partners with Broadcom to build 5G components in the United States by Matt Burns originally published on TechCrunch