Edge computing startup Pensando comes out of stealth mode with a total of $278 million in funding

Pensando, an edge computing startup founded by former Cisco engineers, came out of stealth mode today with an announcement that it has raised a $145 million Series C. The company’s software and hardware technology, created to give data centers more of the flexibility of cloud computing servers, is being positioned as a competitor to Amazon Web Services Nitro.

The round was led by Hewlett Packard Enterprise and Lightspeed Venture Partners and brings Pensando’s total raised so far to $278 million. HPE chief technology officer Mark Potter and Lightspeed Venture partner Barry Eggers will join Pensando’s board of directors. The company’s chairman is former Cisco CEO John Chambers, who is also one of Pensando’s investors through JC2 Ventures.

Pensando was founded in 2017 by Mario Mazzola, Prem Jain, Luca Cafiero and Soni Jiandani, a team of engineers who spearheaded the development of several of Cisco’s key technologies, and founded four startups that were acquired by Cisco, including Insieme Networks. (In an interview with Reuters, Pensando chief financial officer Randy Pond, a former Cisco executive vice president, said it isn’t clear if Cisco is interested in acquiring the startup, adding “our aspirations at this point would be to IPO. But, you know, there’s always other possibilities for monetization events.”)

The startup claims its edge computing platform performs five to nine times better than AWS Nitro in terms of productivity and scale. Pensando prepares data center infrastructure for edge computing, better equipping it to handle data from 5G, artificial intelligence and Internet of Things applications. While in stealth mode, Pensando acquired customers including HPE, Goldman Sachs, NetApp and Equinix.

In a press statement, Potter said “Today’s rapidly transforming, hyper-connected world requires enterprises to operate with even greater flexibility and choices than ever before. HPE’s expanding relationship with Pensando Systems stems from our shared understanding of enterprises and the cloud. We are proud to announce our investment and solution partnership with Pensando and will continue to drive solutions that anticipate our customers’ needs together.”

Satya Nadella looks to the future with edge computing

Speaking today at the Microsoft Government Leaders Summit in Washington DC, Microsoft CEO Satya Nadella made the case for edge computing, even while pushing the Azure cloud as what he called “the world’s computer.”

While Amazon, Google and other competitors may have something to say about that, marketing hype aside, many companies are still in the midst of transitioning to the cloud. Nadella says the future of computing could actually be at the edge, where computing is done locally before data is transferred to the cloud for AI and machine learning purposes. What goes around, comes around.

But as Nadella sees it, this is not going to be about either edge or cloud. It’s going to be the two technologies working in tandem. “Now, all this is being driven by this new tech paradigm that we describe as the intelligent cloud and the intelligent edge,” he said today.

He said that to truly understand the impact the edge is going to have on computing, you have to look at research, which predicts there will be 50 billion connected devices in the world by 2030, a number even he finds astonishing. “I mean this is pretty stunning. We think about a billion Windows machines or a couple of billion smartphones. This is 50 billion [devices], and that’s the scope,” he said.

The key here is that these 50 billion devices, whether you call them edge devices or the Internet of Things, will be generating tons of data. That means you will have to develop entirely new ways of thinking about how all this flows together. “The capacity at the edge, that ubiquity is going to be transformative in how we think about computation in any business process of ours,” he said. As we generate ever-increasing amounts of data, whether we are talking about public sector use cases or any business need, it’s going to be the fuel for artificial intelligence, and he sees the sheer amount of that data driving new AI use cases.

“Of course when you have that rich computational fabric, one of the things that you can do is create this new asset, which is data and AI. There is not going to be a single application, a single experience that you are going to build, that is not going to be driven by AI, and that means you have to really have the ability to reason over large amounts of data to create that AI,” he said.

Nadella would be more than happy to have his audience take care of all that using Microsoft products, whether Azure compute, database, AI tools or edge computers like the Data Box Edge it introduced in 2018. While Nadella is probably right about the future of computing, all of this could apply to any cloud, not just Microsoft.

As computing shifts to the edge, it’s going to have a profound impact on the way we think about technology in general, but it’s probably not going to involve being tied to a single vendor, regardless of how comprehensive their offerings may be.

Data storage company Cloudian launches a new edge analytics subsidiary called Edgematrix

Cloudian, a company that enables businesses to store and manage massive amounts of data, announced today the launch of Edgematrix, a new unit focused on edge analytics for large data sets. Edgematrix, a majority-owned subsidiary of Cloudian, will first be available in Japan, where both companies are based. It has raised a $9 million Series A from strategic investors NTT Docomo, Shimizu Corporation and Japan Post Capital, as well as Cloudian co-founder and CEO Michael Tso and board director Jonathan Epstein. The funding will be used on product development, deployment and sales and marketing.

Cloudian itself has raised a total of $174 million, including a $94 million Series E round announced last year. Its products include the HyperStore platform, which allows businesses to store hundreds of petabytes of data on-premises, and software for data analytics and machine learning. Edgematrix uses HyperStore for storing large-scale data sets and its own AI software and hardware for data processing at the “edge” of networks, closer to where data is collected from IoT devices like sensors.

The company’s solutions were created for situations where real-time analytics is necessary. For example, it can be used to detect the make, model and year of cars on highways so targeted billboard ads can be displayed to their drivers.

Tso told TechCrunch in an email that Edgematrix was launched after Cloudian co-founder and president Hiroshi Ohta and a team spent two years working on technology to help Cloudian customers process and analyze their data more efficiently.

“With more and more data being created at the edge, including IoT data, there’s a growing need for being able to apply real-time data analysis and decision-making at or near the edge, minimizing the transmission costs and latencies involved in moving the data elsewhere,” said Tso. “Based on the initial success of a small Cloudian team developing AI software solutions and attracting a number of top-tier customers, we decided that the best way to build on this success was establishing a subsidiary with strategic investors.”

Edgematrix is launching in Japan first because spending on AI systems there is expected to grow faster than in any other market, at a compound annual growth rate of 45.3% from 2018 to 2023, according to IDC.
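As a quick back-of-the-envelope check of what that IDC figure implies (the growth rate and date range are from the article; the overall multiplier is derived here, not reported by IDC):

```python
# What a 45.3% compound annual growth rate means over 2018-2023.
cagr = 0.453
years = 2023 - 2018  # five compounding years

multiplier = (1 + cagr) ** years
print(f"{multiplier:.2f}x")  # prints "6.48x"
```

In other words, spending compounding at that rate would roughly six-and-a-half-fold over the period, which is why a vendor would prioritize that market.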

“Japan has been ahead of the curve as an early adopter of AI technology, with both the government and private sector viewing it as essential to boosting productivity,” said Tso. “Edgematrix will focus on the Japanese market for at least the next year, and assuming that all goes well, it would then expand to North America and Europe.”

SimShine raises $8 million for home security cameras that use edge computing

SimShine, a computer vision startup based in Shenzhen, has raised $8 million in pre-Series A funding for SimCam, its line of home security cameras that use edge computing to keep data on-device. The funding was led by Cheetah Mobile, with participation from Skychee, Skyview Fund and Oak Pacific Investment.

Earlier this year, SimShine raised $310,095 in a crowdfunding campaign on Kickstarter. It will use its pre-Series A round for product development and hiring.

SimShine’s team started off developing computer vision and edge computing software, spending five years working with enterprise clients before launching SimCam.

The company plans to release more smart home products that use edge computing, with the ultimate goal of building an IoT platform to connect different devices, co-founder and chief marketing officer Joe Pham tells TechCrunch. SimCam currently integrates with Amazon Alexa and Google Assistant, with support for Apple HomeKit in the works.

Pham says edge computing protects users’ privacy by keeping data, including face recognition data, on-device, while also decreasing latency and false alarms, since calculations are performed continuously on the device (the cameras connect to Wi-Fi so customers can watch surveillance video on their smartphones). It also means customers don’t have to sign up for the subscription plans that many cloud-based home security cameras require, and it reduces the price of each device, since SimCam does not have to maintain cloud servers.

Microsoft brings Azure SQL Database to the edge (and Arm)

Microsoft today announced an interesting update to its database lineup with the preview of Azure SQL Database Edge, a new tool that brings the same database engine that powers Azure SQL Database in the cloud to edge computing devices, including, for the first time, Arm-based machines.

Azure SQL Database Edge, Azure corporate vice president Julia White writes in today’s announcement, “brings to the edge the same performant, secure and easy to manage SQL engine that our customers love in Azure SQL Database and SQL Server.”

The new service, which will also run on x64-based devices and edge gateways, promises to bring low-latency analytics to edge devices as it allows users to work with streaming data and time-series data, combined with the built-in machine learning capabilities of Azure SQL Database. Like its larger brethren, Azure SQL Database Edge will also support graph data and comes with the same security and encryption features that can, for example, protect the data at rest and in motion, something that’s especially important for an edge device.

As White rightly notes, this also ensures that developers only have to write an application once and then deploy it to platforms that feature Azure SQL Database, good old SQL Server on premises and this new edge version.

SQL Database Edge can run in both connected and fully disconnected fashion, something that’s also important for many use cases where connectivity isn’t always a given, yet where users need the kind of data analytics capabilities to keep their businesses (or drilling platforms, or cruise ships) running.

Docker developers can now build Arm containers on their desktops

Docker and Arm today announced a major new partnership that will see the two companies collaborate in bringing improved support for the Arm platform to Docker’s tools.

The main idea here is to make it easy for Docker developers to build their applications for the Arm platform right from their x86 desktops and then deploy them to the cloud (including the Arm-based AWS EC2 A1 instances), edge and IoT devices. Developers will be able to build their containers for Arm just like they do today, without the need for any cross-compilation.

This new capability, which will work for applications written in JavaScript/Node.js, Python, Java, C++, Ruby, .NET Core, Go, Rust and PHP, will become available as a tech preview next week, when Docker hosts its annual North American developer conference in San Francisco.

Typically, developers would have to build the containers they want to run on the Arm platform on an Arm-based server. With this system, which is the first result of this new partnership, Docker essentially emulates an Arm chip on the PC for building these images.
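This workflow surfaced in Docker’s multi-architecture build tooling (buildx). As a hedged sketch of what cross-building from an x86 desktop looks like — the image name and tags below are hypothetical placeholders:

```shell
# Build an Arm64 image on an x86 desktop; Docker transparently runs the build
# steps under QEMU emulation, so no cross-compilation toolchain is needed.
docker buildx build --platform linux/arm64 -t myapp:arm64 .

# The same command can target several architectures at once, pushing a
# manifest list so each machine pulls the variant matching its CPU.
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:latest --push .
```

Note the commands assume a Docker installation with buildx enabled and a registry to push to; they are illustrative of the emulation-based flow the article describes, not the exact preview Docker shipped.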

“Overnight, the 2 million Docker developers that are out there can use the Docker commands they already know and become Arm developers,” Docker EVP of Business Development David Messina told me. “Docker, just like we’ve done many times over, has simplified and streamlined processes and made them simpler and accessible to developers. And in this case, we’re making x86 developers on their laptops Arm developers overnight.”

Given that cloud-based Arm servers like Amazon’s A1 instances are often significantly cheaper than x86 machines, users can achieve some immediate cost benefits by using this new system and running their containers on Arm.

For Docker, this partnership opens up new opportunities, especially in areas where Arm chips are already strong, including edge and IoT scenarios. Arm, similarly, is interested in strengthening its developer ecosystem by making it easier to develop for its platform. The easier it is to build apps for the platform, the more likely developers are to then run them on servers that feature chips from Arm’s partners.

“Arm’s perspective on the infrastructure really spans all the way from the endpoint, all the way through the edge to the cloud data center, because we are one of the few companies that have a presence all the way through that entire path,” Mohamed Awad, Arm’s VP of Marketing, Infrastructure Line of Business, said. “It’s that perspective that drove us to make sure that we engage Docker in a meaningful way and have a meaningful relationship with them. We are seeing compute and the infrastructure sort of transforming itself right now from the old model of centralized compute, general purpose architecture, to a more distributed and more heterogeneous compute system.”

Developers, however, Awad rightly noted, don’t want to have to deal with this complexity, yet they also increasingly need to ensure that their applications run on a wide variety of platforms and that they can move them around as needed. “For us, this is about enabling developers and freeing them from lock-in on any particular area and allowing them to choose the right compute for the right job that is the most efficient for them,” Awad said.

Messina noted that the promise of Docker has long been to decouple applications from the infrastructure they run on. Adding Arm support simply extends this promise to an additional platform. He also stressed that the work on this was driven by the company’s enterprise customers. These are the users who have already set up their systems for cloud-native development with Docker’s tools — at least for their x86 development. Those customers are now looking at developing for their edge devices, too, and that often means developing for Arm-based devices.

Awad and Messina both stressed that developers really don’t have to learn anything new to make this work. All of the usual Docker commands will just work.


Baidu Cloud launches its open source edge computing platform

At CES, the Chinese tech giant Baidu today announced OpenEdge, its open source edge computing platform. At its core, OpenEdge is the local package component of Baidu’s existing Intelligent Edge (BIE) commercial offering and obviously plays well with that service’s components for managing edge nodes and apps.

Since this is obviously a developer announcement, I’m not sure why Baidu decided to use CES as the venue for this release, but there can be no doubt that China’s major tech firms have become quite comfortable with open source. Companies like Baidu, Alibaba and Tencent are members of the Linux Foundation and its growing stable of projects, for example, and virtually every major open source organization now looks to China as its growth market. It’s no surprise, then, that we’re also now seeing a wider range of Chinese companies open-sourcing their own projects.

“Edge computing is a critical component of Baidu’s ABC (AI, Big Data and Cloud Computing) strategy,” says Baidu VP and GM of Baidu Cloud Watson Yin. “By moving the compute closer to the source of the data, it greatly reduces the latency, lowers the bandwidth usage and ultimately brings real-time and immersive experiences to end users. And by providing an open source platform, we have also greatly simplified the process for developers to create their own edge computing applications.”

A company spokesperson tells us that the open source platform will include features like data collection, message distribution and AI inference, as well as tools for syncing with the cloud.

Baidu also today announced that it has partnered with Intel to launch the BIE-AI-Box and with NXP Semiconductors to launch the BIE-AI-Board. The box is designed for in-vehicle video analysis while the board is small enough for cameras, drones, robots and similar applications.

How backups, backups, backups protect NYC’s cellular infrastructure

The infrastructure that underpins our lives is not something we ever want to think about. Nothing good has come from suddenly needing to wonder “where does my water come from?” or “how does electricity connect into my home?” That pondering gets even more intense when we talk about cellular infrastructure, where a single dropped call or a choppy YouTube video can cause an expletive-laden tirade.

Recently, I visited Verizon’s cellular switch for the New York City metro area (disclosure: TechCrunch is owned by Oath, and Oath is part of Verizon). It’s a completely nondescript building in a nondescript suburb north of the city, so nondescript that it took Verizon’s representative about 15 minutes of circling around just to find it (frankly, the best security through obscurity I have seen in some time).

This switch, along with its sister, powers all cellular service in New York City, including three million voice or voice over LTE (VoLTE) calls and 708 million data connections a day. High reliability and redundancy are a must for the facility, where dropping even one in 100,000 connections would create more than 7,000 angry customers a day. As Christine Williams, the senior operations manager who oversees the facility, explained, “It doesn’t matter what percentage of dropped calls you have if you are that person.”
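That scale claim is easy to sanity-check with the article’s own figures:

```python
# Sanity check: 708 million data connections a day through the NYC switches,
# against a hypothetical failure rate of one connection in every 100,000.
daily_connections = 708_000_000
dropped_per_day = daily_connections // 100_000  # one in 100,000

print(dropped_per_day)  # prints 7080 -- "more than 7,000 angry customers a day"
```

The arithmetic also shows why the quote matters: even a 99.999% success rate still leaves thousands of individual people with a dropped call.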

As we walked through the server rows that processed those hundreds of millions of connections, I was surprised by just how little digital equipment was actually in the switch itself. “Software-defined networking” has taken full hold here, according to Michele White, Verizon’s executive director for network assurance in the U.S. Northeast. As the team has replaced older equipment, the actual physical footprint has continued to shrink. All of New York City’s traffic is run from a handful of feet of server racks.

The key to network assurance is twofold. The first is redundancy at every level of the infrastructure. Inside the switch, independent server racks can take over from servers that fail, providing redundancy at the machine level. If the air conditioning — which is critical for machine performance — were to fail, mobile AC units can be deployed to pick up the burden.

All equipment in the building is serviced by DC power, and in the event of an external power loss, two diesel generators connected to a large fuel storage tank will take over. The facility is also equipped with battery backups that can sustain the facility for eight hours if the generators themselves don’t function appropriately.

Diesel generators can sustain power to the switch in the event of an external power outage

At a higher level, the switch and its sister share all New York City cellular traffic, but either one could handle the full load if necessary. In short, the goal of the switch’s design is to ensure that no matter how small or large a problem it might experience, there is an instant backup ready to go to keep those cellular connections alive.

The other half of network assurance is centralization, something that I was surprised to hear in this supposed era of decentralization. Cellular sites in an urban area like New York are often placed on buildings, as anyone looking at roof lines can see from the street. Given those locations, it can be hard to provide backup generators and other failover infrastructure, and servicing them can also be challenging. With centralization, increasingly only the antenna is located at the site, with almost all other operations handled in central control offices and switches where Verizon has greater control of the environment.

Even with an intense focus on redundancy, natural disasters can overwhelm even the best-laid plans. The telecom company has an additional layer of redundancy in its mobile units, which are placed in a “barnyard,” so named for the animal-themed equipment stored there. There are GOATs (generator on a truck), COWs (cell on wheels) and BATs (bi-directional amplifier on a truck). These units get deployed to parts of the network either experiencing unusually strong demand (think the U.S. Open or a presidential inauguration) or hit by a natural disaster (like Hurricane Harvey).

A barnyard filled with animal-named mobile cell infrastructure, including COWs, COLTs, HORSEs, and others

That said, both White and Williams noted that mobile cell deployment is much rarer than people would guess. One reason is that cell sites are increasingly being installed with Remote Electrical Tilt, which allows nearby cell sites to adjust their antennas to provide some signal to an area formerly covered by an out-of-commission cell. That process, I was told, is increasingly automated, allowing the network to essentially heal itself in emergencies.

The other reason their deployment is rare is that network assurance already has to handle a remarkable amount of surging traffic through the normal ebb and flow of a dense city. “Rush hour in Times Square is pretty heavy,” noted Williams. Even something as big as a parade through Midtown Manhattan won’t typically exceed the network’s surge capacity.

One other redundancy that Verizon has been exploring is using drones to provide more adaptive coverage. The company has been testing “femto-cell” drone aircraft designed by American Aerospace Technologies that can provide one square mile of coverage for about sixteen hours. A drone capability could be particularly useful in cases like hurricanes, where roads are often littered with debris, making it hard for network engineers to deploy ground-based mobile cells.

I asked about 5G, which I have been covering more heavily this year as telecom deployments pick up. Given the current design of 5G, White and Williams didn’t expect too much change to happen at the switch level, where most of the core technology was likely to remain unchanged.

The trend that is changing things though is edge computing, which is in vogue due to the need for computing to be located closer to users to power applications like virtual reality and autonomous cars. That’s critical, because 50 milliseconds of extra latency could be the difference between an autonomous car hitting another vehicle or a new support pylon and swerving out of the way just in time.

Edge computing in many ways is decentralizing, and therefore there is a tension with the increasingly centralized nature of mobile communications infrastructure. Switches like this one are getting outfitted with edge technology, and more installations are expected in the coming years. 5G and edge are also deeply connected at the antenna level, and that will likely affect cell deployments far more than the switch infrastructure itself.

Edge, internet of things, 5G — all will increase the quantity and scale of the connections flowing through these networks. In the future, a cellular outage may not just inconvenience that YouTube user, but could also prevent an automobile from successfully navigating to a hospital during a natural disaster. It takes backups, backups, and backups to prevent us from ever having to ask, “where does that signal come from?”