Akamai extends its edge-computing platform as it looks to challenge AWS, Azure and GCP

Akamai today announced the launch of its Gecko (“Generalized Edge Compute”) platform. The new initiative will expand the company’s cloud-computing network by an additional 10 regions worldwide in the first quarter of this year, followed by another 75 throughout the rest of the year. Ever since it acquired Linode in 2022, Akamai has made it […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Memfault raises $24M to help companies manage their growing IoT device fleets

At the same time internet of things (IoT) devices and embedded software are becoming more complex, manufacturers are looking for ways to effectively manage the increasing volume of edge hardware. According to Statista, the number of consumer edge-enabled IoT devices is forecast to grow to almost 6.5 billion by 2030, up from 4 billion in 2020.

Capitalizing on the trends, Memfault, a platform that allows IoT device manufacturers to find issues in their edge products over the cloud, has closed a $24 million Series B funding round led by Stripes with participation from the 5G Open Innovation Lab, Partech and Uncork. The investment brings Memfault’s total raised to more than $35 million following an $8.5 million cash infusion in April 2021.

“We sharpened our go-to-market motion in 2022 and saw a clear acceleration in the business,” Memfault co-founder and CEO François Baldassari told TechCrunch in an email interview. “We feel confident that our playbook for sales-led growth is at a level of maturity where we can double down on our investment and accelerate growth. This was not the case a year ago; there is more talent available on the market than at any time since we started the company.”

Baldassari first conceived of Memfault while at smartwatch startup Pebble, where he worked alongside Memfault’s other two co-founders, Tyler Hoffman and Chris Coleman, for several years. At Pebble, the trio had to investigate hardware issues that were often difficult to fix remotely, which led them to create cloud-based software and performance monitoring infrastructure to improve the process.

After leaving Pebble, François joined Oculus as head of the embedded software team while Hoffman and Coleman took senior engineering roles at Fitbit. The infrastructure they created at Pebble stuck with them, though, and in 2018, the three reunited to found Memfault.

“We offer the tools to de-risk launch, prepare for the inevitability of post-launch issues and deliver a continuously improving, higher-quality product overall,” François said. “We can help companies ship more feature-rich products with continuous feature updates after the devices are in the field while helping companies stay in compliance with environmental, privacy and security regulations and avoid service-level agreement and warranty violations.”


Image Credits: Memfault

Stripping away the marketing fluff, Memfault provides software development kits (SDKs) that let manufacturers upload performance data and error reports to a private cloud. There, the data is stored, analyzed and indexed so engineers can access it via a web interface to look for anomalies and troubleshoot problems as they occur.

François acknowledged that some manufacturers try to extend software reliability tools to cover hardware or build in-house teams to tackle bugs. But he argues that both approaches end up being more expensive and require more technical resources than deploying a service like Memfault.

“You can never anticipate every use case that a user might subject your device to, and there are some bugs that only surface in one in 10,000 instances. Trying to replicate that is nearly impossible,” François said. “Using Memfault, engineers react to issues in minutes rather than weeks, the majority of issues are automatically deduplicated and a clear picture of fleet health can be established at all times.”
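Memfault hasn’t published how its deduplication works, but the idea François describes — automatically collapsing duplicate fleet errors into one issue — can be sketched roughly like this. This is a hypothetical illustration (all field names are invented): crash reports from many devices are grouped by a hash of their normalized stack traces, so the same underlying bug shows up once, with a list of affected devices.

```python
import hashlib
from collections import defaultdict

def trace_signature(frames):
    """Hash the function names in a stack trace (ignoring addresses),
    so identical crashes from different devices collapse to one issue."""
    normalized = "|".join(frame["function"] for frame in frames)
    return hashlib.sha256(normalized.encode()).hexdigest()[:12]

def deduplicate(reports):
    """Group device crash reports by trace signature."""
    issues = defaultdict(list)
    for report in reports:
        issues[trace_signature(report["trace"])].append(report["device_id"])
    return issues

# Two devices hitting the same crash, one device hitting a different one.
reports = [
    {"device_id": "dev-001", "trace": [{"function": "i2c_read"}, {"function": "sensor_poll"}]},
    {"device_id": "dev-042", "trace": [{"function": "i2c_read"}, {"function": "sensor_poll"}]},
    {"device_id": "dev-007", "trace": [{"function": "ble_connect"}]},
]
issues = deduplicate(reports)
print(len(issues))  # → 2 distinct issues across three reports
```

The payoff is the one François cites: a one-in-10,000 bug reported by thousands of devices still surfaces as a single issue with an affected-device count, rather than a flood of tickets.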

While cybersecurity isn’t its main focus, Memfault has sometime rivals in startups like Sternum, Armis, Shield-IoT and SecuriThings, whose platforms offer remote tools for monitoring security threats across IoT device fleets. More directly, Memfault competes with Amazon’s AWS IoT Device Management, Microsoft’s Azure IoT Edge, Google’s Cloud IoT and startups like Balena and Zededa, which sell utilities to seed over-the-air updates and perform high-level troubleshooting.

Memfault claims to have a sizeable market foothold regardless, with “hundreds” of companies in its customer base including Bose, Logitech, Lyft and Traeger. And it’s not resting on its laurels.

To stay ahead of the pack, Memfault plans to use the proceeds from its Series B to expand its platform’s software support (it recently announced Android and Linux SDKs) and invest in out-of-the-box integrations, adding to its existing partnerships with semiconductor manufacturers including Infineon, Nordic Semiconductor and NXP. Memfault also intends to expand its headcount, aiming to roughly double in size from 38 people to 80 by the end of the year.

François said that Memfault is also exploring ways it could build AI into future products, although that work remains in the early stages.

“We see promise in AI’s ability to help us develop sharper anomaly detection and error classification capabilities,” François said. “We’ve accumulated the largest corpus of hardware and firmware errors in the industry and hope to train AI systems on that data in the future.”

Asked about macroeconomic headwinds, François — who wouldn’t discuss revenue — admitted that the pandemic-spurred chip shortage affected Memfault’s customers and market “quite a bit.” But it turned out to be a blessing in disguise.

“In some cases, customers have been unable to find enough chips to produce the number of devices they planned on. In other cases, they’ve had to switch to new chips they’ve not previously had on their devices,” François explained. “In these cases, Memfault has been a huge help to our customers. Many engineers tell us that they aren’t sure what their firmware will look like running on these ‘Frankenstein’ devices — but with visibility into fleet data, diagnostics and debugging info from Memfault, they’ve been able to ship confidently.”

François volunteered that Memfault has maintained “high” gross margins and a low burn multiple — “burn multiple” referring to how much the company’s spending in order to generate each incremental dollar of annual recurring revenue. (The lower the multiple, the better.) Of course, it’s all tough to evaluate without firmer numbers. But when pressed, François stressed that Memfault hasn’t been growing at any cost.

“We’ve always been focused on building a long-term sustainable business,” François said. “Although there is a broader slowdown in tech, the global trend is going towards more automation. Most customers and prospects have told us how they are willing to spend on software and automation to stay ahead of competition.”

Memfault raises $24M to help companies manage their growing IoT device fleets by Kyle Wiggers originally published on TechCrunch

Akamai leads $38M round in Macrometa as the two strike partnership

Edge computing cloud and global data network Macrometa has raised $38 million led by Akamai Technologies, as the two announce a new partnership and product integrations. The funding also included participation from Shasta Ventures and 60 Degree Capital. Akamai Labs CTO Andy Champagne will join Macrometa’s board.

Macrometa founder and CEO Chetan Venkatesh told TechCrunch that its Global Data Network (GDN) enables cloud developers to run backend services closer to mobile phones, browsers, smart appliances, connected cars and users in edge regions, or points of presence (PoPs). That reduces outages because if one edge region goes down, another can take over instantly. Akamai’s edge network, meanwhile, covers 4,200 regions around the world.
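The failover behavior Venkatesh describes — one edge region going down and the next-closest one taking over instantly — can be sketched in a few lines. This is a hypothetical illustration, not Macrometa’s actual routing logic: requests are steered to the lowest-latency healthy region, so an unhealthy PoP is simply skipped.

```python
def pick_region(regions, client_latency_ms):
    """Route to the lowest-latency healthy edge region; if the closest
    PoP is down, the next-closest healthy one takes over."""
    healthy = [r for r in regions if r["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy edge regions")
    return min(healthy, key=lambda r: client_latency_ms[r["name"]])

# Illustrative region names and latencies.
regions = [
    {"name": "fra", "healthy": False},  # closest PoP is down
    {"name": "ams", "healthy": True},
    {"name": "lhr", "healthy": True},
]
latency = {"fra": 8, "ams": 14, "lhr": 22}
print(pick_region(regions, latency)["name"])  # → "ams"
```

In a real deployment the health and latency data would come from continuous probes rather than static values, but the selection principle is the same.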

The partnership between Macrometa and Akamai means the two are combining three infrastructure pieces into one platform for cloud developers: Akamai’s edge network, cloud hosting service Linode (which Akamai bought earlier this year) and Macrometa’s Global Data Network (GDN) and edge cloud. Akamai Edge Workers tech is now available through Macrometa’s GDN console, API and SDK, so developers can build a cloud app or API in Macrometa, and then quickly deploy it to Akamai’s edge locations.

Venkatesh gave some examples of how clients can use the integration between Macrometa and Akamai.

For SaaS customers, the integration means they can see speed increases and latency improvements of 25x to 100x for their products, resulting in less user churn and better conversion rates for freemium models. Enterprise customers using the joint solution can improve the performance of streaming data pipelines and real-time data analytics. They can also deal with data residency and sovereignty issues by vaulting and tokenizing data in geo-fenced data vaults for compliance.

Video streaming clients, meanwhile, can use the integration to move their platforms to the edge, including authentication, content catalog rendering, personalization and content recommendations. Likewise, gaming companies can move servers closer to players and use the Akamai-Macrometa integration for features like player matching, leaderboards, multi-player game lobbies and anti-cheating features. For e-commerce players competing against Amazon, the joint solution can be used to connect and stream data from local stores and fulfillment centers, enabling faster delivery times.

Macrometa will use the funding for developer education, community development, enterprise event marketing and joint customer sales with Akamai (Macrometa’s products are now available through Akamai’s sales team).

In a statement about the funding and partnership, Akamai EVP and CTO Robert Blumofe said, “Developers are fundamentally changing the way they build, deploy and run enterprise applications. Velocity and scale are more important than ever, while flexibility in where to place workloads is now paramount. By partnering with and investing in Macrometa, Akamai is helping to form and foster a single platform that meets evolving needs of developers and the apps they’re creating.”

Akamai leads $38M round in Macrometa as the two strike partnership by Catherine Shu originally published on TechCrunch

Fly.io wants to change the way companies deploy apps at the edge

Fly.io co-founder and CEO Kurt Mackey says that developers don’t really understand the term edge computing. They just know they want to run their applications closer to the user to make them more responsive. He believes that the traditional way of doing this with a content delivery network (CDN) is a flawed approach, and he started Fly.io to deliver applications close to the user in a more efficient way.

Today the company announced a $25 million Series B that it closed in June, and also revealed for the first time a $12 million Series A that it raised last August.

The best way to think about Fly is as a new kind of public application delivery cloud that delivers applications all over the world, wherever the end user happens to be. It doesn’t involve building its own data centers, at least not yet, but it does require installing hardware in different co-location facilities around the world.

“So we deploy our own hardware. We’re not built on top of other [clouds]. Developers are building applications, particularly real time applications where responsiveness to user interactions matters a lot. So basically they use us to just ship their stack in whatever country they happen to be in,” Mackey explained.

If that sounds to you like a CDN such as Cloudflare or Akamai, Mackey sees a big difference between what his company is doing and that approach. “My hot take is that a CDN is kind of a misfeature for most developers building dynamic applications because what happens is you end up running your app in Virginia, and then putting some of it on the CDN that then runs close to users, rather than just putting the application itself where it needs to go,” he said.

The company launched in 2017, and it has spent a great deal of time refining the process of deploying hardware where it’s needed; Mackey thinks that, from those lessons, he can continue to scale as the company grows. In fact, he sees scaling as an operational problem rather than a financial one of raising enough money to finance the hardware.

“We have several hundred thousand apps deployed at this point. And the types of users using us today are kind of small teams of developers running a full-stack app and a database. They might be using like 50 to 100 gigs of data, but it’s easy to size because the servers we buy are so enormous, we can support a lot of those customers with the original install.”

The company helps developers deploy their applications where they’re needed. Mackey says the founders grew up using Heroku and have tried to use it as a reference point as they built the software side of the company. “The way you deploy an app is very similar to Heroku. You download the CLI, and then you run ‘fly launch’ and if all goes well, your app gets packaged up and deployed to our cloud,” he said.

Mackey previously founded a company called MongoHQ, which later changed its name to Compose before being sold to IBM in 2015. In spite of being an experienced founder with an exit under his belt he and his co-founders went through Y Combinator in Winter 2020. He says that he benefited more than he thought he would from the experience. “I went in a little arrogant, and then as soon as I started talking to the partners there, I realized they knew so much more than I do because [they have dealt with] thousands of companies, and their advice was incredibly helpful,” he said.

When the company came out of Y Combinator and started talking to investors in the 2020 timeframe, most thought the idea of deploying its own physical hardware was silly, but they came around as they saw it was less expensive than running Amazon EC2 instances and that Fly was building a system where it could control costs better.

The company currently has around 35 employees, and it’s hiring. Mackey says that he gives a lot of thought to how to build a more diverse workforce, while recognizing that it’s easier said than done.

“I think we’ve done a few things well to make progress, but I also think that as I’ve gotten older, I’ve realized this is a priority that remains important but never gets solved,” he said. One thing his company has done is, rather than hiring the most experienced engineers he can find, to look for people from different backgrounds and benefit from the difference in perspective.

“I’d say that the one thing we’ve done really well this time around is just basically being good at engineering management and giving people a place where they could start early and then progress.”

The company’s $25 million Series B was led by Andreessen Horowitz with help from Intel Capital, Dell Technologies Capital, Initialized Capital and PlanetScale CEO Sam Lambert. The Series A was led by Intel Capital.

Zededa lands a cash infusion to expand its edge device management software

Factors like latency, bandwidth, security and privacy are driving the adoption of edge computing, which aims to process data closer to where it’s being generated. Consider a temperature sensor in a shipyard or a fleet of cameras in a fulfillment center. Normally, the data from them might have to be relayed to a server for analysis. But with an edge computing setup, the data can be processed on-site, eliminating cloud computing costs and enabling processing at greater speeds and volumes (in theory).
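The bandwidth saving is easy to see in a toy sketch: instead of relaying every raw temperature sample to a remote server, an edge node can process readings on-site and ship upstream only a compact summary plus any alerts. This is illustrative code under invented field names and thresholds, not tied to any vendor’s product.

```python
def summarize(readings, threshold=80.0):
    """Aggregate raw sensor samples locally; only this small summary
    (plus alert readings) needs to leave the site."""
    alerts = [r for r in readings if r >= threshold]
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(sum(readings) / len(readings), 2),
        "alerts": alerts,
    }

# Four raw samples become one summary record.
readings = [71.0, 71.0, 85.0, 73.0]
summary = summarize(readings)
print(summary["mean"], summary["alerts"])  # → 75.0 [85.0]
```

At fleet scale the same idea applies to camera frames or telemetry streams: the heavy processing stays local, and the cloud sees orders of magnitude less data.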

Technical challenges can stand in the way of successful edge computing deployments, however. That’s according to Said Ouissal, the CEO of Zededa, which provides distributed edge orchestration and virtualization software. Ouissal has a product to sell — Zededa works with customers to help manage edge devices — but he points to Zededa’s growth to support his claim. The number of edge devices under the company’s management grew 4x in the past year while Zededa’s revenue grew 7x, Ouissal says.

Zededa’s success in securing cash during a downturn, too, suggests that the edge computing market is robust. The company raised $26 million in Series B funding, Zededa today announced, contributed by a range of investors including Coast Range Capital, Lux Capital, Energize Ventures, Almaz Capital, Porsche Ventures, Chevron Technology Ventures, Juniper Networks, Rockwell Automation, Samsung Next and EDF North America Ventures.

“There were two main trends that led to Zededa’s founding,” Ouissal told TechCrunch in an email interview. “First, as more devices, people and locations were increasingly being connected, unprecedented amounts of data were being generated … Secondly, the sheer scale and diversity of what was happening at the edge would be impossible for organizations to manage in a per-use case fashion. The only successful way to manage this type of environment was for organizations to have visibility across all the hardware, applications, clouds and networks distributed across their edge environments, just like they have in the data center or cloud.”

Ouissal co-founded Zededa in 2016 alongside Erik Nordmark, Roman Shaposhnik and Vijay Tapaskar. Previously, Ouissal was the VP of strategy and customer management at Ericsson and a product manager at Juniper Networks. Nordmark was a distinguished engineer at Cisco, while Shaposhnik — also an engineer by training — spent years developing cloud architectures at Sun Microsystems, Huawei, Yahoo and Cloudera.

Zededa’s software-as-a-service product, which works with devices from brands like SuperMicro, monitors edge installations to ensure they’re working as intended. It also guides users through the deployment steps, leveraging open source projects designed for Internet of Things orchestration and cyber defense. Zededa’s tech stack, for example, builds on the Linux Foundation’s EVE-OS, an open Linux-based operating system for distributed edge computing.


Image Credits: Zededa

Zededa aims to support most white-labeled devices offered by major OEMs; its vendor-agnostic software can be deployed on any bare-metal hardware or within a virtual machine to provide orchestration services and run apps. According to Ouissal, use cases range from monitoring sensors and security cameras to regularly upgrading the software in cell towers.

“The C-suite understands that digital transformation is critical to their organization’s success, particularly for organizations with distributed operations, and digital transformation cannot happen without edge computing. The ability to collect, analyze and act upon data at the distributed edge makes it possible for businesses to increase their competitive advantage, reduce costs, improve operational efficiency, open up new revenue streams and operate within safer and more secure environments,” Ouissal said. “As a result of this, edge computing projects are accelerating within organizations.”

Some research bears this out. According to a June 2021 Eclipse Foundation poll, 54% of organizations surveyed were either using or planning to use edge computing technologies within the next 12 months. A recent IDC report, meanwhile, forecasts double-digit growth in investments in edge computing over the next few years.

Zededa’s customers are primarily in the IT infrastructure, industrial automation and oil and gas industries. Ouissal wouldn’t say how many customers the company currently has but asserted that Zededa remains sufficiently differentiated from rivals in the edge device orchestration space.

“In terms of the ‘IT down’ trajectory, we are complementary to data solutions from the likes of VMware, SUSE, Nutanix, Red Hat and Sunlight, but these solutions are not suitable for deployments outside of secure data centers. From the ‘OT up’ standpoint, adjacent competitors include the likes of Balena, Portainer and Canonical’s Ubuntu Core. However, these solutions are more suitable for ‘greenfield’ use cases that only require containers and lack the security required for true enterprise and industrial deployments,” Ouissal argued. “Despite the economic downturn, the strategic and transformative potential of edge computing to create new business opportunities is leading investors across verticals to increase their commitment, at a time when they may be more reluctant to invest in other avenues.”

In any case, Zededa, which has a roughly 100-person team spread across the U.S., Germany and India, is actively hiring and plans to expand its R&D, sales and marketing teams within the year, Ouissal said. To date, the eight-year-old startup has raised a total of $55.4 million in venture capital.

“[We aim to increase] the use cases and integrations that we support. Within our product, we will continue to focus on innovation to improve ease of use and security as the edge computing market evolves and matures,” Ouissal said. “We are also focused on enabling applications, including updating legacy applications and bringing new solutions to the market that simplify technologies like AI and machine learning.”

Scale Computing secures $55M to help companies manage edge infrastructure

Edge computing is seeing an explosion of interest as enterprises process more data at the edge of their networks. According to a 2021 survey (albeit from an edge computing services vendor), 77% of companies said that they expect to see more spending for edge projects in 2022. But while some organizations stand to benefit from edge computing, which refers to the practice of storing and analyzing data near the end user, not all have a handle on what it requires. Managing a fleet of edge devices across locations can be a burden on IT teams that lack the necessary infrastructure.

Jeff Ready asserts that his company, Scale Computing, can help enterprises that aren’t sure where to start with edge computing via storage architecture and disaster recovery technologies. Ready — who, among other ventures, runs a beer brewing company in Indianapolis — co-launched Scale Computing in 2007 with Jason Collier and Scott Loughmiller.

Both Loughmiller and Collier worked at Tumbleweed Communications developing a messaging and file transfer platform for enterprise and government customers. Prior to Scale, Ready, Loughmiller and Collier co-founded Corvigo, which offered a spam-filtering tool for email.

Early on, Scale focused on selling servers loaded with custom storage software targeting small- and medium-sized businesses. But the company later pivoted to “hyperconverged” infrastructure and edge computing products, which virtualize customers’ infrastructure by combining servers, storage, a virtual machine monitor called a hypervisor, and backup and data recovery into a single system.

“Scale Computing engineered an IT infrastructure platform that … eliminates the need for traditional IT silos of virtualization software, disaster recovery software, servers, and shared storage, replacing these with a fully integrated, highly available platform for running applications,” Ready said in an email interview with TechCrunch. “The self-healing platform automatically identifies, mitigates, and corrects problems in the infrastructure in real time, enabling applications to achieve maximum uptime even when local IT resources and staff are scarce.”

Those are lofty promises. But — lending Scale credibility — the company today raised $55 million in new funding led by Morgan Stanley Expansion Capital, bringing Scale’s total raised to $202 million.

“The technological advantage of Scale Computing’s edge computing platform solves endemic customer problems through enhanced resiliency, manageability and efficacy of their IT infrastructures,” Pete D. Chung, managing director and head of Morgan Stanley Expansion Capital, told TechCrunch in a statement. “It was clear from our diligence that Scale Computing’s customers benefit from material cost savings as well as increased confidence in their IT infrastructure.”

Scale’s platform allows companies to run apps close to where their users are, at the edge, centralizing management of remote sites such as branch offices in a single dashboard. The company’s device management tool gives users the ability to see a fleet of devices from a console and optionally check their health, flagging problems automatically and logging them so that IT teams can arrive at a diagnosis.
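A fleet-health check of the kind described — automatically flagging problem devices so IT can diagnose them from a central console — can be sketched in a few lines. This is hypothetical logic with invented field names, not Scale’s actual implementation: devices that haven’t reported a heartbeat within a grace period get surfaced.

```python
def flag_unhealthy(fleet, now, max_silence_s=300):
    """Return IDs of devices whose last heartbeat is older than the
    grace period, so the console can flag them for diagnosis."""
    return [d["id"] for d in fleet if now - d["last_heartbeat"] > max_silence_s]

now = 1_700_000_000  # current time as a Unix timestamp
fleet = [
    {"id": "branch-ny-1", "last_heartbeat": now - 60},    # reported a minute ago
    {"id": "branch-tx-2", "last_heartbeat": now - 3600},  # silent for an hour
]
print(flag_unhealthy(fleet, now))  # → ['branch-tx-2']
```

Real products layer richer signals on top (disk, memory, application checks), but stale-heartbeat detection is the usual starting point for fleet monitoring.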

“Businesses across the spectrum of industries are eager to simplify their IT infrastructure, improve business resiliency, and drive down operational costs. The pandemic has also illustrated the importance of having edge computing capabilities outside a large centralized data center,” Ready said. “Scale’s software eliminates the need for traditional IT silos of virtualization software, disaster recovery software, servers, and shared storage, replacing these with a fully-integrated, highly-available platform for running applications.”

Ready’s language might be hyperbolic, but there’s certainly been growth in the demand for edge computing management software. According to Grand View Research, the global edge computing market — which was estimated to be worth $7.43 billion in 2021 — could climb to $155.90 billion by 2030.

Ready wouldn’t disclose revenue, and — perhaps hedging in light of the economic downswing — demurred when asked whether Scale had plans to expand its 160-person workforce by the end of the year. But he claimed that the company currently has over 6,000 customers across North America, Europe, the Middle East and the Asia-Pacific region.

At one point in time, those customers reportedly included grocery store franchise Jerry’s Enterprises and casino company Genting Group.

“Despite the uncertainty wrought by the pandemic — or perhaps because of it — Scale Computing has seen unprecedented demand for its edge computing, virtualization, and hyperconverged solutions,” Ready added in a follow-up email. “We will use the new funds to expand our leadership position in edge computing, including investments in people, R&D, and restructuring of debt.”

AI chip startup Sima.ai bags another $30M ahead of growth

As the demand for AI-powered apps grows, startups developing dedicated chips to accelerate AI workloads on-premises are reaping the benefits. A recent ZDNet piece reaffirms that the AI edge chip market is booming, fueled by “staggering” venture capital financing in the hundreds of millions of dollars. EdgeQ, Kneron, and Hailo are among the dozens of upstarts vying for customers, the last of which nabbed $136 million in October as it doubles down on new opportunities.

Another company competing in the increasingly saturated segment is Sima.ai, which is developing a system-on-chip platform for AI applications — particularly computer vision applications. After emerging from stealth in 2019, Sima.ai began demoing an accelerator chipset that combines “traditional compute IP” from Arm with a custom machine learning accelerator and dedicated vision accelerator, linked via a proprietary interconnect.

To lay the groundwork for future growth, Sima.ai today closed a $30 million additional investment from Fidelity Management & Research Company with participation from Lip-Bu Tan (who’s joining the board) and previous investors, concluding the startup’s Series B. It brings Sima.ai’s total capital raised to $150 million.

“The funding will be used to accelerate scaling of the engineering and business teams globally, and to continue investing in both hardware and software innovation,” founder and CEO Krishna Rangasayee told TechCrunch in an email interview. “The appointment of Lip-Bu Tan as the newest member of Sima.ai’s board of directors is a strategic milestone for the company. He has a deep history of investing in deep tech startups that have gone on to disrupt industries across AI, data, semiconductors, among others.”

Rangasayee spent most of his career in the semiconductor industry at Xilinx, where he was GM of the company’s overall business. An engineer by trade — Rangasayee was the COO at Groq and once headed product planning at Altera, which Intel acquired in 2015 — he says that he was motivated to start Sima.ai by the gap he saw in the machine learning market for edge devices.

“I founded Sima.ai with two questions: ‘What are the biggest challenges in scaling machine learning to the embedded edge?’ and ‘How can we help?’” Rangasayee said. “By listening to dozens of industry-leading customers in the machine learning trenches, Sima.ai developed a deep understanding of their problems and needs — like getting the benefits of machine learning without a steep learning curve, preserving legacy applications along with future-proof ML implementations, and working with a high-performance, low-power solution in a user-friendly environment.”

Sima.ai aims to work with companies developing driverless cars, robots, medical devices, drones, and more. The company claims to have completed several customer engagements and last year announced the opening of a design center in Bengaluru, India, as well as a collaboration with the University of Tübingen to identify AI hardware and software solutions for “ultra-low” energy consumption.

As Sima.ai, now at over 100 employees, works to productize its first-generation chip, work is underway on the second-generation architecture, Rangasayee said.

“Sima.ai’s software and hardware platform can be used to enable scaling machine learning to [a range of] embedded edge applications. Even though these applications will use many diverse computer vision pipelines with a variety of machine learning models, Sima.ai’s software and hardware platform has the flexibility to be used to address these,” Rangasayee added. “Sima.ai’s platform addresses any computer vision application using any model, any framework, any sensor, any resolution … [We as a company have] seized the opportunity to disrupt the burgeoning edge computing space in pursuit of displacing decades-old technology and legacy incumbents.”

Sima.ai’s challenges remain mass manufacturing its chips affordably — and beating back the many rivals in the edge AI computing space. (The company says that it plans to ship “mass-produced production volumes” of its first chip “sometime this year.”) But the startup stands to profit handsomely if it can capture even a sliver of the sector. Edge computing is forecast to be a $6.72 billion market by 2022, according to Markets and Markets. Its growth will coincide with that of the deep learning chipset market, which some analysts predict will reach $66.3 billion by 2025.

“Machine learning has had a profound impact on the cloud and mobile markets over the past decade and the next battleground is the multi-trillion-dollar embedded edge market,” Tan said in a statement. “Sima.ai has created a software-centric, purpose-built … platform that exclusively targets this large market opportunity. Sima.ai’s unique architecture, market understanding and world-class team has put them in a leadership position.”

Google goes all in on hybrid cloud with new portfolio of edge and managed on-prem solutions

Today at Google Cloud Next, the company’s annual customer conference, Google announced a broad portfolio of hybrid cloud services designed to deliver computing at the edge of Google’s network of data centers, in a partner facility or in a customer’s private data center — all managed by Anthos, the company’s cloud native management console.

There’s a lot to unpack here, but Sachin Gupta, Google’s GM and VP of Product for IaaS, says the strategy behind the announcement was to bring customers along who might have specialized workloads that aren’t necessarily well suited to the public cloud — a need that he says they were continually hearing about from potential customers.

That means providing them with some reasonable alternatives. “What we find is that there are various factors that prevent customers from moving to the public cloud right away,” Gupta said. For instance, they might have low latency requirements or large amounts of data that they need to process and moving this data to the public cloud and back again may not be efficient. They also may have security, privacy, data residency or other compliance requirements.

All of this led Google to design a set of solutions that work in a variety of situations that might not involve the pure public cloud. A solution could be installed on the edge in one of Google’s worldwide data centers, in a partner data center like a telco or a colo facility like Equinix, or as part of a managed server inside a company’s own data center.

With that latter component, it’s important to note that these are servers from partner companies like Dell and HPE, as opposed to a server manufactured and managed by Google as Amazon does with its Outposts product. It’s also interesting to note that these machines won’t be connected directly to the Google cloud in any way, but Google will manage all of the software and provide a single, unified way for IT to manage cloud and on-prem resources. More on that in a moment.

The goal with a hosted solution is a consistent and modern approach to computing using either containers and Kubernetes or virtual machines. Google provides updates via a secure download site, and customers can check these themselves or let a third-party vendor handle all of that for them.

The glue here that really holds this approach together is Anthos, the control software the company introduced a couple of years ago. With Anthos, customers can control and manage software wherever it lives, whether that’s on premise, in a data center or on public clouds, even from competitors like Microsoft and Amazon.

Google Cloud hybrid portfolio architecture diagram. Image Credits: Google Cloud

The whole approach signals that Google is attempting to carve out its own share of the cloud by taking advantage of a hybrid market opening. While this is an area that both Microsoft and IBM are also trying to exploit, taking this comprehensive platform approach while using Anthos to stitch everything together could give Google some traction, especially in companies that have specific requirements that prevent them from moving certain workloads to the cloud.

Google reached 10% market share in the cloud infrastructure market for the first time in the most recent quarterly report from August with a brisk growth rate of 54%, showing that they are starting to gain a bit of momentum, even though they remain far behind Amazon with 33% and Microsoft with 20% market share.

Edge computing startup Macrometa gets $20M Series A led by Pelion Venture Partners

Macrometa, the edge computing cloud and global data network for app developers, announced today it has raised a $20 million Series A. The round was led by Pelion Venture Partners, with participation from returning investors DNX Ventures (the Japan and US-focused enterprise fund that led Macrometa’s seed round), Benhamou Global Ventures (BGV), Partech Partners, Fusion Fund, Sway Ventures and Shasta Ventures.

The startup, which is headquartered in Palo Alto with operations in Bulgaria and India, plans to use its Series A on feature development, acquiring more enterprise customers and integrating with content delivery networks (CDN), cloud and telecom providers. It will hire for its engineering and product development centers in the United States, Eastern Europe and India, and add new centers in Ukraine, Portugal, Greece, Mexico and Argentina.

The company’s last round of funding, a $7 million seed, was announced just eight months ago. Its Series A brings Macrometa’s total raised since it was founded in 2017 to $29 million.

As part of the new round, Macrometa expanded its board of directors, adding Pelion general partner Chris Cooper as a director, and Pelion senior associate Zain Rizavi and DNX Ventures principal Eva Nahari as board observers.

Macrometa’s global data network combines a globally distributed NoSQL database and a low-latency stream data processing engine, enabling web and cloud developers to run and scale data-heavy, real-time cloud applications. The network allows developers to run apps concurrently across its 175 points of presence (PoPs), or edge regions, around the world, serving each request from whichever PoP is closest to the end user. Macrometa claims that the mean round-trip time (RTT) from users on laptops or phones to its edge cloud and back is less than 50 milliseconds globally, or 50x to 100x faster than cloud-hosted databases like DynamoDB, MongoDB or Firebase.
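The nearest-PoP routing described above boils down to picking the region with the lowest measured round-trip time. A minimal sketch (the PoP names and RTT figures here are hypothetical illustrations, not Macrometa's actual topology or API):

```python
# Hypothetical sketch: route a request to the edge PoP with the lowest
# measured round-trip time. PoP codes and RTTs are illustrative only.

def nearest_pop(rtts_ms: dict) -> str:
    """Return the PoP code with the smallest measured RTT (in ms)."""
    return min(rtts_ms, key=rtts_ms.get)

# A client in Europe would see something like this:
measured = {"fra": 18.2, "iad": 74.5, "sin": 212.0}
print(nearest_pop(measured))  # fra
```

In practice a real edge network does this with anycast or DNS-based steering rather than client-side measurement, but the selection criterion is the same.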

Macrometa co-founder and CEO Chetan Venkatesh

Since its seed round last year, the company has accelerated its customer acquisition, especially among large global enterprises and web scale players, co-founder and chief executive officer Chetan Venkatesh told TechCrunch. Macrometa also made its self-service platform available to developers, who can try its serverless database, pub/sub, event processing and stateful compute runtime for free.

Macrometa recently became one of two distributed data companies (the other one is Fauna) partnered with Cloudflare for developers building new apps on Workers, its serverless application platform. Venkatesh said the combination of Macrometa and Cloudflare Workers enables data-driven APIs and web services to be 50x to 100x faster in performance and lower latency compared to the public cloud.


The COVID-19 pandemic accelerated Macrometa’s business significantly, said Venkatesh, because its enterprise and web scale customers needed to handle the unpredictable data traffic patterns created by remote work. The pandemic also “resulted in several secular and permanent shifts in cloud adoption and consumption,” he added, changing how people shop, consume media, content and entertainment. That has “exponentially increased the need for handling dynamic bursts of demands for application infrastructure securely,” he said.

One example of how enterprise clients use Macrometa: e-commerce providers have implemented its infrastructure alongside their existing CDN and cloud backends to deliver more data- and AI-based personalization for shoppers, including real-time recommendations, regionalized search at the edge and local data geo-fencing to comply with data and privacy regulations.

Some of Macrometa’s SaaS clients use its global data network as a global data cache for handling surges in usage and for keeping regional copies of data and API results across its regional data centers. Venkatesh added that several large telecom operators have used Macrometa’s data stream ingestion and complex event processing platform to replace legacy data ingest platforms like Splunk, Tibco and Apache Kafka.
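The "regional data cache" pattern mentioned above can be sketched as a per-region read-through cache with a TTL: a hit is served locally at the edge, and only a miss goes back to the origin. This is an illustrative sketch of the general pattern, not Macrometa's implementation; the class and function names are hypothetical.

```python
import time

class RegionalCache:
    """Read-through cache keeping per-region copies of origin results."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}  # (region, key) -> (value, expiry)

    def get(self, region, key, fetch_origin):
        entry = self.store.get((region, key))
        now = time.monotonic()
        if entry and entry[1] > now:
            return entry[0]          # regional hit: no origin round trip
        value = fetch_origin(key)    # miss or expired: go to the origin
        self.store[(region, key)] = (value, now + self.ttl)
        return value

cache = RegionalCache(ttl_seconds=60)
value = cache.get("eu-central", "product:42", lambda key: "origin-" + key)
print(value)  # origin-product:42
```

During a traffic surge, repeated reads for the same key within the TTL are absorbed regionally, which is what lets the edge shield the origin from dynamic bursts of demand.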

In a statement, Pelion Venture Partners general partner Chris Cooper said, “We believe the next phase of computing will be focused on the edge, ultimately bringing cloud-based workloads closer to the end user. As more and more workloads move away from a centralized cloud model, Macrometa is becoming the de facto edge provider to run data-heavy and compute-intensive workloads for developers and enterprises alike, globally.”

Deeplite raises $6M seed to deploy ML on edge with fewer compute resources

One of the issues with deploying a machine learning application is that it tends to be expensive and highly compute-intensive. Deeplite, a startup based in Montreal, wants to change that by providing a way to reduce the overall size of the model, allowing it to run on hardware with far fewer resources.

Today, the company announced a $6 million seed investment. Boston-based venture capital firm PJC led the round with help from Innospark Ventures, Differential Ventures and Smart Global Holdings. Somel Investments, BDC Capital and Desjardins Capital also participated.

Nick Romano, CEO and co-founder at Deeplite, says the company aims to take complex deep neural networks, which require a lot of compute power to run, tend to use up a lot of memory and can drain batteries at a rapid pace, and help them run more efficiently with fewer resources.

“Our platform can be used to transform those models into a new form factor to be able to deploy it into constrained hardware at the edge,” Romano explained. Those devices could be as small as a cell phone, a drone or even a Raspberry Pi, meaning that developers could deploy AI in ways that just wouldn’t be possible in most cases right now.

The company has created a product called Neutrino that lets you specify how you want to deploy your model and how much you can compress it to reduce the overall size and the resources required to run it in production. The idea is to run a machine learning application on an extremely small footprint.

Davis Sawyer, chief product officer and co-founder, says that the company’s solution comes into play after the model has been built, trained and is ready for production. Users supply the model and the data set and then they can decide how to build a smaller model. That could involve reducing the accuracy a bit if there is a tolerance for that, but chiefly it involves selecting a level of compression — how much smaller you can make the model.

“Compression reduces the size of the model so that you can deploy it on a much cheaper processor. We’re talking in some cases going from 200 megabytes down to 11 megabytes or from 50 megabytes to 100 kilobytes,” Sawyer explained.
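The size reductions cited in the quote above correspond to compression factors that are easy to check with simple arithmetic (this sketch just works through those figures; the function name is illustrative, not part of Deeplite's product):

```python
def compression_factor(original_bytes: float, compressed_bytes: float) -> float:
    """How many times smaller the compressed model is than the original."""
    return original_bytes / compressed_bytes

# 200 MB -> 11 MB is roughly an 18x reduction
print(round(compression_factor(200e6, 11e6), 1))  # 18.2

# 50 MB -> 100 KB is a 500x reduction
print(round(compression_factor(50e6, 100e3)))     # 500
```

Reductions in this range generally require more than one technique stacked together, e.g. quantization (float32 weights to 8-bit integers gives at most 4x on its own) combined with pruning and architecture changes, which is consistent with the accuracy trade-off Sawyer describes.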

Rob May, who is leading the investment for PJC, says that he was impressed with the team and the technology the startup is trying to build.

“Deploying AI, particularly deep learning, on resource-constrained devices, is a broad challenge in the industry with scarce AI talent and know-how available. Deeplite’s automated software solution will create significant economic benefit as Edge AI continues to grow as a major computing paradigm,” May said in a statement.

The idea for the company has roots in the TandemLaunch incubator in Montreal. It launched officially as a company in mid-2019 and today has 15 employees with plans to double that by the end of this year. As it builds the company, Romano says the founders are focused on building a diverse and inclusive organization.

“We’ve got a strategy that’s going to find us the right people, but do it in a way that is absolutely diverse and inclusive. That’s all part of the DNA of the organization,” he said.

When it’s possible to return to work, the plan is to have offices in Montreal and Toronto that act as hubs for employees, but there won’t be any requirement to come into the office.

“We’ve already discussed that the general approach is going to be that people can come and go as they please, and we don’t think we will need as large an office footprint as we may have had in the past. People will have the option to work remotely and virtually as they see fit,” Romano said.