F5 snags Volterra multi-cloud management startup for $500M

Applications networking company F5 announced today that it is acquiring Volterra, a multi-cloud management startup, for $500 million. That breaks down to $440 million in cash and $60 million in deferred and unvested incentive compensation.

Volterra emerged in 2019 with a $50 million investment from multiple sources, including Khosla Ventures and Mayfield, along with strategic investors like M12 (Microsoft’s venture arm) and Samsung Ventures. As the company described it to me at the time of the funding:

Volterra has innovated a consistent, cloud-native environment that can be deployed across multiple public clouds and edge sites — a distributed cloud platform. Within this SaaS-based offering, Volterra integrates a broad range of services that have normally been siloed across many point products and network or cloud providers.

The solution is designed to provide a single way to view security, operations and management components.

F5 president and CEO François Locoh-Donou sees Volterra’s edge solution integrating across its product line. “With Volterra, we advance our Adaptive Applications vision with an Edge 2.0 platform that solves the complex multi-cloud reality enterprise customers confront. Our platform will create a SaaS solution that solves our customers’ biggest pain points,” he said in a statement.

Volterra founder and CEO Ankur Singla, writing in a company blog post announcing the deal, says the need for this solution only accelerated during 2020 when companies were shifting rapidly to the cloud due to the pandemic. “When we started Volterra, multi-cloud and edge were still buzzwords and venture funding was still searching for tangible use cases. Fast forward three years and COVID-19 has dramatically changed the landscape — it has accelerated digitization of physical experiences and moved more of our day-to-day activities online. This is causing massive spikes in global Internet traffic while creating new attack vectors that impact the security and availability of our increasing set of daily apps,” he wrote.

He sees Volterra’s capabilities fitting in well with the F5 family of products to help solve these issues. While F5 had a quiet 2020 on the M&A front, today’s purchase comes on top of a couple of major acquisitions in 2019, including Shape Security for $1 billion and NGINX for $670 million.

The deal has been approved by both companies’ boards, and is expected to close before the end of March, subject to regulatory approvals.

Amazon announces a bunch of products aimed at industrial sector

One of the areas that is often left behind when it comes to cloud computing is the industrial sector. That’s because these facilities often have older equipment or proprietary systems that aren’t well suited to the cloud. Amazon wants to change that, and today the company announced a slew of new services at AWS re:Invent aimed at helping the industrial sector understand their equipment and environments better.

For starters, the company announced Amazon Monitron, which is designed to monitor equipment and alert the engineering team when that equipment may be breaking down. If industrial companies know when their equipment is failing, they can repair it on their own terms, rather than waiting until it breaks down at what could be an inopportune time.

As AWS CEO Andy Jassy says, an experienced engineer will know when equipment is breaking down by a certain change in sound or a vibration, but if the machine could tell you even before it got that far, it would be a huge boost to these teams.

“…a lot of companies either don’t have sensors, they’re not modern powerful sensors, or they are not consistent and they don’t know how to take that data from the sensors and send it to the cloud, and they don’t know how to build machine learning models, and our manufacturing companies we work with are asking [us] just solve this [and] build an end-to-end solution. So I’m excited to announce today the launch of Amazon Monitron, which is an end-to-end solution for equipment monitoring,” Jassy said.

The company builds a machine learning model that understands what a normal state looks like, then uses that model to detect anomalies and alert the team, via a mobile app, about equipment that needs maintenance based on the data the model is seeing.
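The underlying idea of learning a "normal state" and flagging deviations can be sketched very simply. The following is an illustrative toy, not Amazon's actual model (which is not public); the vibration readings, units and threshold are invented:

```python
# Toy anomaly detector in the spirit of Monitron-style equipment monitoring.
# Baseline statistics stand in for a learned model of "normal"; real systems
# use far richer features and models.
from statistics import mean, stdev

def fit_baseline(normal_readings):
    """Learn what 'normal' looks like from healthy-equipment sensor data."""
    return mean(normal_readings), stdev(normal_readings)

def is_anomaly(reading, baseline, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(reading - mu) > threshold * sigma

# Hypothetical vibration amplitudes from a healthy machine
baseline = fit_baseline([1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98])
print(is_anomaly(1.03, baseline))  # a normal reading
print(is_anomaly(2.5, baseline))   # a reading that would trigger an alert
```

In production, the alerting side would push flagged readings to the mobile app the article describes rather than printing them.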

For those companies that have a more modern system and don’t need the complete package Monitron offers, Amazon has something as well. If you have modern sensors but no sophisticated machine learning model, Amazon can ingest your sensor data and apply its machine learning algorithms to find anomalies, just as it does with Monitron.

“So we have something for this group of customers as well to announce today, which is the launch of Amazon Lookout for Equipment, which does anomaly detection for industrial machinery,” he said.

In addition, the company announced the Panorama Appliance for companies using cameras at the edge who want to use more sophisticated computer vision, but might not have the most modern equipment to do that. “I’m excited to announce today the launch of the AWS Panorama Appliance which is a new hardware appliance [that allows] organizations to add computer vision to existing on premises smart cameras,” Jassy told AWS re:Invent today.

It also announced a Panorama SDK to help hardware vendors build smarter cameras based on Panorama.

All of these services are designed to give industrial companies access to sophisticated cloud and machine learning technology at whatever level they may require depending on where they are on the technology journey.

Lux-backed Flex Logix announces availability of its fast and cheap X1 AI chip for the edge

In the computing world, there are probably more types of chips available than in your local supermarket’s snack aisle. Diverse computing environments (data centers and the cloud, edge, mobile devices, IoT, and more), different price points, and varying capabilities and performance requirements are scrambling the chip industry, resetting who has the lead right now and who might take the lead in new and emerging niches.

While there has been a spate of new chip startups like Cerebras, SiFive, and Nuvia funded by venture capitalists in the past two years, Flex Logix got its footing a bit earlier. The company, founded in 2014 by former Rambus founder Geoff Tate and Cheng Wang, has raised a total of $27 million from investors including Lux Capital and Eclipse Ventures, along with Tate himself.

Flex Logix wants to bring AI processing workflows to the compute edge, which means it wants to offer technology that adds artificial intelligence to products like medical imaging equipment and robotics. At the edge, processing power obviously matters, but so does size and price. More efficient chips are easier to include in products, where pricing may put constraints on the cost of individual components.

In its first few years, the company focused on developing and licensing IP around FPGAs, reprogrammable chips that can be reconfigured after manufacturing through software. These flexible chips are critical in applications like AI and 5G, where standards and models change rapidly. It’s a market dominated by Xilinx and Altera, the latter of which was acquired by Intel for $16.7 billion back in 2015.

Flex Logix saw an opportunity to be “the ARM of FPGAs” by helping other companies develop their own chips. It built customer traction for its designs with organizations like Sandia National Laboratory, the Department of Defense and Boeing. More recently, it has been developing its own line of chips called InferX X1, creating a hybrid business model not unlike the model that Nvidia will have after its acquisition of ARM clears through regulatory hurdles.

With that background out of the way, Flex Logix unveiled the availability of its X1 chip, which is currently slated to be offered at four speeds ranging from 533MHz to 933MHz. CEO Tate stressed on our call that the company’s key differentiator is price: those chips will be priced between $99 and $199 depending on chip speed for smaller orders, and $34 to $69 per chip for large-scale orders.

It’s a chip, alright. Ain’t a lot of great stock art. But here is the X1. Photo via Flex Logix.

The reason those chips are cheaper is that they are significantly smaller than competing chips in Nvidia’s Jetson lineup; according to Tate, they are up to 1/7 the size. Smaller chips generally have lower costs, since each wafer in a chip fab can hold more of them, amortizing the cost of manufacturing across more chips. According to the company, its chips outperform Nvidia’s Xavier module, although independent benchmarks aren’t available.
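That wafer-economics argument can be sketched with back-of-the-envelope arithmetic. All figures below (die areas, wafer size, wafer cost) are hypothetical, chosen only to illustrate the 1/7 die-size claim, and are not Flex Logix's or Nvidia's actual numbers:

```python
# Rough upper bound on chips per wafer: usable wafer area divided by die
# area. This ignores edge loss and yield, which a real cost model includes.
import math

def chips_per_wafer(wafer_diameter_mm, die_area_mm2):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

WAFER_COST = 7000                    # hypothetical cost per 300mm wafer, USD
large_die, small_die = 350.0, 50.0   # hypothetical areas, 7x apart

for area in (large_die, small_die):
    n = chips_per_wafer(300, area)
    print(f"{area:.0f} mm^2 die: ~{n} chips/wafer, ~${WAFER_COST / n:.2f} each")
```

The same wafer cost spread across roughly seven times as many dies is what drives the per-chip price down.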

“Every customer we talk to wants more processing power per dollar, more processing power per unit of power … and with our die-size advantage we can give them more for their money,” Tate explained.

Customer samples for these new chips are expected to arrive in the first quarter of next year, with scale manufacturing following in the second quarter.

The company’s plan is to continue both sides of its business and continue to grow and mature its technology. “Our embedded FPGA businesses is now, as a standalone, profitable. The amount of money we’re bringing in exceeds the engineering and business. And now we’re developing this new business for inference which ultimately should be a bigger business because the market is growing very fast in the inference space,” Tate explained.

The company’s board consists of Peter Hébert and Shahin Farshchi of Lux, Pierre Lamond at Eclipse, and Kushagra Vaid, a distinguished engineer at Microsoft Azure. The company is based in Mountain View, California.

Edge computing startup Edgify secures $6.5M Seed from Octopus, Mangrove and a semiconductor giant

Edgify, which builds AI for edge computing, has secured a $6.5M seed funding round backed by Octopus Ventures, Mangrove Capital Partners and an unnamed semiconductor giant. The investor’s name was not released, but TechCrunch understands it may be Intel Corp. or Qualcomm Inc.

Edgify’s technology allows ‘edge devices’ (devices at the edge of the internet) to interpret vast amounts of data, train an AI model locally, and then share that learning across a network of similar devices, training all the other devices on tasks ranging from computer vision and NLP to voice recognition or any other form of AI.
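The "train locally, share the learning" pattern described here is commonly implemented as federated averaging: devices share model weights, never raw data. The sketch below is a generic illustration under that assumption; Edgify's actual protocol has not been disclosed:

```python
# Minimal federated-averaging sketch: each edge device trains on its own
# local data and reports only model weights; the averaged weights become
# the shared model pushed back to every device.

def federated_average(device_weights):
    """Average per-device weight vectors into one shared model update."""
    n = len(device_weights)
    return [sum(w[i] for w in device_weights) / n
            for i in range(len(device_weights[0]))]

# Hypothetical two-weight models from three devices (e.g., checkout lanes)
updates = [
    [0.2, 0.8],   # device 1
    [0.4, 0.6],   # device 2
    [0.3, 0.7],   # device 3
]
print(federated_average(updates))  # shared update distributed to all devices
```

Keeping raw data on-device is what lets this approach avoid the cloud data transfer Edgify's CEO mentions below.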

The technology can be applied to anything with a CPU, GPU or NPU, from MRI machines and connected cars to checkout lanes and mobile devices. Edgify’s technology is already being used in supermarkets, for instance.

Ofri Ben-Porat, CEO and co-founder of Edgify, commented in a statement: “Edgify allows companies, from any industry, to train complete deep learning and machine learning models, directly on their own edge devices. This mitigates the need for any data transfer to the Cloud and also grants them close to perfect accuracy every time, and without the need to retrain centrally.” 

Mangrove partner Hans-Jürgen Schmitz, who will join Edgify’s board, commented: “We expect a surge in AI adoption across multiple industries with significant long-term potential for Edgify in medical and manufacturing, just to name a few.”

Simon King, Partner and Deep Tech Investor at Octopus Ventures added: “As the interconnected world we live in produces more and more data, AI at the edge is becoming increasingly important to process large volumes of information.”

So-called ‘edge computing’ is seen as one of the frontiers of deep tech right now.

Macrometa, an edge computing service for app developers, lands $7M seed round led by DNX

As people continue to work and study from home because of the COVID-19 pandemic, interest in edge computing has increased. Macrometa, a Palo Alto-based startup that provides edge computing infrastructure for app developers, announced today it has closed a $7 million seed round.

The funding was led by DNX Ventures, an investment fund that focuses on early-stage B2B startups. Other participants included returning investors Benhamou Global Ventures, Partech Partners, Fusion Fund, Sway Ventures, Velar Capital and Shasta Ventures.

While cloud computing relies on servers and data centers owned by providers like Amazon, IBM, Microsoft and Google, edge computing is geographically distributed, with computing done closer to data sources, allowing for faster performance.

Founded in 2018 by chief executive Chetan Venkatesh and chief architect Durga Gokina, Macrometa’s globally distributed data service, called Global Data Network, combines a distributed NoSQL database with a low-latency stream data processing engine. It allows developers to run their cloud apps and APIs across 175 edge regions around the world. To reduce delays, app requests are sent to the region closest to the user. Macrometa claims that requests can be processed in less than 50 milliseconds globally, making it 50 to 100 times faster than cloud databases like DynamoDB, MongoDB or Firebase. One way Macrometa differentiates itself from competitors is that it enables developers to work with data stored across a global network of cloud providers, like Google Cloud and Amazon Web Services, instead of a single provider.
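The routing idea behind sending requests to the closest region can be sketched in a few lines. Region names and latency figures below are invented for illustration; this is not Macrometa's actual API:

```python
# Pick the edge region with the lowest measured round-trip latency to the
# client, the core of latency-based request routing.

def pick_region(latencies_ms):
    """Return the region name with the smallest latency value."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical client-side latency probes, in milliseconds
measured = {"us-west": 12, "eu-central": 88, "ap-south": 145}
print(pick_region(measured))  # the client's nearest edge region
```

Real systems typically measure latency continuously (or use anycast routing) rather than probing once, but the selection principle is the same.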

As more telecoms roll out 5G networks, demand for globally distributed, serverless data computing services like Macrometa is expected to increase, especially to support enterprise software. Other edge computing-related startups that have recently raised funding include Latent AI, SiMa.ai and Pensando.

A spokesperson for Macrometa said the seed round was oversubscribed because the pandemic has increased investor interest in cloud and edge companies like Snowflake, which recently held its initial public offering.

Macrometa also announced today that it has added DNX managing partner Q Motiwala, former Auth0 and xnor.ai chief executive Jon Gelsey and Armorblox chief technology officer Rob Fry to its board of directors.

In a statement about the funding, Motiwala said, “As we look at the next five to ten years of cloud evolution, it’s clear to us that enterprise developers need a platform like Macrometa to go beyond the constraints, scaling limitations and high-cost economics that current cloud architectures impose. What Macrometa is doing for edge computing is what Amazon Web Services did for the cloud a decade ago.”

Latent AI makes edge AI workloads more efficient

Latent AI, a startup that was spun out of SRI International, makes it easier to run AI workloads at the edge by dynamically managing workloads as necessary.

Using its proprietary compression and compilation process, Latent AI promises to compress library files by 10x and run them with 5x lower latency than other systems, all while using less power thanks to its new adaptive AI technology, which the company is launching as part of its appearance in the TechCrunch Disrupt Battlefield competition today.

Founded by CEO Jags Kandasamy and CTO Sek Chai, the company has already raised a $6.5 million seed round led by Steve Jurvetson of Future Ventures, with participation from Autotech Ventures.

Before starting Latent AI, Kandasamy sold his previous startup OtoSense to Analog Devices (in addition to managing HPE Mid-Market Security business before that). OtoSense used data from sound and vibration sensors for predictive maintenance use cases. Before its sale, the company worked with the likes of Delta Airlines and Airbus.

Image Credits: Latent AI

In some ways, Latent AI picks up some of this work and marries it with IP from SRI International.

“With OtoSense, I had already done some edge work,” Kandasamy said. “We had moved the audio recognition part out of the cloud. We did the learning in the cloud, but the recognition was done in the edge device and we had to convert quickly and get it down. Our bill in the first few months made us move that way. You couldn’t be streaming data over LTE or 3G for too long.”

At SRI, Chai worked on a project that looked at how to best manage power for flying objects where, if you have a single source of power, the system could intelligently allocate resources for either powering the flight or running the onboard compute workloads, mostly for surveillance, and then switch between them as needed. Most of the time, in a surveillance use case, nothing happens. And while that’s the case, you don’t need to compute every frame you see.

“We took that and we made it into a tool and a platform so that you can apply it to all sorts of use cases, from voice to vision to segmentation to time series stuff,” Kandasamy explained.

What’s important to note here is that the company offers the various components of what it calls the Latent AI Efficient Inference Platform (LEIP) as standalone modules or as a fully integrated system. The compressor and compiler are the first two of these and what the company is launching today is LEIP Adapt, the part of the system that manages the dynamic AI workloads Kandasamy described above.

Image Credits: Latent AI

In practical terms, the use case for LEIP Adapt is that your battery-powered smart doorbell, for example, can run in a low-powered mode for a long time, waiting for something to happen. Then, when somebody arrives at your door, the camera wakes up to run a larger model — maybe even on the doorbell’s base station that is plugged into power — to do image recognition. And if a whole group of people arrives at once (which isn’t likely right now, but maybe next year, after the pandemic is under control), the system can offload the workload to the cloud as needed.

Kandasamy tells me that the interest in the technology has been “tremendous.” Given his previous experience and the network of SRI International, it’s maybe no surprise that Latent AI is getting a lot of interest from the automotive industry, but Kandasamy also noted that the company is working with consumer companies, including a camera and a hearing aid maker.

The company is also working with a major telco that is looking at Latent AI as part of its AI orchestration platform, and with a large CDN provider to help it run AI workloads on a JavaScript backend.

DNX Ventures launches $315 million fund for U.S. and Japanese B2B startups

DNX Ventures, an investment firm that focuses on early-stage B2B startups in Japan and the United States, announced today that it has closed a new $315 million fund. This is DNX’s third flagship fund and along with supplementary annexed funds, brings its total managed so far to $567 million.

Founded in 2011, with offices in San Mateo, California and Tokyo, Japan, DNX has invested in more than 100 startups to date, and has thirteen exits under its belt. The firm, a member of the Draper Venture Network, focuses on cloud and enterprise software, cybersecurity, edge computing, sales and marketing automation, finance and retail. The companies it invests in are usually raising “seed plus” or Series A funding and DNX’s typical check size ranges from $1 million to $5 million, depending on the startup’s stage, managing director Q Motiwala told TechCrunch.

DNX isn’t disclosing the names of its third fund’s limited partners, but Motiwala said it includes more than 30 LPs, including financial institutions, banks and large conglomerates. DNX began working on the fund last year, before the COVID-19 pandemic hit. Motiwala says DNX is optimistic about the outlook for B2B startups, because past macroeconomic crises, including the 2008 global financial crisis and the 2001 dot-com bust, showed founders continue innovating as they figure out how to make their businesses more efficient while building urgently needed solutions.

For example, DNX has always focused on sectors like cloud computing, cybersecurity, edge computing and robotics, but the COVID-19 pandemic has made those technologies even more relevant. The massive upsurge in remote work means that companies need to adapt their tech infrastructure, while robots like the ones developed by Diligent Robotics, a DNX portfolio company, can help hospitals cope with nursing shortages.

“Our overall theme has always been the digitization of traditional industries like construction, transportation or healthcare, and we’ve always been interested in how to make the reach to the customer much better, so sales and marketing automation, for example,” said Motiwala. “Then the last piece of this is, how do you make society or businesses function better through automation, and those might take things like robotics and other technology.”

The differences and similarities between U.S. and Japanese B2B startups

A graphic featuring DNX Ventures’ team members

One of the reasons DNX was founded nine years ago was because “Japan has very strong spending on enterprise,” Motiwala said. The firm launched with offices in the U.S. and Japan and has continued to focus on B2B while growing the size of its funds. The firm’s debut fund was $40 million and its second one, announced in 2016, was more than $170 million. Motiwala said the $315 million DNX raised for its third fund was more than the firm expected.

U.S. B2B startups tend to think about global expansion at an earlier stage than their Japanese counterparts, but that has started to change, he said, and many Japanese B2B companies launch with an eye on expanding into different countries. Instead of the U.S. or Europe, however, they tend to focus on Southeast Asian countries like Indonesia, Malaysia and Singapore, or Taiwan. Another difference is that U.S. startups make heavier initial investments in their technology or IP, while in Japan, companies focus on getting to revenue and breaking even earlier. Motiwala said this might be because the Japanese venture capital ecosystem is smaller than in the U.S., but that attitude is also changing.

Examples of DNX portfolio companies that have successfully entered new countries include Cylance, a U.S. company that develops antivirus software using machine learning and predictive math modeling to protect devices from malware. DNX helped Cylance establish operations in Europe and Japan. On the Japan side, software testing company Shift, an investment from DNX’s first fund, has done “phenomenally well” in Southeast Asia, Motiwala said.

In terms of going global, DNX doesn’t push its portfolio companies, but encourages them to expand when the timing is right, especially if a U.S. startup wants to enter Japan, or vice versa. “We like to use the fact that we have teams in both regions. What we’ve seen more is the U.S. companies entering channel partnerships for Japanese distribution,” Motiwala said. “It has been more difficult to show the same thing to Japanese companies, but at the same time what we’ve realized is that instead of saying they should come into the U.S., they’ve done amazing stuff going into the Philippines or Singapore.”

SiMa.ai announces $30M Series A to build out lower-power edge chip solution

Krishna Rangasayeem, founder and CEO of SiMa.ai, has 30 years of experience in the semiconductor industry. He decided to put that experience to work in a startup and launched SiMa.ai last year with the goal of building an ultra low-power software and chip solution for machine learning at the edge.

Today he announced a $30 million Series A led by Dell Technologies Capital with help from Amplify Partners, Wing Venture Capital and +ND Capital. Today’s investment brings the total raised to $40 million, according to the company.

Rangasayeem says in his years as a chip executive he saw a gap in the machine learning market for embedded devices running at the edge and he decided to start the company to solve that issue.

“While the majority of the market was serviced by traditional computing, machine learning was beginning to make an impact and it was really amazing. I wanted to build a company that would bring machine learning at significant scale to help the problems with embedded markets,” he told TechCrunch.

The company is trying to focus on efficiency, which it says will make the solution more environmentally friendly by using less power. “Our solution can scale high performance at the lowest power efficiency, and that translates to the highest frames per second per watt. We have built out an architecture and a software solution that is at a minimum 30x better than anybody else on the frames per second,” he explained.

He added that achieving that efficiency required them to build a chip from scratch because there isn’t a solution available off the shelf today that could achieve that.

So far the company has attracted 20 early design partners, who are testing what they’ve built. He hopes to have the chip designed and the software solution in Beta in the Q4 timeframe this year, and is shooting for chip production by Q2 in 2021.

He recognizes that it’s hard to raise this kind of money in the current environment and he’s grateful to the investors, and the design partners who believe in his vision. The timing could actually work in the company’s favor because it can hunker down and build product while navigating through the current economic malaise.

Perhaps by 2021 when the product is in production, the market and the economy will be in better shape and the company will be ready to deliver.

AWS is sick of waiting for your company to move to the cloud

AWS held its annual re:Invent customer conference last week in Las Vegas. Being Vegas, there was pageantry aplenty, of course, but this year’s model felt a bit different than in years past, lacking the onslaught of major announcements we are used to getting at this event.

Perhaps the pace of innovation could finally be slowing, but the company still had a few messages for attendees. For starters, AWS CEO Andy Jassy made it clear he’s tired of the slow pace of change inside the enterprise. In Jassy’s view, the time for incremental change is over, and it’s time to start moving to the cloud faster.

AWS also placed a couple of big bets this year in Vegas to help make that happen. The first involves AI and machine learning. The second, moving computing to the edge, closer to the business than the traditional cloud allows.

The question is what is driving these strategies? AWS had a clear head start in the cloud, and owns a third of the market, more than double its closest rival, Microsoft. The good news is that the market is still growing and will continue to do so for the foreseeable future. The bad news for AWS is that it can probably see Google and Microsoft beginning to resonate with more customers, and it’s looking for new ways to get a piece of the untapped part of the market to choose AWS.

Move faster, dammit

The worldwide infrastructure business surpassed $100 billion this year, yet we have only just scratched the surface of this market. Surely, digital-first companies, those born in the cloud, understand all of the advantages of working there, but large enterprises are still moving surprisingly slowly.

Jassy indicated more than once last week that he’s had enough of that. He wants to see companies transform more quickly, and in his view it’s not a technical problem, it’s a lack of leadership. If you want to get to the cloud faster, you need executive buy-in pushing it.

Jassy outlined four steps in his keynote to help companies move faster and get more workloads in the cloud. He believes that, in doing so, they will not only continue to enrich his own company but also help customers avoid disruptive forces in their markets.

For starters, he says that it’s imperative to get the senior team aligned behind a change. “Inertia is a powerful thing,” Jassy told the audience at his keynote on Tuesday. He’s right of course. There are forces inside every company designed with good reason to protect the organization from massive systemic changes, but these forces — whether legal, compliance, security or HR — can hold back a company when meaningful change is needed.

He said that a fuller shift to the cloud requires ambitious planning. “It’s easy to go a long time dipping your toe in the water if you don’t have an aggressive goal,” he emphasized. To move faster, you also need staff that can help you get there — and that requires training.

Finally, you need a thoughtful, methodical migration plan. Most companies start with the stuff that's easy to move to the cloud, then begin to migrate workloads that require some adjustments. They continue along this path all the way to workloads they might not choose to move at all.

Jassy knows that the faster companies get on board and move to the cloud, the better off his company is going to be, assuming it can capture the lion's share of those workloads. The trouble is that after you move that first easy batch, getting to the cloud becomes increasingly challenging, and that's one of the big reasons why companies have moved more slowly than Jassy would like.

The power of machine learning to drive adoption

One way to motivate folks to move faster is to help them understand the power of machine learning. AWS made a slew of announcements around machine learning designed to give customers a more comprehensive Amazon solution. This included SageMaker Studio, a machine learning development environment, along with notebook, debugging and monitoring tools. The company also announced AutoPilot, a tool that gives more insight into automatically generated machine learning models, another way to go faster.

The company also announced a new connected keyboard called DeepComposer, designed to teach developers about machine learning in a fun way. It joins DeepLens and DeepRacer, two tools released at previous re:Invents. All of these are designed to help developers get comfortable with machine learning.

It wasn’t a coincidence the company also announced a significant partnership with the NFL to use machine learning to help make players safer. It’s an excellent use case. The NFL has tons of data on its players, and it has decades of film. If it can use that data as fuel for machine learning-driven solutions to help prevent injuries, it could end up being a catalyst for meaningful change driven by machine learning in the cloud.

Machine learning provides yet another incentive to move to the cloud, showing that the cloud isn't just about agility and speed; it's also about innovation and transformation. If you can take advantage of machine learning to transform your business, that's one more reason to make the move.

Moving to the edge

Finally, AWS recognizes that computing in the cloud can only get you so far. In spite of the leaps it has made architecturally, there is still a latency issue that will be unacceptable for some workloads. That's why it was a big deal that the company announced a couple of edge computing solutions last week, including the general availability of Outposts, its private cloud in a box, along with a new concept called Local Zones.

The company announced Outposts last year as a way to bring the cloud on-prem. It is supposed to behave exactly the same way as traditional cloud resources, but AWS installs, manages and maintains a physical box in your data center. It's the ultimate in edge computing, bringing the compute power right into your building.

For those who don’t want to go that far, AWS also introduced Local Zones, starting with one in LA, where the cloud infrastructure resources are close by instead of in your building. The idea is the same — to reduce the physical distance between you and your compute resources and reduce latency.

All of this is designed to put the cloud in reach of more customers, to help them move to the cloud faster. Sure, it’s self-serving, but 11 years after I first heard the term cloud computing, maybe it really is time to give companies a harder push.