AWS reorganizes DeepRacer League to encourage more newbies

AWS launched the DeepRacer League in 2018 as a fun way to teach developers machine learning, and it’s been building on the idea ever since. Today, it announced the latest league season with two divisions: Open and Pro.

As Marcia Villalba wrote in a blog post announcing the new league, “AWS DeepRacer is an autonomous 1/18th scale race car designed to test [reinforcement learning] models by racing virtually in the AWS DeepRacer console or physically on a track at AWS and customer events. AWS DeepRacer is for developers of all skill levels, even if you don’t have any ML experience. When learning RL using AWS DeepRacer, you can take part in the AWS DeepRacer League where you get experience with machine learning in a fun and competitive environment.”

While the company started these as in-person races with physical cars, the pandemic forced it to run the league virtually over the past year, and the new format seemed to be shutting out newcomers. Because the goal is to teach people about machine learning, getting new people involved is crucial to the company.

That’s why it created the Open division, which, as the name suggests, is open to anyone. You can test your skills there, and if you finish in the top 10%, you can move up to compete in the Pro division. Everyone competes for prizes as well, such as vehicle customizations.
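Under the hood, competitors climb the leaderboard by tuning a reinforcement learning reward function written in Python against the state the DeepRacer console passes in at each step. A minimal centerline-following sketch, using two of the console's documented parameters (the tier thresholds here are illustrative, not a recommended strategy):

```python
def reward_function(params):
    """Reward the car for staying close to the track centerline.

    `params` is the dict of state the DeepRacer console passes to the
    model at each step; `track_width` and `distance_from_center` are
    two of its documented keys.
    """
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Tiered reward: the closer to the centerline, the better.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    if distance_from_center <= 0.25 * track_width:
        return 0.5
    if distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3  # likely off the track
```

Racers iterate on functions like this in the console, train a model against them and submit the result to the monthly races.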

The top 16 in the Pro division each month race for a chance to go to the finals at AWS re:Invent 2021, an event that may or may not be virtual, depending on where we are in the pandemic recovery.

Salesforce delivers, Wall Street doubts as stock falls 6.3% post-earnings

Wall Street investors can be fickle beasts. Take Salesforce as an example. The CRM giant announced a $5.82 billion quarter when it reported earnings yesterday, with revenue up 20% year over year. The company also reported $21.25 billion in total revenue for the just-closed FY2021, up 24% YoY. If that wasn’t enough, it raised its guidance for FY2022 (its upcoming fiscal year) to over $25 billion. What’s not to like?

You want higher quarterly revenue, Salesforce gave you higher revenue. You want high growth and solid projected revenue — check and check. In fact, it’s hard to find anything to complain about in the report. The company is performing and growing at a rate that is remarkable for an organization of its size and maturity — and it is expected to continue to perform and grow.

How did Wall Street react to this stellar report? It punished the stock with the price down over 6%, a pretty dismal day considering the company brought home such a promising report card.

Salesforce stock chart from 2/6/21, with the stock down 6.31%. Image Credits: Google

So what is going on here? It could be that investors simply don’t believe the growth is sustainable, or that the company overpaid when it bought Slack at the end of last year for over $27 billion. It could be that people are simply overreacting to a cooling market this week. But if investors are looking for a high-growth company, Salesforce is delivering that.

While Slack was expensive, it reported revenue of over $250 million yesterday, pushing it past a $1 billion run rate, with more than 100 customers paying over $1 million in ARR. Those numbers will eventually get added to Salesforce’s top line.

Canaccord Genuity analyst David Hynes Jr. wrote that he was baffled by investors’ reaction to this report. Like me, he saw a lot of positives. Yet Wall Street decided to focus on the negative, seeing “the glass half empty,” as he put it in his note to investors.

“The stock is clearly in the show-me camp, which means it’s likely to take another couple of quarters for investors to buy into the idea that fundamentals are actually quite solid here, and that Slack was opportunistic (and yes, pricey), but not an attempt to mask suddenly deteriorating growth,” Hynes wrote.

During the call with analysts yesterday, Brad Zelnick from Credit Suisse asked how well the company could accelerate out of the pandemic-induced economic malaise, and Gavin Patterson, Salesforce’s president and chief revenue officer, said the company is ready whenever the world moves past the pandemic.

“And let me reassure you, we are building the capability in terms of the sales force. You’d be delighted to hear that we’re investing significantly in terms of our direct sales force to take advantage of that demand. And I’m very confident we’ll be able to meet it. So I think you’re hearing today a message from us all that the business is strong, the pipeline is strong and we’ve got confidence going into the year,” Patterson said.

While Salesforce execs were clearly pumped up yesterday, and with good reason, there’s still doubt out in investor land, which manifested itself in the stock starting down and staying down all day. It will be up to Salesforce, as Hynes suggested, to keep proving the doubters wrong. As long as it keeps producing quarters like this one, it should be just fine, regardless of what the naysayers on Wall Street may be thinking today.

DigitalOcean’s IPO filing shows a two-class cloud market

This morning DigitalOcean, a provider of cloud computing services to SMBs, filed to go public. The company intends to list on the New York Stock Exchange (NYSE) under the ticker symbol “DOCN.”

DigitalOcean’s offering comes amidst a hot streak for tech IPOs, and valuations that are stretched by historical norms. The cloud hosting company was joined by Coinbase in filing its numbers publicly today.

However, unlike the cryptocurrency exchange, DigitalOcean intends to raise capital through its offering. Its S-1 filing lists a $100 million placeholder number, a figure that will update when the company announces an IPO price range target.

This morning let’s explore the company’s financials briefly, and then ask ourselves what its results can tell us about the cloud market as a whole.

DigitalOcean’s financial results

TechCrunch has covered DigitalOcean with some frequency in recent years, including its early-2020 layoffs, its early-2020 $100 million debt raise and its $50 million investment from May of the same year that prior investors Access Industries and Andreessen Horowitz participated in.

From those pieces we knew that the company had reportedly reached $200 million in revenue during 2018, $250 million in 2019 and that DigitalOcean had expected to reach an annualized run rate of $300 million in 2020.

Those numbers held up well. Per its S-1 filing, DigitalOcean generated $203.1 million in 2018 revenue, $254.8 million in 2019 and $318.4 million in 2020. The company closed 2020 out with a self-calculated $357 million in annual run rate.
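A quick back-of-envelope check of those figures shows how steady the growth has been, and what the run-rate figure implies (assuming the common "most recent quarter times four" definition, which may differ from the company's self-calculated method):

```python
# DigitalOcean revenue figures from the S-1, in millions of dollars.
revenue = {2018: 203.1, 2019: 254.8, 2020: 318.4}

# Year-over-year growth held remarkably steady at roughly 25%.
growth_2019 = (revenue[2019] - revenue[2018]) / revenue[2018]
growth_2020 = (revenue[2020] - revenue[2019]) / revenue[2019]

# A $357M run rate under a "last quarter times four" definition
# implies Q4 2020 revenue of roughly $89M.
implied_q4 = 357 / 4
```

That the run rate sits comfortably above full-year 2020 revenue is what you'd expect from a company still compounding quarter over quarter.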

During its recent years of growth, DigitalOcean has lost modestly increasing amounts of money as calculated using generally accepted accounting principles (GAAP), even as its non-GAAP profit (adjusted EBITDA) has risen, a growing disconnect between the two measures.

Why F5 spent $2.2B on 3 companies to focus on cloud native applications

It’s essential for older companies to recognize changes in the marketplace or face the brutal reality of being left in the dust. F5 is an old-school company that launched back in the 90s, yet has been able to transform a number of times in its history to avoid major disruption. Over the last two years, the company has continued that process of redefining itself, this time using a trio of acquisitions — NGINX, Shape Security and Volterra — totaling $2.2 billion to push in a new direction.

While F5 has been associated with applications management for some time, it recognized that the way companies developed and managed applications was changing in a big way with the shift to Kubernetes, microservices and containerization. At the same time, applications have been increasingly moving to the edge, closer to the user. The company understood that it needed to up its game in these areas if it was going to keep up with customers.

Taken separately, it would be easy to miss that there was a game plan behind the three acquisitions, but together they show a company with a clear opinion of where it wants to go next. We spoke to F5 president and CEO François Locoh-Donou to learn why he bought these companies and to figure out the method behind his company’s acquisition spree.

Looking back, looking forward

F5, which was founded in 1996, has found itself at a number of crossroads in its long history, times where it needed to reassess its position in the market. A few years ago it found itself at one such juncture. The company had successfully navigated the shift from physical appliance to virtual, and from data center to cloud. But it also saw the shift to cloud native on the horizon and it knew it had to be there to survive and thrive long term.

“We moved from just keeping applications performing to actually keeping them performing and secure. Over the years, we have become an application delivery and security company. And that’s really how F5 grew over the last 15 years,” said Locoh-Donou.

Today the company has over 18,000 customers centered in enterprise verticals like financial services, healthcare, government, technology and telecom. He says that the focus of the company has always been on applications and how to deliver and secure them, but as they looked ahead, they wanted to be able to do that in a modern context, and that’s where the acquisitions came into play.

As F5 saw it, applications were becoming central to their customers’ success and their IT departments were expending too many resources connecting applications to the cloud and keeping them secure. So part of the goal for these three acquisitions was to bring a level of automation to this whole process of managing modern applications.

“Our view is you fast forward five or 10 years, we are going to move to a world where applications will become adaptive, which essentially means that we are going to bring automation to the security and delivery and performance of applications, so that a lot of that stuff gets done in a more native and automated way,” Locoh-Donou said.

As part of this shift, the company saw customers increasingly using microservices architecture in their applications. This means instead of delivering a large monolithic application, developers were delivering them in smaller pieces inside containers, making it easier to manage, deploy and update.

At the same time, it saw companies needing a new way to secure these applications as they shifted from data center to cloud to the edge. And finally, that shift to the edge would require a new way to manage applications.

Google Cloud puts its Kubernetes Engine on autopilot

Google Cloud today announced a new operating mode for its Kubernetes Engine (GKE) that turns over the management of much of the day-to-day operations of a container cluster to Google’s own engineers and automated tools. With Autopilot, as the new mode is called, Google manages all of the Day 2 operations of managing these clusters and their nodes, all while implementing best practices for operating and securing them.

This new mode augments the existing GKE experience, which already managed most of the infrastructure of standing up a cluster. This ‘standard’ experience, as Google Cloud now calls it, is still available and allows users to customize their configurations to their heart’s content and manually provision and manage their node infrastructure.

Drew Bradstock, the Group Product Manager for GKE, told me that the idea behind Autopilot was to combine all of the tools Google already had for GKE with the expertise of its SRE teams, who know how to run these clusters in production and have long done so inside the company.

“Autopilot stitches together auto-scaling, auto-upgrades, maintenance, Day 2 operations and — just as importantly — does it in a hardened fashion,” Bradstock noted. “[…] What this has allowed our initial customers to do is very quickly offer a better environment for developers or dev and test, as well as production, because they can go from Day Zero and the end of that five-minute cluster creation time, and actually have Day 2 done as well.”

Image Credits: Google

From a developer’s perspective, nothing really changes here, but this new mode does free up teams to focus on the actual workloads and less on managing Kubernetes clusters. With Autopilot, businesses still get the benefits of Kubernetes, but without all of the routine management and maintenance work that comes with that. And that’s definitely a trend we’ve been seeing as the Kubernetes ecosystem has evolved. Few companies, after all, see their ability to effectively manage Kubernetes as their real competitive differentiator.

All of that comes at a price, of course: a flat fee of $0.10 per cluster per hour (there’s also a free GKE tier that provides $74.40 in billing credits), plus the usual fees for the resources your clusters consume. Google offers a 99.95% SLA for the control plane of its Autopilot clusters and a 99.9% SLA for Autopilot pods in multiple zones.
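Those list prices make the free tier's sizing easy to see: the credit covers exactly one cluster's management fee for a 31-day month. A rough sketch of the fee math (resource charges excluded; `management_fee` is an illustrative helper, not a Google API):

```python
HOURLY_CLUSTER_FEE = 0.10  # USD per cluster per hour
FREE_TIER_CREDIT = 74.40   # USD in billing credits

# Management fee for a single cluster over a 31-day month:
# 0.10 * 24 * 31 = 74.40, which the credit covers exactly.
monthly_fee = round(HOURLY_CLUSTER_FEE * 24 * 31, 2)

def management_fee(clusters, hours):
    """Illustrative helper: flat management fee net of the credit,
    before any charges for the resources the clusters consume."""
    gross = clusters * hours * HOURLY_CLUSTER_FEE
    return max(0.0, round(gross - FREE_TIER_CREDIT, 2))
```

In other words, a single always-on cluster's management overhead is effectively free; the meter only starts running on the second cluster and on the compute the workloads themselves use.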

Autopilot for GKE joins a set of container-centric products in the Google Cloud portfolio that also include Anthos for running in multi-cloud environments and Cloud Run, Google’s serverless offering. “[Autopilot] is really [about] bringing the automation aspects in GKE we have for running on Google Cloud, and bringing it all together in an easy-to-use package, so that if you’re newer to Kubernetes, or you’ve got a very large fleet, it drastically reduces the amount of time, operations and even compute you need to use,” Bradstock explained.

And while GKE is a key part of Anthos, that service is more about bringing Google’s config management, service mesh and other tools to an enterprise’s own data center. Autopilot for GKE is, at least for now, only available on Google Cloud.

“On the serverless side, Cloud Run is really, really great for an opinionated development experience,” Bradstock added. “So you can get going really fast if you want an app to be able to go from zero to 1000 and back to zero — and not worry about anything at all and have it managed entirely by Google. That’s highly valuable and ideal for a lot of development. Autopilot is more about simplifying the entire platform people work on when they want to leverage the Kubernetes ecosystem, be a lot more in control and have a whole bunch of apps running within one environment.”


Hydrolix snares $10M seed to lower the cost of processing log data at scale

Many companies spend a significant amount of money and resources processing data from logs, traces and metrics, forcing them to make trade-offs about how much to collect and store. Hydrolix, an early stage startup, announced a $10 million seed round today to help tackle logging at scale, while using unique technology to lower the cost of storing and querying this data.

Wing Venture Capital led the round with help from AV8 Ventures, Oregon Venture Fund and Silicon Valley Data Capital.

Company CEO and co-founder Marty Kagan noted that in his previous roles, he saw organizations with tons of data in logs, metrics and traces that could be valuable to various parts of the company, but most organizations couldn’t afford the high cost to maintain these records for very long due to the incredible volume of data involved. He started Hydrolix because he wanted to change the economics to make it easier to store and query this valuable data.

“The classic problem with these cluster-based databases is that they’ve got locally attached storage. So as the data set gets larger, you have no choice but to either spend a ton of money to grow your cluster or separate your hot and cold data to keep your costs under control,” Kagan told me.

What’s more, he says that when it comes to querying, the solutions out there like BigQuery and Snowflake are not well suited for this kind of data. “They rely really heavily on caching and bulk column scans, so they’re not really useful for […] these infrastructure plays where you want to do live stream ingest, and you want to be able to do ad hoc data exploration,” he said.

Hydrolix wanted to create a more cost-effective way of storing and querying log data while solving these issues with other tooling. “So we built a new storage layer which delivers […] SSD-like performance using nothing but cloud storage and diskless spot instances,” Kagan explained. He says this approach dispenses with caching and bulk column scans, enabling fully indexed searches. “You’re getting the low cost, unlimited retention benefits of cloud storage, but with the interactive performance of fully indexed search,” he added.
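The index-versus-scan distinction is the crux of the pitch. Here's a toy contrast between the two query styles over log lines; this illustrates the general technique, not Hydrolix's actual implementation:

```python
from collections import defaultdict

logs = [
    "2021-03-01 ERROR payment timeout",
    "2021-03-01 INFO user login",
    "2021-03-01 ERROR disk full",
]

def scan(term):
    """Bulk scan: touch every stored line on every query, the pattern
    cache-heavy warehouses fall back on for needle-in-haystack lookups."""
    return [line for line in logs if term in line.split()]

# Inverted index: built once at ingest time, so a query only touches
# the rows that actually match.
index = defaultdict(set)
for row_id, line in enumerate(logs):
    for token in line.split():
        index[token].add(row_id)

def indexed_search(term):
    return [logs[i] for i in sorted(index[term])]
```

Both return the same results, but the scan's cost grows with total data retained, while the index lookup's cost grows only with the number of matches, which is what makes long retention windows affordable.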

Peter Wagner, founding partner at investor Wing Venture Capital, says that the beauty of this tool is that it eliminates tradeoffs, while lowering customers’ overall data processing costs. “The Hydrolix team has built a real-time data platform optimized not only to deliver superior performance at a fraction of the cost of current analytics solutions, but one architected to offer those same advantages as data volumes grow by orders of magnitude,” Wagner said in a statement.

It’s worth pointing out that in the past couple of weeks SentinelOne bought high-speed logging platform Scalyr for $155 million, then CrowdStrike grabbed Humio, another high-speed logging tool, for $400 million, so this category is getting attention.

The product is currently compatible with AWS and offered through the Amazon Marketplace, but Kagan says they are working on versions for Azure and Google Cloud and expect to have those available later this year. The company was founded at the end of 2018 and currently has 20 employees spread out over six countries with headquarters in Portland, Oregon.

Acumen nabs $7M seed to keep engineering teams on track

Engineering teams face steep challenges when it comes to staying on schedule, and keeping to those schedules can have an impact on the entire organization. Acumen, an Israeli engineering operations startup, announced a $7 million seed investment today to help tackle this problem.

Hetz, 10D, Crescendo and Jibe participated in the round, designed to give the startup the funding to continue building out the product and bring it to market. The company, which has been working with beta customers for almost a year, also announced it was emerging from stealth today.

As an experienced startup founder, Acumen CEO and co-founder Nevo Alva has seen engineering teams struggle as they grow due to a lack of data and insight into how the teams are performing. He and his co-founders launched Acumen to give companies that missing visibility.

“As engineering teams scale, they face challenges due to a lack of visibility into what’s going on in the team. Suddenly prioritizing our tasks becomes much harder. We experience interdependencies [that have an impact on the schedule] every day,” Alva explained.

He says this manifests itself in a decrease in productivity and velocity and ultimately missed deadlines that have an impact across the whole company. What Acumen does is collect data from a variety of planning and communications tools that the engineering teams are using to organize their various projects. It then uses machine learning to identify potential problems that could have an impact on the schedule and presents this information in a customizable dashboard.

The tool is aimed at engineering team leaders, who are charged with getting their various projects completed on time with the goal of helping them understand possible bottlenecks. The software’s machine learning algorithms will learn over time what situations cause problems, and offer suggestions on how to prevent them from becoming major issues.

The company was founded in July 2019 and the founders spent the first 10 months working with a dozen design partners building out the first version of the product, making sure it could pass muster with standards like SOC 2. It has been in closed private beta since last year and is launching publicly this week.

Acumen currently has 20 employees, with plans to add 10 more by the end of this year. After working remotely for most of 2020, Alva says that location is no longer really important when it comes to hiring. “It definitely becomes less and less important where they are. I think time zones are still a consideration when speaking of remote,” he said. In fact, its 20 employees are currently spread across Israel, the U.S. and eastern Europe.

He recognizes that employees can feel isolated working alone, so the company holds video meetings every day and spends the first part just chatting about non-work topics as a way to stay connected. Starting today, Acumen will begin its go-to-market effort in earnest. While Alva recognizes there are competing products out there, like Harness and Pinpoint, he thinks his company’s use of data and machine learning really helps differentiate it.

How to overcome the challenges of switching to usage-based pricing

The usage-based pricing model almost feels like a cheat code — it enables SaaS companies to more efficiently acquire new customers, grow with those customers as they’re successful and keep those customers on the platform.

Compared to their peers, companies with usage-based pricing trade at a 50% revenue multiple premium and see net dollar retention rates that are 10 percentage points better.

But the shift from pure subscription to usage-based pricing is nearly as complex as going from on-premise to SaaS. It opens up the addressable market by lowering the purchase barrier, which then necessitates finding new ways to scalably acquire users. It more closely aligns payment with a customer’s consumption, thereby impacting cash flow and revenue recognition. And it creates less revenue predictability, which can generate pushback from procurement and legal.

SaaS companies exploring a usage-based model need to plan for both go-to-market and operational challenges spanning from pricing to sales compensation to billing.

Selecting the right usage metric

There are numerous potential usage metrics that SaaS companies could use in their pricing. Datadog charges based on hosts, HubSpot uses marketing contacts, Zapier prices by tasks and Snowflake has compute resources. Picking the wrong usage metric could have disastrous consequences for long-term growth.

The best usage metric meets five key criteria: value-based, flexible, scalable, predictable and feasible.

  • Value-based: It should align with how customers derive value from the product and how they see success. For example, Stripe charges a 2.9% transaction fee and so directly grows as customers grow their business.
  • Flexible: Customers should be able to choose and pay for their exact scope of usage, starting small and scaling as they mature.
  • Scalable: It should grow steadily over time for the average customer once they’ve adopted the product. There’s a reason why cell phone providers now charge based on GB of data rather than talk minutes — data volumes keep going up.
  • Predictable: Customers should be able to reasonably predict their usage so they have budget predictability. (Some assistance may be required during the sales process.)
  • Feasible: It should be possible to monitor, administer and police usage. The metric needs to track with the cost of delivering the service so that customers don’t become unprofitable.
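To make the "value-based" criterion concrete, here's a toy model of transaction-fee pricing; the 2.9% rate comes from the Stripe example above, while the helper function and numbers are illustrative:

```python
TRANSACTION_FEE_RATE = 0.029  # Stripe's 2.9% transaction fee

def monthly_bill(payment_volume):
    """The vendor's revenue scales directly with the value the
    customer derives: the payment volume they process."""
    return payment_volume * TRANSACTION_FEE_RATE

# When a customer's processed volume doubles, the bill doubles with it,
# so vendor growth automatically tracks customer success.
small_shop = monthly_bill(10_000)
growing_shop = monthly_bill(20_000)
```

A metric like "seats" would fail this test for a payments product: a merchant's team size says little about the value they get from processing payments.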

Navigating enterprise legal and procurement teams

Enterprise customers often crave price predictability for annual budgetary purposes. It can be tough for traditional legal and procurement teams to wrap their heads around a purchase with an unspecified cost. SaaS vendors must get creative with different usage-based pricing structures to give enterprise customers greater peace of mind.

Tips for navigating legal and procurement teams. Image Credits: Kyle Poyar

Customer engagement software Twilio offers deeper discounts when a customer commits to usage for an extended period. AWS takes this a step further by allowing a customer to commit in advance, but still pay for their usage as it happens. Data analytics company Snowflake lets customers roll over their unused usage credits as long as their next year’s commitment is at least as large as the prior one.

Handling overages

Nobody wants to see a shock expense when they’ve unknowingly exceeded their usage limit. It’s important to design thoughtful overage policies that give customers the feeling of control over how much they’re spending.
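One common pattern combines usage alerts with a spending cap, so customers keep that feeling of control. A sketch of such a policy (the thresholds and function names are illustrative, not any particular vendor's scheme):

```python
from typing import List, Optional

def overage_charge(used, included, overage_rate,
                   cap: Optional[float] = None):
    """Bill only the usage beyond the included allotment, optionally
    capped so a runaway workload can't produce a runaway invoice."""
    charge = max(0.0, used - included) * overage_rate
    return min(charge, cap) if cap is not None else charge

def usage_alerts(used, included) -> List[str]:
    """Warn customers as they approach the limit rather than
    surprising them after the fact."""
    alerts = []
    if used >= 0.8 * included:
        alerts.append("80% of included usage consumed")
    if used > included:
        alerts.append("included usage exceeded; overage rates now apply")
    return alerts
```

Whether to hard-stop service at the cap or keep serving and eat the cost is a product decision, but either way the customer should never learn about an overage for the first time on an invoice.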
