AI’s role is poised to change monumentally in 2022 and beyond

The latest developments in technology make it clear that we are on the precipice of a monumental shift in how artificial intelligence (AI) is employed in our lives and businesses.

First, let me address the misconception that AI is synonymous with algorithms and automation. This misconception exists because of marketing. Think about it: When was the last time you previewed a new SaaS or tech product that wasn’t “fueled by” AI? This term is becoming something like “all-natural” on food packaging: ever-present and practically meaningless.

Real AI, however, is foundational to supporting the future of how businesses and individuals function in the world, and a huge advance in AI frameworks is accelerating progress.

As a product manager in the deep learning space, I know that current commercial and business uses of AI don’t come close to representing its full or future potential. In fact, I contend that we’ve only scratched the surface.

Ambient computing

The next generation of AI products will extend the applications for ambient computing.

  • Ambient = in your environment.
  • Computing = computational processes.

We’ve all grown accustomed to asking Siri for directions or having Alexa manage our calendar notifications, and these systems can also be used to automate tasks or settings. That is probably the most accessible illustration of a form of ambient computing.

Ambient computing involves a device performing tasks without direct commands — hence the “ambient,” or the concept of it being “in the background.” In ambient computing, the gap between human intelligence and artificial intelligence narrows considerably. Some of the technologies used to achieve this include motion tracking, wearables, speech-recognition software and gesture recognition. All of this serves to create an experience in which humans wish and machines execute.

The Internet of Things (IoT) has unlocked continuous connectivity and data transference, meaning devices and systems can communicate with each other. With a network of connected devices, it’s easy to envision a future in which human experiences are effortlessly supported by machines at every turn.

But ambient computing is not nearly as useful without AI, which provides the pattern recognition, helping software “learn” our norms and trends well enough to anticipate our routines and accomplish tasks that support our daily lives.

On an individual level, this is interesting and makes life easier. But as professionals and entrepreneurs, it’s important to see the broader market realities of how ambient computing and AI will support future innovation.

WhatsApp ramps up revenue with global launch of Cloud API and soon, a paid tier for its Business App

WhatsApp is continuing its push into the business market with today’s news it’s launching the WhatsApp Cloud API to all businesses worldwide. Introduced into beta testing last November, the new developer tool is a cloud-based version of the WhatsApp Business API — WhatsApp’s first revenue-generating enterprise product — but hosted on parent company Meta’s infrastructure.

The company had been building out its Business API platform over the past several years as one of the key ways the otherwise free messaging app would make money. Businesses pay WhatsApp on a per-message basis, with rates that vary based on the region and number of messages sent. As of late last year, tens of thousands of businesses were set up on the non-cloud-based version of the Business API, including brands like Vodafone, Coppel, Sears Mexico, BMW, KLM Royal Dutch Airlines, Iberia Airlines, Itau Brazil, iFood and Bank Mandiri, among others. This on-premise version of the API is free to use.

The cloud-based version, however, aims to attract a market of smaller businesses, and reduces the integration time from weeks to only minutes, the company had said. It is also free.

Businesses integrate the API with their backend systems, where WhatsApp communication is usually just one part of their messaging and communication strategy. They may also want to direct their communications to SMS, other messaging apps, emails, and more. Typically, businesses would work with a solutions provider like Zendesk or Twilio to help facilitate these integrations. Providers during the cloud API beta tests had included Zendesk in the U.S., Take in Brazil, and MessageBird in the E.U.
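
For developers, sending a message through the Cloud API amounts to a single HTTPS call against Meta’s Graph API. Below is a minimal TypeScript sketch of what that call looks like; the phone number ID, access token and Graph API version are placeholders, and the exact payload fields should be confirmed against Meta’s WhatsApp Cloud API documentation.

```typescript
// Minimal sketch: sending a text message through the WhatsApp Cloud API.
// Assumes a runtime with a global fetch (Node 18+ or a browser).
// PHONE_NUMBER_ID and ACCESS_TOKEN are placeholders, and the Graph API
// version in the URL should be checked against Meta's current docs.
const PHONE_NUMBER_ID = "YOUR_PHONE_NUMBER_ID";
const ACCESS_TOKEN = "YOUR_ACCESS_TOKEN";

async function sendWhatsAppText(to: string, body: string): Promise<void> {
  const res = await fetch(
    `https://graph.facebook.com/v13.0/${PHONE_NUMBER_ID}/messages`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${ACCESS_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        messaging_product: "whatsapp",
        to, // recipient phone number, digits only with country code
        type: "text",
        text: { body },
      }),
    }
  );
  if (!res.ok) {
    throw new Error(`Cloud API error ${res.status}: ${await res.text()}`);
  }
}

// Example: sendWhatsAppText("15551234567", "Your order has shipped!");
```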

During Meta’s messaging-focused “Conversations” live event today, Meta CEO Mark Zuckerberg announced the global, public availability of the cloud-based platform, now called the WhatsApp Cloud API.

“The best business experiences meet people where they are. Already more than 1 billion users connect with a business account across our messaging services every week. They’re reaching out for help, to find products and services, and to buy anything from big-ticket items to everyday goods. And today, I am excited to announce that we’re opening WhatsApp to any business of any size around the world with WhatsApp Cloud API,” he said.

He said the company believes the new API will help businesses, both big and small, connect with more people.

In addition to helping businesses and developers get set up faster than with the on-premise version, Meta says the Cloud API will help partners to eliminate costly server expenses and help them provide customers with quick access to new features as they arrive.

Some businesses may choose to forgo the API and use the dedicated WhatsApp Business app instead. Launched in 2018, the WhatsApp Business app is aimed at smaller businesses that want to establish an official presence on WhatsApp’s service and connect with customers. It provides a set of features that wouldn’t be available to users of the free WhatsApp messaging app, like support for automated quick replies, greeting messages, FAQs, away messaging, statistics, and more.

Today, Meta is also introducing new power features for its WhatsApp Business app that will be offered for a fee — like the ability to manage chats across up to 10 devices. The company will also provide new customizable WhatsApp click-to-chat links that help businesses attract customers across their online presence, including of course, Meta’s other applications like Facebook and Instagram.

These will be a part of a forthcoming Premium service for WhatsApp Business app users. Further details, including pricing, will be announced at a later date.

 

DOJ says it will no longer prosecute good-faith hackers under CFAA

The U.S. Justice Department announced Thursday it will not bring charges under federal hacking laws against security researchers and hackers who act in good faith.

The policy for the first time “directs that good-faith security research should not be charged” under the Computer Fraud and Abuse Act, a seismic shift away from its previous policy that allowed prosecutors to bring federal charges against hackers who find security flaws for the purpose of helping to secure exposed or vulnerable systems.

The Justice Department said that good-faith researchers are those who carry out their activity “in a manner designed to avoid any harm to individuals or the public,” and where the information is “used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or those who use such devices, machines, or online services.”

The Computer Fraud and Abuse Act, or CFAA, was enacted into law in 1986 and predates the modern internet. The federal law dictates what constitutes computer hacking — specifically “unauthorized” access to a computer system — at the federal level. But the CFAA has long been criticized for its outdated and vague language that does little to differentiate between good-faith researchers and hackers, and malicious actors who set out to extort companies or individuals or otherwise cause harm.

Last year the Supreme Court took its first look at the CFAA since the law came into force, and for the first time determined precisely what the CFAA’s reading of “unauthorized” access means under the law, and subsequently limited its scope, effectively eliminating an entire class of hypothetical scenarios — like violating a web service’s privacy policy, checking sports results from a work computer, and more recently scraping public web pages — under which federal prosecutors could have brought charges.

Now the Justice Department is ruling out, albeit a year on from the court’s ruling, bringing federal charges over these kinds of scenarios and instead focusing on cases where malicious actors deliberately break into a computer system.

The policy shift is not a legislative fix and could, just as the Justice Department did today, change in the future. It also does not protect good-faith hackers — or anyone else accused of hacking — from state computer hacking laws.

In a statement, U.S. deputy attorney general Lisa O. Monaco said: “The department has never been interested in prosecuting good-faith computer security research as a crime, and today’s announcement promotes cybersecurity by providing clarity for good-faith security researchers who root out vulnerabilities for the common good.”

Some critics may not accept that claim so willingly following the death of Aaron Swartz, who died by suicide in 2013 after he was charged under the CFAA for downloading 4.8 million articles and documents from academic subscription service JSTOR. Although JSTOR declined to pursue the case, federal prosecutors still brought charges accusing him of theft.

Since Swartz’s death, campaigners and lawmakers alike have pushed for “Aaron’s Law,” which would reform the CFAA and codify better protections for good-faith hackers.

Dig emerges from stealth to help organizations secure their data in public clouds

Dig, a Tel Aviv-based cloud data security startup, has emerged from stealth with an $11 million investment to help organizations protect data stored in public cloud environments.

It’s no secret that data is often the ultimate target for cybercriminals, yet so many organizations don’t have visibility, context or control over data stored in public cloud environments — like the ones run by Amazon, Google and Microsoft — according to Dig. That’s why the startup has developed a data detection and response (DDR) solution, which it claims can help enterprises to discover, protect and govern their cloud data in real time.

“Companies don’t know what data they hold in the cloud, where it is, or most importantly how to protect it. They have tools to protect endpoints, networks, APIs but nothing to actively secure their data in public clouds,” Dan Benjamin, Dig’s co-founder and chief executive, tells TechCrunch. Prior to founding Dig in October last year, Benjamin led multi-cloud security at Microsoft and mentored CTOs at Google Cloud for Startups.

“If you speak to data security teams in large organizations today, most of them work with manual reports and run manual scans. We help organizations analyze and understand how that data is being used,” he added.

Dig claims that, unlike existing solutions, it analyzes and responds instantly to threats to cloud data, triggering alerts on suspicious or anomalous activity and stopping attacks, data exfiltration and employee data misuse. The solution — a software-as-a-service app — discovers all data assets across public clouds and brings context to how they are used, and also tracks whether each data source meets compliance requirements like SOC 2 and HIPAA.

“Just the other week, we integrated with a large financial public American company, and after five minutes, we had alerts. What we discovered is that they had all financial reports being copied to an external AWS account that doesn’t belong to them,” Benjamin says. “We see stuff like this all of the time because no-one has real visibility into how this data is being used.”

Benjamin, who founded the startup alongside veteran entrepreneurs Ido Azran and Gad Akuka — the first letters of the co-founders’ names spell “Dig” — tells TechCrunch that Dig currently works with Microsoft Azure and AWS, with support for Google Cloud Platform coming soon. His ultimate goal, however, is to expand beyond public clouds to provide a solution to protect data wherever it sits within an organization.

“Data sits in five main locations for a typical enterprise: endpoints, email, on-premise, SaaS, and public clouds,” Benjamin says. “We only cover public clouds, but I believe that, eventually, customers will want a single platform that protects data wherever it is.”

With its $11 million seed round led by Team8, with participation from CrowdStrike, CyberArk and Merlin Ventures, Dig plans to grow its headcount from 30 to 50 by the end of the year, including in the U.S. It also plans to expand the product, with Benjamin noting that the startup “still has a lot to do” across discovery, context and threat protection.

New Relic enters the security market with its new vulnerability management service

New Relic, which has long been known for its observability platform, is entering the security market today with the launch of a new vulnerability management service. Aptly named New Relic Vulnerability Management, the service aggregates data from both its own native vulnerability detection system and third-party tools, giving security, DevOps, SecOps and SRE teams a single service for monitoring their software stack for vulnerabilities.

“Minimizing security risk across the entire software development life cycle is imperative — and we are seeing more pressure on DevOps to manage risk while making sure it doesn’t become a blocker to the pace of innovation,” said New Relic CEO Bill Staples. “New Relic Vulnerability Management delivers more value to engineers harnessing the power of observability with our platform approach, and accelerates our mission to help every engineer do their best work with data, not opinions.”

The company argues that one of its major differentiators is that this new tool can integrate with third-party security tools. This in turn should help teams prioritize which security risks to focus on (because there are always more than any team can handle), with the new service also helping them identify which actions to take to remediate those risks.

The new service is part of a series of announcements New Relic made at the CNCF’s KubeCon + CloudNativeCon conference and its own FutureStack event today. Other announcements include enhancements to the company’s application performance monitoring service (which now collects logs in context), new partners in its Instant Observability ecosystem (which now features more than 470 integrations), and a major new partnership with Microsoft, allowing Azure users to use New Relic as their default observability platform natively inside the Azure Portal.

Apollo GraphQL launches its Supergraph

The name kind of gives it away, but Apollo GraphQL has long focused on helping developers use the GraphQL query language for APIs to integrate data from a variety of services. Over the course of the last few years, it also worked with large enterprises to help them bring together data from a wide variety of sources into a single ‘supergraph,’ as the company likes to call it. Now, it is making these capabilities, which were previously the domain of large enterprises like Expedia, Walmart, and Zillow, available to anybody on its platform.

Apollo CEO and co-founder Geoff Schmidt wasn’t shy about what he thinks this announcement means when I talked to him ahead of today’s announcement. “We’ve been working on GraphQL since 2016, back when we were Meteor.js. But what we have to announce today is really why we built the company through all these years and through all these open-source projects,” he said. “It’s something that I think history will look at as being as big a deal as the database or the message bus or containerization — or maybe even the cloud itself.”

That’s a lot to live up to.

“The Supergraph is a whole new way to think about GraphQL and what it’s for and what it delivers,” Schmidt continued. “I think the key idea of the Supergraph is the graph of graphs. It’s how these individual graphs that people have been building come together into a new layer of the stack — a different way of building applications — something that is as significant for how we’re all going to use the stack in the future as the database was.”


Schmidt argues that as enterprises broke up their monolithic application architectures and moved to microservices, everything became so atomized that it now puts the burden on developers to piece everything back together when they want to build a new application on top of these systems.

At the core of the Supergraph are three projects. The first is the Apollo Router, a Rust-based runtime that processes GraphQL queries, plans and executes them across federated subgraphs, and returns the responses to the client. This router, the company says, is 10x faster than the old Apollo Gateway, which the company previously used for querying federated graphs. The second piece is a set of new capabilities for the free tier of Apollo Studio, the company’s tool for managing data sources. The free tier will now include schema checks to ensure a new schema won’t break existing applications, as well as a launch dashboard, previously available only to enterprise users, that provides visibility into the schema-checking and launch process. And the third piece is Apollo Federation 2, which launched in April and allows users to compose their subgraphs into a single Supergraph.

Schmidt stressed that the company isn’t trying to replicate data lakes for analytical use cases here, but rather to build a layer in the stack that allows developers to build new use cases.

“It’s not just how many pizzas that I sell, but can I order a pizza? You want to create something that’s almost like a virtual database — or a virtual server — that has objects that represent everything in a company: every customer, every product, every order, every like, every blog post — and you want to be able to ask questions like, ‘show me all the orders that this customer did,’ even though all that stuff lives in 1000 different services,” Schmidt explained.
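
To make that concrete, here is a hypothetical sketch of what a single supergraph query might look like from the client’s side: one request to the router, which resolves fields across customer, order and product subgraphs behind the scenes. The endpoint, schema and field names are invented for illustration; they are not Apollo’s API or any real deployment.

```typescript
// Hypothetical sketch: one GraphQL request to a supergraph router, which
// resolves fields served by separate federated subgraphs. The endpoint
// and schema are invented for illustration only.
const ROUTER_URL = "https://supergraph.example.com/graphql";

const query = /* GraphQL */ `
  query CustomerOrders($id: ID!) {
    customer(id: $id) {      # served by a "customers" subgraph
      name
      orders {               # served by an "orders" subgraph
        id
        total
        items {
          product { name }   # served by a "products" subgraph
        }
      }
    }
  }
`;

async function fetchCustomerOrders(id: string) {
  const res = await fetch(ROUTER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { id } }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data.customer; // one response, assembled by the router
}
```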

It’ll be interesting to see if the Supergraph can live up to Apollo’s hype. The company’s GraphQL client, server and gateway are currently being downloaded more than 17 million times a month, and the company says its products are being used in production by 30% of the Fortune 500. With the Supergraph, the company hopes to establish itself as a core part of the modern development stack.

CNCF launches a new program to help telcos adopt Kubernetes

At its KubeCon + CloudNativeCon conference this week, the Cloud Native Computing Foundation (CNCF) announced a new program that aims to help communication service providers and telcos adopt Kubernetes and other cloud native tools. Going forward, the CNCF will provide a new certification, the Cloud Native Network Function (CNF) Certification Program for Network Equipment Providers (NEPs), that certifies that these vendors follow cloud native best practices.

At first glance, that may sound like a lot of acronyms and yet another certification, but it’s an interesting move for the CNCF, which hasn’t always focused on telco users to the extent that it could have, and an acknowledgment that many of these enterprises are now starting to move away from their existing infrastructure as cloud native solutions reach the kind of maturity that’s necessary to run their networks. It also puts the CNCF back into competition with another open source foundation, the OpenStack Foundation, whose namesake project found a lot of its recent success among telcos.

“Moving to cloud native infrastructures has long been difficult for telecom providers who have transitioned to VNFs and found themselves with siloed resources and specialized solutions not built for the cloud,” said Priyanka Sharma, Executive Director of the Cloud Native Computing Foundation. “The CNF Certification program is designed to fill this gap by creating solutions optimized for cloud native environments. Some of the world’s largest telecom organizations, including Huawei, Nokia, T-Mobile, and Vodafone, already use Kubernetes and other cloud native technologies, and this program will make it easier for others to do the same.”

The new (and open-sourced) test suite will support any product that runs in a CNCF-certified Kubernetes environment and will run about 70 workload tests. Vendors will be able to self-verify their applications using this test suite and then submit the results to be considered for certification.

“Building, deploying, and operating Telco workloads across distributed cloud environments is complex,” said Tom Kivlin, Principal Cloud Architect at Vodafone. “It is important to adopt cloud native best practices as we evolve to achieve our goals for agility, automation, and optimization. The CNF Certification is a great tool with which we can measure and drive cloud native practices across our platforms and network functions. We look forward to working with our partners and the CNCF community to further develop and drive adoption of the CNF Certification Program.”

CyberConnect raises $15M Series A to put data back in the hands of users

One of the promises made by web3 entrepreneurs is putting data back in the hands of owners through decentralization. Singapore-based CyberConnect is among a handful of blockchain startups working to fulfill this vision, and it has recently closed a Series A financing round totaling $15 million.

The lead co-investor of the round is Animoca Brands, the Hong Kong-based company that has in recent years risen from an underdog in game development to an investment juggernaut in the web3 world. The other co-investor is Sky9 Capital, a Shanghai-based venture capital firm founded by Ron Cao, who is known for helping Lightspeed Venture Partners set up shop in China back in the day.

“In web2, companies with the largest social network own users’ social graphs and build walls around them to stem competition and advance corporate interests,” says CyberConnect CEO and cofounder Wilson Wei.

As such, Wei and his team are building a social graph “protocol,” the underlying rules that allow data to be shared between computers and applications, which in web3’s case works without a centralized agent like Facebook. The end goal is that users can travel across web3 platforms with their followings and followers.

An app experience powered by CyberConnect will look like this: Users connect their crypto wallet — which has become a universal gateway to any web3 app — to a social platform, upon which they will be shown all their existing connections. They will get recommendations of user addresses to follow, based on CyberConnect’s indexing. Once they follow someone, that piece of information will be added to CyberConnect’s network and become “portable and self-sovereign.”
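
A rough sketch of that flow, written against an invented pseudo-SDK, might look like the following. None of these type or function names are CyberConnect’s real API; they only illustrate the idea of connecting a wallet and carrying a social graph across apps.

```typescript
// Hypothetical pseudo-SDK sketch of the flow described above. None of
// these type or function names are CyberConnect's real API; they only
// illustrate "connect a wallet, carry your social graph with you."
interface Connection {
  address: string; // the followed wallet address
  origin: string;  // the app where the follow was originally created
}

interface SocialGraphProtocol {
  getFollowings(wallet: string): Promise<Connection[]>;
  recommend(wallet: string): Promise<string[]>;
  follow(wallet: string, target: string): Promise<void>;
}

async function onWalletConnected(graph: SocialGraphProtocol, wallet: string) {
  // 1. The user immediately sees connections created in other apps.
  const existing = await graph.getFollowings(wallet);
  console.log(`Existing connections: ${existing.length}`);

  // 2. The protocol's indexing suggests addresses to follow.
  const [suggestion] = await graph.recommend(wallet);

  // 3. A new follow is written back to the shared network, so it travels
  //    with the user to any other app built on the same protocol.
  if (suggestion) await graph.follow(wallet, suggestion);
}
```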

To date, CyberConnect has supported 23 projects including Project Galaxy and Mask Network, reaching a total of 710,000 users.

Other companies are building similar infrastructure to allow follower interoperability, such as Lens, which is operated by Aave, a decentralized lending protocol backed by Blockchain Capital.

CyberConnect’s solution, Wei tells TechCrunch, consists of two components. Similar to Lens, it offers a software development kit (SDK), a piece of software for developers to create custom apps that let end users manage their social graphs, as well as a “social data network” that aggregates users’ behavior in web3, such as what tokens and NFTs they bought.

Rather than using smart contracts as Lens does, CyberConnect built its SDK on top of the InterPlanetary File System (IPFS), a peer-to-peer data storage and sharing network, and Ceramic, a network that manages mutable data without centralized servers, which Wei claims is a more “economic and gas-efficient solution.” Smart contracts are computer programs that execute automatically according to the terms of a contract and incur “gas fees,” the payments made by users to compensate for the computing power required to process transactions.

“Smart contract-based protocols are creating value from scarce items while any data stored on-chain costs a nontrivial amount of gas fee. There are only 10,000 NFTs in one collection and a limited amount of bitcoins,” Wei explains.

“In contrast, social context welcomes data abundance. There’s only an ever-increasing number of new users, new connections, and new content and that data will be by nature dynamic and need constant updates.”

CyberConnect plans to generate revenue through the social data network, which includes different participants such as data contributors, indexers and recommenders, curators, and users. The network will be permissionless, meaning anyone can join, and will include incentive mechanisms revolving around query fees, according to Wei.

The startup, headquartered in Palo Alto, operates with a team of 27 across the US, China, Canada and Europe.

Several venture investment firms, including Dragonfly, have recently warned web3 startups to brace for a cooling industry in the wake of the recent crypto market crash and wider macroeconomic complications. Wei is undeterred, saying “bear markets are a great time for us to focus on building.”

“As a serial entrepreneurial team, with more than seven years in social, Web3, and blockchain, previous experiences taught us that it is crucial to keep building during the downturns,” he says. “It will also be easier for truly visionary and value-creating projects to be properly recognized as the noise will die down together with the market hype.”

Facebook and Twitter still can’t contain the Buffalo shooting video

Ten people were murdered this weekend in a racist attack on a Buffalo, New York supermarket. The 18-year-old white supremacist shooter livestreamed his attack on Twitch, the Amazon-owned video game streaming platform. Even though Twitch removed the video two minutes after the violence began, it was still too late — now, gruesome footage of the terrorist attack is openly circulating on platforms like Facebook and Twitter, even after the companies vowed to take down the video.

On Facebook, some users who flagged the video were notified that the content did not violate its rules. The company told TechCrunch that this was a mistake, adding that it has teams working around the clock to take down videos of the shooting, as well as links to the video hosted on other sites. Facebook said that it is also removing copies of the shooter’s racist screed, and content that praises him.

But when we searched a term as simple as “footage of buffalo shooting” on Facebook, one of the first results featured a 54-second screen recording of the terrorist’s footage. TechCrunch encountered the video an hour after it had been uploaded and reported it immediately. The video wasn’t taken down until three hours after posting, when it had already been viewed over a thousand times.

In theory, this shouldn’t happen. A representative for Facebook told TechCrunch that it added multiple versions of the video, as well as the shooter’s racist writings, to a database of violating content, which helps the platform identify, remove and block such content. We asked Facebook about this particular incident, but the company did not provide additional details.

“We’re going to continue to learn, to refine our processes, to ensure that we can detect and take down violating content more quickly in the future,” Facebook integrity VP Guy Rosen said in response to a question about why the company struggled to remove copies of the video in an unrelated call on Tuesday.

Reposts of the shooter’s stream were also easy to find on Twitter. In fact, when we typed “buffalo video” into the search bar, Twitter suggested searches like “buffalo video full video graphic,” “buffalo video leaked” and “buffalo video graphic.”


We encountered multiple videos of the attack that have been circulating on Twitter for over two days. One such video had over 261,000 views when we reviewed it on Tuesday afternoon.

In April, Twitter enacted a policy that bans individual perpetrators of violent attacks from Twitter. Under this policy, the platform also reserves the right to take down multimedia related to attacks, as well as language from terrorist “manifestos.”

“We are removing videos and media related to the incident. In addition, we may remove Tweets disseminating the manifesto or other content produced by perpetrators,” a spokesperson from Twitter told TechCrunch. The company called this “hateful and discriminatory” content “harmful for society.”

Twitter also claims that some users are attempting to circumvent takedowns by uploading altered or manipulated content related to the attack.

In contrast, video footage of the weekend’s tragedy was relatively difficult to find on YouTube. Basic search terms for the Buffalo shooting video mostly brought up coverage from mainstream news outlets. With the same search terms we used on Twitter and Facebook, we were able to identify a handful of YouTube videos with thumbnails of the shooting that turned out to be unrelated content once clicked through. On TikTok, TechCrunch identified some posts that directed users to websites where they could watch the video, but didn’t find the actual footage on the app in our searches.

Twitch, Twitter and Facebook have stated that they are working with the Global Internet Forum to Counter Terrorism to limit the spread of the video. Twitch and Discord have also confirmed that they are working with government authorities that are investigating the situation. The shooter described his plans for the shooting in detail in a private Discord server prior to the attack.

According to documents reviewed by TechCrunch, the Buffalo shooter decided to broadcast his attack on Twitch because a 2019 anti-semitic shooting at Halle Synagogue remained live on Twitch for over thirty minutes before it was taken down. The shooter considered streaming to Facebook, but opted not to use the platform because he thought users needed to be logged in to watch livestreams.

Facebook has also inadvertently hosted livestreams of mass shootings that evaded algorithmic detection. The same year as the Halle Synagogue shooting, 50 people were killed in an Islamophobic attack on two mosques in Christchurch, New Zealand, which streamed for 17 minutes. At least three perpetrators of mass shootings, including the suspect in Buffalo, have cited the livestreamed Christchurch massacre as a source of inspiration for their racist attacks.

Facebook noted the day after the Christchurch shootings that it had removed 1.5 million videos of the attack, 1.2 million of which were blocked upon upload. Of course, this raised the question of why Facebook was unable to immediately detect the other 300,000 videos, a 20% failure rate.

Judging by how easy it was to locate videos of the Buffalo shooting on Facebook, it seems the platform still has a long way to go.

How to evolve your DTC startup’s data strategy and identify critical metrics

Direct-to-consumer companies generate a wealth of raw transactional data that needs to be refined into metrics and dimensions that founders and operators can interpret on a dashboard.

If you’re the founder of an e-commerce startup, there’s a pretty good chance you’re using a platform like Shopify, BigCommerce or WooCommerce, and one of the dozens of analytics extensions like RetentionX, Sensai metrics or ProfitWell that provide off-the-shelf reporting.

At a high level, these tools are excellent for helping you understand what’s happening in your business. But in our experience, we’ve learned that you’ll inevitably find yourself asking questions that your off-the-shelf extensions simply can’t answer.

Here are a couple of common problems that you or your data team may encounter with off-the-shelf dashboards:

  • Charts are typically based on a few standard dimensions and don’t provide enough flexibility to examine a certain segment from different angles to fully understand it.
  • Dashboards have calculation errors that are impossible to fix. It’s not uncommon for such dashboards to report the pre-discounted retail amount for orders in which a customer used a promo code at checkout. In the worst cases, this can lead founders to drastically overestimate their customer lifetime value (LTV) and overspend on marketing campaigns, as the sketch after this list illustrates.
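
To illustrate the second problem, here is a minimal sketch of the gap between summing pre-discount list prices and summing what customers actually paid. The order shape and figures are invented for illustration and don’t correspond to any particular platform’s schema.

```typescript
// Minimal sketch of the discount pitfall above. The Order shape and the
// numbers are invented for illustration, not any platform's actual schema.
interface Order {
  customerId: string;
  listPrice: number; // pre-discount retail amount
  discount: number;  // amount taken off at checkout (promo codes, etc.)
}

// What a naive dashboard does: sums list prices and overstates revenue.
function naiveLtv(orders: Order[], customerId: string): number {
  return orders
    .filter((o) => o.customerId === customerId)
    .reduce((sum, o) => sum + o.listPrice, 0);
}

// What you actually want: revenue net of discounts.
function netLtv(orders: Order[], customerId: string): number {
  return orders
    .filter((o) => o.customerId === customerId)
    .reduce((sum, o) => sum + (o.listPrice - o.discount), 0);
}

const orders: Order[] = [
  { customerId: "c1", listPrice: 100, discount: 20 },
  { customerId: "c1", listPrice: 60, discount: 15 },
];

console.log(naiveLtv(orders, "c1")); // 160 -- overstated
console.log(netLtv(orders, "c1"));   // 125 -- what the customer actually paid
```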

Even when founders are fully aware of the shortcomings of their data, they can find it difficult to take decisive action with confidence.

We’re generally big fans of plug-and-play business intelligence tools, but they won’t scale with your business. Don’t rely on them after you’ve outgrown them.

Evolving your startup’s data strategy

Building a data stack costs much less than it did a decade ago. As a result, many businesses are building one and harnessing the compounding value of these insights earlier in their journey.

But it’s no trivial task. For early-stage founders, the opportunity cost of any big project is immense. Many early-stage companies find themselves in an uncomfortable situation — they feel paralyzed by a lack of high-fidelity data. They need better business intelligence (BI) to become data-driven, but they don’t have the resources to manage and execute the project.

This leaves founders with a few options:

  • Hire a seasoned data leader.
  • Hire a junior data professional and supplement them with experienced consultants.
  • Hire and manage experienced consultants directly.

All of these options have merits and drawbacks, and any of them can be executed well or poorly. Many companies delay building a data warehouse because of the cost of getting it right — or the fear of messing it up. Both are valid concerns!

Start by identifying your critical metrics