Onboarding and automation: What fintechs can learn from big banks

When the economy is tight, financial institutions face several mutually reinforcing challenges. The temptation for customers to act in bad faith increases, which in turn brings heightened regulatory scrutiny and the risk of massive fines for non-compliance.

The urge to reduce costs imperils continued investment in innovative financial products and services, while at the same time customers have higher expectations than ever for easy, effective, and great experiences.

On paper, this looks like a slam-dunk scenario for the burgeoning industry of new nimble fintech providers. It’s not – unless those fintechs can learn some lessons from established firms about customer onboarding. Those lessons ultimately come down to the marriage of process automation and a data fabric.

Why focus on onboarding?

The onboarding experience is the customer’s first impression of the organization and sets the tone for the relationship. It’s also the point at which the organization must accurately determine who the customer is and the true intent of their business. Fast and accurate customer onboarding is always important, but in an economic downturn, it becomes doubly so — investors rapidly lose patience for startups that can’t deliver growth and margin at the same time as regulators crack down on risk across the financial sector.

Effective onboarding is fintech’s Achilles’ heel. Look at Wise, fined $360,000 by its Abu Dhabi regulator. Or the UK’s Financial Conduct Authority fining GT Bank £7.8 million for anti-money laundering (AML) failures. Or Solaris, the German banking-as-a-service (BaaS) provider barred from onboarding any new clients without government approval.

The inability of fintechs to properly manage the data and processes required for accurate onboarding may account for much of the decline in investment in 2022.

Data fabric and process automation improve onboarding

Onboarding starts with verified data, things like a name, an address, a tax ID, details of the proposed business, where the money is coming from, and where it’s going. The problem is that financial institutions are big, complicated organizations with myriad IT systems and applications holding siloed sets of data. These legacy systems across various products, customer types, and compliance programs don’t integrate well.

That means there’s an incomplete view of the matter at hand, and trying to complete that view usually means manual cutting-and-pasting between systems and spreadsheets. The opportunity for human error alone should be enough to strike fear into the heart of any bank manager.

A data fabric — a technology that unifies all enterprise data without moving it from systems of record — is the answer. The data fabric creates a virtual data layer where mutable enterprise data, and the relationships between those data, can be managed in a simple low-code environment. The data is secured at row level, meaning only the people who should see it can see it, and only when they should see it. The data may be on-premises, in a cloud service, or in multi-cloud environments.

With a data fabric approach, you can combine business data in entirely new ways. This means you not only have a 360-degree view of the customer (their identity, history and products), but you can also glean new insights from seeing your enterprise data holistically.

Onboarding and automation: What fintechs can learn from big banks by Walter Thompson originally published on TechCrunch

Making foundation models accessible: The battle between closed and open source AI

The massive explosion of generative AI models for text and image has been unavoidable lately. As these models become increasingly capable, “foundation model” is a relatively new term being tossed around. So what is a foundation model?

The term remains somewhat vague. Some define it by the number of parameters, and therefore how large a neural network is; others by the number of unique and hard tasks the model can perform. But is making AI models ever larger, or having them tackle multiple tasks, really that exciting? Strip away the hype and marketing language, and what is truly exciting about this new generation of AI models is this: They fundamentally change the way we interface with computers and data. Think about companies like Cohere, Covariant, Hebbia and You.com.

We’ve now entered a critical phase of AI where who gets to build and serve these powerful models has become an important discussion point, particularly as ethical issues begin to swirl, like who has a right to what data, whether models violate reasonable assumptions of privacy, whether consent for data usage is a factor, what constitutes “inappropriate behavior” and much more. With questions such as these on the table, it is reasonable to assume that those in control of AI models will perhaps be the most important decision-makers of our time.

Is there a play for open source foundation models?

Because of the ethical issues associated with AI, the call to open source foundation models is gaining momentum. But building foundation models isn’t cheap: They require tens of thousands of state-of-the-art GPUs and many machine learning engineers and scientists. To date, building foundation models has been accessible only to the cloud giants and extremely well-funded startups sitting on war chests of hundreds of millions of dollars.

Almost all the models and services built by these few self-selected companies have been closed source. But closed source entrusts an awful lot of power and decision-making to a limited number of companies that will define our future, which can be quite unsettling.

The open sourcing of Stable Diffusion by Stability AI, however, posed a serious threat to the foundation model builders determined to keep the secret sauce to themselves. Developer communities around the world have cheered Stability’s move because it liberates these systems, putting control in the hands of the masses rather than a select few companies that may be more interested in profit than in what’s good for humanity. It has already changed the way insiders think about the current paradigm of closed source AI systems.

Potential hurdles

The biggest obstacle to open sourcing foundation models continues to be money. For open source AI systems to be profitable and sustainable, they still require tens of millions of dollars to be properly run and managed. Though this is a fraction of what the large companies are investing in their efforts, it’s still quite significant to a startup.

Stability AI’s attempt at open sourcing GPT-Neo and turning it into a real business fell flat, as it was outclassed by companies like OpenAI and Cohere. The company now has to deal with the Getty Images lawsuit, which threatens to distract it and further drain resources, both financial and human. Meta’s counter to closed source systems, LLaMA, has poured gas on the open source movement, but it’s still too early to tell whether the company will live up to its commitment.

Unlocking the M&A code: 5 factors that can make (or break) a deal

Mergers and acquisitions (M&A) have long been a driving force for companies seeking exponential growth, gaining market share and creating shareholder value. History has shown that well-executed M&A strategies can be transformative and yield impressive results.

For instance, Disney’s acquisition of Pixar in 2006 revitalized the animation giant’s fortunes, though market analysts were skeptical of this move when it was announced. In a conversation with CNBC 15 years later, Bob Iger stated it was perhaps the best acquisition decision during his time at Disney. “It put us on the path to achieving what I wanted to achieve, which is scale when it comes to storytelling,” were his exact words.

But the Disney-Pixar marriage isn’t the only one that proved to be a massive growth engine. Facebook’s purchase of Instagram in 2012 allowed the social media behemoth to dominate the photo-sharing space. There are many such examples in the history of businesses around the world.

But all’s not rosy in the world of M&A. It is a complex and substantially risky decision, not for the faint-hearted. It is essential to approach the decision and process with diligence and forethought.

Over the years, with experience navigating the complicated world of M&A, including eight acquisitions in just the past few years, I have identified five indispensable elements of a successful mergers and acquisitions journey.

Be watchful of revenue synergies 

One of the critical drivers of a successful acquisition is the ability to achieve revenue synergies. However, what’s more important is not to assume this synergy will automatically occur just because it seems feasible.

Making a decision about consolidated revenue potential after the M&A involves carefully analyzing the potential for growth and gaining clear visibility into how to maximize synergy. Consider acquiring companies with high sales velocity and exponential growth potential to maximize success. Analyze the target’s product offerings, customer base and sales channels to identify cross-selling, upselling and market expansion opportunities.

For instance, in 2013, PayPal acquired Braintree, a payments company that owned the mobile payment service Venmo. It was a strategic and wise move at a time when digital payments were just taking off across the globe. Now in 2023, PayPal is relying on Venmo to drive the adoption and usage of the company’s digital payments services. The two operations are expected to converge by next year. This acquisition has enabled PayPal to tap into the growing peer-to-peer payments market and strengthen its revenue streams.

Don’t let refactoring throw cold water on your go-to-market strategy

Tech CEOs often make the mistake of assuming that a product will seamlessly integrate into their existing tech stack, especially in a tuck-in acquisition. However, this may not always be the case.

Before making your decision about the acquisition, take the time to evaluate the target company’s go-to-market (GTM) strategy and the ease of finding, buying and deploying their products. Focus on creating a new integrated version in the future to give yourself a longer runway to iron out any issues. This allows customers and employees to see a roadmap for future success.

Ask Sophie: Can I apply for an EB-1A without first getting an O-1A?

Here’s another edition of “Ask Sophie,” the advice column that answers immigration-related questions about working at technology companies.

“Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.”

TechCrunch+ members receive access to weekly “Ask Sophie” columns; use promo code ALCORN to purchase a one- or two-year subscription for 50% off.


Dear Sophie,

What do you think about applying for an EB-1A straight away without first using the O-1A as a stepping stone?

Thanks!

— Extraordinary Engineer in Escondido

Dear Extraordinary,

Yes, it’s definitely possible to apply for an EB-1A extraordinary ability green card without first using the O-1A extraordinary ability visa as a stepping stone! Allow me to give you some context so that you can think about whether this path could be best for you.

In general, O-1A visa holders typically have an easier time getting an EB-1A compared to people who are currently outside the U.S. or those with another type of immigration status in the U.S. such as an H-1B. To hear more about the EB-1A requirements, check out my podcast on Extraordinary Ability Bootcamp.

This is simply because the eligibility requirements for the O-1A and EB-1A overlap and those who have gone through the exercise of preparing an O-1A visa have already gathered a tremendous amount of paperwork and letters of recommendation evidencing their success.

However, I know from personal experience that many individuals can go directly to an EB-1A green card while on visas other than the O-1A, or even from outside the United States. That means the EB-1A is possible for individuals currently working temporarily in the U.S. on visas such as the E-2 treaty investor visa, the E-3 visa for Australian professionals, the H-1B specialty occupation visa, the L-1 visa for multinational transferees and the TN (Treaty National) visa for Canadian and Mexican professionals.

Please keep in mind that if you are traveling on a visa that requires nonimmigrant intent (such as an F-1 or B-1/B-2), going straight for an EB-1A could be a blocker to subsequent U.S. entries, because it would be your burden at entry to demonstrate to immigration officials abroad and at U.S. ports of entry that you intend to eventually return to your home country. Starting a green card process (especially by self-petitioning an I-140 in the EB-1A category) is likely to be considered strong evidence of immigrant intent.

The ultimate “golden ticket”

Because the EB-1A green card is a first-priority employment-based green card, the wait time for this green card category is typically the shortest compared to other employment-based green cards.

According to the Visa Bulletin for May 2023, the EB-1 category is the only one that has green card numbers currently available for all countries except China and India. Both the U.S. Department of State, which oversees consular processing outside the U.S., and U.S. Citizenship and Immigration Services (USCIS), which adjudicates visa and green card applications and oversees the process within the U.S., rely on the Visa Bulletin to determine when green card numbers are available. Most of the time, the Visa Bulletin issued by the State Department mirrors the one issued by USCIS.

3 key metrics for cybersecurity product managers

The conventional product management wisdom suggests that one of the responsibilities of a product leader is to track and optimize metrics — quantitative measurements that reflect how people benefit from a specific solution. Anyone who has read product management books, attended workshops or even simply gone through an interview knows that what is not measured cannot be managed.

The practice of product management is, however, much more nuanced. Context matters a lot, and the realities of different organizations, geographies, cultures and market segments heavily influence what can be measured and what actions can be taken based on these observations. In this article, I look at cybersecurity product management and how the metrics product leaders are tempted to track and report on may not be what they seem.

Detection accuracy

Although not all cybersecurity products are designed to generate detections, many do. Detection accuracy is a metric that applies to security tooling that triggers alerts notifying users that a specific behavior has been detected.

Two types of metrics are useful to track in the context of detection accuracy:

  • False positives (a false alarm, when the tool triggers a detection on normal behavior).
  • False negatives (a missed attack, when the tool misidentifies an attack as normal behavior and does not trigger a detection).
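To make these two definitions concrete, here is a minimal sketch, with invented example data, of how false positive and false negative rates could be computed from ground-truth labels and a tool’s alert decisions:

```python
def detection_error_rates(actual, alerted):
    """Compute false positive and false negative rates.

    actual:  True where the event really was an attack, False where benign.
    alerted: True where the tool raised a detection.
    """
    false_positives = sum(1 for a, d in zip(actual, alerted) if not a and d)
    false_negatives = sum(1 for a, d in zip(actual, alerted) if a and not d)
    benign = actual.count(False)
    attacks = actual.count(True)
    return {
        "false_positive_rate": false_positives / benign if benign else 0.0,
        "false_negative_rate": false_negatives / attacks if attacks else 0.0,
    }

# Hypothetical week of events: four benign, two real attacks
actual  = [False, False, True, False, True, False]
alerted = [True,  False, True, False, False, False]

rates = detection_error_rates(actual, alerted)
print(rates)  # one false alarm among four benign events, one missed attack of two
```

Tracking both rates over time, rather than a single blended “accuracy” number, is what lets a product team see whether a tuning change traded missed attacks for fewer false alarms.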

Security vendors face a serious and, I dare say, impossible-to-win challenge: reducing the number of false positives and false negatives and bringing both as close to zero as possible.

The reason it is impossible to accomplish this is that every customer’s environment is unique and applying generic detection logic across all organizations will inevitably lead to gaps in security coverage.

Product leaders need to keep in mind that false positives make it more likely that a real, critical detection will be missed, while false negatives mean that the product is not doing the job the tool was bought to do.

Conversion rate

Conversion rate is one of the most important metrics that companies, and consequently product teams, obsess over. This metric tracks the percentage of all users or visitors who take a desired action.

Who owns conversions in the organization will depend upon who can influence the outcome. For example:

  • If the product is fully sales-led and whether the deal gets closed is in the hands of sales, then conversion is owned by sales.
  • If the product is fully product-led and whether a free user becomes a paying customer is in the hands of product, then conversion is owned by marketing and product teams (marketing owns the sign-up on the website, product owns in-app conversion).

Generative AI and copyright law: What’s the future for IP?

In a guidance document recently released by the U.S. Copyright Office, the agency attempts to clarify its stance on AI-generated works and their eligibility for copyright protection.

The guidance emphasizes the importance of human authorship and outlines how the office evaluates works containing AI-generated content to determine whether the AI contributions are the result of “mechanical reproduction” or an author’s “own original mental conception.”

The Copyright Office will not register works whose traditional elements of authorship are produced solely by a machine, such as when an AI technology receives a prompt from a human and generates complex written, visual or musical works in response. According to the Office, in these cases, the AI technology, rather than the human user, determines the expressive elements of the work, making the generated material ineligible for copyright protection.

However, a work containing AI-generated material may still be eligible for copyright protection if it also contains sufficient human authorship. Examples include a human selecting or arranging AI-generated content in a creative way or an artist modifying AI-generated material to the extent that the modifications meet the standard for copyright protection. In these cases, copyright protection only applies to the human-authored aspects of the work.

We are seeing the emergence of competing interests come to light between authors, AI companies and the general public.

The guidance also outlines the responsibilities of copyright applicants to disclose the use of AI-generated content in their works, providing instructions on submitting applications for works containing AI-generated material and advising on correcting a previously submitted or pending application. The Copyright Office emphasizes the need for accurate information regarding AI-generated content in submitted works and the potential consequences of failing to provide such information.

In light of the Office’s guidance, AI companies are updating their policies. OpenAI’s Terms of Use grant users “all right, title and interest in and to Output,” which it defines as content “generated and returned by the Services based on the [user] Input.” However, OpenAI restricts its users from “represent[ing] that output from the Services was human-generated when it is not,” suggesting that ChatGPT’s users must comply with the Copyright Office’s requirement of honest disclosure of AI use.

Hidden in plain sight: 5 red flags for investors

After viewing thousands of presentations and pitch decks over many years, even the most experienced angel investors — and VCs — can overlook red flags that are subtle and not immediately apparent. I know this from firsthand experience: Along with many wins there are some investments I wish I’d turned down. I missed the red flags.

So that you minimize the likelihood of learning the hard way, what follows are my top subtle red flags angel investors should heed when evaluating a potential investment. By staying vigilant and knowing what to look for, you can make informed decisions and stay away from opportunities that you determine aren’t worth the risk.

An out-of-balance team

A key factor when you’re considering an investment is whether business expertise and technical know-how of the founding team are in balance.

I’ve heard pitches from many life science companies with deeply credentialed and innovative technical founders but absolutely no business expertise on the team. Conversely, I’ve seen companies filled with superb business professionals who are deficient in technical prowess. A founding team must always have relevant, gifted technical expertise to be viable.

Without a fine-tuned balance of business acumen and technical ingenuity, a startup may struggle to develop its game-changing product, bring it to market, scale the business and attract customers and investors.

Frequent staff turnover

Frequent and high turnover is often a red flag for investors as it usually indicates instability and internal conflicts within the founding team. Turnover disrupts the company’s operation, culture and growth trajectory. It’s a red flag that drama is consuming the company while its mission is mostly sidelined. A revolving door showing that the company can’t retain top talent will negatively impact its financial prospects, both short and longer term.

Instability in a founding team can show up in ego-driven abrasive management, favoritism and unfair compensation — issues and inequity that will cause people to leave.

Tweaks in leadership are normal and healthy as a company grows. But when these changes occur too frequently, investors should pay particular attention as it often suggests deeper issues in the business.

Western sanctions against Russia: Tips for tech companies managing compliance risk

As the war in Ukraine rages on, authorities are cracking down on the smuggling of U.S. technology in support of Russia’s war effort, an initiative with implications for the tech industry. One significant example is Russia’s drone program: A December 2022 exposé described U.S. chips, circuit boards and amplifiers found in downed Russian drones, and mapped part of the supply chain trafficking such items to Russia in spite of Western sanctions.

This has prompted broader concerns regarding the diversion of Western technology to Russia in support of illicit end uses, such as the Russian government’s use of facial recognition technology to crack down on dissidents.

In response to this, the United States and its partners recently imposed new sanctions against Russia to coincide with the one-year anniversary of the invasion of Ukraine, including expanded export controls over drone components, electronics, industrial equipment, and other items. The U.S. government followed this up with an advisory warning companies of the risk of third parties diverting their products to Russia.

Suppliers of electronics, drone components, and other sanctioned items face the risk that third parties will divert their products to Russia’s defense industrial base or to the battlefield in Ukraine, given the Russian military’s continued demand for battlefield equipment. Companies can mitigate this risk by conducting due diligence on counterparties and by auditing sales channels.

Overview of Russia sanctions

The United States and its partners (including the United Kingdom, the European Union, Canada, Australia, and Japan) have imposed a range of sanctions and export controls against Russia, prohibiting, among other things:

  • dealings with restricted parties (such as major banks, oligarchs and oligarch-owned companies, and companies in Russia’s defense industrial base);
  • new investment in Russia; and
  • exports to Russia of certain items, including a broad range of electronics, drone components, software, sensors and lasers, marine equipment, aviation and aerospace equipment, power supplies, and industrial equipment.

In particular, U.S. export controls can have worldwide reach, applying to all U.S.-origin items, wherever located; non-U.S. items incorporating more than a “de minimis” level of “controlled” U.S. content; and non-U.S. items that are the “direct product” of certain U.S. technology or software.

Violations of sanctions and export controls carry stiff penalties, including civil penalties of up to the greater of $353,534 (annually adjusted for inflation) or twice the value of the transaction, and criminal penalties of up to $1 million and/or 20 years’ imprisonment.

Concern over diversion of items to Russia

U.S. officials are deeply concerned over the ongoing diversion to Russia of items restricted under sanctions, and have made it a policy focus. This concern is reflected in the March 2023 advisory noted above, in which the U.S. Department of Justice, the U.S. Department of the Treasury, and the U.S. Department of Commerce jointly warned industry of the risk of third-party intermediaries seeking to procure items on Russia’s behalf, identifying certain red flags to note.

Are you spending too much on paid acquisition?

When scaling a paid acquisition channel, you should constantly question whether you’re spending in the most efficient way possible. If you’re scaling spending across various channels, it’s more than likely that you’re facing rising costs. But how do you know where and when to draw the line?

In this short article, I’ll discuss when to start measuring diminishing returns and how to use a simple regression analysis to find optimal spending levels.

Weekly performance trends

If you’re scaling any paid acquisition channel by 5-10x weekly, it becomes important to keep a pulse on the following metrics:

  • Cost per thousand impressions (CPM)
  • Customer acquisition cost (CAC)
  • Ad frequency (for paid social)

As paid spend scales, the number of impressions being served naturally increases, which causes CPMs to rise. If your CPMs are rising, your CACs and ad frequency are usually rising as a byproduct. A spreadsheet with those metrics laid out on a weekly basis will help you identify large upticks in costs, which can then guide your future budget allocations.
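Both cost metrics are simple ratios over raw channel data. Here’s a minimal sketch, using invented weekly numbers, of how CPM and CAC could be derived and compared week over week:

```python
def cpm(spend, impressions):
    """Cost per thousand impressions."""
    return spend / impressions * 1000

def cac(spend, new_customers):
    """Customer acquisition cost."""
    return spend / new_customers

# Hypothetical channel data while scaling weekly spend 5x
weeks = [
    {"spend": 10_000, "impressions": 2_000_000, "new_customers": 250},
    {"spend": 50_000, "impressions": 8_000_000, "new_customers": 900},
]

for i, w in enumerate(weeks, start=1):
    print(f"week {i}: CPM=${cpm(w['spend'], w['impressions']):.2f}, "
          f"CAC=${cac(w['spend'], w['new_customers']):.2f}")
```

In this made-up example, a 5x jump in spend raises CPM from $5.00 to $6.25 and CAC from $40 to roughly $56 — exactly the kind of uptick a weekly spreadsheet view is meant to surface.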

Regression analysis

If you’re looking to get analytical and have a minimum of 90 days of data at varying levels of spending, a regression analysis is your answer. What is regression analysis? In non-technical terms, it’s a way to measure the relationship of one variable to another. This empowers marketers to understand how two marketing metrics relate to one another, such as affiliates signed and conversions, or revenues and paid spend.

What’s great about this kind of analysis is that it provides a clear depiction of your optimal expenditure at the paid channel level. During my days at Postmates, we scaled our driver acquisition budget from less than $50,000/month to $3M/month, and had to run regression analyses on a regular cadence to ensure optimal spending per channel and geography. Below is a quick summary of how to set up a regression analysis with the following inputs at a weekly level:

  • Spend
  • Customer acquisition cost (CAC)

Example data inputs for regression analysis. Image: Jonathan Martinez

Once you have this data in your spreadsheet (I’m using Google Sheets here), highlight all the data, open the Insert drop-down and choose Chart. In the Chart type area on the right pane of the screen, scroll until you find the Scatter option and select it. Add “CAC” to the X-axis and “Spend” to your Series. Finally, go to the Customize tab, find Series, click on it, and scroll down to find the Trendline toggle.
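If you would rather compute the trendline directly than read it off a chart, the same fit is a one-function job in code. This is a sketch with invented weekly numbers, not data from any real campaign:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept) of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical weekly observations: spend ($) and the CAC ($) seen at that spend
spend = [10_000, 20_000, 30_000, 40_000, 50_000]
cac = [40.0, 44.0, 50.0, 58.0, 68.0]

slope, intercept = linear_fit(spend, cac)
print(f"CAC rises about ${slope * 10_000:.2f} for every extra $10k of weekly spend")
```

The fitted slope tells you how much CAC rises per incremental dollar of weekly spend; fitting separate lines over low- and high-spend ranges makes the point of diminishing returns visible.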

From there, you should have something that looks like what I’ve created below:

Acquisition, retention, expansion: Why SaaS founders must understand GDR and NDR

Whether it’s the coffee shop down the street, a mobile app on your phone, or software used at work, any long-term minded and customer-centric company is typically focused on:

  • Acquisition: Attracting new customers
  • Retention: Maintaining existing customer relationships and preventing churn
  • Expansion: Deepening and broadening existing customer relationships through cross- and up-sell

Companies aspire to have direct, long-term, and recurring customer relationships with high retention and expansion because these characteristics lead to more predictable revenues and profits. Predictable businesses are more durable, easier to manage, and typically rewarded with higher valuations than unpredictable ones.

Software companies tend to have relatively high customer retention and expansion compared to other business models. For software, two metrics are commonly used to measure retention and expansion:

  1. Gross Dollar Retention (GDR)
  2. Net Dollar Retention (NDR), sometimes referred to as Net Revenue Retention or Dollar-Based Net Revenue Retention

GDR measures retention of an existing book of revenue before expansion, whereas NDR incorporates expansion:

Gross Dollar Retention formula and benchmarks

Image Credits: Index Ventures

Net Dollar Retention formula and benchmark

Image Credits: Index Ventures

GDR and NDR are well-known and widely used metrics, often discussed in the context of growth. Software companies with NDR over 100% grow revenues organically each year just by expanding their existing book of business before adding any new customers.
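Using the standard definitions (a sketch; exact formulas vary slightly between firms), both metrics reduce to simple arithmetic over a cohort’s annual recurring revenue, shown here with invented numbers:

```python
def gdr(starting_arr, churned, contraction):
    """Gross Dollar Retention: retention of the existing book, before any expansion."""
    return (starting_arr - churned - contraction) / starting_arr

def ndr(starting_arr, churned, contraction, expansion):
    """Net Dollar Retention: like GDR, but also credits cross-sell/up-sell expansion."""
    return (starting_arr - churned - contraction + expansion) / starting_arr

# Hypothetical cohort over one year:
# $10M starting ARR, $500k churned, $200k contracted, $1.5M expanded
print(f"GDR: {gdr(10_000_000, 500_000, 200_000):.0%}")              # 93%
print(f"NDR: {ndr(10_000_000, 500_000, 200_000, 1_500_000):.0%}")   # 108%
```

An NDR above 100%, as in this invented cohort, is exactly the organic growth from the existing book of business that the paragraph above describes.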

While NDR is not a required disclosure, as it is a non-GAAP metric, public software companies often provide visibility to investors through a combination of shareholder presentations, public filings, and earnings calls:
