Government is a technology, so fix it like one

Just as tangible as airplanes, computers and contraception, the Roman Empire, the Iroquois Confederacy and the United States of America are human inventions.

Technology is how we do things, and political institutions are how we collaborate at scale. Government is an immensely powerful innovation through which we take collective action.

Just like any other technology, governments open up new realms of opportunity. These opportunities are morally neutral: humans have leveraged political institutions to provide public education and murder ethnic minorities. Specific features like explicit protections for human rights and civil liberties are designed to help mitigate certain downside risks.

Like any tool, systems of governance require maintenance to keep working. We expect regular software updates, but forget that governance is also in constant flux, and begins to fail when it falls out of sync with the culture. Without preventative maintenance, pressure builds like tectonic forces along a fault line until a new order snaps into place, often violently. Malka Older points out that “democracy is not a unitary state that can be achieved, but a continuous process. We need to keep reinventing and refining government, to keep up with changes in society and technology and to keep it from being too easy for elites with resources to exploit.”

What might the future of governance actually look like?

Immigrant founders, smartphone growth, SEO tactics, SoftBank’s financials, and AR tech

How an immigration crackdown is hurting UK startups

Our European correspondent Natasha Lomas spent the past few weeks investigating what’s been happening to immigrant founders and tech talent in the UK, who have been receiving more scrutiny from the Home Office in recent months. Natasha zooms in on Metail, a virtual fitting room startup, and its tribulations with the immigration authorities and the damage those actions are doing to the broader ecosystem:

The January 31 decision letter, which TechCrunch has reviewed, shows how the Home Office is fast-tracking anti-immigrant outcomes. In a short paragraph, the Home Office says it considered and dismissed an alternative outcome — of downgrading, not revoking, the license and issuing an “action plan” to rectify issues identified during the audit. Instead, it said an immediate end to the license was appropriate due to the “seriousness” of the non-compliance with “sponsor duties”.

The decision focused on one of the two employees Metail had working on a Tier 2 visa, who we’ll call Alex (not their real name). In essence, Alex was a legal immigrant who had worked their way into a mid-level promotion by learning on the job, as should happen regularly at any good early-stage startup. The Home Office, however, perceived the promotion to have been given to someone without proper qualifications, over potential native-born candidates.

In addition to reporting the story, Natasha also wrote a guide specifically for Extra Crunch members on how founders can manage their immigration matters, both for themselves and for their employees.

The state of the smartphone

TechCrunch hardware editor Brian Heater analyzed the slowdown in smartphone sales, finding few reasons to be optimistic about how smaller handset manufacturers can compete with giants like Apple and Samsung. There are slivers of good news from the developing world and also from 5G and foldable tech, but don’t expect profits to reach their zenith again any time soon.

How to see our world in a new light

Startups are ultimately vessels of speculation: new products, new markets and innovations the world has never seen. While data and information are important components for exploring the frontiers of the possible, perhaps the best way is through stories and fiction, and especially speculative fiction.

We’ve been fortunate at Extra Crunch to have noted novelist Eliot Peper write a guide to the novels that are and should be helping founders build startups in Silicon Valley these days. This week, Eliot published the final book in his Analog trilogy, which explores contemporary issues through a futuristic technology lens. With Breach, he brings to a close his tale of algorithmic geopolitics that started with Bandwidth (which I reviewed on TechCrunch) and continued with Borderless, all the while exploring topics of privacy, social media psyops, and the future of democracy.

I wanted to catch up with Eliot and chat not only about his latest work, but also the themes inherent in the novels as well as his process for generating new ideas and seeing the world from a new perspective, a skill critical for any creative or founder.

The following interview has been edited and condensed for clarity.

Daily Digest: Technology and tyranny, lying to ourselves, and Spotify’s $1b repurchase

Want to join a conference call to discuss more about these thoughts? Email Arman at Arman.Tabatabai@techcrunch.com to secure an invite.

Hello! We are experimenting with new content forms at TechCrunch. This is a rough draft of something new. Provide your feedback directly to the authors: Danny at danny@techcrunch.com or Arman at Arman.Tabatabai@techcrunch.com if you like or hate something here.

Harari on technology and tyranny

Yuval Noah Harari, the noted author and historian famed for his work Sapiens, wrote a lengthy piece in The Atlantic entitled “Why Technology Favors Tyranny” that is quite interesting. I don’t want to address the whole piece (today), but I do want to discuss his view that humans are increasingly eliminating their agency in favor of algorithms that make decisions for them.

Harari writes in his last section:

Even if some societies remain ostensibly democratic, the increasing efficiency of algorithms will still shift more and more authority from individual humans to networked machines. We might willingly give up more and more authority over our lives because we will learn from experience to trust the algorithms more than our own feelings, eventually losing our ability to make many decisions for ourselves. Just think of the way that, within a mere two decades, billions of people have come to entrust Google’s search algorithm with one of the most important tasks of all: finding relevant and trustworthy information. As we rely more on Google for answers, our ability to locate information independently diminishes. Already today, “truth” is defined by the top results of a Google search. This process has likewise affected our physical abilities, such as navigating space. People ask Google not just to find information but also to guide them around. Self-driving cars and AI physicians would represent further erosion: While these innovations would put truckers and human doctors out of work, their larger import lies in the continuing transfer of authority and responsibility to machines.

I am not going to lie: I completely dislike this entire viewpoint and direction of thinking about technology. Giving others authority over us is the basis of civilized society, whether that third party is human or machine. It’s how that authority is exercised that determines whether it is pernicious or not.

Harari brings up a number of points here though that I think deserve a critical look. First, there is this belief in an information monolith, that Google is the only lens by which we can see the world. To me, that is a remarkably rose-colored view of printing and publishing up until the internet age, when gatekeepers had the power (and the politics) to block public access to all kinds of information. Banned Books Week is in some ways quaint today in the Amazon Kindle era, but the fight to have books in public libraries was (and sometimes today is) real. Without a copy, no one had access.

That disintegration of gatekeeping is one reason among many why extremism in our politics is intensifying: there is now a much more diverse media landscape, and that landscape doesn’t push people back toward the center anymore, but rather pushes them further to the fringes.

Second, we don’t give up agency when we allow algorithms to render judgments on us. Quite the opposite, in fact: we are using our agency to grant a third party independent authority. That’s fundamentally our choice. What is the difference between an algorithm making a credit card application decision, and a (human) judge adjudicating a contract dispute? In both cases, we have ceded at least some of our agency to another party to independently make decisions over us, because we have collectively decided to make that choice as part of our society.

Third, Google, including Search and Maps, has empowered me to explore the world in ways that I wouldn’t have dreamed of before. When I visited Paris for the first time in 2006, I didn’t have a smartphone, and calling home cost $1 a minute. I saw parts of the city, and wandered, but I was mostly held back by fear — fear of going to the wrong neighborhood (the massive riots in the banlieues had happened only a few months prior) and fear of getting completely lost and never making it back. Compare that to today, when access to the internet means that I can actually get off the main tourist stretches peddled by guidebooks and explore neighborhoods I never would have dreamed of before. The smartphone doesn’t have to be distracting — it can be an amazing tool to explore the real world.

I bring these different perspectives up because I think the “black box society,” as Frank Pasquale dubbed it in his book of the same name, is under unfair attack. Yes, there are problems with algorithms that need addressing, but are they better or worse than their human substitutes? When the timing of a judge’s meal breaks can vastly affect the outcome of a prisoner’s parole hearing, don’t we want algorithms to do at least some of the work for us?

Lying to ourselves

Photo: Getty Images / Siegfried Kaiser / EyeEm

Speaking of humans behaving badly, I wrote a review over the weekend of Elephant in the Brain, a book about how we use self-deception to ascribe better motives to our actions than our true intentions. As I wrote about the book’s thesis:

Humans care deeply about being perceived as prosocial, but we are also locked into constant competition, over status attainment, careers, and spouses. We want to signal our community spirit, but we also want to selfishly benefit from our work. We solve for this dichotomy by creating rationalizations and excuses to do both simultaneously. We give to charity for the status as well as the altruism, much as we get a college degree to learn, but also to earn a degree which signals to employers that we will be hard workers.

It’s a depressing perspective, but one that’s ultimately correct. Why do people wear Stanford or Berkeley sweatshirts if not to signal things about their fitness and career prospects? (Even pride in school is a signal to others that you are part of a particular tribe). One of the biggest challenges of operating in Silicon Valley is simply understanding the specific language of signals that workers there send.

Ultimately, though, I was underwhelmed by the book, because I felt that it didn’t end up leading to a broader sense of enlightenment, nor could I see how to change either my behavior or my perception of others’ behavior as a result of reading it. That earned a swift rebuke from one of the authors last night on Twitter:

Okay, but here is the thing: of course we lie to ourselves. Of course we lie to each other. Of course PR people lie to make their clients look good, and try to come off as forthright as possible. The best salesperson is going to be the person who truly believes in the product they are selling, rather than the one who knows its weaknesses and scurries away when they are brought up. This book makes a claim — one I think is reasonable — that self-deception is the key ingredient: we can’t handle the cognitive load of lying all the time, so evolution has adapted us to lie with greater facility by not allowing us to realize that we are doing it.

Nowhere is this more obvious than in my previous career as a venture capitalist. Very few founders truly believe in their products and companies. I’m quite serious. You can hear the hesitation in their voices about the story, and you can hear the stress in their throats when they hit a key slide that doesn’t exactly align with the hockey stick they are selling. That’s okay, ultimately, because these companies were young, but if the founder of the company doesn’t truly believe, why should I join the bandwagon?

Confidence is ambiguous — are you confident because the startup truly is good, or is it because you are carefully masking your lack of enthusiasm? That’s what due diligence is all about, but what I do know is that a founder without confidence isn’t going to make it very far. Lying is wrong, but confidence is required — and the line between the two is very, very blurry.

Spotify may repurchase up to $1b in stock

Photo by Spencer Platt/Getty Images

Before the market opened this morning, Spotify announced plans to buy back stock starting in the fourth quarter of 2018. The company has been authorized to repurchase up to $1 billion worth of shares, and up to 10 million shares total. The exact cadence of the buybacks will depend on various market conditions, and repurchases will likely occur gradually until the program’s expiration in April 2021.
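To make the two caps concrete — a back-of-the-envelope sketch of my own, not anything from Spotify’s announcement — the $1 billion dollar limit and the 10 million-share limit interact: below an average repurchase price of $100 a share, the share cap binds; above it, the dollar cap does.

```python
def max_shares_repurchasable(avg_price, dollar_cap=1_000_000_000, share_cap=10_000_000):
    """Shares repurchasable under both program caps at a given average price.

    The binding constraint is whichever cap is hit first: the total share
    count or the total dollar spend.
    """
    return min(share_cap, int(dollar_cap / avg_price))

# At exactly $100/share, both caps bind at once.
print(max_shares_repurchasable(100))   # 10,000,000 shares
# Above $100/share, the dollar cap limits the count instead.
print(max_shares_repurchasable(200))   # 5,000,000 shares
```

In other words, the 10 million-share ceiling only matters if Spotify’s stock were to trade below $100 during the program; at the prices it actually traded at, the $1 billion cap is the operative one.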

The announcement comes on the back of Spotify’s quarterly earnings report last week, which led to weakness in the company’s stock price amid concerns over its outlook for subscriber, revenue and ARPU (average revenue per user) growth, despite the company reporting stronger profitability than Wall Street expected.

After its direct-offering IPO in April, Spotify saw its stock price shoot to over $192 a share in August. However, the stock has since lost close to $10 billion in market cap, driven in part by broader weakness in public tech stocks, as well as by fears about subscription pricing pressure and ARPU growth as more of Spotify’s users opt for discounted family or student subscription plans.

Per TechCrunch’s Sarah Perez:

…The company faces heavy competition these days – especially in the key U.S. market from Apple Music, as well as from underdog Amazon Music, which is leveraging Amazon’s base of Prime subscribers to grow. It also has a new challenge in light of the Sirius XM / Pandora deal.

The larger part of Spotify’s business is free users – 109 million monthly actives on the ad-supported tier. But its programmatic ad platform is currently only live in the U.S., U.K., Canada and Australia. That leaves Spotify room to grow ad revenues in the months ahead.

The strategic rationale for Spotify is clear, despite early reports painting the announcement as a way to buoy a flailing stock price. With over $1 billion in cash sitting on its balance sheet, the company clearly views the depressed stock as an affordable opportunity to return cash to shareholders at an attractive entry point.

As for Spotify’s longer-term outlook from an investor standpoint, the company’s ARPU growth should not be viewed in isolation. In the past, Spotify has highlighted discounted or specialized subscriptions, like family and student subscriptions, as having a much stickier user base. And the company has seen its retention rates improving, with churn consistently falling since the company’s IPO.

The stock is up around 1.5% on the news on top of a small pre-market boost.

What’s next

  • We are still spending more time on Chinese biotech investments in the United States (Arman previously wrote a deep dive on this a week or two ago).
  • We are exploring the changing culture of Form D filings (startups seem to be increasingly foregoing disclosures of Form Ds on the advice of their lawyers)
  • India tax reform and how startups have taken advantage of it

Reading docket