Why convertible notes are safer than SAFEs

As the saying goes, where you stand on an issue often rests on where you sit. Translated into startup law and finance, your views on how to approach fundraising are often heavily influenced by where your company and your investors are located. As a startup lawyer at Egan Nelson LLP (E/N), a leading boutique firm focused on tech markets outside of Silicon Valley — like Austin, Seattle, NYC, Denver, etc. — that’s the perspective I bring to this post. 

At a very high level, the three most common financing structures for startup seed rounds across the country are (i) equity, (ii) convertible notes and (iii) SAFEs. Others have come and gone, but never really achieved much traction. As to which one is appropriate for your company’s early funding, there’s no universal answer. It depends heavily on context: not just the company’s own priorities and leverage, but also the expectations and norms of the investors you plan to approach. Maintaining flexibility, and not getting bogged down in a rigid one-approach-fits-all mindset, is important in that regard.

Here’s the TL;DR: When a client comes to me suggesting they might do a SAFE round, my first piece of advice is that a convertible note with a long maturity (three years) and low interest rate (like 2 percent or 3 percent) will give them functionally the same thing — while minimizing friction with more traditional investors.

Why? Read on for more details.

Convertible notes for smaller seed rounds

Convertible securities (convertible notes and SAFEs) are often favored, particularly for smaller rounds (less than $2 million), for their simplicity and speed to close. They defer a lot of the heavier terms and negotiation to a later date. The dominant convertible security (when equity is not being issued) across the country for seed funding is a convertible note, which is basically a debt instrument that is intended to convert into equity in the future when you close a larger round (usually a Series A). The note’s conversion economics are more favorable than what Series A investors pay, due to the greater risk the seed investors took on.
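To make the note’s conversion economics concrete, here is a minimal sketch of how a typical note converts at the Series A. All of the numbers (principal, rate, discount, cap, share counts) are invented for illustration; real notes vary deal to deal.

```python
# Sketch of typical convertible note conversion math (hypothetical numbers).
principal = 500_000          # seed investment via the note
interest_rate = 0.03         # low interest rate, as suggested above
years_to_conversion = 1.5    # note converts at the Series A
discount = 0.20              # 20% discount to the Series A price
valuation_cap = 6_000_000    # cap on the conversion valuation

series_a_price = 1.00        # hypothetical Series A price per share
pre_money_fd_shares = 8_000_000  # fully diluted shares before the Series A

# Simple (non-compounding) interest accrues until conversion.
balance = principal * (1 + interest_rate * years_to_conversion)

# The note converts at the better (lower) of the discounted price and the
# cap-implied price -- this is where the more favorable economics come from.
discounted_price = series_a_price * (1 - discount)
cap_price = valuation_cap / pre_money_fd_shares
conversion_price = min(discounted_price, cap_price)

shares = balance / conversion_price
print(f"Note holder converts ${balance:,.0f} at ${conversion_price:.2f}/share "
      f"for {shares:,.0f} shares (Series A pays ${series_a_price:.2f}/share)")
```

With these assumed terms, the cap sets the conversion price, so the seed investor ends up paying meaningfully less per share than the Series A investors, compensating for the earlier risk.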

In big tech’s future expansion plans, public good should be the corporate incentive

The cancellation of Amazon’s planned expansion in New York exposes the truth about its HQ2 promises. In the end, the company seemed mainly interested in tax incentives and being allowed to make a corner of NYC once set aside for public housing and schools its own.

Meanwhile, Arlington county officials are revisiting plans to deliver locally-funded financial incentives to Amazon in exchange for the development of DC-adjacent “National Landing.”

Increasingly, communities are demanding that tech companies bring more to the table than they take. Locals want them to stimulate the local economy and fortify startup ecosystems rather than hire away people and raise housing prices. After all, talent is the most precious resource in the digital economy, and if cities nourish it, the tech firms will come naturally.

There’s plenty to dislike about the way Amazon approached its HQ2 search. Encouraging cities to throw sweeteners at a multinational run by the world’s richest man was spectacularly tone deaf at a time of growing anxiety about inequality. But there is an upside: the HQ2 experiment can still serve as a watershed moment that brings into focus the need for greater corporate responsibility with big tech expansions.

Toronto was one of only two HQ2 bid “finalists” – along with Austin, Texas – that did not offer any tax incentives to Amazon leading up to the reveal. The reasoning by the city’s bid leader, Toronto Global CEO Toby Lennox, was simple: Toronto should win Amazon’s presence, with its job-creation potential and global connectivity, on the strength of the city’s well-educated and culturally diverse workforce, connected economy and excellent quality of life.

Perhaps it’s no surprise that the HQ2 bid was not widely embraced by Toronto’s ecosystem. Subsidies aside, startup CEOs worried that the presence of Amazon might come at the expense of local companies already fighting for top engineers. Amazon offered little in return to the community – upskilling and expanding the talent pool was not part of its playbook.

It’s notable that in Virginia, where the other half of Amazon’s HQ2 has faced less controversy than New York, the state’s incentive package included investing $1.1 billion in its higher education system to build its talent pipeline. That creates benefits for everyone, not just Amazon.

Amazon’s move will, I suspect, come to be seen as the high-water mark for Big Tech hubris. There are enormous benefits to a city in attracting major tech firms – the name-recognition alone acts as a signal to investors and engineers that interesting things are happening there. But, in the future, I believe we will see far fewer attempts to create vast corporate campuses with helipads, and more effort to integrate into the existing ecosystem.


Apple and Google are both expanding in New York without a similar backlash because their plans are smaller scale and seen to complement the city rather than ignore it.

Last fall, when Uber CEO Dara Khosrowshahi announced plans to build a new engineering center in Toronto, he stressed the importance of ramping up operations responsibly. He underscored that the company will pace its hiring and staff relocation to avoid stifling the ecosystem of artificial intelligence startups and researchers that had attracted Uber to the city in the first place.

Investments in local economies – like Microsoft’s recent $500 million promise to deliver more affordable housing in Seattle – are also going to become increasingly important for Big Tech companies if they want to retain public support.

There are lessons here for both civic leaders and tech executives. It’s easy to see why cities jump at the chance of bringing in tens of thousands of jobs, but that has to be balanced against the interests of the city’s homegrown tech companies and existing communities.

Tech companies need to accept that their social license to operate is being called into question as never before and that they need to put more emphasis on public engagement, job creation for local people and infrastructure improvements in their future plans. The aim should be for new players to grow local economies that benefit every stakeholder – not arrive with a splash that destabilizes the communities we need to build.

Amazon and its fellow tech giants must embrace that public good is the best corporate incentive.

How to build The Matrix

Released this month 20 years ago, “The Matrix” went on to become a cultural phenomenon. This wasn’t just because of its ground-breaking special effects, but because it popularized an idea that has come to be known as the simulation hypothesis. This is the idea that the world we see around us may not be the “real world” at all, but a high-resolution simulation, much like a video game.

While the central question raised by “The Matrix” sounds like science fiction, it is now debated seriously by scientists, technologists and philosophers around the world. Elon Musk is among them; he has put the odds that we are living in “base reality” at just one in billions, meaning we are almost certainly inside a video-game world!

As a founder and investor in many video game startups, I started to think about this question seriously after seeing how far virtual reality has come in creating immersive experiences. In this article we look at the development of video game technology past and future to ask the question: Could a simulation like that in “The Matrix” actually be built? And if so, what would it take?

What we’re really asking is how far away we are from The Simulation Point, the theoretical point at which a technological civilization would be capable of building a simulation that was indistinguishable from “physical reality.”

[Editor’s note: This article summarizes one section of the upcoming book, “The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics and Eastern Mystics All Agree We Are in a Video Game.”]

From science fiction to science?

But first, let’s back up.

“The Matrix,” you’ll recall, starred Keanu Reeves as Neo, a hacker who encounters enigmatic references to something called the Matrix online. This leads him to the mysterious Morpheus (played by Laurence Fishburne, and aptly named after the Greek god of dreams) and his team. When Neo asks Morpheus about the Matrix, Morpheus responds with what has become one of the most famous movie lines of all time: “Unfortunately, no one can be told what The Matrix is. You’ll have to see it for yourself.”

Even if you haven’t seen “The Matrix,” you’ve probably heard what happens next — in perhaps its most iconic scene, Morpheus gives Neo a choice: Take the “red pill” to wake up and see what the Matrix really is, or take the “blue pill” and keep living his life. Neo takes the red pill and “wakes up” in the real world to find that what he thought was real was actually an intricately constructed computer simulation — basically an ultra-realistic video game! Neo and the other humans are actually living in pods, jacked into the system via cords plugged into their cerebral cortexes.

Who created the Matrix, and why are humans plugged into it at birth? In the two sequels, “The Matrix Reloaded” and “The Matrix Revolutions,” we find out that Earth has been taken over by a race of super-intelligent machines that harvest the bioelectric energy generated by human bodies. The humans are kept occupied, docile and none the wiser thanks to their all-encompassing link to the Matrix!

But “The Matrix” wasn’t all philosophy and no action; there were plenty of eye-popping special effects during the fight scenes. Some of these now have their own names in the entertainment and video game industry, such as the famous “bullet time.” When a bullet is shot at Neo, the visuals slow down time and manipulate space; the camera moves in a circular motion while the bullet is frozen in the air. In the context of a 3D computer world, this makes perfect sense, though the camera technique is now used in both live action and video games. AI plays a big role too: in the sequels, we find out much more about the agents pursuing Neo, Morpheus and the team. Agent Smith (played brilliantly by Hugo Weaving), the main adversary in the first movie, is really a computer agent — an artificial intelligence meant to keep order in the simulation. Like any good AI villain, Agent Smith (who was voted the 84th most popular movie character of all time!) is able to reproduce himself and overlay himself onto any part of the simulation.


The Wachowskis, creators of “The Matrix,” claim to have been inspired by, among others, science fiction master Philip K. Dick. Most of us are familiar with Dick’s work from the many film and TV adaptations, ranging from “Blade Runner” and “Total Recall” to the more recent Amazon show, “The Man in the High Castle.” Dick often explored questions of what was “real” versus “fake” in his vast body of work. These are some of the same themes we will have to grapple with to build a real Matrix: AI that is indistinguishable from humans, implanting false memories and broadcasting directly into the mind.

As part of writing my upcoming book, I interviewed Dick’s wife, Tessa B. Dick, and she told me that Philip K. Dick actually believed we were living in a simulation. He believed that someone was changing the parameters of the simulation, and most of us were unaware that this was going on. This was of course, the theme of his short story, “The Adjustment Team” (which served as the basis for the blockbuster “The Adjustment Bureau,” starring Matt Damon and Emily Blunt).

A quick summary of the basic (non-video game) simulation argument

Today, the simulation hypothesis has moved from science fiction to a subject of serious debate because of several key developments.

The first was when Oxford professor Nick Bostrom published his 2003 paper, “Are You Living in a Simulation?” Bostrom doesn’t say much about video games or how we might build such a simulation; rather, he makes a clever statistical argument. Bostrom theorized that if a civilization ever reached the Simulation Point, it would create many ancestor simulations, each with large numbers (billions or trillions?) of simulated beings. Since simulated beings would vastly outnumber real beings, any given being (including us!) is more likely to be living inside a simulation than outside of it!
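The counting argument can be made concrete with a toy calculation. All of the population figures below are arbitrary assumptions, chosen only to show the ratio at work:

```python
# Toy illustration of Bostrom's counting argument (arbitrary numbers).
real_beings = 10_000_000_000            # one "base reality" population
ancestor_simulations = 1_000            # simulations run by a mature civilization
beings_per_simulation = 10_000_000_000  # each with billions of simulated beings

simulated_beings = ancestor_simulations * beings_per_simulation

# If you are a randomly chosen being, the chance you are one of the
# simulated ones is just the simulated fraction of all beings.
p_simulated = simulated_beings / (simulated_beings + real_beings)
print(f"P(a randomly chosen being is simulated) = {p_simulated:.4f}")
```

With even a thousand simulations, the probability is already above 99.9 percent; the argument's force comes entirely from this lopsided ratio.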

Other scientists, like astrophysicist and “Cosmos” host Neil deGrasse Tyson and physicist Stephen Hawking, weighed in, saying they found it hard to argue against this logic.

Bostrom’s argument implied two things that are the subject of intense debate. The first is that if any civilization ever reached the Simulation Point, then we are more likely in a simulation now. The second is that we are all more likely AI or simulated consciousnesses rather than biological beings. On this second point, I prefer to use the “video game” version of the simulation argument, which is a little different from Bostrom’s version.

Video games hold the key

Let’s look more at the video game version of the argument, which rests on the rapid pace of development of video game and computer graphics technology over the past decades. In video games, we have both “players” who exist outside of the video game, and “characters” who exist inside the game. In the game, we have PCs (player characters), which are controlled by (you might say mentally attached to) the players, and NPCs (non-player characters), which are the simulation’s artificial characters.

Fifty years of the internet

When my team of graduate students and I sent the first message over the internet on a warm Los Angeles evening in October 1969, little did we suspect that we were at the start of a worldwide revolution. After we typed the first two letters from our computer room at UCLA, namely, “Lo” for “Login,” the network crashed.

Hence, the first Internet message was “Lo” as in “Lo and behold” – inadvertently, we had delivered a message that was succinct, powerful, and prophetic.

The ARPANET, as it was called back then, was designed by government, industry and academia so scientists and academics could access each other’s computing resources and trade large research files, saving time, money and travel costs. ARPA, the Advanced Research Projects Agency (now called DARPA), awarded a contract to scientists at the private firm Bolt Beranek and Newman to implement a router, or Interface Message Processor; UCLA was chosen to be the first node in this fledgling network.

By December 1969, there were only four nodes – UCLA, Stanford Research Institute, the University of California-Santa Barbara and the University of Utah. The network grew exponentially from its earliest days, with the number of connected host computers reaching 100 by 1977, 100,000 by 1989, a million by the early 1990s, and a billion by 2012; it now serves more than half the planet’s population.
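Those milestones imply a remarkably steady compound growth rate. A quick back-of-the-envelope calculation, taking the cited host counts at face value (and assuming 1993 for the one-million mark, since the text says only “early 1990s”):

```python
# Approximate host counts cited above (1993 is an assumed year for 1M).
milestones = {1969: 4, 1977: 100, 1989: 100_000, 1993: 1_000_000, 2012: 1_000_000_000}

# Compound annual growth rate over the full 1969-2012 span.
years = 2012 - 1969
cagr = (milestones[2012] / milestones[1969]) ** (1 / years) - 1
print(f"~{cagr:.0%} average annual growth in connected hosts over {years} years")
```

Sustaining growth on that order for four decades is what “grew exponentially from its earliest days” means in practice.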

Along the way, we found ourselves constantly surprised by unanticipated applications that suddenly appeared and gained huge adoption across the internet; this was the case with email, the World Wide Web, peer-to-peer file sharing, user-generated content, Napster, YouTube, Instagram, social networking, etc.

It sounds utopian, but in those early days, we enjoyed a wonderful culture of openness, collaboration, sharing, trust and ethics. That’s how the Internet was conceived and nurtured.  I knew everyone on the ARPANET in those early days, and we were all well-behaved. In fact, that adherence to “netiquette” persisted for the first two decades of the Internet.

Today, almost no one would say that the internet is unequivocally wonderful, open, collaborative, trustworthy or ethical. How did a medium created for sharing data and information become such a mixed blessing? How did we go from collaboration to competition, from consensus to dissension, from a reliable digital resource to an amplifier of questionable information?

The decline began in the early 1990s, when spam first appeared alongside an intensifying drive to monetize the internet as it reached deeply into the world of the consumer. This enabled many aspects of the dark side to emerge (fraud, invasion of privacy, fake news, denial of service, etc.).

It also changed the nature of internet technical progress and innovation, as risk aversion began to stifle the earlier culture of “moon shots.” We are still suffering from those shifts. The internet was designed to promote decentralized information, democracy and consensus based on shared values and factual information. In this, it has fallen short of the aspirations of its founders.

As the private sector gained influence, its policies and goals began to shape the nature of the internet. Companies could charge for domain registration, and credit card encryption opened the door for e-commerce. Private firms like AOL, CompuServe and Earthlink soon charged monthly fees for access, turning the service from a public good into a private enterprise.

This monetization of the internet has changed its flavor. On the one hand, it has led to services of great value: pervasive search engines, access to extensive information repositories, consumer aids, entertainment, education, connectivity among humans, etc. On the other hand, it has led to excess and control in a number of domains.

Among these, one can identify restricted access imposed by corporations and governments, limited progress in technology deployment when economic incentives are not aligned with (possibly short-term) corporate interests, excessive use of social media for many forms of influence, etc.

If we ask what we could have done to mitigate some of these problems, one can easily name two things. First, we should have provided strong file authentication – the ability to guarantee that the file I receive is an unaltered copy of the file I requested. Second, we should have provided strong user authentication – the ability for users to prove that they are who they claim to be.

Had we done so, we could have left these capabilities turned off in the early days (when false files were not being dispatched and users were not falsifying their identities). Then, as the dark side began to emerge, we could have gradually turned on these protections to counteract the abuses, at a level matching their extent. Because we did not build in these capabilities from the start, it is now problematic to retrofit them onto the vast legacy system we call the internet.
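As a sketch of what “strong file authentication” looks like with today’s tools – a cryptographic digest check, offered only as a modern illustration, not a claim about what the early ARPANET could literally have deployed:

```python
import hashlib
import hmac

def digest(data: bytes) -> str:
    # Publisher computes and publishes a SHA-256 digest of the file.
    return hashlib.sha256(data).hexdigest()

def is_unaltered(received: bytes, published_digest: str) -> bool:
    # Receiver recomputes the digest and compares; compare_digest
    # avoids timing side channels in the comparison.
    return hmac.compare_digest(digest(received), published_digest)

original = b"RFC 1 - Host Software"
published = digest(original)

assert is_unaltered(original, published)             # intact copy verifies
assert not is_unaltered(original + b"!", published)  # tampered copy fails
```

In practice the published digest itself must be delivered over a trusted channel (or signed), which is exactly the kind of infrastructure the internet never baked in.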


Having come these 50 years since its birth, how is the Internet likely to evolve over the next 50? What will it look like?

That’s a foggy crystal ball. But we can foresee that it is fast on its way to becoming “invisible” (as I predicted 50 years ago) in the sense that it will and should disappear into the infrastructure.

It should be as simple and convenient to use as electricity, which is available via a trivially simple interface: you plug into the wall, and you don’t know or care how it gets there or where it comes from, but it delivers its services on demand.

Sadly, the internet is far more complicated to access than that. When I walk into a room, the room should know I’m there and it should provide to me the services and applications that match my profile, privileges and preferences.  I should be able to interact with the system using the usual human communication methods of speech, gestures, haptics, etc.

We are rapidly moving into such a future as the Internet of Things pervades our environmental infrastructure with logic, memory, processors, cameras, microphones, speakers, displays, holograms and sensors. Such an invisible infrastructure, coupled with intelligent software agents embedded in the internet, will seamlessly deliver such services. In a word, the internet will essentially be a pervasive global nervous system.

That is what I judge will be the likely essence of the future infrastructure. However, as I said above, the applications and services are extremely hard to predict as they come out of the blue as sudden, unanticipated, explosive surprises!  Indeed, we have created a global system for frequently shocking us with surprises – what an interesting world that could be!

Pre- and Post-Money SAFEs: Choosing the right one for your startup

With Y Combinator’s Demo Day taking place at Pier 48 in San Francisco next week, its largest batch of companies ever is getting ready to present to an audience of select investors. Having taken Atrium through Demo Day myself, I have first-hand knowledge of the process. When the founders have finished their pitches, the time to talk numbers will closely follow. Chief among the many decisions founders will face during this time is whether to opt for the Pre-Money SAFE or the new Post-Money SAFE, the two standardized legal documents that YC has introduced in recent years.

Both versions are meant to make the process fast, easy and fair for both parties in the early-stage fundraising process. But there are crucial differences between the two that founders should examine carefully.

Essentially, the Pre-Money SAFE is exceptionally favorable to founders because it gets them pre-valuation funding like a convertible note, but debt-free. The Post-Money SAFE sweetens some of the terms for investors, like locking in their percentage ownership in a priced round later on.

Overall, we expect the Post-Money version to become more common, especially if the company is raising a round above $1 million or $2 million, and the investors have more leverage to ask for it in the negotiation.
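The core economic difference can be shown with a quick sketch. All dollar figures below are hypothetical, and the Pre-Money SAFE line is a simplified approximation of how SAFEs dilute one another; actual conversion mechanics depend on the documents and the round.

```python
# Illustrative math behind the key Pre- vs Post-Money SAFE difference
# (hypothetical numbers).
investment = 500_000
post_money_cap = 10_000_000

# Post-Money SAFE: ownership is locked in relative to the post-money
# cap, regardless of how much other SAFE money converts.
post_money_ownership = investment / post_money_cap
print(f"Post-Money SAFE: {post_money_ownership:.1%} of the company")  # 5.0%

# Pre-Money SAFE with the same $10M cap: the cap is pre-money, so
# every other SAFE converting in the round dilutes this investor
# (simplified approximation).
pre_money_cap = 10_000_000
other_safes = 1_500_000  # other SAFE money converting at the same time
pre_money_ownership = investment / (pre_money_cap + investment + other_safes)
print(f"Pre-Money SAFE:  {pre_money_ownership:.1%} of the company")
```

This is what “locking in their percentage ownership” means: under the Post-Money SAFE the investor knows the 5 percent figure up front, while under the Pre-Money SAFE their stake shrinks as the company raises more convertible money.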

(Note: This article is aimed at giving founders a general understanding of the changes from Pre-Money SAFEs to Post-Money SAFEs. The information provided is based on my professional experience and opinions, and should not be used without careful consideration and advice by qualified advisors and legal counsel. Also, to learn more and ask questions about Pre and Post-Money SAFEs, join me on April 16th for a webinar where I’ll dive in a bit deeper.)

Two structures for raising startup investment

Today there are two general ways of structuring a startup fundraising round. The first can be called a “priced equity round,” and is characterized by the sale of preferred stock with a fixed valuation.

What to watch for in a VC term sheet

[Editor’s note: This is part of our ongoing series of guest articles from industry experts, covering the hot topics that founders are wrestling with every day as they build their companies.]

When startup founders review a VC term sheet, they mostly focus on the pre-money valuation and the board composition. They assume the rest of the language is “standard,” and they don’t want to ruffle any feathers with their new VC partner by “nickel-and-diming the details.” But those details do matter.

VCs are savvy and experienced negotiators, and all of the language included in the term sheet is there because it is important to them. In the vast majority of cases, every benefit and protection a VC gets in a term sheet comes with some sort of loss or sacrifice on the part of the founders – transferring some control from the founders to the VC, shifting risk from the VC to the founders, or directing economic benefits to the VC and away from the founders. And you probably have more leverage to get better terms than you may think. We are in an era of record levels of capital flowing into the venture industry, with more and more firms targeting seed-stage companies. This competition makes it harder for VCs to dictate terms the way they used to.

But like any negotiating partner, a VC will likely be evaluating how savvy you appear to be in approaching a proposed term sheet when deciding how hard they are going to push on terms. If the VC sees you as naïve or green, they can easily take advantage of that in negotiating beneficial terms for themselves. So what really matters when you are negotiating a term sheet? As a founder, you want to come out of the financing with as much overall control of the company and flexibility in shaping the future of the company as possible and as much of a share in the future economic prosperity of the company as possible. With these principles in mind, let’s take a look at four specific issues in a term sheet that are often overlooked by founders and company counsel:

  • What counts in pre-money capitalization
  • The CEO common director
  • Drag-along provisions
  • Liquidation preference

What counts in pre-money capitalization
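Concretely, what counts in the pre-money capitalization determines the price per share the investor pays: price per share equals the pre-money valuation divided by the pre-money fully diluted share count. A minimal sketch with invented numbers (the valuation, share counts and pool size are all hypothetical):

```python
# Why "what counts" matters: everything the VC pushes into the
# pre-money share count lowers the price per share founders receive.
pre_money_valuation = 8_000_000
founder_shares = 9_000_000
existing_options = 1_000_000
new_pool = 2_000_000  # option pool expansion the VC asks for

# The common VC ask is to count the NEW pool in the pre-money
# capitalization; the founder-friendly alternative excludes it.
shares_excl_pool = founder_shares + existing_options
shares_incl_pool = shares_excl_pool + new_pool

price_excl = pre_money_valuation / shares_excl_pool
price_incl = pre_money_valuation / shares_incl_pool
print(f"Price/share, pool excluded: ${price_excl:.2f}")  # $0.80
print(f"Price/share, pool included: ${price_incl:.2f}")
```

Counting the new pool pre-money drops the price per share from $0.80 to about $0.67 here, which is effectively a lower valuation than the headline number suggests.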

The inevitability of tokenized data

We’re reaching the endgame of an inevitable showdown between big tech and regulators, with a key battleground around consumer data. In many ways, the fact that things have gotten here reflects that the market has not yet developed an alternative to the data paradigm that dominates today: Google and Facebook as sourcers and sellers of data, and Amazon as its host.

The tokenization and decentralization of data offers such an alternative. While the first generation of “utility” tokens were backed by nothing more than dreams, a new generation of tokens, connected explicitly to the value of data, will arise.

The conversation around data has reached a new inflection point.

Presidential candidate Sen. Elizabeth Warren has called for the breakup of technology giants, including Amazon and Facebook. In many ways, the move feels like an inevitable culmination of the last few years, in which public sentiment around the technology industry has shifted from overwhelmingly positive to increasingly skeptical.

One part of that growing skepticism has to do with the fact that when populist ideology rises, all institutions of power are subject to greater scrutiny. But when you hone in on specifics, it is clear that the issue underlying the loss of faith in technology companies is data: what is collected, how it is used, and who profits from it.

Facebook’s Cambridge Analytica scandal, in which a significant amount of user data was harvested and used for political targeting in the 2016 election, and Facebook CEO Mark Zuckerberg’s subsequent testimony in front of Congress, marked a watershed moment in this loss of faith around data.

Those who dismissed consumer outrage by pointing out that barely anyone actually left the platform failed to recognize that the real impact was always more likely to be something like this – providing political cover for a call to break up the company.


Of course, not every 2020 Democratic candidate for the presidency agrees with Warren’s call. In a response to Warren, Andrew Yang – the upstart candidate who has made waves with his focus on Universal Basic Income and his appearances on Joe Rogan’s popular podcast – wrote: “Agree there are fundamental issues with big tech. But we need to expand our toolset. For example, we should share in the profits from the use of our data. Better than simply regulating. Need a new legal regime that doesn’t rely on consumer prices for anti-trust.”

While one could suggest that Yang is biased, since he comes from the world of technology, he has been more vocal and articulate about the coming threat of displacement from automation than any other candidate. His notion of a different arrangement of the economics of data between the people who produce it and the platforms that use (and sell advertising against) it is worth considering.

In fact, one could argue that this sort of heavy-handed regulatory approach to data is not only inevitable, but a response to a fundamental market failure in the way the economics of data are organized.


Data, it has been said, is the new oil. It is, in this analogy, the fuel by which the attention economy functions. Without data, there is no advertising; without advertising, there are none of the free services which have come to dominate our social lives.

Of course, the market for data has another aspect as well, which is where it lives. Investor (and former Facebook head of growth) Chamath Palihapitiya pointed out that 16% of the money he puts into companies goes directly into Amazon’s coffers for data hosting.

This fact shows that, while regulators – and even more, presidential candidates looking to score points with a populist base – might think that all of technology is aligned around preserving today’s status quo, there are in fact big financial motivations for something different.

Enter ‘decentralization.’

In his seminal essay “Why Decentralization Matters,” A16Z investor Chris Dixon explained how incentives diverge in networks. At the beginning of a network’s life, the network owners and participants have the same incentive – to grow the number of nodes in the network. Inevitably, however, a threshold is reached where pure growth in new participants is no longer achievable, and the network owner has to turn instead to extracting more from the existing participants.

Decentralization, in Dixon’s estimation, offers an alternative. In short, tokenization would allow all users to participate in the financial benefit and upside of the network, effectively eliminating the distinction between network owners and network users. When there is no distinct ownership class, there is no one who has the need (or power) to extract.

The essay was a brilliant articulation of an idealized state (reflected in its 50,000+ claps on Medium). In the ICO boom, however, things didn’t exactly work out the way Dixon had imagined.

The problem, on a fundamental level, was about what the token actually was. In almost every case, the “utility tokens” were simply payment tokens – an alternative money just for that service. Their value relied on speculation that they could achieve a certain monetary premium that allowed them to transcend utility for just that network – or enable that network to grow so large that that value could be sustained over time.

It’s not hard to understand why things were designed this way. For network builders, this sort of payment token allowed a totally non-dilutive form of capitalization that was global and instantaneous. For retail buyers, they offered a chance to participate in risk capital in a way they had been denied by accreditation laws.

At the end of the day, however, the simple truth was that these tokens weren’t backed by anything other than dreams.

When the market for these dream coins finally crashed, many decided to throw out the token baby with the ICO bathwater.

What if it prompted a question instead: What if the tokens in decentralized networks weren’t backed by nothing but dreams, but were instead backed by data? What if, instead of dream coins, we had data coins?

Data is indeed the oil of the new economy. In the context of any given digital application, data is where the value resides: for the companies that are paid to host it; for the platforms that are able to sell advertising against it; and for the users who effectively trade their data for reduced-price services.

Data is, in other words, an asset. Like other assets, it can be tokenized and decentralized onto a public blockchain. It’s not hard to imagine a future in which every meaningful piece of data in the world is represented by a private key. Tying tokens to data explicitly creates a world of new options for reconfiguring how apps are built.

First, data tokenization could create an opportunity for nodes in a decentralized hosting network – i.e. a decentralized alternative to AWS – to effectively speculate on the future value of the data in the applications they provide hosting services for, creating a financial incentive beyond simple service provision. When third parties like Google want to crawl, query and access the data, they’ll pay with the token representing the data (a datacoin), which flows back to the miners securing and storing it as well as to the developers who acquire, structure and label the data so that it’s valuable to third parties — especially machine learning and AI-driven organizations.
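This proposed flow is easier to see in miniature. The sketch below is a toy model, not any real protocol: the names, the 70/30 fee split and the single-host setup are all illustrative assumptions. It ties a token's identity to the hash of the data it represents and splits each access fee between the hosting node and the developer who labeled the data.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class DataToken:
    """A token whose identity is the content hash of the data it represents."""
    data_hash: str   # sha256 of the underlying data
    host: str        # node storing and serving the data
    developer: str   # party who acquired, structured and labeled the data
    balances: dict = field(default_factory=dict)

def mint_datacoin(data: bytes, host: str, developer: str) -> DataToken:
    # Tokenizing the data: hash it, and let the hash name the token.
    return DataToken(hashlib.sha256(data).hexdigest(), host, developer)

def pay_for_access(token: DataToken, fee: int, host_share_pct: int = 70) -> dict:
    """A third party (say, a crawler) pays a fee in the datacoin; the fee is
    split between the hosting node and the data's developer."""
    host_cut = fee * host_share_pct // 100
    token.balances[token.host] = token.balances.get(token.host, 0) + host_cut
    token.balances[token.developer] = token.balances.get(token.developer, 0) + fee - host_cut
    return token.balances

token = mint_datacoin(b"user-activity-log", host="node-1", developer="labeler-a")
balances = pay_for_access(token, fee=10)  # node-1 earns 7, labeler-a earns 3
```

Content-addressing the token is the key design choice: anyone can verify which data a datacoin refers to without trusting the host.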

Second, app builders could not only harness the benefits of more fluid capitalization through tokens, but easily experiment with new ways to arrange value flows, such as cutting users in on the value of their own data and allowing them to benefit.

Third, users could start to have a tangible (and trackable) sense of the value of their data, and exert market pressure on platforms to be included in the upside, as well as exert more control over where and how their data was used.

Tokenized data, in other words, could create a market mechanism to redistribute the balance of power in technology networks without resorting to ham-fisted (even if well-meaning) regulation like GDPR, or, even worse, the sort of break-up proposed by Warren.

Even after the implosion of the ICO phenomenon, there are many like Fred Wilson who believe that a shift to user control of data, facilitated by blockchains, is not just possible but inevitable.

Historically, technology has evolved from closed to open, back to closed, and then back to being open. We’re now in a closed phase where centralized apps and services own and control a vast majority of the access to data. Decentralized, p2p databases — public blockchains — will open up and tokenize data in a disruptive way that will change the flow of how value is captured and created on the internet.

Put simply, tokenized and open data can limit the control data monopolies have on future innovation while ushering in a new era of computing.

It’s how information can finally be set free.

The next frontier in real estate technology

From entertainment to transportation, technology has upended nearly every major industry — with one notable exception: real estate. Instead of disrupting the sector, the last generation of real estate technology companies primarily improved efficiencies of existing processes. Industry leaders Zillow/Trulia and LoopNet* helped us search for homes and commercial real estate better and faster, but they didn’t significantly change what we buy or lease or from whom or how.

The next generation of real estate technology companies is taking a more expansive approach, dismantling existing systems and reimagining entirely new ones that address our growing demand for affordability, community and flexibility.

The increasing need for affordability

Home ownership has long been integral to the American dream, but for many young Americans today it’s an unattainable dream. A third of millennials live at home, and as a cohort, they spend a greater share of their income on rent than previous generations did — about 45 percent during their first decade of work. This leaves little money left over for savings, much less for home ownership, the largest financial expenditure of most people’s lifetimes.

The increasing need for affordable housing is driving some creative tech-enabled solutions. One segment of startups is focused on making existing homes more affordable, especially in high-cost markets like New York and the Bay Area. Divvy helps consumers, many of them with low credit scores, rent-to-own homes, which are assessed for viability by a combination of contractors and machine learning. Landed, funded by the Chan Zuckerberg Initiative, helps educators afford homes in the communities in which they teach. Homeshare divides luxury apartments into multiple more-affordable units, and Bungalow takes a similar approach with houses. Both companies have built technology platforms to manage their tenant listings, allocate tenant expenses and streamline payments.

Consumers aren’t just craving affordability, they’re also seeking company.

Another segment of startups is aiming to reduce the cost of building new homes, for example through modular, prefab construction. Katerra, which just raised $865 million, is aiming to create a seamless, one-stop shop for commercial and residential development, managing the entire building process from design and sourcing through the completion of construction. Taking a “full stack” approach to every step of the building process should enable it to find efficiencies and reduce costs.

If the economy weakens, the need for more affordable housing will only grow, making these startups not only recession-proof but even recession-strong. Collectively, they’re helping Americans right-size their dreams to something more broadly attainable.

In search of community

Consumers aren’t just craving affordability, they’re also seeking company. More than half of Americans feel lonely, and the youngest cohort in their late teens and early-to-mid-twenties are the loneliest of the bunch (followed closely by millennials). Millennials are the first generation to enter the workforce in the era of smartphones and laptops. While 24/7 connectivity enables us to work anywhere, anytime, it also creates expectations of working anywhere, anytime — and so many people do, blurring the lines between work life and personal life. Longer work hours make community harder to build organically, so many millennials place value on employers and landlords who facilitate it for them.

Airbnb and WeWork were early to capitalize on the demand for community, with one changing how we travel and the other redefining the modern office space. Co-working companies like WeWork, as well as more targeted providers like The Assembly*, The Wing and The Riveter, offer speaker series, classes and other free member events aimed at building connections. Airbnb, once focused only on lodging, has broadened its platform to include community-building shared experiences.

Shared living and hospitality startups are also investing in community to attract and retain customers. StarCity provides dorms for adults, Common and HubHaus rent homes intended to be shared by roommates and Ollie offers luxury micro apartments in a co-living environment. These companies are leveraging technology to foster in-person connections. For example, Common uses Slack channels to communicate with and connect members, and HubHaus uses roommate matching algorithms.
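HubHaus hasn’t published how its matching works, but the idea is easy to sketch. Below is a deliberately naive compatibility matcher (the preference fields and scoring are invented for illustration, not HubHaus’s actual method): it counts shared lifestyle preferences and picks the highest-scoring pair.

```python
from itertools import combinations

def compatibility(a: dict, b: dict) -> int:
    """Score a pair by counting the lifestyle preferences they share."""
    return sum(1 for key in a if key in b and a[key] == b[key])

def best_match(candidates: dict) -> tuple:
    """Return the pair of candidate names with the highest compatibility."""
    return max(combinations(candidates, 2),
               key=lambda pair: compatibility(candidates[pair[0]], candidates[pair[1]]))

people = {
    "ana":  {"early_riser": True,  "has_pets": False, "smoker": False},
    "ben":  {"early_riser": True,  "has_pets": False, "smoker": True},
    "cruz": {"early_riser": False, "has_pets": True,  "smoker": True},
}
pair = best_match(people)  # ("ana", "ben"): they share two of three preferences
```

A production system would weight preferences and match whole households rather than pairs, but the core is the same: a pairwise score plus a search over pairings.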

Within the hospitality sector, Selina offers a blended travel lodge, wellness and co-working platform geared toward creating community for travelers and remote workers, complete with high-tech beachside and jungle-side office spaces. Meanwhile, experience-driven lifestyle hotel company Life House* connects guests through onsite locally rooted food and beverage destinations and direct app-based social introductions to other travelers.

Modern life requires flexibility

Life can be unpredictable, especially for young people who tend to change jobs frequently. Short job tenures are especially common within the growing gig economy workforce. People who don’t know how long their jobs will last don’t want to be burdened with long-term lease commitments or furniture that’s nearly as expensive to move as it is to buy.

The next frontier in real estate technology is as boundless as it is exciting.

Companies like Feather, Fernish and CasaOne rent furniture to people seeking flexibility in their living environments. Among consumers ready to buy their homes but looking for some extra help, Knock, created by Trulia founding team members and which recently raised a $400 million Series B, provides an end-to-end platform that enables home buyers to buy a new home before selling their old one. Also emphasizing flexibility, Opendoor, valued at more than $2 billion, pioneered “instant offers” for homeowners looking to sell their homes quickly, leveraging algorithms to determine how much specific houses are worth.

It’s not just residents who seek flexible leases; many companies do as well, particularly those accommodating distributed employees or experiencing periods of uncertainty or rapid growth. To enable flexibility, several commercial real estate technology companies have developed platforms that balance pricing, capacity and demand.

Knotel, a “headquarters as a service” for companies with 100-300 employees, builds out and manages office spaces at lower risk and with more flexibility than is typically possible through commercial real estate leases, enabling tenants to quickly add or shrink office space as needed. WeWork allows members to pay only for the time periods when they come in to work. Taking flexibility to an even greater level, Breather lets workers rent rooms by the hour, day or month.

The next frontier in real estate technology is as boundless as it is exciting. A whole new generation of startups is designing innovative solutions from the ground up to address our growing demands for affordability, community and flexibility. In the process, they’re fundamentally reimagining how we live, work and play by transforming the modern workplace, leisure space and even our definition of home. We look forward to seeing — and experiencing — what lies ahead.

*Trinity Ventures portfolio company.


The “splinternet” is already here

There is no question that the arrival of a fragmented and divided internet is now upon us. The “splinternet,” where cyberspace is controlled and regulated by different countries, is no longer just a concept but a dangerous reality. With the future of the “World Wide Web” at stake, governments and advocates of a free and open internet have an obligation to stem the tide of authoritarian regimes isolating the web to control information and their populations.

Both China and Russia have been rapidly increasing their internet oversight, leading to growing digital authoritarianism. Earlier this month, Russia announced a plan to disconnect the entire country from the internet to simulate an all-out cyberwar. And last month, China issued two new censorship rules, identifying 100 new categories of banned content and implementing mandatory reviews of all content posted on short video platforms.

While China and Russia may be two of the biggest internet disruptors, they are by no means the only ones. Cuban, Iranian and even Turkish politicians have begun pushing “information sovereignty,” a euphemism for replacing services provided by western internet companies with their own more limited but easier to control products. And a 2017 study found that numerous countries, including Saudi Arabia, Syria and Yemen have engaged in “substantial politically motivated filtering.”

This digital control has also spread beyond authoritarian regimes. Increasingly, there are more attempts to keep foreign nationals off certain web properties.

For example, digital content available to U.K. citizens via the BBC’s iPlayer is becoming increasingly unavailable to Germans. South Korea filters, censors and blocks news agencies belonging to North Korea. Never have so many governments, authoritarian and democratic, actively blocked internet access to their own nationals.

The consequences of the splinternet and digital authoritarianism stretch far beyond the populations of these individual countries.

Back in 2016, U.S. trade officials accused China’s Great Firewall of creating what foreign internet executives defined as a trade barrier. Through controlling the rules of the internet, the Chinese government has nurtured a trio of domestic internet giants, known as BAT (Baidu, Alibaba and Tencent), who are all in lock step with the government’s ultra-strict regime.

The super-apps that these internet giants produce, such as WeChat, are built for censorship. The result? According to former Google CEO Eric Schmidt, “the Chinese Firewall will lead to two distinct internets. The U.S. will dominate the western internet and China will dominate the internet for all of Asia.”

Surprisingly, U.S. companies are helping to facilitate this splinternet.

Google had spent years attempting to break into the Chinese market but had difficulty coexisting with the Chinese government’s strict censorship and collection of data, so much so that in March 2010, Google chose to pull its search engine and other services out of China. By 2019, however, Google had completely changed its tune.

Google has made censorship allowances through an entirely different Chinese internet platform called Project Dragonfly. Dragonfly is a censored version of Google’s Western search platform, with the key difference being that it blocks results for sensitive public queries.


The Universal Declaration of Human Rights states that “people have the right to seek, receive, and impart information and ideas through any media and regardless of frontiers.”

Drafted in 1948, this declaration reflects the sentiment felt following World War II, when people worked to prevent authoritarian propaganda and censorship from ever taking hold the way it once did. And, while these words were written over 70 years ago, well before the age of the internet, this declaration challenges the very concept of the splinternet and the undemocratic digital boundaries we see developing today.

As the web becomes more splintered and information more controlled across the globe, we risk the deterioration of democratic systems, the corruption of free markets and further cyber-misinformation campaigns. We must act now to save a free and open internet from censorship and international maneuvering before history repeats itself.


The Ultimate Solution

Similar to the UDHR drafted in 1948, in 2016 the United Nations declared “online freedom” to be a fundamental human right that must be protected. While not legally binding, the motion passed by consensus, giving the UN limited power to endorse an open internet system. By selectively applying pressure on non-compliant governments, the UN can now push to enforce digital human rights standards.

The first step would be to implement a transparent monitoring system to ensure that the full resources of the internet, and the ability to operate on it, are easily accessible to all citizens. Countries such as North Korea, China, Iran and Syria, which block websites and filter email and social media communication, would be encouraged to improve through the imposition of incentives and consequences.

All countries would be ranked on multiple positive factors, including open standards, lack of censorship and low barriers to internet entry. A three-tier open internet ranking system would divide all nations into Free, Partly Free or Not Free. The ultimate goal would be for all countries to gradually migrate toward the Free category, giving all citizens full access to information across the web, equally free and open, without constraints.
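As a sketch of how such a ranking might be computed (the factor scores and tier thresholds here are invented for illustration, not proposed UN policy), each country gets a 0-100 score per factor and the average decides its tier:

```python
def tier(score: int) -> str:
    """Bucket an openness score (0-100) into the three-tier ranking."""
    if score >= 70:
        return "Free"
    if score >= 40:
        return "Partly Free"
    return "Not Free"

def rank(countries: dict) -> dict:
    # Each country lists one 0-100 score per factor:
    # open standards, lack of censorship, low barriers to entry.
    return {name: tier(sum(scores) // len(scores))
            for name, scores in countries.items()}

sample = {
    "Country A": (90, 85, 80),
    "Country B": (60, 40, 50),
    "Country C": (20, 10, 30),
}
tiers = rank(sample)  # A: Free, B: Partly Free, C: Not Free
```

The hard part in practice is not the arithmetic but agreeing on how each factor is measured; any real monitoring system would live or die on the transparency of those inputs.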

The second step would be for the UN to align itself much more closely with the largest Western internet companies. Together they could jointly assemble detailed reports on each government’s censorship creep and overreach. The global tech companies are keenly aware of which countries are applying pressure for censorship and the restriction of digital speech. Together, the UN and global tech firms would present a formidable front, protecting the citizens of the world. Every individual in every country deserves to know what is truly happening in the world.

Free countries, with an open internet and no undue regulation or censorship, would have a clear path to tremendous economic prosperity. Countries that remain in the Not Free tier, attempting to impose their self-serving political and social values, would find themselves completely isolated, visibly violating digital human rights norms.

This is not a hollow threat. A completely closed off splinternet will inevitably lead a country to isolation, low growth rates, and stagnation.