Get your early-bird tickets to TC Sessions: Enterprise 2019

In a world where the enterprise market hovers around $500 billion in annual sales, is it any wonder that hundreds of enterprise startups launch into that fiercely competitive arena every year? It’s a thrilling, roller-coaster ride that’s seen it all: serious success, wild wealth and rapid failure.

That’s why we’re excited to host our inaugural TC Sessions: Enterprise 2019 event on September 5 at the Yerba Buena Center for the Arts in San Francisco. Like TechCrunch’s other TC Sessions, this day-long intensive goes deep on one specific topic. Early-bird tickets are on sale now for $395 — and we have special pricing for MBA students and groups, too. Buy your tickets now and save.

Bonus ROI: For every ticket you buy to TC Sessions: Enterprise, we’ll register you for a free Expo Only pass to TechCrunch Disrupt SF on October 2-4. Sweet!

Expect a full day of programming featuring the people making it happen in enterprise today. We’re talking founders and leaders from established and emerging companies, plus proven enterprise-focused VCs. Discussions led by TechCrunch’s editors, including Connie Loizos, Frederic Lardinois and Ron Miller, will explore machine learning and AI, intelligent marketing automation and the inevitability of the cloud. We’ll even touch on topics like quantum computing and blockchain.

Tired of the hype and curious about what it really takes to build a successful enterprise company? We’ve got you. You’ll hear from proven serial entrepreneurs who’ve been there, done that, and will share what they might like to build next.

We’re building the agenda of speakers, panelists and demos, and we have a limited number of speaking opportunities available. If you have someone in mind, submit your recommendation here.

This event is perfect for enterprise-minded founders, investors, MBA students, engineers, CTOs and CIOs. If you need four or more tickets, take advantage of our group rate and save 15% over the early-bird price when you buy in bulk. Are you an MBA student? Save your dough — buy a student ticket for $245.

TC Sessions: Enterprise 2019 takes place September 5 in San Francisco. Join us for actionable insights and world-class networking. Buy your early-bird tickets today.

Is your company interested in sponsoring or exhibiting at TC Sessions: Enterprise 2019? Contact our sponsorship sales team by filling out this form.

A.I.-based travel app Hopper expands price monitoring to hotels worldwide

Hopper, the handy travel app that uses A.I. to predict airfare pricing, is today expanding its service to include hotels. Like airfare, hotel pricing is also dynamic — it can fluctuate with traveler demand. During busy times, prices are higher. And when a hotel has rooms to fill, the prices drop. Now, Hopper is bringing its same prediction technology to over 270,000 hotels in more than 200 countries and territories worldwide.

The hotels, which can be booked directly within the Hopper app itself, span 1,600 major cities.

The company first began testing hotel prediction algorithms back in October 2017, but only in New York. At the time, Hopper CEO Frederic Lalonde explained how online hotel booking differed from airfare. With plane tickets, people are usually just in search of the best price. But with hotels, other factors come into play.

That’s why the new feature lets travelers narrow searches not only by city, but also by neighborhood and points of interest. Hotels can additionally be filtered by neighborhood, rating, price and amenities.

Hopper says people who book at the right time will save $62 per night, on average, off peak rates. But when people go shopping for hotels, they usually only spot-check prices or wait until the last minute to book. To get a better deal, the company suggests you set the app to monitor hotel prices and send push notifications when prices drop or when private rates become available.
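For a sense of what that monitoring entails, here is a minimal sketch of the kind of watch-and-notify loop the feature implies. Every name in it is hypothetical; Hopper has not published its implementation.

# Minimal sketch of a price-watch loop; all names are hypothetical,
# since Hopper has not published its implementation.
from dataclasses import dataclass

@dataclass
class HotelWatch:
    hotel_id: str
    last_seen_rate: float  # nightly rate in USD

def check_watches(watches, fetch_current_rate, notify, drop_threshold=0.05):
    """Alert the user when a watched hotel's nightly rate drops meaningfully."""
    for watch in watches:
        current = fetch_current_rate(watch.hotel_id)
        # Only alert on a drop bigger than the threshold (5% here),
        # so tiny fluctuations don't generate noisy notifications.
        if current < watch.last_seen_rate * (1 - drop_threshold):
            notify(watch.hotel_id, watch.last_seen_rate, current)
        watch.last_seen_rate = current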

These private rates can include package deals, member rates, geofenced rates, closed user rates, pre-purchased inventory blocks and more. Most people don’t know if these sorts of rates are available to them, or if they would qualify. Hopper says it will monitor these private rates, too, then alert you if you’re able to book a private deal.

In addition, the app offers a “Watch a Hotel” feature that will forecast and track the prices of specific hotels. That way, you can choose a handful of favorite places, then book when the prices become affordable to you — without having to waste time comparison shopping.

Over time, Hopper will learn more about your hotel preferences — for example, if you always book 4- or 5-star hotels or those with pools. It will then use this understanding combined with your usage of the app to make better recommendations in the future.

The company says it’s already seeing good results from using these sorts of A.I.-based recommendations for airfare: 25% of bookings are now the result of people booking a trip they weren’t planning, but that Hopper knew to suggest.

Hopper will take a commission that varies by property, but that’s never more than 10% — anything higher is passed along to the customer in the form of savings. The company is using a number of different hotel data suppliers as well as working with hotels directly on the new feature, it says.

The expansion comes at a time when Hopper’s business is growing. Last fall, the company raised another $100 million, valuing its business at $780 million. Last year, it booked over $1 billion in sales, and users are today tracking $18 billion worth of flights and hotels per year in its app, which has passed 40 million downloads.

Since the beginning of 2019, Hopper’s sales are up 100% and conversion is up 80%, it says.

The hotel booking feature is live today in the Hopper app for iOS. It will roll out to Android in the next few weeks.

California has let two Chinese startups offer robotaxis to the public

China’s driverless cars are coming for passengers in the United States. AutoX and Pony.ai just became the first Chinese companies allowed to offer fully self-driving cars in the state of California, according to notices posted on the website of the California Public Utilities Commission this week.

Started in 2016 by Princeton University professor Jianxiong Xiao, called “Professor X” by his students, AutoX is now one of China’s most well-funded autonomous driving startups alongside Pony.ai, which was co-founded in 2016 by two former executives at Baidu’s self-driving department.

AutoX said in January that it was in talks with investors to raise a lofty $100 million. Pony.ai had banked at least $214 million in funding as of April.

While more than 62 companies hold permits to test autonomous vehicles in California, very few are actually allowed to transport people in those cars. Zoox passed a milestone when it received the first green light to provide robotaxi services in the state six months ago. Now AutoX and Pony.ai have joined the exclusive club, bringing the number of participants in the pilot program to three.


Screenshot from the California Public Utilities Commission website

There are a few catches, though. The type of permission granted to the three companies is for “Drivered AV Passenger Service,” which forbids companies from charging passengers for test rides and requires a safety driver behind the wheel. No entity has so far been permitted to run a truly driverless passenger service in California, a sign that regulators aren’t quite ready to let tech companies transport the public without human oversight.

AutoX, which is already using self-driving vehicles to deliver groceries in San Jose, is getting a head start by introducing California’s first robotaxi service. People living in north San Jose or Santa Clara can now apply to join its early rider program and give feedback, according to instructions on its website. A spokesperson for Pony.ai told TechCrunch that the company also began offering its robotaxi service as soon as it received the permit.

Alphabet’s Waymo launched a passenger service in Phoenix last December. Like California, Arizona demands a trained test driver to assist with operations if needed. While Waymo is allowed to charge passengers, it can only ferry a vetted group of people, so the program isn’t available to everyone.

These constraints seem sensible given the legal and ethical concerns raised by critics. Last year, Uber’s self-driving test vehicle struck and killed a woman in Tempe, Arizona, prompting the transportation giant to suspend its test drives. The incident has become a cautionary tale for startups in the field. Take Momenta, the first Chinese autonomous driving startup to pass a $1 billion valuation: its CEO requires all executives to ride a minimum number of autonomous miles themselves, so that management puts passenger safety first.

Chinese startups covet the California license for a number of reasons. First, self-driving cars are by nature data-hungry. Only a small handful of cities worldwide allow robotaxis on public roads, so it always helps to collect more mileage wherever permissible.

Many of these Chinese companies have also set up research and development centers in California to tap the region’s tech talent. Pony.ai, for example, deploys R&D staff and offices across Silicon Valley, Beijing and Guangzhou. AutoX opened an R&D center in Shenzhen earlier this year but still keeps development teams in San Jose.

Self-driving car startup Argo AI is giving researchers free access to its HD maps

Argo AI is releasing curated data along with high-definition maps to researchers for free, becoming the latest company in the autonomous vehicle industry to open-source some of the information it has captured while developing and testing self-driving cars.

The aim, the Ford Motor-backed company says, is to give academic researchers the ability to study the impact that HD maps have on perception and forecasting, such as identifying and tracking objects on the road, and predicting where those objects will move seconds into the future. In short, Argo sees this as a way to encourage more research and hopefully breakthroughs in autonomous vehicle technology.

Argo has branded this collection of data and maps Argoverse, which is being released for free. Argo isn’t releasing everything it has. This is a curated data set, after all. Still, it’s a large enough batch to give researchers something to dig into and model.

This “Argoverse” contains a selection of data, including two HD maps with lane centerlines, traffic direction and ground height collected on roads in Pittsburgh and in Miami.

Argoverse also includes a motion forecasting data set with 3D tracking annotation for 113 scenes and more than 300,000 vehicle trajectories, including unprotected left turns and lane changes, and provides a benchmark to promote testing, teaching and learning, according to the website. There is also an API to connect the map data with sensor information.
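For researchers, getting at that map data is a few lines of Python. The sketch below follows the argoverse-api package that accompanies the release; the exact class and method names reflect our reading of the project’s published documentation and should be treated as assumptions.

# Sketch of querying Argoverse map data via the accompanying argoverse-api
# package; method names follow its published docs and are assumptions here.
# City codes in the release are "PIT" (Pittsburgh) and "MIA" (Miami).
from argoverse.map_representation.map_api import ArgoverseMap

avm = ArgoverseMap()
# Find lane segments near an (x, y) point in city coordinates...
lane_ids = avm.get_lane_ids_in_xy_bbox(2100.0, 1000.0, "PIT")
# ...and pull the centerline polyline for the first one.
centerline = avm.get_lane_segment_centerline(lane_ids[0], "PIT")
print(len(lane_ids), centerline.shape)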

Argo isn’t the first autonomous vehicle company to open-source its data or other tools. But the company says this batch of data is unique because it includes HD maps, which are considered one of the critical components for self-driving vehicles.

Much of the attention in the world of autonomous vehicles is on the “brain” of the car. But they also need maps to deliver information that helps these vehicles, whether they’re operating in a warehouse or on public roads, find the safest and most efficient route possible.

For researchers, access to these kinds of maps can be used to develop prediction and tracking methods.

Earlier this week, Cruise announced it would share a data visualization tool it created called Webviz. The web-based application takes raw data collected from all the sensors on a robot — a category that includes autonomous vehicles — and turns that binary code into visuals. Cruise’s tool lets users configure different layouts of panels, each one displaying information like text logs, 2D charts and 3D depictions of the AV’s environment.

And last year, Aptiv released nuScenes, a large-scale data set from an autonomous vehicle sensor suite.

Canada’s True North conference is not your typical tech event

From the venue and the flashy event website, Waterloo, Ontario’s True North conference (in its second year) doesn’t seem all that distinct from a laundry list of other major tech events that take place each year across North America. But from the moment its main stage programming kicked off on the first day, it was clear this wasn’t your typical gathering place for the tech industry faithful.

The main stage track kicked off with Communitech CEO Iain Klugman. The event is produced by Communitech, an entrepreneurial support and resource organization founded in 1997 to foster the Waterloo region’s technology industry. Communitech sprang out of BlackBerry and the University of Waterloo, and the world-class innovation community that surrounds both.

Klugman, a former communications executive and current board member at a number of Communitech-fostered startups and academic institutions, sounded a cautionary and urgent note that continued throughout the day.

Tech conferences, in general, tend to dwell on optimism and enthusiasm, with brief forays into dark alleys of negative consequences. Not this one.

Communitech CEO Iain Klugman speaking at True North 2019 in Waterloo.

Klugman’s talk touched on opportunity, but it was the opportunity for a group of peers with influence in the technology industry to discuss how they should work together “to set things right.” Last year’s event had a similar outcome, resulting in the ‘Tech for Good Declaration,’ which True North describes as “the Canadian tech industry’s living document” and which includes a number of principles designed to help guide technology development with community good in mind.

Rather than changing focus for year two, True North’s organizers seem to have doubled down: Klugman’s opening talk included references to surveillance capitalism and breaches of trust, and included this cheerful analogy: “Technology is like fuel. It can warm our homes or it can burn them to the ground, so we decide which one it will do.”

As a whole, the event is about the “tough choices” faced by the collective “we” of the tech industry, according to Klugman.

True North’s official keynote perfectly took the baton from the intro, as New York Times columnist and longtime political commentator Thomas Friedman took the stage. Friedman, a somewhat controversial figure owing to some of his past political stances, launched into a talk informed by his most recent book, Thank You for Being Late, framing this moment in human history as the intersection of three different forces accelerating in a ‘nonlinear manner’ all at once, including technological development outpacing humanity’s ability to adapt to those changes.

NYT columnist and author Thomas Friedman at True North 2019 in Waterloo.

Friedman’s talk ended with him positing that humans spend most of their time today in the essentially “god-less” realm of “cyberspace,” a realm “where we’re all connected but no one’s in charge,” while at the same time we’ve achieved a greater ability than ever to act with god-like power to control and manipulate our environment. He chided the essential disconnect between powerful forces that act with supreme mastery over technology but no grounding in sociopolitical understanding (specifically naming Mark Zuckerberg) and those who have the inverse problem (the U.S. Congress, in Friedman’s view).

Overall, Friedman’s views are grounded in what he describes as a place of optimism. But the takeaway is more that humanity is currently at a state where it’s overwhelmed on a number of fronts and out of its depth in terms of having a capacity to cope.

In the afternoon, Robert Mazur (longtime undercover agent and the subject of the biopic The Infiltrator) discussed his experience tracking down and prosecuting money launderers operating more or less with the blessing of large financial institutions, precisely because those institutions were built around incentives that encouraged the behavior and lacked protections to prevent bad actors from taking advantage. Mazur further argued that the current telecom industry structure actually makes it easier than ever to launder large sums relatively unchecked. In essence, it was a warning to be mindful of how the products you build can be exploited by the most malicious actors.

Former Information and Privacy Commissioner for Ontario and creator of the concept of ‘Privacy by Design’ Ann Cavoukian came next, decrying the current state of data “centralized in huge honeypots of information,” including Google (her example).

Former Ontario Information and Privacy Commissioner Ann Cavoukian.

This centralization, she noted, is a huge risk in terms of presenting opportunities for tracking, misuse, leaks and more. It’s “taking away our agency as individuals,” she said, and the solution is moving to true decentralization of data.

“Privacy […] is freedom, and is about you making decisions relating to your personal information; not the state, not corporations – you,” she said. “It’s not about secrecy, it’s about control [and] privacy is a necessary condition for societal wellbeing.”

Cavoukian wrapped her talk by noting the sheer volume of privacy breaches that have leaked consumer information to date, and the importance of encryption in keeping that information safe. Overall, her talk was a blueprint for tech companies looking to incorporate data privacy and good stewardship into the DNA of their products from day one.

Kelsey Leonard, Tribal Co-Lead on the Mid-Atlantic Regional Planning Body of the U.S. National Ocean Council, gave a talk on the implications of digital rights and the continued digital divide as they pertain to Indigenous communities globally. Leonard pointed out that Indigenous nations in North America are the least connected in the world, something she said contributes to ongoing colonialism and can even potentially contribute to “ongoing genocide of Indigenous peoples.”

Kelsey Leonard, advocate for Indigenous Data Governance and Sovereignty, speaks at True North 2019 in Waterloo.

Indigenous people are also systematically disenfranchised from data ownership and data control by virtue of being left out of advanced STEM education and formalized degrees, she said. Leonard also noted that platforms reinforce what she calls “digital colonialism”: Indigenous names are often flagged as fake by algorithms designed to enforce real-name policies, and Indigenous languages are often mistranslated (specifically as Estonian, she said).

This worsens existing erasure of Indigenous languages and culture; Leonard said a language is lost every two weeks on average, according to recent research. What’s required, then, is to add protection measures specific to digital platforms to help counter this institutional digital colonization and enforce Indigenous data sovereignty.

To close day one, Recode founder and legendary Silicon Valley reporter Kara Swisher summarized a lot of her recent work as a New York Times columnist. Basically, that means she called on the industry to stop messing around and start fixing stuff.

Kara Swisher speaks at the True North 2019 conference in Waterloo, Ontario.

Swisher said tech is coming to a “reckoning” in terms of media coverage, after the overwhelmingly positive coverage it has received over the past many years. She emphasized that we’re only at the beginning of the impact technology will have on society, and laid out a number of current areas of innovation and investment that will continue to upset societal norms, including autonomous driving, artificial intelligence and more.

Regarding media specifically, Swisher noted that she marked a significant shift when Buzzfeed started A/B testing to amplify and extend the attention-capture possible around specific “news” items, citing the famous Katy Perry Left Shark incident of 2015. This, combined with our “continuous partial attention,” tied to our inability to totally disengage from our smartphones anymore, is changing how we think and work in the world, Swisher said.

She added that, today, many of her biggest new concerns are around AI, and that “everything that can be digitized will be digitized.” Not only that, she continued, but “almost everything can be,” which will be massively disruptive to people’s lives, with effects including a future where most people will hold a large number of different jobs over the course of their lives, requiring continuous education and retraining. “We have to think really hard about what good AI is and what problematic AI is,” she said.

Thomson Reuters Foundation CEO Antonio Zappulla discussed using technology to help fight human trafficking at True North 2019 in Waterloo.

Across other stages, too, the themes of technology’s dangers and how to avert them prevailed across the programming. Take Some Risk founder Duane Brown gave a talk on opting out of the always-connected lifestyle to avoid becoming “digitally exhausted”; MedStack founder and CEO Balaji Gopalan talked about the risks inherent in dealing with private patient data in healthcare; other topics included sustainable energy for Africa, using big data to counter human trafficking and ensuring we steer away from encouraging consumerization in this generation of connected kids.

The event’s central theme was the deceptively simple (and frankly over-uttered) phrase “tech for good,” but the programming and content revealed a level of sophistication and sincerity on the topic that exceeds the low bar often found in tech industry marketing materials and staged events. Overall, it felt introspective, contrite and contemplative – a self-reflection from a community genuinely committed to shoring up its ethical shortcomings. In other words, refreshing.

Humanising Autonomy pulls in $5M to help self-driving cars keep an eye on pedestrians

Pretty much everything about making a self-driving car is difficult, but among the most difficult parts is making sure the vehicles know what pedestrians are doing — and what they’re about to do. Humanising Autonomy specializes in exactly this, and hopes to become a ubiquitous part of people-focused computer vision systems worldwide.

The company has raised a $5.3 million seed round from an international group of investors on the strength of its AI system, which it claims outperforms humans and works on images from practically any camera you might find in a car these days.

HA’s tech is a set of machine learning modules trained to identify different pedestrian behaviors — is this person about to step into the street? Are they paying attention? Have they made eye contact with the driver? Are they on the phone? Things like that.

The company credits the robustness of its models to two main things. First, the variety of its data sources.

“Since day one [we’ve] collected data from any type of source — CCTV cameras, dash cams of all resolutions, but also autonomous vehicle sensors,” said co-founder and CEO Maya Pindeus. “We’ve also built data partnerships and collaborated with different institutions, so we’ve been able to build a robust data set across different cities with different camera types, different resolutions and so on. That’s really benefited the system, so it works in nighttime, rainy Michigan situations, etc.”

Notably, their models rely only on RGB data, forgoing any depth information that might come from lidar, another common sensor type. But Pindeus said that type of data isn’t by any means incompatible; it just isn’t as plentiful or relevant as real-world, visible-light footage.

In particular, HA was careful to acquire and analyze footage of accidents, since these are especially informative cases where AVs or human drivers failed to read pedestrian intentions, or vice versa.

The second advantage Pindeus claimed is the modular nature of the models the company has created. There isn’t one single “what is that pedestrian doing” model, but a set of them that can be individually selected and tuned according to the autonomous agent’s or hardware’s needs.

“For instance, if you want to know if someone is distracted as they’re crossing the street. There’s a lot of things that we do as humans to tell if someone is distracted,” she said. “We have all these different modules that kind of come together to predict whether someone’s distracted, at risk, etc. This allows us to tune it to different environments, for instance London and Tokyo – people behave differently in different environments.”

“The other thing is processing requirements; autonomous vehicles have a very strong GPU requirement,” she continued. “But because we build in these modules, we can adapt it to different processing requirements. Our software will run on a standard GPU when we integrate with level 4 or 5 vehicles, but then we work with aftermarket, retrofitting applications that don’t have as much power available, but the models still work with that. So we can also work across levels of automation.”
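To make the modular idea concrete, here is a hypothetical sketch of how such a stack might compose. Humanising Autonomy has not published its architecture, so all module names and interfaces below are invented for illustration.

# Hypothetical sketch of a modular behavior-prediction stack; Humanising
# Autonomy's actual architecture is not public, so all names are invented.
from typing import Any, Callable, Dict, List

# Each module scores one behavior (e.g. "distracted") from camera frames.
BehaviorModule = Callable[[List[Any]], float]

def build_pipeline(modules: Dict[str, BehaviorModule], enabled: List[str]):
    """Compose only the modules a deployment needs, e.g. a smaller set
    on low-power aftermarket hardware than on a level 4 vehicle."""
    active = {name: modules[name] for name in enabled}

    def predict(frames: List[Any]) -> Dict[str, float]:
        # Returns e.g. {"distracted": 0.82, "about_to_cross": 0.35}
        return {name: module(frames) for name, module in active.items()}

    return predict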

The idea is that it makes little sense to aim only for the top levels of autonomy when really there are almost no such cars on the road, and mass deployment may not happen for years. In the meantime, however, there are plenty of opportunities in the sensing stack for a system that can simply tell the driver that there’s a danger behind the car, or activate automatic emergency braking a second earlier than existing systems.

While there are lots of papers published about detecting pedestrian behavior or predicting what a person in an image is going to do, there are few companies working specifically on that task. A full stack sensing company focusing on lidar and RGB cameras needs to complete dozens or hundreds of tasks, depending on how you define them: object characterizations and tracking, watching for signs, monitoring nearby and distant cars, and so on. It may be simpler for them and for manufacturers to license HA’s functioning and highly specific solution rather than build their own or rely on more generalized object tracking.

“There are also opportunities adjacent to autonomous vehicles,” Pindeus pointed out. Warehouses and manufacturing facilities use robots and other autonomous machines that would work better if they knew what workers around them were doing. Here the modular nature of the HA system works in its favor again — retraining only the parts that need to be retrained is a smaller task than building a new system from scratch.

Currently the company is working with mobility providers in Europe, the U.S., and Japan, including Daimler Mercedes Benz and Airbus. It’s got a few case studies in the works to show how its system can help in a variety of situations, from warning vehicles and pedestrians about each other at popular pedestrian crossings to improving path planning by autonomous vehicles on the road. The system can also look over reams of past footage and produce risk assessments of an area or time of day given the number and behaviors of pedestrians there.

The $5.3M seed round, led by Anthemis with participation from Japan’s Global Brain, Germany’s Amplifier and Silicon Valley’s Synapse Partners, will mostly be dedicated to commercializing the product, Pindeus said.

“The tech is ready, now it’s about getting it into as many stacks as possible, and strengthening those tier 1 relationships,” she said.

Obviously it’s a rich field to enter, but still quite a new one. The tech may be ready to deploy but the industry won’t stand still, so you can be sure that Humanising Autonomy will move with it.

TextIQ, a machine learning platform for parsing sensitive corporate data, raises $12.6M

TextIQ, a machine learning system that parses and understands sensitive corporate data, has raised $12.6 million in Series A funding led by FirstMark Capital, with participation from Sierra Ventures.

TextIQ started as co-founder Apoorv Agarwal’s Columbia thesis project, titled “Social Network Extraction From Text.” The algorithm he built was able to read a novel, like Jane Austen’s Emma, and understand the social hierarchy and interactions between characters.

This people-centric approach to parsing unstructured data eventually became the kernel of TextIQ, which helps corporations find what they’re looking for in a sea of unstructured, and highly sensitive, data.
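As a rough illustration of the general technique (TextIQ’s own system is proprietary), a person-centric graph can be built from plain text with off-the-shelf tools: run named-entity recognition, then link people who appear in the same sentence.

# Rough illustration of social network extraction from text using
# off-the-shelf tools (spaCy + networkx); TextIQ's system is proprietary.
# Setup: pip install spacy networkx && python -m spacy download en_core_web_sm
import itertools
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")

def social_graph(text: str) -> nx.Graph:
    graph = nx.Graph()
    for sent in nlp(text).sents:
        people = {ent.text for ent in sent.ents if ent.label_ == "PERSON"}
        # Count each same-sentence co-occurrence as one interaction.
        for a, b in itertools.combinations(sorted(people), 2):
            prior = graph.get_edge_data(a, b, default={"weight": 0})["weight"]
            graph.add_edge(a, b, weight=prior + 1)
    return graph

g = social_graph("Emma spoke to Mr. Knightley. Harriet admired Emma.")
print(g.edges(data=True))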

The platform started out as a tool used by corporate legal teams. Lawyers often have to manually look through troves of documents and conversations (text messages, emails, Slack, etc.) to find specific evidence or information. Even using search, these teams spend loads of time and resources looking through the search results, which usually aren’t as accurate as they should be.

“The status quo for this is to use search terms and hire hundreds of humans, if not thousands, to look for things that match their search terms,” said Agarwal. “It’s super expensive, and it can take months to go through millions of documents. And it’s still risky, because they could be missing sensitive information. Compared to the status quo, TextIQ is not only cheaper and faster but, most interestingly, it’s much more accurate.”

Following success with legal teams, TextIQ expanded into HR/compliance, giving companies the ability to retrieve sensitive information about internal compliance issues without a manual search. Because TextIQ understands who a person is relative to the rest of the organization, and learns that organization’s ‘language’, it can more thoroughly extract what’s relevant to the inquiry from all that unstructured data in Slack, email, etc.

More recently, in the wake of GDPR, TextIQ has expanded its product suite to work in the privacy realm. When a company is asked by a customer to get access to all their data, or to be forgotten, the process can take an enormous amount of resources. Even then, bits of data might fall through the cracks.

For example, if a customer emailed Customer Service years ago, that might not come up in the company’s manual search efforts to find all of that customer’s data. But since TextIQ understands this unstructured data with a person-centric approach, that email wouldn’t slip by its system, according to Agarwal.

Given the sensitivity of the data, TextIQ functions behind a corporation’s firewall, meaning that TextIQ simply provides the software to parse the data rather than taking on any liability for the data itself. In other words, the technology comes to the data, and not the other way around.

TextIQ operates on a tiered subscription model, and offers the product for a fraction of the value it provides in savings when clients switch over from a manual search. The company declined to share further details on pricing.

Former Apple and Oracle General Counsel Dan Cooperman, former Verizon General Counsel Randal Milch, former Baxter International Global General Counsel Marla Persky, and former Nationwide Insurance Chief Legal and Governance Officer Patricia Hatler are on the advisory board for TextIQ.

The company plans to go on a hiring spree following the new funding, looking to fill positions in R&D, engineering, product development, finance and sales. Co-founder and COO Omar Haroun added that the company achieved profitability in its first quarter on the market and has been profitable for eight consecutive quarters.

Tally’s Jason Brown on fintech’s first debt roboadvisor and an automated financial future

Yesterday, Tally, the startup looking to automate consumers’ financial lives, announced it had raised a $50 million Series C round led by Andreessen Horowitz, with participation from Valley heavy hitters Kleiner Perkins, Shasta Ventures, Cowboy Ventures and Sway Ventures.

On the back of the announcement, TechCrunch’s fintech contributor Gregg Schoenberg sat down with Tally’s founder and CEO Jason Brown to discuss the round, Tally’s growth strategy and the company’s vision for an automated financial future.

Gregg Schoenberg: I never like to congratulate people when they raise a big load of capital, because if anything, the pressure is on even more. But just to level set real quickly, are there any numbers you can share that Andreessen Horowitz and the other investors saw that underscored your traction?

Jason Brown: So I agree with you. Internally, the metaphor I use is that it’s kind of like going on a long road trip where you’re stopping in the gas station to get more fuel so you can make it to your destination. You should really celebrate when you’re delivering value to customers.

Schoenberg: In terms of total credit card debt you’re managing, you were at $250mm towards the end of last year.

Brown: Yes. Now, we’re getting close to $400mm.

Schoenberg: And the savings vehicle – it’s new and totally free?

Brown: Yes, it’s completely free and, just to recap, it takes 35-45 seconds to set up, it automates the process of setting money aside every week and it gives you points. It’s still in beta, but we’re getting close to the end of beta, and have over 30,000 people on the waitlist.

AI is a non-technical term, right? I like to use the word automation because it means things are being done for you.

Schoenberg: With respect to the fundraise you just announced, the big takeaway I got was your aspiration to automate people’s entire financial lives. That’s big talk.

Brown: That is big talk.

Schoenberg: You obviously knew what you were doing when you decided to frame it that way. Where do you go from here? Obviously, credit card payments and the savings vehicle are good, but there are many other financial services out there that you’ll need to tackle.

Brown: Well, one of the key portions of the investment thesis for Andreessen Horowitz is actually what’s under the hood. So we actually took three years to build the underlying infrastructure to automate the “pay off my cards” job. And there are two fundamental layers to the tech.

There’s the “decide what’s best for me,” which addresses the complexity of ingesting data across your entire financial life, and being able to validate that it’s accurate and consistent, and then having algorithms that can make sense of it and figure out what’s best for you. The next layer is actually doing what’s best for you, which involves being able to move money around and lend money.

Cruise is sharing its data visualization tool with robotics geeks everywhere

Cruise is sharing a software platform with roboticists that was initially created to give its own engineers a better understanding of the petabytes of data generated every month from its fleet of autonomous vehicles.

The platform is a data visualization tool called Webviz, a web-based application aimed at anyone working in robotics, a field that includes autonomous vehicles. Researchers, students and engineers can now access the tool and get visual insight into their data by dragging and dropping a ROS bag file, a common format for recorded robotics data, into the browser.

Robots, and specifically autonomous vehicles, capture loads of data from various sensors like lidar, radar and cameras. The tool is supposed to make it easier to take that data and turn it from binary code into something visual. It lets users configure different layouts of panels, each one displaying information like text logs, 2D charts and 3D depictions of the AV’s environment.
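For anyone unfamiliar with the format Webviz consumes, here is a minimal example of inspecting a ROS bag with the standard rosbag Python tooling; the file path and topic name are placeholders.

# Minimal example of reading a ROS bag with the standard `rosbag` Python
# package (part of the ROS tooling); the path and topic are placeholders.
import rosbag

with rosbag.Bag("example.bag") as bag:
    # read_messages yields (topic, message, timestamp) tuples.
    for topic, msg, t in bag.read_messages(topics=["/lidar/points"]):
        print(topic, t.to_sec(), type(msg).__name__)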

The tool is a product of a Cruise hackathon that was held a couple of years ago. It was apparently such a hit that engineers at the self-driving car company now use it daily to calibrate lidar sensors, verify machine learning models and debug test rides. Webviz now has 1,000 monthly active users within the company, according to Cruise.

As engineers developed Webviz, they found it could have applications outside of Cruise. The company decided to open-source it as a general robotics data inspection tool. For this initial release, Cruise settled on a suite of general panels that any robotics developer can leverage to explore their own data with minimal setup, the company said in a Medium post Tuesday.


Prior to Webviz, Cruise engineers who wanted to turn binary AV data into something more visual would have to access a suite of tools within the ROS open-source community. While the system worked well, setting up the platform and then replicating it on a co-worker’s machine was a time-consuming effort. It also required manually positioning windows running separate tools for tasks such as logging messages or viewing camera images.

The tool created out of the hackathon essentially lowered the barrier to entry for engineers to explore and understand the company’s autonomous vehicle data.

Earlier this year, Cruise shared one piece of Webviz, called Worldview, a library that can turn data into 3D scenes. Cruise has also developed and open-sourced rosbag.js, a JavaScript library for reading ROS bag files. Both of these projects were developed as engineers created and built out Webviz, according to Cruise.

Cruise isn’t the only robotics-focused company (or autonomous vehicle company, for that matter) to open-source data sets or other tools. For instance, Aptiv last year released nuScenes, a large-scale data set from an autonomous vehicle sensor suite.

And it likely won’t be the last. Not only are moves like this part of the engineering culture, there are other benefits as well, including recruitment. Plus, by releasing the tool into the world, Cruise makes it likely that outsiders will build upon it and improve it, or use it to make engineering breakthroughs in robotics.

SafeAI raises $5M to develop and deploy autonomy for mining and construction vehicles

Startup SafeAI, powered by a founding team with experience across Apple, Ford and Caterpillar, is emerging from stealth today with a $5 million funding announcement. The company’s focus is on autonomous vehicle technology, designed and built specifically for heavy equipment used in the mining and construction industries.

Out of the gate, SafeAI is working with Doosan Bobcat, the South Korean equipment company that makes Bobcat loaders and excavators, and it’s already demonstrating and testing its software on a Bobcat skid loader at the SafeAI testing ground in San Jose. The startup believes that applying advances in autonomy and artificial intelligence to mining and construction can do a lot not only to make work sites safer, but also to increase efficiencies and boost productivity – building on what’s already been made possible with even the most basic levels of autonomy currently available on the market.

What SafeAI hopes to add is an underlying architecture that acts as a fully autonomous (Level 4 by SAE standards, so no human driver) platform for a variety of equipment. Said platform is designed with openness, modularity and upgradeability in mind to help ensure that its clients can take advantage of new advances in autonomy and AI as they become available.

“We have seen and experienced deploying autonomous mining trucks in production for the last 10 years,” explained SafeAI founder and CEO Bibhrajit Halder in an email. “Now it’s time to take it to the next level. At SafeAI, we are super excited to build the future of the autonomous mine by creating autonomous mining equipment that just works.”

While SafeAI doesn’t have a product in market yet, it is running its software on actual construction hardware at its proving ground, as mentioned, and it’s working with an as-yet-unnamed large global mining company to deploy SafeAI in a mining truck, according to Halder. The company’s plan is to focus its efforts entirely on deploying full, Level 4 autonomy as its first commercial product, with a vision of a future where multiple pieces of mining equipment work together “seamlessly,” the CEO says.

Today’s $5 million round was led by Autotech Ventures, with participation from Brick & Mortar Ventures, Embark Ventures and existing investor Monta Vista Capital.