Brexit blow for UK’s hopes of helping set AI rules in Europe

The UK’s hopes of retaining an influential role for its data protection agency in shaping European Union regulations post-Brexit — including helping to set any new Europe-wide rules around artificial intelligence — look well and truly dashed.

In a speech at the weekend in front of the International Federation for European Law, the EU’s chief Brexit negotiator, Michel Barnier, shot down the notion of anything other than a so-called ‘adequacy decision’ being on the table for the UK after it exits the bloc.

If granted, an adequacy decision is an EU mechanism for enabling citizens’ personal data to more easily flow from the bloc to third countries — as the UK will be after Brexit.

Such decisions are only granted by the European Commission after a review of a third country’s privacy standards, intended to determine that they offer protections essentially equivalent to EU rules.

But the mechanism does not allow for the third country to be involved, in any shape or form, in discussions around forming and shaping the EU’s rules themselves. So, in the UK’s case, the country would go from having a seat at the rule-making table to being shut out of the process entirely — at a time when the EU is setting the global agenda on digital regulation.

“The United Kingdom decided to leave our harmonised system of decision-making and enforcement. It must respect the fact that the European Union will continue to work on the basis of this system, which has allowed us to build a single market, and which allows us to deepen our single market in response to new challenges,” said Barnier in Lisbon on Saturday.

“And, as indicated in the European Council guidelines, the UK must understand that the only possibility for the EU to protect personal data is through an adequacy decision. It is one thing to be inside the Union, and another to be outside.”

“Brexit is not, and never will be, in the interest of EU businesses,” he added. “And it will especially run counter to the interests of our businesses if we abandon our decision-making autonomy. This autonomy allows us to set standards for the whole of the EU, but also to see these standards being replicated around the world. This is the normative power of the Union, or what is often called ‘the Brussels effect’.

“And we cannot, and will not, share this decision-making autonomy with a third country, including a former Member State who does not want to be part of the same legal ecosystem as us.”

Earlier this month the UK’s Information Commissioner, Elizabeth Denham, told MPs on the UK parliament’s committee for exiting the European Union that a bespoke data agreement that gave the ICO a continued role after Brexit would be a far superior option to an adequacy agreement — pointing out that the UK stands to lose influence at a time when the EU is setting global privacy standards via the General Data Protection Regulation (GDPR), which came into full force last Friday.

“At this time when the GDPR is in its infancy, participating in shaping and interpreting the law I think is really important. And the group of regulators that sit around the table at the EU are the most influential blocs of regulators — and if we’re outside of that group and we’re an observer we’re not going to have the kind of effect that we need to have with big tech companies. Because that’s all going to be decided by that group of regulators,” she warned.

“The European Data Protection Board will set the weather when it comes to standards for artificial intelligence, for technologies, for regulating big tech. So we will be a less influential regulator, we will continue to regulate the law and protect UK citizens as we do now, but we won’t be at the leading edge of interpreting the GDPR — and we won’t be bringing British values to that table if we’re not at the table.”

She also pointed out that without a bespoke arrangement to accommodate the ICO, her office would be shut out of participating in the GDPR’s one-stop shop, which allows EU data protection agencies to work together and co-ordinate regulatory actions, and which she said “would bring huge advantages to both sides and also to British businesses”.

Huge advantages that the UK stands to lose as a result of Brexit.

With the ICO excluded from the GDPR’s one-stop shop mechanism, UK businesses will have to choose an alternative data protection agency within the EU to act as their lead regulator after Brexit — putting yet another burden on startups, which will need to build new relationships with a regulator in the EU.

The Irish Data Protection Commission seems the likeliest candidate for UK companies to turn to after Brexit, with the ICO on the sidelines of GDPR, given the shared language and proximity. (And Ireland’s DPC has been ramping up its headcount in anticipation of handling more investigations as a result of the new regulation.)

But UK businesses would clearly prefer to be able to continue working with their domestic regulator. Unfortunately, though, Brexit closes the door on that option.

We’ve reached out to the ICO for comment and will update this story with any response.

The UK government has committed to aligning the country with GDPR regardless of Brexit — as it seeks to avoid the economic threat of EU-UK data flows being cut off if it’s not judged to be providing adequate data protection.

Looking ahead, that also essentially means the UK will need to keep its regulatory regime aligned with the EU’s in perpetuity — or risk being deemed inadequate, with, once again, the risk of data flows being cut off (or, at the very least, businesses scrambling to put alternative legal arrangements in place to authorize their data flows, and saddled with the expense of doing so, as happened when Safe Harbor was struck down in 2015).

So, thanks to Brexit, it will be the rest of Europe setting the agenda on regulating AI — with the UK bound to follow.

To truly protect citizens, lawmakers need to restructure their regulatory oversight of big tech

If members of the European Parliament thought they could bring Mark Zuckerberg to heel with his recent appearance, they underestimated the enormous gulf between 21st century companies and their last-century regulators.

Zuckerberg himself reiterated that regulation is necessary, provided it is the “right regulation.”

But anyone who thinks that our existing regulatory tools can rein in our digital behemoths is engaging in magical thinking. Getting to “right regulation” will require us to think very differently.

The challenge goes far beyond Facebook and other social media: the use and abuse of data is going to be the defining feature of just about every company on the planet as we enter the age of machine learning and autonomous systems.

So far, Europe has taken a much more aggressive regulatory approach than anything the US has contemplated, either before or since Zuckerberg’s testimony.

The EU’s General Data Protection Regulation (GDPR) is now in force, extending data privacy rights to all European citizens regardless of whether their data is processed by companies within the EU or beyond.

But I’m not holding my breath that the GDPR will get us very far on the massive regulatory challenge we face. It is just more of the same when it comes to regulation in the modern economy: a lot of ambiguous costly-to-interpret words and procedures on paper that are outmatched by rapidly evolving digital global technologies.

Crucially, the GDPR still relies heavily on the outmoded technology of user choice and consent, the main result of which has been that almost everyone in Europe (and beyond) has been inundated with emails asking them to reconfirm permission to keep their data. But this is an illusion of choice, just as it is when we are ostensibly given the option to decide whether to agree to terms set by large corporations in standardized, take-it-or-leave-it, click-to-agree documents.

There’s also the problem of actually tracking whether companies are complying. Regulating online activity will likely require yet more technology, such as blockchain and AI-powered monitoring systems, to track data usage and implement smart contract terms.

As the EU has already discovered with the right to be forgotten, however, governments lack the technological resources needed to enforce these rights. Search engines are required to serve as their own judge and jury in the first instance; Google at last count was handling some 500 removal requests a day.

The fundamental challenge we face, here and throughout the modern economy, is not “what should the rules for Facebook be?” but rather “how can we innovate new ways to regulate effectively in the global digital age?”

The answer is that we need to find ways to harness the same ingenuity and drive that built Facebook to build the regulatory systems of the digital age. One way to do this is with what I call “super-regulation,” which involves developing a market for licensed private regulators that serve two masters: achieving regulatory targets set by governments, but also facing the market incentive to compete for business by innovating more cost-effective ways to do so.

Imagine, for example, if instead of drafting a detailed 261-page law as the EU did, a government settled on the principles of data protection, based on core values such as privacy and user control.

Private entities, for-profit and non-profit, could apply to a government oversight agency for a license to provide data regulatory services to companies like Facebook, showing that their regulatory approach is effective in achieving these legislative principles.

These private regulators might use technology, big-data analysis and machine learning to do that. They might also figure out how to communicate simple options to people, in the same way that the developers of our smartphones figured that out. They might develop effective schemes to audit and test whether their systems are working — on pain of losing their license to regulate.

There could be many such regulators among which both consumers and Facebook could choose: some could even specialize in offering packages of data management attributes that would appeal to certain demographics – from the people who want to be invisible online, to those who want their every move documented on social media.

The key here is competition: for-profit and non-profit private regulators compete to attract money and brains to the problem of how to regulate complex systems like data creation and processing.

Zuckerberg thinks there’s some kind of “right” regulation possible for the digital world. I believe him; I just don’t think governments alone can invent it. Ideally, some next generation college kid would be staying up late trying to invent it in his or her dorm room.

The challenge we face is not how to get governments to write better laws; it’s how to get them to create the right conditions for the continued innovation necessary for new and effective regulatory systems.

Riminder raises $2.3 million for its AI recruitment service

French startup Riminder recently raised a $2.3 million funding round from various business angels, such as Xavier Niel, Jean-Baptiste Rudelle, Romain Niccoli, Franck Le Ouay, Dominique Vidal, Thibaud Elzière and Fred Potter. The company has been building a deep learning-powered tool to sort applications and resumes so you don’t have to. Riminder participated in TechCrunch’s Startup Battlefield.

Riminder won’t replace your HR department altogether, but it can help you save a ton of time when you’re a popular company. Let’s say you are looking for a mobile designer and you usually get hundreds or thousands of applications.

You can integrate Riminder with your various channels so that it collects resumes from every source. The startup uses optical character recognition to turn PDFs, images, Word documents and more into text, then tries to understand all your job positions and turn that raw text into useful data.

Finally, the service will rank the applications based on public data and internal data. The company has scraped the web and LinkedIn to understand usual career paths.
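The general idea described above — extract text from a resume, compare it with a job description and rank candidates — can be sketched in a few lines. Everything below (function names, the scoring scheme, the sample data) is a hypothetical illustration, not Riminder’s actual API or model.

```python
# Toy resume-ranking sketch: bag-of-words cosine similarity between a resume
# and a job description. A real system would use richer features and models.
from collections import Counter
import re

def tokenize(text):
    """Lowercase and split free text into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def score_resume(resume_text, job_description):
    """Toy relevance score: cosine similarity over word counts."""
    r, j = Counter(tokenize(resume_text)), Counter(tokenize(job_description))
    overlap = sum(r[w] * j[w] for w in r)
    norm = (sum(v * v for v in r.values()) ** 0.5) * (sum(v * v for v in j.values()) ** 0.5)
    return overlap / norm if norm else 0.0

def rank_candidates(resumes, job_description):
    """Return (candidate_id, score) pairs, best match first."""
    scored = [(cid, score_resume(text, job_description)) for cid, text in resumes.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    job = "mobile designer with iOS and Android experience"
    resumes = {
        "a": "backend engineer, Python and Go",
        "b": "mobile designer, five years of iOS and Android app design",
    }
    print(rank_candidates(resumes, job))  # candidate "b" should rank first
```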

Existing HR solutions can integrate with Riminder using an API. This way, you could potentially use the same HR platform, but with Riminder’s smart filtering features.

With this initial sorting, your HR team can more easily get straight to the point and interview the top candidates on the list.

While it’s hard to evaluate algorithm bias, Riminder thinks that leveraging artificial intelligence for recruitment can help surface unusual candidates. You could come from a different country and have a different profile, but maybe you have the perfect past experience for a particular job. Riminder isn’t going to overlook those applications.

With today’s funding round, the company is opening an office in San Francisco to get some clients in the U.S.

Snips announces an ICO and its own voice assistant device

French startup Snips has been working on voice assistant technology that respects your privacy. And the company is going to use its own voice assistant for a set of consumer devices. As part of this consumer push, the company is also announcing an initial coin offering.

Yes, it sounds a bit like Snips is playing a game of buzzword bingo. Anyone can currently download the open source Snips SDK and play with it using a Raspberry Pi, a microphone and a speaker. It’s private by design; you can even make it work without any internet connection. Companies can also partner with Snips to embed a voice assistant in their own devices.
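Snips’ on-device platform routes everything over a local MQTT message bus (its Hermes protocol), so a “skill” is essentially a process that reacts to intent messages. The rough sketch below shows that pattern with paho-mqtt (1.x callback API); the topic layout, payload fields and intent name are illustrative assumptions rather than verified Snips documentation.

```python
# Minimal sketch of an offline voice-assistant skill reacting to intents
# published on a local MQTT broker. Topic and payload shapes are assumptions.
import json
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe to all intent messages published by the assistant.
    client.subscribe("hermes/intent/#")

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload.decode("utf-8"))
    intent = payload.get("intent", {}).get("intentName", "unknown")
    if intent.endswith("TurnOnLight"):  # hypothetical intent name
        print("Turning on the light (handled locally, no cloud round-trip)")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)  # the assistant's local MQTT broker
client.loop_forever()
```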

But Snips is adding a B2C element to its business. This time, the company is going to compete directly with Amazon Echo and Google Home speakers. You’ll be able to buy the Snips AIR Base and Snips AIR Satellites.

The base will be a good old smart speaker, while satellites will be tiny portable speakers that you can put in all your rooms. The company plans to launch those devices in 18 months.


By default, Snips devices will come with basic skills to control your smart home devices, get the weather, control music and manage timers, alarms, calendars and reminders. Unlike with the Amazon Echo or Google Home, voice commands won’t be sent to Google’s or Amazon’s servers.

Developers will be able to create skills and publish them on a marketplace. That marketplace will run on a new blockchain — the AIR blockchain.

And that’s where the ICO comes along. The marketplace will accept AIR tokens to buy more skills. You’ll also be able to generate training data for voice commands using AIR tokens. To be honest, I’m not sure why good old credit card transactions weren’t enough. But I guess that’s a good way to raise money.

Eric Schmidt says Elon Musk is ‘exactly wrong’ about AI

When former Google CEO Eric Schmidt was asked about Elon Musk’s warnings about AI, he had a succinct answer: “I think Elon is exactly wrong.”

“He doesn’t understand the benefits that this technology will provide to making every human being smarter,” Schmidt said. “The fact of the matter is that AI and machine learning are so fundamentally good for humanity.”

He acknowledged that there are risks around how the technology might be misused, but he said they’re outweighed by the benefits: “The example I would offer is, would you not invent the telephone because of the possible misuse of the telephone by evil people? No, you would build the telephone and you would try to find a way to police the misuse of the telephone.”

Schmidt, who has pushed back in the past against AI naysaying from Musk and scientist Stephen Hawking, was interviewed on-stage today at the VivaTech conference in Paris.

While he stepped down as executive chairman of Google’s parent company Alphabet in December, Schmidt remains involved as a technical advisor, and he said today that his work is now focused on new applications of machine learning and artificial intelligence.

Elon Musk speaks onstage at Elon Musk Answers Your Questions! during SXSW at ACL Live on March 11, 2018 in Austin, Texas. (Photo by Chris Saucedo/Getty Images for SXSW)

After wryly observing that he had just given the journalists in the audience their headlines, interviewer (and former Publicis CEO) Maurice Lévy asked how AI and public policy can be developed so that some groups aren’t “left behind.” Schmidt replied that government should fund research and education around these technologies.

“As [these new solutions] emerge, they will benefit all of us, and I mean the people who think they’re in trouble, too,” he said. He added that data shows “workers who work in jobs where the job gets more complicated get higher wages — if they can be helped to do it.”

Schmidt also argued that contrary to concerns that automation and technology will eliminate jobs, “The embracement of AI is net positive for jobs.” In fact, he said there will be “too many jobs” — because as society ages, there won’t be enough people working and paying taxes to fund crucial services. So AI is “the best way to make them more productive, to make them smarter, more scalable, quicker and so forth.”

While AI and machine learning were the official topics of the interview, Lévy also asked how Google is adapting to Europe’s GDPR rules around data and privacy, which take effect today.

“From our perspective, GDPR is the law of the land and we have complied with it,” Schmidt said.

Speaking more generally, he suggested that governments need to “find the balance” between regulation and innovation, because “the regulations tend to benefit the current incumbents.”

What about the argument that users should get some monetary benefit when companies like Google build enormous businesses that rely on users’ personal data?

“I’m perfectly happy to redistribute the money — that’s what taxes are for, that’s what regulation is for,” Schmidt said. But he argued that consumers are already benefiting from these business models because they’re getting access to free services.

“The real value is not the data but in the industrial construction of the firm which uses the data to solve a problem to make money,” he said. “That’s capitalism.”

The AI in your non-autonomous car

Sorry. Your next car probably won’t be autonomous. But it will still have artificial intelligence (AI).

While most of the attention has been on advanced driver assistance systems (ADAS) and autonomous driving, AI will penetrate far deeper into the car. These overlooked areas offer fertile ground for incumbents and startups alike. Where is that fertile ground, and where is the opportunity for startups?

Inside the cabin

Inward-facing AI cameras can be used to prevent accidents before they occur. They are already widely deployed in commercial vehicles and trucks, monitoring drivers for inebriation, distraction, drowsiness and fatigue and alerting them in real time. ADAS, inward-facing cameras and coaching have been shown to drastically decrease insurance costs for commercial vehicle fleets.
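One widely used cue for that kind of drowsiness detection is the eye aspect ratio (EAR): the eye’s landmark points flatten as it closes. The sketch below assumes the six eye landmarks come from some external face-landmark detector; the threshold and frame count are illustrative values, not numbers from any shipping driver-monitoring system.

```python
# Eye-aspect-ratio (EAR) heuristic for drowsiness detection. Eye landmarks
# would normally come from a face-landmark model (e.g. a 68-point detector);
# here they are passed in as plain (x, y) coordinates.
import math

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye contour.
    EAR drops toward zero as the eye closes."""
    vertical = euclidean(eye[1], eye[5]) + euclidean(eye[2], eye[4])
    horizontal = euclidean(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def is_drowsy(ear_history, threshold=0.2, min_frames=15):
    """Flag drowsiness if the eye stays nearly closed for several consecutive frames."""
    recent = ear_history[-min_frames:]
    return len(recent) == min_frames and all(ear < threshold for ear in recent)

if __name__ == "__main__":
    open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
    print(round(eye_aspect_ratio(open_eye), 2))  # a wide-open eye scores well above 0.2
```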

The same technology is beginning to penetrate personal vehicles to monitor driver behavior for safety purposes. AI-powered cameras can also identify when children and pets are left in the vehicle to prevent heat-related deaths (on average, 37 children die in hot vehicles in the U.S. each year).

Autonomous ridesharing will need to detect passenger occupancy and seat belt engagement, so that an autonomous vehicle can ensure passengers are safely on board a vehicle before driving off. They’ll also need to identify that items such as purses or cellphones are not left in the vehicle upon departure.

AI also can help reduce crash severity in the event of an accident. Computer vision and sensor fusion will detect whether seat belts are fastened and estimate body size to calibrate airbag deployment. Real-time passenger tracking and calibration of airbags and other safety features will become a critical design consideration for the cabin of the future.

Beyond safety, AI also will improve the user experience. Vehicles as a consumer product have lagged far behind laptops, tablets, TVs and mobile phones. Gesture recognition and natural language processing make perfect sense in the vehicle, and will make it easier for drivers and passengers to adjust driving settings, control the stereo and navigate.

Under the hood

AI also can be used to help diagnose and even predict maintenance events. Currently, vehicle sensors produce a huge amount of data, but only spit out simple codes that a mechanic can use for diagnosis. Machine learning may be able to make sense of widely disparate signals from all the various sensors for predictive maintenance and to prevent mechanical issues. This type of technology will be increasingly valuable for autonomous vehicles, which will not have access to hands-on interaction and interpretation.
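As a concrete, if toy, version of that idea, an unsupervised anomaly detector can flag sensor readings that fall outside the vehicle’s normal operating envelope. The sensor channels, numbers and thresholds below are invented for illustration; a real system would train on actual telemetry.

```python
# Illustrative predictive-maintenance sketch: flag anomalous engine-sensor
# readings with an unsupervised model. All data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: coolant temp (C), oil pressure (kPa), vibration (g) -- hypothetical channels.
normal = rng.normal(loc=[90.0, 350.0, 0.05], scale=[3.0, 10.0, 0.01], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_readings = np.array([
    [91.0, 348.0, 0.05],   # looks healthy
    [118.0, 310.0, 0.22],  # overheating plus heavy vibration
])
print(model.predict(new_readings))  # 1 = normal, -1 = anomaly; typically [ 1 -1]
```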

AI also can be used to detect software anomalies and cybersecurity attacks. Whether the anomaly is malicious or just buggy code, it may have the same effect. Vehicles will need to identify problems quickly before they can propagate on the network.

Cars as mobile probes

In addition to providing ADAS and self-driving features, AI can be deployed on vision systems (e.g. cameras, radar, lidar) to turn the vehicle into a mobile probe. AI can be used to create high-definition maps that can be used for vehicle localization, identifying road locations and facades of addresses to supplement in-dash navigation systems, monitoring traffic and pedestrian movements and monitoring crime, as well as a variety of new emerging use cases.

Efficient AI will win

Automakers and suppliers are experimenting to see which features are technologically possible and commercially feasible. Many startups are tackling niche problems, and some of these solutions will prove their value. In the longer term, there will be so many possible features (some cataloged here and some yet unknown) that they will compete for space on cost-constrained hardware.

Making a car is not cheap, and consumers are price-sensitive. Hardware tends to be the cost driver, so these piecewise AI solutions will need to be deployed simultaneously on the same hardware. The power requirements will add up quickly, and even contribute significantly to the total energy consumption of the vehicle.

It has been shown that, for some computations, algorithmic advances have outpaced Moore’s Law for hardware. Several companies have started building processors designed for AI, but these won’t be cheap. Algorithmic development in AI will go a long way toward enabling the intelligent car of the future. Fast, accurate, low-memory, low-power algorithms, like those from XNOR.ai,* will be required to “stack” these features on low-cost, automotive-grade hardware.
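For context on why such algorithms can be so cheap: the binarized-network approach that XNOR.ai popularized constrains weights and activations to +1/-1, which lets a dot product be computed with bitwise XNOR and a popcount instead of multiply-accumulates. The snippet below only demonstrates that arithmetic equivalence with NumPy; it is not XNOR.ai’s implementation.

```python
# Why binarized ("XNOR-style") arithmetic is cheap: with values in {-1, +1},
# a dot product reduces to XNOR plus popcount over packed bits.
import numpy as np

rng = np.random.default_rng(1)
x = np.where(rng.standard_normal(64) >= 0, 1.0, -1.0)  # binarized activations
w = np.where(rng.standard_normal(64) >= 0, 1.0, -1.0)  # binarized weights

# Full-precision dot product.
dot_float = float(np.dot(x, w))

# Bitwise version: encode +1 as bit 1, -1 as bit 0.
xb, wb = (x > 0), (w > 0)
matches = np.count_nonzero(~(xb ^ wb))   # XNOR, then popcount
dot_bitwise = 2 * matches - x.size        # dot = matches - mismatches

assert dot_float == dot_bitwise
print(dot_float, dot_bitwise)
```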

Your next car will likely have several embedded AI features, even if it doesn’t drive itself.

* Full disclosure: XNOR.ai is an Autotech Ventures portfolio company.

Navigating the risks of artificial intelligence and machine learning in low-income countries

On a recent work trip, I found myself in a swanky-but-still-hip office of a private tech firm. I was drinking a freshly frothed cappuccino, eyeing a mini-fridge stocked with local beer, and standing amidst a group of hoodie-clad software developers typing away diligently at their laptops against a backdrop of Star Wars and xkcd comic wallpaper.

I wasn’t in Silicon Valley: I was in Johannesburg, South Africa, meeting with a firm that is designing machine learning (ML) tools for a local project backed by the U.S. Agency for International Development.

Around the world, tech startups are partnering with NGOs to bring machine learning and artificial intelligence (AI) to bear on problems that the international aid sector has wrestled with for decades. ML is uncovering new ways to increase crop yields for rural farmers. Computer vision lets us leverage aerial imagery to improve crisis relief efforts. Natural language processing helps us gauge community sentiment in poorly connected areas. I’m excited about what might come from all of this. I’m also worried.

AI and ML have huge promise, but they also have limitations. By nature, they learn from and mimic the status quo — whether or not that status quo is fair or just. We’ve seen AI and ML’s potential to hard-wire or amplify discrimination, exclude minorities, or simply be rolled out without appropriate safeguards — so we know we should approach these tools with caution. Otherwise, we risk these technologies harming local communities instead of being engines of progress.

Seemingly benign technical design choices can have far-reaching consequences. In model development, tradeoffs are everywhere. Some are obvious and easily quantifiable — like choosing to optimize a model for speed vs. precision. Sometimes it’s less clear. How you segment data or choose an output variable, for example, may affect predictive fairness across different sub-populations. You could end up tuning a model to excel for the majority while failing for a minority group.
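A tiny synthetic example makes the point: a single aggregate accuracy number can hide a model that serves one group far better than another. All numbers below are made up purely to illustrate the failure mode.

```python
# Synthetic illustration: overall accuracy looks fine while the minority
# sub-population is served much worse.
import numpy as np

rng = np.random.default_rng(2)
group = np.array(["majority"] * 900 + ["minority"] * 100)
y_true = rng.integers(0, 2, size=1000)

# Hypothetical model: ~95% accurate on the majority, ~60% on the minority.
flip = np.where(group == "majority", rng.random(1000) > 0.95, rng.random(1000) > 0.60)
y_pred = np.where(flip, 1 - y_true, y_true)

for g in ("majority", "minority"):
    mask = group == g
    acc = np.mean(y_pred[mask] == y_true[mask])
    print(f"{g}: accuracy {acc:.2f} (n={mask.sum()})")
print(f"overall: accuracy {np.mean(y_pred == y_true):.2f}")
```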


These issues matter whether you’re working in Silicon Valley or South Africa, but they’re exacerbated in low-income countries. There is often limited local AI expertise to tap into, and the tools’ more troubling aspects can be compounded by histories of ethnic conflict or systemic exclusion. Based on ongoing research and interviews with aid workers and technology firms, we’ve learned five basic things to keep in mind when applying AI and ML in low-income countries:

  1. Ask who’s not at the table. Often, the people who build the technology are culturally or geographically removed from their customers. This can lead to user-experience failures like Alexa misunderstanding a person’s accent. Or worse. Distant designers may be ill-equipped to spot problems with fairness or representation. A good rule of thumb: if everyone involved in your project has a lot in common with you, then you should probably work hard to bring in new, local voices.
  2. Let other people check your work. Not everyone defines fairness the same way, and even really smart people have blind spots. If you share your training data, design to enable external auditing, or plan for online testing, you’ll help advance the field by providing an example of how to do things right. You’ll also share risk more broadly and better manage your own ignorance. In the end, you’ll probably end up building something that works better.
  3. Doubt your data. A lot of AI conversations assume that we’re swimming in data. In places like the U.S., this might be true. In other countries, it isn’t even close. As of 2017, less than a third of Africa’s 1.25 billion people were online. If you want to use online behavior to learn about Africans’ political views or tastes in cinema, your sample will be disproportionately urban, male, and wealthy. Generalize from there and you’re likely to run into trouble.
  4. Respect context. A model developed for a particular application may fail catastrophically when taken out of its original context. So pay attention to how things change in different use cases or regions. That may just mean retraining a classifier to recognize new types of buildings, or it could mean challenging ingrained assumptions about human behavior.
  5. Automate with care. Keeping humans ‘in the loop’ can slow things down, but their mental models are more nuanced and flexible than your algorithm. Especially when deploying in an unfamiliar environment, it’s safer to take baby steps and make sure things are working the way you thought they would. A poorly-vetted tool can do real harm to real people.

AI and ML are still finding their footing in emerging markets. We have the chance to thoughtfully construct how we build these tools into our work so that fairness, transparency, and a recognition of our own ignorance are part of our process from day one. Otherwise, we may ultimately alienate or harm people who are already at the margins.

The developers I met in South Africa have embraced these concepts. Their work with the non-profit Harambee Youth Employment Accelerator has been structured to balance the perspectives of both the coders and those with deep local expertise in youth unemployment; the software developers are even foregoing time at their hip offices to code alongside Harambee’s team. They’ve prioritized inclusivity and context, and they’re approaching the tools with healthy, methodical skepticism. Harambee clearly recognizes the potential of machine learning to help address youth unemployment in South Africa — and they also recognize how critical it is to ‘get it right’. Here’s hoping that trend catches on with other global startups too.

Uber in fatal crash detected pedestrian but had emergency braking disabled

The initial report by the National Transportation Safety Board on the fatal self-driving Uber crash in March confirms that the car detected the pedestrian as early as 6 seconds before the crash, but did not slow or stop because its emergency braking systems were deliberately disabled.

Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.” It’s not clear why the emergency braking capability even exists if it is disabled while the car is in operation. The Volvo model’s built-in safety systems — collision avoidance and emergency braking, among other things — are also disabled while in autonomous mode.

It appears that in an emergency situation like this, the “self-driving car” is no better, and possibly substantially worse, than many normal cars already on the road.

It’s hard to understand the logic of this decision. An emergency is exactly the situation when the self-driving car, and not the driver, should be taking action. Its long-range sensors can detect problems accurately from much further away, while its 360-degree awareness and route planning allow it to make safe maneuvers that a human would not be able to do in time. Humans, even when their full attention is on the road, are not the best at catching these things; relying only on them in the most dire circumstances that require quick response times and precise maneuvering seems an incomprehensible and deeply irresponsible decision.

According to the NTSB report, the vehicle first registered Elaine Herzberg on lidar 6 seconds before the crash — at the speed it was traveling, that puts first contact at about 378 feet away. She was first identified as an unknown object, then a vehicle, then a bicycle, over the next few seconds (it isn’t stated when these classifications took place exactly).

The car following the collision.

During these 6 seconds, the driver could and should have been alerted of an anomalous object ahead on the left — whether it was a deer, a car, or a bike, it was entering or could enter the road and should be attended to. But the system did not warn the driver and apparently had no way to.

1.3 seconds before impact, which is to say about 80 feet away, the Uber system decided that an emergency braking procedure would be necessary to avoid Herzberg. But it did not hit the brakes, as the emergency braking system had been disabled, nor did it warn the driver because, again, it couldn’t.
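Those distances follow directly from the vehicle’s speed; the NTSB report puts it at roughly 43 mph, which is the assumption behind this quick check.

```python
# Back-of-the-envelope check of the distances above, assuming the roughly
# 43 mph the NTSB report gives for the vehicle's speed.
MPH_TO_FT_PER_S = 5280 / 3600            # 1 mph is about 1.47 ft/s

speed_ft_s = 43 * MPH_TO_FT_PER_S         # ~63 ft/s
print(round(speed_ft_s * 6.0))            # ~378 ft: first lidar detection
print(round(speed_ft_s * 1.3))            # ~82 ft: emergency braking deemed necessary
```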

It was only less than a second before impact that the driver happened to look up from whatever she was doing and saw Herzberg, whom the car had known about in some way for five long seconds by then. The car struck and killed her.

It reflects extremely poorly on Uber that it had disabled the car’s ability to respond in an emergency — even though the vehicle was authorized to travel at speed at night — and had given the system no way to alert the driver should it detect something important. This isn’t just a safety issue, like going on the road with a sub-par lidar system or without checking the headlights — it’s a failure of judgment by Uber, and one that cost a person’s life.

Arizona, where the crash took place, barred Uber from further autonomous testing, and Uber yesterday ended its program in the state.

Uber offered the following statement on the report:

Over the course of the last two months, we’ve worked closely with the NTSB. As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program. We’ve also brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture, and we look forward to sharing more on the changes we’ll make in the coming weeks.

WorkFusion adds $50 million from strategic investors as it bulks up for acquisitions

WorkFusion, a business process automation software developer, has raised $50 million in a new, strategic round of funding as it prepares to start adding new verticals to its product suite.

The company’s new cash came from the large insurance company Guardian, health care services provider New York-Presbyterian and the commercial bank PNC Bank. Venture investor Alpha Intelligence Capital, which specializes in backing artificial intelligence-enabled companies, also participated in the new financing.

Certainly WorkFusion seems to have come a long way since its days hiring crowdsourced workers to train algorithms how to automate the workflows that used to be done manually. The company has raised a lot of money — roughly $121 million, according to Crunchbase — which is some kind of validation, and in its core markets of financial services and insurance it’s attracted some real fans.

“Guardian uses data to better understand and serve customers, and WorkFusion will bring new data-driven intelligence capabilities into the company,” said Dean Del Vecchio, Executive Vice President, Chief Information Officer & Head of Enterprise Shared Services at Guardian, in a statement. “We look to invest in and deploy RPA and AI technology that can help us leap forward in operations and improve outcomes — WorkFusion has that potential.”

According to chief executive Alex Lyashok, the company now intends to begin looking at acquisition opportunities that can “complement our technology.” “WorkFusion today is focused on banking, financial services and insurance,” he said. “This problem [of automation] is not endemic to those industries.”

Of particular interest to the New York-based company are those industries that missed out on the first wave of automation and digitization. “Industries that have already invested in digitization are being very aggressive, but companies that have been very manual and have not developed a technology program internally” also represent a big opportunity, Lyashok said.

 

Ring’s Jamie Siminoff and Clinc’s Jason Mars to join us at Disrupt SF

Disrupt SF is set to be the biggest tech conference that TechCrunch has ever hosted. So it only makes sense that we plan an agenda fit for the occasion.

That’s why we’re absolutely thrilled to announce that Ring’s Jamie Siminoff will join us on stage for a fireside chat and Jason Mars from Clinc will be demo-ing first-of-its-kind technology on the Disrupt SF stage.

Jamie Siminoff – Ring

Earlier this year, Ring became Amazon’s second-largest acquisition ever, selling to the behemoth for a reported $1 billion.

But the story began long before that, with Jamie Siminoff building a Wi-Fi-connected video doorbell in his garage in 2011. Back then it was called DoorBot. Now it’s called Ring, and it’s an essential piece of the overall evolution of e-commerce.

As giants like Amazon move to make purchasing and receiving goods as simple as ever, safe and reliable entry into the home becomes critical to the mission. Ring, which has made neighborhood safety and home security its main priority since inception, is a capable partner in that mission.

Of course, one doesn’t often build a successful company and sell it for $1 billion on their first go. Prior to Ring, Siminoff founded PhoneTag, the world’s first voicemail-to-text company, and Unsubscribe.com. Both of those companies were sold. Based on his founding portfolio alone, it’s clear that part of Siminoff’s success can be attributed to understanding what consumers need and executing on a solution.

Dr. Jason Mars – Clinc

AI has the potential to change everything, but there is a fundamental disconnect between what AI is capable of and how we interface with it. Clinc has tried to close that gap with its conversational AI, emulating human intelligence to interpret unstructured, unconstrained speech.

Clinc is currently targeting the financial market, letting users converse with their bank account using natural language without any pre-defined templates or hierarchical voice menus.

But there are far more applications for this kind of conversational tech. As voice interfaces like Alexa and Google Assistant pick up steam, there is clearly an opportunity to bring this kind of technology to all facets of our lives.

At Disrupt SF, Clinc’s founder and CEO Dr. Jason Mars plans to do just that, debuting other ways that Clinc’s conversational AI can be applied. Without ruining the surprise, let me just say that this is going to be a demo you won’t want to miss.

Tickets to Disrupt are available here.