Krisp snags $5M A round as demand grows for its voice-isolating algorithm

Krisp’s smart noise suppression tech, which silences ambient sounds and isolates your voice for calls, arrived just in time. The company got out in front of the global shift to virtual presence, turning early niche traction into real customers and attracting a shiny new $5 million Series A funding round to expand and diversify its timely offering.

We first met Krisp back in 2018 when it emerged from UC Berkeley’s Skydeck accelerator. The company was an early one in the big surge of AI startups, but with a straightforward use case and obviously effective tech it was hard to be skeptical about.

Krisp applies a machine learning system to audio in real time that has been trained on what is and isn’t the human voice. What isn’t a voice gets carefully removed even during speech, and what remains sounds clearer. That’s pretty much it! There’s very little latency (15 milliseconds is the claim) and a modest computational overhead, meaning it can work on practically any device, especially ones with AI acceleration units like most modern smartphones.
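
Krisp hasn’t published its model architecture, but the broad shape of real-time ML noise suppression is well understood: split the audio into short frames, have a trained model estimate a per-frequency “voice” mask, attenuate everything else and resynthesize the signal. The Python sketch below illustrates that pipeline only; the frame size, the sqrt-Hann windowing and the placeholder mask heuristic are assumptions for illustration, not Krisp’s actual parameters.

```python
import numpy as np

SAMPLE_RATE = 48_000
FRAME = 480                           # 10 ms frames keep latency low
HOP = FRAME // 2                      # 50% overlap
WINDOW = np.sqrt(np.hanning(FRAME))   # sqrt-Hann analysis/synthesis pair

def voice_mask(magnitude: np.ndarray) -> np.ndarray:
    """Placeholder for a trained model: per frequency bin, how likely the
    energy is speech (0 = suppress, 1 = keep). A real system would run a
    small neural network here; this heuristic just keeps the loudest bins."""
    return np.clip(magnitude / (magnitude.max() + 1e-9), 0.0, 1.0)

def denoise(samples: np.ndarray) -> np.ndarray:
    out = np.zeros_like(samples)
    for start in range(0, len(samples) - FRAME + 1, HOP):
        frame = samples[start:start + FRAME] * WINDOW
        spectrum = np.fft.rfft(frame)
        masked = spectrum * voice_mask(np.abs(spectrum))
        # Overlap-add resynthesis with the same window
        out[start:start + FRAME] += np.fft.irfft(masked, n=FRAME) * WINDOW
    return out

if __name__ == "__main__":
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    noisy = np.sin(2 * np.pi * 220 * t) + 0.3 * np.random.randn(SAMPLE_RATE)
    print(denoise(noisy).shape)  # (48000,)
```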

The company began by offering its standalone software for free, with a paid tier that removed time limits. It also shipped integrated into the popular social chat app Discord. But the real business is, unsurprisingly, in enterprise.

“Early on our revenue was all pro, but in December we started onboarding enterprises. COVID has really accelerated that plan,” explained Davit Baghdasaryan, co-founder and CEO of Krisp. “In March, our biggest customer was a large tech company with 2,000 employees — and they bought 2,000 licenses, because everyone is remote. Gradually enterprise is taking over, because we’re signing up banks, call centers and so on. But we think Krisp will still be consumer-first, because everyone needs that, right?”

Now even more large companies have signed on, including one call center with some 40,000 employees. Baghdasaryan says the company went from 0 to 600 paying enterprises, and $0 to $4M annual recurring revenue in a single year, which probably makes the investment — by Storm Ventures, Sierra Ventures, TechNexus and Hive Ventures — look like a pretty safe one.

It’s a big win for the Krisp team, which is split between the U.S. and Armenia, where the company was founded, and a validation of a global approach to staffing — world-class talent isn’t just to be found in California, New York, Berlin and other tech centers, but in smaller countries that don’t have the benefit of local hype and investment infrastructure.

Funding is another story, of course, but having raised money the company is now working to expand its products and team. Krisp’s next move is essentially to monitor and present the metadata of conversations.

“The next iteration will tell you not just about noise, but give you real time feedback on how you are performing as a speaker,” Baghdasaryan explained. Not in the Toastmasters sense, exactly, but haven’t you ever wondered how much you actually spoke during some call, or whether you interrupted or were interrupted by others, and so on?
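
As a rough illustration of the kind of call metadata Baghdasaryan is describing, the sketch below takes per-speaker speech segments (the sort of thing on-device voice activity detection could produce) and computes talk-time share plus a simple interruption count. The segment format and example numbers are assumptions for illustration, not anything Krisp has published.

```python
from collections import defaultdict

# (speaker, start_seconds, end_seconds) -- hypothetical segments from a call
segments = [
    ("you",   0.0, 12.0),
    ("them", 11.0, 30.0),   # starts before "you" finishes -> interruption
    ("you",  30.5, 41.0),
    ("them", 41.5, 70.0),
]

talk_time = defaultdict(float)
interruptions = defaultdict(int)

for i, (speaker, start, end) in enumerate(segments):
    talk_time[speaker] += end - start
    # Count an interruption if this speaker starts while an earlier
    # segment from someone else is still running.
    for other, o_start, o_end in segments[:i]:
        if other != speaker and o_start < start < o_end:
            interruptions[speaker] += 1

total = sum(talk_time.values())
for speaker in talk_time:
    share = 100 * talk_time[speaker] / total
    print(f"{speaker}: {share:.0f}% of talk time, {interruptions[speaker]} interruption(s)")
```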

“Speaking is a skill that people can improve. Think Grammarly for voice and video,” Baghdasaryan ventured. “It’s going to be subtle about how it gives that feedback to you. When someone is speaking they may not necessarily want to see that. But over time we’ll analyze what you say, give you hints about vocabulary, how to improve your speaking abilities.”

Since architecturally Krisp is privy to all audio going in and out, it can fairly easily collect this data. But don’t worry — like the company’s other products, this will be entirely private and on-device. No cloud required.

“We’re very opinionated here: Ours is a company that never sends data to its servers,” said Baghdasaryan. “We’re never exposed to it. We take extra steps to create and optimize our tech so the audio never leaves the device.”

That should be reassuring for privacy wonks who are suspicious of sending all their conversations through a third party to be analyzed. After all, the type of advice Krisp is considering can be generated without really “understanding” what is said, which also limits its scope. It won’t be coaching you into a modern Cicero, but it might help you speak more consistently or let you know when you’re taking up too much time.

For the immediate future, though, Krisp is still focused on improving its noise-suppression software, which you can download for free here.

Google updates G Suite for mobile with dark mode support, Smart Compose for Docs and more

Google today announced a major update to its mobile G Suite productivity apps.

Among these updates are the addition of a dark theme for Docs, Sheets and Slides, as well as the addition of Google’s Smart Compose technology to Docs on mobile and the ability to edit Microsoft Office documents without having to convert them. Other updates include a new vertically scrollable slide viewing experience in Slides, link previews and a new user interface for comments and action items. You can now also respond to comments on your documents directly from Gmail.

For the most part, these new features are now available on Android (or will be in the next few weeks) and then coming to iOS later, though Smart Compose is immediately available for both, while link previews are actually making their debut on iOS, with Android coming later.

Most of these additions simply bring existing desktop features to mobile, which has generally been the way Google has been rolling out new G Suite tools.

The new dark theme will surely get some attention, given that it has been a long time coming and that users now essentially expect this in their mobile apps. Google argues that it won’t just be easier on your eyes but that it can also “keep your battery alive longer” (though only phones with an OLED display will really see a difference there).

Image Credits: Google

You’re likely familiar with Smart Compose by now; it’s already available in Gmail and Docs on the web. Like everywhere else, it’ll try to finish your sentences for you, and given that typing is still more of a hassle on mobile, it’s surely a welcome addition for those who regularly have to write or edit documents on the go.

Even if your business is fully betting on G Suite, chances are somebody will still send you an Office document. On the web, G Suite could already handle these documents without any conversion. This same technology is now coming to mobile as well. It’s a handy feature, though I’m mostly surprised this wasn’t available on mobile before.

As for the rest of the new features, the one worth calling out is the ability to respond to comments directly from Gmail. Last year, Google rolled out dynamic email on the web. I’m not sure I’ve really seen too many of these dynamic emails — which use AMP to bring dynamic content to your inbox — in the wild, but Google is now using this feature for Docs. “Instead of receiving individual email notifications when you’re mentioned in a comment in Docs, Sheets, or Slides, you’ll now see an up-to-date comment thread in Gmail, and you’ll be able to reply or resolve the comment, directly within the message,” the company explains.

 

Sight Diagnostics raises $71M Series D for its blood analyzer

Sight Diagnostics, the Israel-based health-tech company behind the FDA-cleared OLO blood analyzer, today announced that it has raised a $71 million Series D round with participation from Koch Disruptive Technologies, Longliv Ventures (which led its Series C round) and crowd-funding platform OurCrowd. With this, the company has now raised a total of $124 million, though it declined to share its current valuation.

With a founding team that used to work at Mobileye, among other companies, Sight made an early bet on using machine vision to analyze blood samples and provide a full blood count comparable to existing lab tests within minutes. The company received FDA clearance late last year, something that surely helped clear the way for this additional round of funding.

Image Credits: Sight Diagnostics

“Historically, blood tests were done by humans observing blood under a microscope. That was the case for maybe 200 years,” Sight CEO and co-founder Yossi Pollak told me. “About 60 years ago, a new technology called FCM — or flow cytometry — started to be used on large volume of blood from venous samples to do it automatically. In a sense, we are going back to the first approach, we just replaced the human eye behind the microscope with machine vision.”

Pollak noted that the tests generate about 60 gigabytes of information (a lot of that is the images, of course) and that he believes that the complete blood count is only a first step. One of the diseases it is looking to diagnose is COVID-19. To do so, the company has placed devices in hospitals around the world to see if it can gather the data to detect anomalies that may indicate the severity of some of the aspects of the disease.

“We just kind of scratched the surface of the ability of AI to help with blood diagnostics,” said Pollak. “Specifically now, there’s so much value around COVID in decentralizing diagnostics and blood tests. Think keeping people — COVID-negative or -positive — outside of hospitals to reduce the busyness of hospitals and reduce the risk for contamination for cancer patients and a lot of other populations that require constant complete blood counts. I think there’s a lot of potential and a lot of value that we can bring specifically now to different markets and we are definitely looking into additional applications beyond [complete blood count] and also perfecting our product.”

So far, Sight Diagnostics has applied for 20 patents, eight of which have been issued. And while machine learning is obviously at the core of what the company does — with the models running on the OLO machine and not in the cloud — Pollak also stressed that the team has made breakthroughs around sample preparation to allow it to automatically prepare the sample for analysis.

Image Credits: Sight Diagnostics

Pollak stressed that the company focused on the U.S. market with this funding round, which makes sense, given that it was still pursuing its FDA clearance. He also noted that this marks Koch Disruptive Technologies’ third investment in Israel, with the other two also being healthcare startups.

“KDT’s investment in Sight is a testament to the company’s disruptive technology that we believe will fundamentally change the way blood diagnostic work is done,” said Chase Koch, President of Koch Disruptive Technologies. “We’re proud to partner with the Sight team, which has done incredible work innovating this technology to transform modern healthcare and provide greater efficiency and safety for patients, healthcare workers, and hospitals worldwide.”

The company now has about 100 employees, mostly in R&D, with offices in London and New York.

Microsoft, Amazon back a SoCal company making microchips specifically for voice-based apps

Microsoft’s venture capital fund, M12 Ventures, has led a slew of strategic corporate investors backing a new chip developer out of Southern California called Syntiant, which makes semiconductors for voice recognition and speech-based applications.

“We started out to build a new type of processor for machine learning, and voice is our first application,” says Syntiant chief executive Kurt Busch. “We decided to build a chip for always-on battery-powered devices.”

Those chips need a different kind of processor than traditional chipsets, says Busch. Traditional compute is about logic, while deep learning is about memory access; traditional microchip designs also don’t perform as well when it comes to parallel processing of information.

Syntiant’s chips, Busch says, are two orders of magnitude more efficient, thanks to a data flow architecture built for deep learning.
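
Busch’s logic-versus-memory point is easy to see with back-of-the-envelope arithmetic: in a small always-on speech model, every multiply-accumulate needs a weight fetched from memory, so moving data, rather than doing the math, dominates the energy budget. The layer sizes below are invented for illustration; Syntiant has not published its network dimensions.

```python
# Rough, illustrative arithmetic for a small always-on speech model.
# Layer sizes are made up; Syntiant has not disclosed its network shape.
layers = [(40, 256), (256, 256), (256, 64), (64, 12)]  # (inputs, outputs) per dense layer

macs = sum(i * o for i, o in layers)          # multiply-accumulates per inference
weight_bytes = sum(i * o for i, o in layers)  # one byte per weight at 8-bit precision

print(f"MACs per inference:   {macs:,}")
print(f"Weight bytes touched: {weight_bytes:,}")

# Every MAC pulls a weight from memory, so if the weights live off-chip the
# energy spent moving data dwarfs the arithmetic itself. That is the case for
# memory-centric, dataflow-style designs: keep the weights next to the
# compute so the dominant cost shrinks.
```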

It’s that efficiency that attracted investors including M12, Microsoft Corp.’s venture fund; the Amazon Alexa Fund; Applied Ventures, the investment arm of Applied Materials; Intel Capital, Motorola Solutions Venture Capital and Robert Bosch Venture Capital.

These investment firms represent some of the technology industry’s top chip makers and software developers, and they’re pooling their resources to support Syntiant’s Irvine, Calif.-based operations.

Image Credits: Bryce Durbin / TechCrunch

“Syntiant aligns perfectly with our mission to support companies that fuel voice technology innovation,” said Paul Bernard, director of the Alexa Fund at Amazon. “Its technology has enormous potential to drive continued adoption of voice services like Alexa, especially in mobile scenarios that require devices to balance low power with continuous, high-accuracy voice recognition. We look forward to working with Syntiant to extend its speech technology to new devices and environments.” 

Syntiant’s first device measures 1.4 by 1.8 millimeters and draws 140 microwatts of power. In some applications, Syntiant’s chips can run for a year on a single coin cell.

“Syntiant’s neural network technology and its memory-centric architecture fits well with Applied Materials’ core expertise in materials engineering as we enable radical leaps in device performance and novel materials-enabled memory technologies,” said Michael Stewart, principal at Applied Ventures, the venture capital arm of Applied Materials, Inc. “Syntiant’s ultra-low-power neural decision processors have the potential to create growth in the chip marketplace and provide an effective solution for today’s demanding voice and video applications.” 

So far, 80 customers are working with Syntiant to integrate the company’s chips into their products. There are a few dozen companies in the design stage and the company has already notched design wins for products ranging from cell phones and smart speakers to remote controls, hearing aids, laptops and monitors. Already the company has shipped its first million units.  

“We expect to scale that by 10x by the end of this year,” says Busch.

Syntiant’s chipsets are designed specifically to handle wake words and commands, which means users can add voice recognition features and commands unique to their particular voice, Busch says.

Initially backed by venture firms including Atlantic Bridge, Miramar and Alpha Edison, Syntiant raised its first round of funding in October 2017. The company has raised a total of $65 million to date, according to Busch.

“Syntiant’s architecture is well suited for the computational patterns and inherent parallelism of deep neural networks,” said Samir Kumar, an investor with M12 and new director on the Syntiant board. “We see great potential in its ability to enable breakthroughs in power performance for AI processing in IoT [Internet of things].” 

 

Autonomous vehicle reporting data is driving AV innovation right off the road

At the end of every calendar year, the complaints from autonomous vehicle companies start piling up. This annual tradition is the result of a requirement by the California Department of Motor Vehicles that AV companies deliver “disengagement reports” by January 1 of each year showing the number of times an AV operator had to disengage the vehicle’s autonomous driving function while testing the vehicle.

However, all disengagement reports have one thing in common: their usefulness is ubiquitously criticized by those who have to submit them. The CEO and founder of a San Francisco-based self-driving car company publicly stated that disengagement reporting is “woefully inadequate … to give a meaningful signal about whether an AV is ready for commercial deployment.” The CEO of a self-driving technology startup called the metrics “misguided.” Waymo stated in a tweet that the metric “does not provide relevant insights” into its self-driving technology or “distinguish its performance from others in the self-driving space.”

Why do AV companies object so strongly to California’s disengagement reports? They argue the metric is misleading based on lack of context due to the AV companies’ varied testing strategies. I would argue that a lack of guidance regarding the language used to describe the disengagements also makes the data misleading. Furthermore, the metric incentivizes testing in less difficult circumstances and favors real-world testing over more insightful virtual testing.

Understanding California reporting metrics

To test an autonomous vehicle on public roads in California, an AV company must obtain an AV Testing Permit. As of June 22, 2020, there were 66 Autonomous Vehicle Testing Permit holders in California and 36 of those companies reported autonomous vehicle testing in California in 2019. Only five of those companies have permits to transport passengers.

To operate on California public roads, each permitted company must report any collision that results in property damage, bodily injury, or death within 10 days of the incident.

There have been 24 autonomous vehicle collision reports in 2020 thus far. Though the majority of those incidents occurred in autonomous mode, the accidents were almost exclusively the result of the autonomous vehicle being rear-ended. In California, rear-end collisions are almost always deemed the fault of the rear-ending driver.

The usefulness of collision data is evident — consumers and regulators are most concerned with the safety of autonomous vehicles for pedestrians and passengers. If an AV company reports even one accident resulting in substantial damage to the vehicle or harm to a pedestrian or passenger while the vehicle operates in autonomous mode, the implications and repercussions for the company (and potentially the entire AV industry) are substantial.

However, the usefulness of disengagement reporting data is much more questionable. The California DMV requires AV operators to report the number and details of disengagements while testing on California public roads by January 1 of each year. The DMV defines this as “how often their vehicles disengaged from autonomous mode during tests (whether because of technical failure or situations requiring the test driver/operator to take manual control of the vehicle to operate safely).”

Operators must also track how often their vehicles disengaged from autonomous mode, and whether each disengagement was the result of a software malfunction, human error, or a decision by the vehicle operator.

AV companies have kept a tight lid on measurable metrics, often only sharing limited footage of demonstrations performed under controlled settings and very little data, if any. Some companies have shared the occasional “annual safety report,” which reads more like a promotional deck than a source of data on AV performance. Furthermore, there are almost no reporting requirements for companies doing public testing in any other state. California’s disengagement reports are the exception.

This AV information desert means that disengagement reporting in California has often been treated as our only source of information on AVs. The public is forced to judge AV readiness and relative performance based on this disengagement data, which is incomplete at best and misleading at worst.

Disengagement reporting data offers no context

Most AV companies claim that disengagement reporting data is a poor metric for judging advancement in the AV industry due to a lack of context for the numbers: knowing where those miles were driven and the purpose of those trips is essential to understanding the data in disengagement reports.

Some in the AV industry have complained that miles driven in sparsely populated areas with arid climates and few intersections are miles dissimilar from miles driven in a city like San Francisco, Pittsburgh, or Atlanta. As a result, the number of disengagements reported by companies that test in the former versus the latter geography are incomparable.
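
The metric most commonly computed from these reports, miles per disengagement, is trivial to calculate and just as easy to skew. The hypothetical comparison below shows how a company logging easy suburban miles can post a “better” number than one testing dense urban routes; all figures are invented for illustration.

```python
# Hypothetical disengagement-report figures -- invented for illustration only.
reports = {
    "Company A (suburban loops)": {"miles": 500_000, "disengagements": 40},
    "Company B (dense urban)":    {"miles": 60_000,  "disengagements": 55},
}

for name, r in reports.items():
    miles_per_disengagement = r["miles"] / r["disengagements"]
    print(f"{name}: {miles_per_disengagement:,.0f} miles per disengagement")

# Company A "wins" on the headline number, yet the report says nothing about
# route difficulty, weather, or whether the disengagements were precautionary --
# exactly the missing context the industry complains about.
```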

It’s also important to understand that disengagement reporting requirements influence AV companies’ decisions on where and how to test. A test that requires substantial disengagements, even while safe, would be discouraged, as it would make the company look less ready for commercial deployment than its competitors. In reality, such testing may result in the most commercially ready vehicle. Indeed, some in the AV industry have accused competitors of manipulating disengagement reporting metrics by easing the difficulty of miles driven over time to look like real progress.

Furthermore, while data can look particularly good when manipulated by easy drives and clear roads, data can look particularly bad when it’s being used strategically to improve AV software.

Let’s consider an example provided by Jack Stewart, a reporter for NPR’s Marketplace covering transportation:

“Say a company rolls out a brand-new build of their software, and they’re testing that in California because it’s near their headquarters. That software could be extra buggy at the beginning, and you could see a bunch of disengagements, but that same company could be running a commercial service somewhere like Arizona, where they don’t have to collect these reports.

That service could be running super smoothly. You don’t really get a picture of a company’s overall performance just by looking at this one really tight little metric. It was a nice idea of California some years ago to start collecting some information, but it’s not really doing what it was originally intended to do nowadays.”

Disengagement reports lack prescriptive language

The disengagement reports are also misleading due to a lack of guidance and uniformity in the language used to describe the disengagements. For example, while AV companies used a variety of language, “perception discrepancies” was the most common term used to describe the reason for a disengagement — however, it’s not clear that the term “perception discrepancies” has a set meaning.

Several operators used the phrase “perception discrepancy” to describe a failure to detect an object correctly. Valeo North America described a similar error as “false detection of object.” Toyota Research Institute almost exclusively described its disengagements vaguely as “Safety Driver proactive disengagement,” the meaning of which is “any kind of disengagement.” Pony.ai, by contrast, described each instance of disengagement with particularity.

Many other operators reported disengagements that were “planned testing disengagements” or that were described with such insufficient particularity as to be virtually meaningless.

For example, “planned disengagements” could mean the testing of intentionally created malfunctions, or it could simply mean the software is so nascent and unsophisticated that the company expected the disengagement. Similarly, “perception discrepancy” could mean anything from precautionary disengagements to disengagements due to extremely hazardous software malfunctions. “Perception discrepancy,” “planned disengagement” or any number of other vague descriptions of disengagements make comparisons across AV operators virtually impossible.

So, for example, while it appears that a San Francisco-based AV company’s disengagements were exclusively precautionary, the lack of guidance on how to describe disengagements and the many vague descriptions provided by AV companies have cast a shadow over disengagement descriptions, calling them all into question.

Regulations discourage virtual testing

Today, the software of AV companies is the real product. The hardware and physical components — lidar, sensors and so on — have become so uniform that they’re practically off-the-shelf. The real component being tested is the software. It’s well known that software bugs are best found by running the software as often as possible; road testing simply can’t reach the sheer numbers necessary to find all the bugs. What can reach those numbers is virtual testing.

However, the regulations discourage virtual testing as the lower reported road miles would seem to imply that a company is not road-ready.

Jack Stewart of NPR’s Marketplace expressed a similar point of view:

“There are things that can be relatively bought off the shelf and, more so these days, there are just a few companies that you can go to and pick up the hardware that you need. It’s the software, and it’s how many miles that software has driven both in simulation and on the real roads without any incident.”

So, where can we find the real data we need to compare AV companies? One company runs over 30,000 instances daily through its end-to-end, three-dimensional simulation environment. Another company runs millions of off-road tests a day through its internal simulation tool, running driving models that include scenarios that it can’t test on roads involving pedestrians, lane merging, and parked cars. Waymo drives 20 million miles a day in its Carcraft simulation platform — the equivalent of over 100 years of real-world driving on public roads.

One CEO estimated that a single virtual mile can be just as insightful as 1,000 miles collected on the open road.

Jonathan Karmel, Waymo’s product lead for simulation and automation, similarly explained that Carcraft provides “the most interesting miles and useful information.”

Where we go from here

Clearly there are issues with disengagement reports — both in relying on the data therein and in the negative incentives they create for AV companies. However, there are voluntary steps that the AV industry can take to combat some of these issues:

  1. Prioritize and invest in virtual testing. Developing and operating a robust system of virtual testing may present a high expense to AV companies, but it also presents the opportunity to dramatically shorten the pathway to commercial deployment through the ability to test more complex, higher risk, and higher number scenarios.
  2. Share data from virtual testing. Voluntary disclosure of virtual testing data will reduce reliance on disengagement reports by the public. Commercial readiness will be pointless unless AV companies have provided the public with reliable data on AV readiness for a sustained period.
  3. Seek the greatest value from on-road miles. AV companies should continue using on-road testing in California, but they should use those miles to fill in the gaps from virtual testing. They should seek the greatest value possible out of those slower miles, accept the higher percentage of disengagements they will be required to report, and when reporting on those miles, describe their context in particularity.

With these steps, AV companies can lessen the pain of California’s disengagement reporting data and advance more quickly to an AV-ready future.

UK commits to redesign visa streaming algorithm after challenge to ‘racist’ tool

The UK government is suspending the use of an algorithm used to stream visa applications after concerns were raised the technology bakes in unconscious bias and racism.

The tool had been the target of a legal challenge. The Joint Council for the Welfare of Immigrants (JCWI) and campaigning law firm Foxglove had asked a court to declare the visa application streaming algorithm unlawful and order a halt to its use, pending a judicial review.

The legal action had not run its full course but appears to have forced the Home Office’s hand as it has committed to a redesign of the system.

A Home Office spokesperson confirmed to us that from August 7 the algorithm’s use will be suspended, sending us this statement via email: “We have been reviewing how the visa application streaming tool operates and will be redesigning our processes to make them even more streamlined and secure.”

The government has not accepted the allegations of bias, however, writing in a letter to the law firm: “The fact of the redesign does not mean that the [Secretary of State] accepts the allegations in your claim form [i.e. around unconscious bias and the use of nationality as a criterion in the streaming process].”

The Home Office letter also claims the department had already moved away from use of the streaming tool “in many application types”. But it adds that it will approach the redesign “with an open mind in considering the concerns you have raised”.

The redesign is slated to be completed by the autumn, and the Home Office says an interim process will be put in place in the meanwhile, excluding the use of nationality as a sorting criterion.

The JCWI has claimed a win against what it describes as a “shadowy, computer-driven” people sifting system — writing on its website: “Today’s win represents the UK’s first successful court challenge to an algorithmic decision system. We had asked the Court to declare the streaming algorithm unlawful, and to order a halt to its use to assess visa applications, pending a review. The Home Office’s decision effectively concedes the claim.”

The department did not respond to a number of questions we put to it regarding the algorithm and its design processes — including whether or not it sought legal advice ahead of implementing the technology in order to determine whether it complied with the UK’s Equality Act.

“We do not accept the allegations Joint Council for the Welfare of Immigrants made in their Judicial Review claim and whilst litigation is still on-going it would not be appropriate for the Department to comment any further,” the Home Office statement added.

The JCWI’s complaint centered on the use, since 2015, of an algorithm with a “traffic-light system” to grade every entry visa application to the UK.

“The tool, which the Home Office described as a digital ‘streaming tool’, assigns a Red, Amber or Green risk rating to applicants. Once assigned by the algorithm, this rating plays a major role in determining the outcome of the visa application,” it writes, dubbing the technology “racist” and discriminatory by design, given its treatment of certain nationalities.

“The visa algorithm discriminated on the basis of nationality — by design. Applications made by people holding ‘suspect’ nationalities received a higher risk score. Their applications received intensive scrutiny by Home Office officials, were approached with more scepticism, took longer to determine, and were much more likely to be refused.

“We argued this was racial discrimination and breached the Equality Act 2010,” it adds. “The streaming tool was opaque. Aside from admitting the existence of a secret list of suspect nationalities, the Home Office refused to provide meaningful information about the algorithm. It remains unclear what other factors were used to grade applications.”

Since 2012 the Home Office has openly operated an immigration policy known as the ‘hostile environment’ — applying administrative and legislative processes that are intended to make it as hard as possible for people to stay in the UK.

The policy has led to a number of human rights scandals. (We also covered the impact on the local tech sector by telling the story of one UK startup’s visa nightmare last year.) So applying automation atop an already highly problematic policy does look like a formula for being taken to court.

The JCWI’s concern around the streaming tool was exactly that it was being used to automate the racism and discrimination many argue underpin the Home Office’s ‘hostile environment’ policy. In other words, if the policy itself is racist any algorithm is going to pick up and reflect that.

“The Home Office’s own independent review of the Windrush scandal found that it was oblivious to the racist assumptions and systems it operates,” said Chai Patel, legal policy director of the JCWI, in a statement. “This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software. The immigration system needs to be rebuilt from the ground up to monitor for such bias and to root it out.”

“We’re delighted the Home Office has seen sense and scrapped the streaming tool. Racist feedback loops meant that what should have been a fair migration process was, in practice, just ‘speedy boarding for white people.’ What we need is democracy, not government by algorithm,” added Cori Crider, founder and director of Foxglove. “Before any further systems get rolled out, let’s ask experts and the public whether automation is appropriate at all, and how historic biases can be spotted and dug out at the roots.”

In its letter to Foxglove, the government has committed to undertaking Equality Impact Assessments and Data Protection Impact Assessments for the interim process it will switch to from August 7 — when it writes that it will use “person-centric attributes (such as evidence of previous travel)” to help sift some visa applications, further committing that “nationality will not be used”.

Some types of applications will be removed from the sifting process altogether, during this period.

“The intent is that the redesign will be completed as quickly as possible and at the latest by October 30, 2020,” it adds.

Asked for thoughts on what a legally acceptable visa streaming algorithm might look like, Internet law expert Lilian Edwards told TechCrunch: “It’s a tough one… I am not enough of an immigration lawyer to know if the original criteria applied re suspect nationalities would have been illegal by judicial review standard anyway even if not implemented in a sorting algorithm. If yes then clearly a next generation algorithm should aspire only to discriminate on legally acceptable grounds.

“The problem as we all know is that machine learning can reconstruct illegal criteria — though there are now well known techniques for evading that.”

“You could say the algorithmic system did us a favour by confronting illegal criteria being used which could have remained buried at individual immigration officer informal level. And indeed one argument for such systems used to be ‘consistency and non-arbitrary’ nature. It’s a tough one,” she added.

Earlier this year the Dutch government was ordered to halt use of an algorithmic risk scoring system for predicting the likelihood social security claimants would commit benefits or tax fraud — after a local court found it breached human rights law.

In another interesting case, a group of UK Uber drivers are challenging the legality of the gig platform’s algorithmic management of them under Europe’s data protection framework — which bakes in data access rights, including provisions attached to legally significant automated decisions.

Machine Learning for Product Managers – A Quick Primer

Currently, there are thousands of products, apps, and services driven by machine learning (ML) that we use every day. As Crunchbase reported, in 2019 there were 8,705 companies and startups relying on this technology. According to PwC’s research, ML and AI technologies are predicted to contribute about $15.7 trillion to global GDP by 2030. It’s obvious [...]


The post Machine Learning for Product Managers – A Quick Primer appeared first on Mind the Product.

Announcing Sight Tech Global, an event on the future of AI and accessibility for people who are blind or visually impaired

Few challenges have excited technologists more than building tools to help people who are blind or visually impaired. It was Silicon Valley legend Ray Kurzweil, for example, who in 1976 launched the first commercially available text-to-speech reading device. He unveiled the $50,000 Kurzweil Reading Machine, a boxy device that covered a tabletop, at a press conference hosted by the National Federation of the Blind.

The early work of Kurzweil and many others has rippled across the commerce and technology world in stunning ways. Today’s equivalent of Kurzweil’s machine is Microsoft’s Seeing AI app, which uses AI-based image recognition to “see” and “read” in ways that Kurzweil could only have dreamed of. And it’s free to anyone with a mobile phone. 

Remarkable leaps forward like that are the foundation for Sight Tech Global, a new virtual event slated for December 2-3 that will bring together many of the world’s top technology and accessibility experts to discuss how rapid advances in AI and related technologies will shape assistive technology and accessibility in the years ahead.

The technologies behind Microsoft’s Seeing AI are on the same evolutionary tree as the ones that enable cars to be autonomous and robots to interact safely with humans. Much of our most advanced technology today stems from that early, challenging mission that top Silicon Valley engineers embraced to teach machines to “see” on behalf of humans.

From the standpoint of people who suffer vision loss, the technology available today is astonishing, far beyond what anyone anticipated even ten years ago. Purpose-built products like Seeing AI and computer screen readers like JAWS are remarkable tools. At the same time, consumer products including mobile phones, mapping apps, and smart voice assistants are game changers for everyone, those with sight loss not the least. And yet, that tech bonanza has not come close to breaking down the barriers in the lives of people who still mostly navigate with canes or dogs or sighted assistance, depend on haphazard compliance with accessibility standards to use websites, and can feel as isolated as ever in a room full of people. 

In other words, we live in a world where a computer can drive a car at 70 MPH without human assistance but there is not yet any comparable device to help a blind person walk down a sidewalk at 3 MPH. A social media site can identify billions of people in an instant but a blind person can’t readily identify the person standing in front of them. Today’s powerful technologies, many of them grounded in AI, have yet to be milled into next-generation tools that are truly useful, happily embraced and widely affordable. The work is underway at big tech companies like Apple and Microsoft, at startups, and in university labs, but no one would dispute that the work is as slow as it is difficult. People who are blind or visually impaired live in a world where, as the science fiction author William Gibson once remarked, “The future is already here — it’s just not very evenly distributed.”

That state of affairs is the inspiration for Sight Tech Global. The event will convene the top technologists, human-computer interaction specialists, product designers, researchers, entrepreneurs and advocates to discuss the future of assistive technology as well as accessibility in general. Many of those experts and technologists are blind or visually impaired, and the event programming will stand firmly on the ground that no discussion or new product development is meaningful without the direct involvement of that community. Silicon Valley has great technologies but does not, on its own, have the answers.

The two days of programming on the virtual main stage will be free and available on a global basis, both live and on-demand. There will also be a $25 Pro Pass for those who want to participate in specialized breakout sessions, Q&A with speakers, and virtual networking. Registration for the show opens soon; in the meantime, anyone interested may request email updates here.

It’s important to note that there are many excellent events every year that focus on accessibility, and we respect their many abiding contributions and steady commitment. Sight Tech Global aims to complement the existing event line-up by focusing on hard questions about advanced technologies and the products and experiences they will drive in the years ahead – assuming they are developed hand-in-hand with their intended audience and with affordability, training and other social factors in mind. 

In many respects, Sight Tech Global is taking a page from TechCrunch’s approach to its AI and robotics events over the past four years, which were held in partnership with MIT and UC Berkeley. The concept was to have TechCrunch editors ask top experts in AI and related fields tough questions across the full spectrum of issues around these powerful technologies, from the promise of automation and machine autonomy to the downsides of job elimination and bias in AI-based systems. TechCrunch’s editors will be a part of this show, along with other expert moderators.

As the founder of Sight Tech Global, I am drawing on my extensive event experience at TechCrunch over eight years to produce this event. Both TechCrunch and its parent company, Verizon Media, are lending a hand in important ways. My own connection to the community is through my wife, Joan Desmond, who is legally blind. 

The proceeds from sponsorships and ticket sales will go to the non-profit Vista Center for the Blind and Visually Impaired, which has been serving the Silicon Valley area for 75 years. The Vista Center owns the Sight Tech Global event, and its executive director, Karae Lisle, is the event’s chair. We have assembled a highly experienced team of volunteers to program and produce a rich, world-class virtual event on December 2-3.

Sponsors are welcome, and we have opportunities available ranging from branding support to content integration. Please email sponsor@sighttechglobal.com for more information.

Our programming work is under way, and we will announce speakers and sessions over the coming weeks. The programming committee includes Jim Fruchterman (Benetech / TechMatters), Larry Goldberg (Verizon Media), Matt King (Facebook) and Professor Roberto Manduchi (UC Santa Cruz). We welcome ideas and can be reached via programming@sighttechglobal.com.

For general inquiries, including collaborations on promoting the event, please contact info@sighttechglobal.com.

The essential revenue software stack

From working with our 90+ portfolio companies and their customers, as well as from frequent conversations with enterprise leaders, we have observed a set of software services emerge and evolve to become best practice for revenue teams. This set of services — call it the “revenue stack” — is used by sales, marketing and growth teams to identify and manage their prospects and revenue.

The evolution of this revenue stack started long before anyone had ever heard the word coronavirus, but now the stakes are even higher as the pandemic has accelerated this evolution into a race. Revenue teams across the country have been forced to change their tactics and tools in the blink of an eye in order to adapt to this new normal — one in which they needed to learn how to sell in not only an all-digital world but also an all-remote one where teams are dispersed more than ever before. The modern “remote-virtual-digital”-enabled revenue team has a new urgency for modern technology that equips them to be just as — and perhaps even more — productive than their pre-coronavirus baseline. We have seen a core combination of solutions emerge as best-in-class to help these virtual teams be most successful. Winners are being made by the directors of revenue operations, VPs of revenue operations, and chief revenue officers (CROs) who are fast adopters of what we like to call the essential revenue software stack.

In this stack, we see four necessary core capabilities, all critically interconnected. The four core capabilities are:

  1. Revenue enablement.
  2. Sales engagement.
  3. Conversational intelligence.
  4. Revenue operations.

These capabilities run on top of three foundational technologies that most growth-oriented companies already use — agreement management, CRM and communications. We will dive into these core capabilities, the emerging leaders in each and provide general guidance on how to get started.

Revenue enablement