TikTok claims it’s not collecting U.S. users’ biometric data, despite what privacy policy says

Last year, TikTok quietly updated its privacy policy to allow the app to collect biometric data on U.S. users, including “faceprints and voiceprints” — a concerning change that the company declined to detail at the time, or during a subsequent Senate hearing held last October. Today, the tech company was again asked about its intentions regarding this data collection practice during a Senate hearing focused on social media’s impact on homeland security. 

TikTok’s earlier privacy policy change had introduced a new section called “Image and Audio Information” under the section “Information we collect automatically.” Here, it detailed the types of images and audio that could be collected, including: “biometric identifiers and biometric information as defined under U.S. laws, such as faceprints and voiceprints.”

The policy language was vague as it didn’t clarify whether it was referring to federal law, state laws, or both, nor did it explain why, exactly, this information was being collected, or how it might be shared.

To learn more, Senator Kyrsten Sinema (D-AZ) today asked TikTok’s representative for the hearing, its Chief Operating Officer Vanessa Pappas, if the biometric data of Americans had ever been accessed by or provided to any person located in China.

She also wanted to know whether it was possible for this biometric data to be accessed by anyone in China.

Pappas didn’t answer the question with a simple yes or no, instead clarifying how TikTok defines biometric data.

Noting that everyone has their own definition of what “biometrics” means, Pappas claimed TikTok did not use “any sort of facial, voice or audio, or body recognition that would identify an individual.”

She further explained that such data collection was only used for video effects and stored locally on users’ devices, where it’s subsequently deleted.

“…the way that we use facial recognition, for example, would be is if we’re putting an effect on the creator’s video — so, you were uploading a video and you wanted to put sunglasses or dog ears on your video — that’s when we do facial recognition. All of that information is stored only in your device. And as soon as it’s applied — like that filter is applied and posted — that data is deleted,” Pappas said. “So we don’t have that data.”

In other words, the TikTok exec was saying that ByteDance employees in China would have no way of collecting this data from TikTok’s U.S. users in the first place, because of how the process works at a technical level. (TikTok, of course, has hundreds of filters and effects in its app, so independently verifying how each one works would take technical expertise and time.)

Notably, this is the first time the company has responded to U.S. Senators’ inquiries about the app’s use of biometrics, as the question brought up during the October 2021 hearing was essentially dodged at the time. When Senator Marsha Blackburn (R-TN) followed up with TikTok for more information after that hearing, the question about facial recognition and voiceprints hadn’t been included on the list of questions TikTok returned to her office later that year in December.

The biometrics issue also didn’t come up in the letter TikTok sent to a group of U.S. senators in June 2022, to answer follow-up questions about Chinese ByteDance employees’ access to TikTok U.S. users’ data, after BuzzFeed News’ damning report on the matter. Instead, that letter was focused more on how TikTok had been working to move its U.S. users’ data to Oracle’s cloud to further limit access from staff in China.

The lack of clarity around TikTok’s use of biometrics raised further concerns in April 2022, when the ACLU pointed out that a new TikTok trend involved having users film their eyes up close, then using a high-resolution filter to show the details, patterns and colors of their irises. At the time of its report, over 700,000 videos had been created using the filter within a month’s time, it said. (Today, TikTok’s app reports only 533,000+ videos.) In an email to TechCrunch, the ACLU had also suggested taking a look at Oracle’s biometric technology, given its plans to host TikTok user data.

In addition to questions about biometric data collection, TikTok was also asked in today’s hearing whether or not it was tracking users’ keystrokes.

The question related to an independent privacy researcher’s findings, released in August, which claimed the TikTok iOS app had been injecting code that could allow it to essentially perform keylogging. Ireland’s Data Protection Commission also requested a meeting with TikTok after this research was released.

At the time, TikTok explained that the report was misleading: the app’s code was not doing anything malicious, but was rather used for things like debugging, troubleshooting and performance monitoring. The company also said it used keystroke information to detect unusual patterns to protect against fake logins, spam comments and other behavior that could threaten its platform.
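TikTok hasn’t published how its detection works, but pattern-based bot detection of the kind it describes can be done on timing metadata alone, without ever reading what was typed. A minimal, hypothetical sketch (the threshold and function names are invented for illustration): scripted input tends to produce near-uniform gaps between keystrokes, while human typing is irregular.

```python
import statistics

def looks_automated(intervals_ms, min_jitter_ms=15.0):
    """Flag a typing session as likely automated.

    Only the gaps between keystrokes (in milliseconds) are inspected,
    never the typed characters themselves. Human typing shows irregular
    gaps; scripted input tends to be near-uniform.
    """
    if len(intervals_ms) < 5:
        return False  # too little data to judge
    return statistics.stdev(intervals_ms) < min_jitter_ms

# Near-constant gaps, as a spam bot might produce:
bot = [50, 51, 50, 49, 50, 51]
# Irregular gaps typical of a human:
human = [120, 340, 95, 610, 180, 250]

print(looks_automated(bot))    # True
print(looks_automated(human))  # False
```

A real system would combine many such signals; the point is only that anomaly detection on keystroke metadata is consistent with not collecting typed content.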

At today’s hearing, Pappas again stressed that TikTok was never collecting the content of what was being typed, and that, to her knowledge, this had been “an anti-spam measure.”

TikTok claims it’s not collecting U.S. users’ biometric data, despite what privacy policy says by Sarah Perez originally published on TechCrunch

ConductorOne is bringing automation to identity and access management with $15M investment

The founders of ConductorOne, an identity and access control startup, both came from Okta, which is itself a single sign-on vendor based on the zero trust model. In fact, they were in charge of authentication and zero trust products, and saw firsthand how companies were struggling to control permissions and access across a complex environment that often included not just cloud applications, but also on-premises pieces mixed in as well.

They decided to move on and start a company to help solve that particular set of problems with the goal of automating a lot of the access control activities that up to this point have been done manually, or worse, not at all.

Today the company announced a $15 million Series A.

CTO and co-founder Paul Querna said they were keenly aware of the pain points companies were facing around these issues. “Permissions and access management is still very painful to end users, and IT teams or the engineering team managing all of this,” he told TechCrunch. That’s because with a malfunctioning permissions system, you can under-provision, keeping people waiting for the tools they need to do their jobs, or over-provision, such as retaining permissions for users who no longer work at the company. “I think a lot of us have seen firsthand these kinds of experiences,” Querna said.

His co-founder and CEO Alex Bovee adds that they wanted to make it easier for companies to control these access management tasks and bring the principle of least privilege to the solution. “We started ConductorOne to really automate as much as possible from an identity security perspective how people get access, retain access and revoke access to help companies achieve more of a least privilege level of access control,” Bovee told me.

The former Okta employees see their company solving a distinctly different identity security problem from their former employer’s. “They do a great job of centralizing some of your corporate users into a central directory. I think when you think about identity from a security perspective, it’s fundamentally about understanding all the identities in your environment, whether or not they’re connected to your SSO solutions,” he said.

He adds, “It’s also about understanding the permissions, the roles, the data that those different identities can access. So we are taking much more of an orchestration centric view. Frankly, it’s just a different architecture, more of an orchestration view and visibility first view across your environment to be able to give you that as a security and GRC (governance, risk, compliance) team, and then building the workflows on top of that to execute it.”

Part of the way it works is through out-of-the-box integrations to popular services like Okta, GitHub, Slack, Datadog, Jira and so forth to understand what’s happening across the company, and what actions could be having an impact on someone’s permission to access a program. It’s worth noting, however, that they can work with any corporate directory solution beyond Okta.

Today, the startup has 17 employees with plans to double that by year’s end. Bovee says that building a diverse workforce is written into the company’s original values documents. “We posted our company values very early on. It’s one of our first blog posts, and I think one of the mechanisms to attract that talent, especially early in the sourcing funnel, is to be public and transparent about how you want to run the company, and emphasize that you believe in diversity and you want that as part of your company culture,” he said.

Today’s $15M Series A investment was led by Accel with participation from existing investors Fuel Capital, Fathom Capital and Active Capital along with several prominent industry angels. The company raised a $5 million seed round last year, which was also led by Accel.

The new funding should help them to start rounding out the longer-term vision for the company. “Our vision and strategy for the product long term is to automate that full lifecycle across access control. So not only the onboarding process, but eventually the offboarding process, and handling things like time-based access control, so it’s not even an issue in the first place because you grant the access for a period of time and then remove it,” Bovee explained.
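The time-based access control Bovee describes can be sketched in a few lines. This is a hypothetical illustration, not ConductorOne’s implementation: a grant carries its own expiry, so offboarding requires no manual revocation step.

```python
import time

class AccessGrant:
    # Hypothetical sketch of time-bound access: permission is granted
    # with an expiry, so it lapses on its own when the clock runs out.
    def __init__(self, user, resource, ttl_seconds):
        self.user = user
        self.resource = resource
        self.expires_at = time.time() + ttl_seconds

    def is_active(self):
        # No revocation list needed; the grant self-expires.
        return time.time() < self.expires_at

grant = AccessGrant("alice", "prod-db", ttl_seconds=3600)
print(grant.is_active())  # True for the next hour, then False without any manual step
```

The design choice here is that least privilege becomes the default: forgetting to revoke access is no longer possible, because keeping access is what requires action.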

MIT researchers uncover ‘unpatchable’ flaw in Apple M1 chips

Apple’s M1 chips have an “unpatchable” hardware vulnerability that could allow attackers to break through its last line of security defenses, MIT researchers have discovered.

The vulnerability lies in a hardware-level security mechanism utilized in Apple M1 chips called pointer authentication codes, or PAC. This feature makes it much harder for an attacker to inject malicious code into a device’s memory and provides a level of defense against buffer overflow exploits, a type of attack that forces memory to spill out to other locations on the chip.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory, however, have created a novel hardware attack, which combines memory corruption and speculative execution attacks to sidestep the security feature. The attack shows that pointer authentication can be defeated without leaving a trace, and as it utilizes a hardware mechanism, no software patch can fix it.

The attack, appropriately called “Pacman,” works by “guessing” a pointer authentication code (PAC), a cryptographic signature that confirms that an app hasn’t been maliciously altered. This is done using speculative execution — a technique used by modern computer processors to speed up performance by speculatively guessing various lines of computation — to leak PAC verification results, while a hardware side-channel reveals whether or not the guess was correct.

What’s more, since there are only so many possible values for the PAC, the researchers found that it’s possible to try them all to find the right one.
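To see why a small PAC space matters, here is a toy model of the enumeration step. Everything in it is a stand-in: the signing function, key and pointer values are invented (real chips compute the PAC with a keyed cipher such as QARMA, entirely in hardware), and `oracle` abstracts the side channel that tells the Pacman attacker whether a guess would verify, without ever faulting.

```python
import hashlib

PAC_BITS = 16  # illustrative; the real PAC width depends on the virtual-address configuration

def sign(pointer, key):
    # Toy stand-in for the hardware PAC function.
    digest = hashlib.sha256(f"{pointer}:{key}".encode()).digest()
    return int.from_bytes(digest[:2], "big") % (1 << PAC_BITS)

SECRET_KEY = 0xDEADBEEF        # never visible to the attacker
target_ptr = 0x10002000
true_pac = sign(target_ptr, SECRET_KEY)

def oracle(guess):
    # Models the micro-architectural side channel: it reveals only
    # whether a guessed PAC would verify, without crashing on a miss.
    return guess == true_pac

# With at most 2**16 candidates, enumerating them all is cheap. Pacman
# performs these guesses speculatively, so wrong ones leave no trace.
recovered = next(g for g in range(1 << PAC_BITS) if oracle(g))
print(recovered == true_pac)  # True
```

The speculative-execution part is what makes this viable in practice: architecturally, a wrong PAC guess would normally crash the process, but a guess that only ever executes speculatively does not.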

In a proof of concept, the researchers demonstrated that the attack even works against the kernel — the software core of a device’s operating system — which has “massive implications for future security work on all ARM systems with pointer authentication enabled,” says Joseph Ravichandran, a Ph.D. student at MIT CSAIL and co-lead author of the research paper.

“The idea behind pointer authentication is that if all else has failed, you still can rely on it to prevent attackers from gaining control of your system,” Ravichandran added. “We’ve shown that pointer authentication as a last line of defense isn’t as absolute as we once thought it was.”

Apple has implemented pointer authentication on all of its custom ARM-based silicon so far including the M1, M1 Pro, and M1 Max, and a number of other chip manufacturers including Qualcomm and Samsung have either announced or are expected to ship new processors supporting the hardware-level security feature. MIT said it has not yet tested the attack on Apple’s unreleased M2 chip, which also supports pointer authentication.

“If not mitigated, our attack will affect the majority of mobile devices, and likely even desktop devices in the coming years,” MIT said in the research paper.

The researchers, who presented their findings to Apple, noted that the Pacman attack isn’t a “magic bypass” for all security on the M1 chip, and can only exploit an existing bug that pointer authentication protects against. When reached, Apple did not comment on the record.

In May last year, a developer discovered an unfixable flaw in Apple’s M1 chip that creates a covert channel that two or more already-installed malicious apps could use to transmit information to each other. But the bug was ultimately deemed “harmless” as malware can’t use it to steal or interfere with data that’s on a Mac.

Clearview AI banned from selling its facial recognition software to most U.S. companies

A company that gained notoriety for selling access to billions of facial photos, many culled from social media without the knowledge of the individuals depicted, faces major new restrictions to its controversial business model.

On Monday, Clearview AI agreed to settle a 2020 lawsuit from the ACLU that accused the company of running afoul of an Illinois law banning the use of individuals’ biometric data without consent.

That law, the Biometric Information Privacy Act (BIPA), protects the privacy of Illinois residents, but the Clearview settlement is a clear blueprint for how the law can be leveraged to bolster consumer protections on the national stage.

“By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse,” Deputy Director of ACLU’s Speech, Privacy, and Technology Project Nathan Freed Wessler said.

“Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.”

Clearview isn’t the only company to get tangled up in the trailblazing Illinois privacy law. Last year, Facebook was ordered to pay $650 million for violating BIPA by automatically tagging people in photos with the use of facial recognition tech.

According to the terms of the Clearview settlement, which is still in the process of being finalized by the court, the company will be nationally banned from selling or giving away access to its facial recognition database to private companies and individuals.

While there is an exception made for government contractors — Clearview works with government agencies, including Homeland Security and the FBI in the U.S. — the company can’t provide its software to any government contractors or state or local government entities in Illinois for five years.

Clearview will also be forced to maintain an opt-out system to allow any Illinois residents to block their likeness from the company’s facial search results, a mechanism it must spend $50,000 to publicize online. The company must also end its controversial practice of providing free trials to police officers if those individuals don’t get approval through their departments to test the software.

The sweeping restrictions will have a huge impact on Clearview’s ability to do business in the U.S, but the company is also facing privacy headwinds in its business abroad. Last November, Britain’s Information Commissioner’s Office hit Clearview with a $22.6 million fine for failing to obtain consent from British residents before sweeping their photos into its massive database. Clearview has also run afoul of privacy laws in Canada, France and Australia, with some countries ordering the company to delete all data that was obtained without their residents’ consent.

Apple, Google, and Microsoft team up on passwordless logins

In a rare show of alliance, Apple, Google and Microsoft have joined forces to expand support for passwordless logins across mobile, desktop, and browsers.

Passwords are notoriously insecure, with weak and easily guessable credentials accounting for more than 80% of all data breaches, per Verizon’s annual data breach report. While password managers and multi-factor technologies offer incremental improvements, Apple, Google and Microsoft are working together to create sign-in technology that is more convenient and more secure.

The tech giants announced on Thursday that they are expanding support for a password-free sign-in standard from the FIDO Alliance and the World Wide Web Consortium, which means you’ll soon be able to use your smartphone to sign in to an app or website on a nearby device, regardless of the operating system or browser you’re using. You’ll use the same action you take multiple times each day to unlock your smartphone, such as verifying your fingerprint, scanning your face or entering a device PIN.

Users will also be able to automatically access their FIDO sign-in credentials, or “passkeys,” across multiple devices — including new ones — without having to re-enroll every account.

While the three companies have long supported the passwordless sign-in standard created by the FIDO Alliance, users are still forced to sign into each website or app with each device before they can use the passwordless feature. Over the next year, the three tech giants will implement passwordless FIDO sign-in standards across macOS and Safari; Android and Chrome; and Windows and Edge. This means that, for example, users will be able to sign in on a Google Chrome browser that’s running on Microsoft Windows, using a passkey on an Apple device.

This will make it much more difficult for hackers to compromise login details remotely since signing in requires access to a physical device.

“Working with the industry to establish new, more secure sign-in methods that offer better protection and eliminate the vulnerabilities of passwords is central to our commitment to building products that offer maximum security and a transparent user experience — all with the goal of keeping users’ personal information safe,” said Kurt Knight, Apple’s senior director of platform product marketing, in a press release.

This new collective commitment was commended by Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), who called it “the type of forward-leaning thinking that will ultimately keep the American people safer online.”

“At CISA, we are working to raise the cybersecurity baseline for all Americans,” Easterly added. “Today is an important milestone in the security journey to encourage built-in security best practices and help us move beyond passwords. Cyber is a team sport, and we’re pleased to continue our collaboration.”

While the password has so far survived many attempts to kill it for good, this could be one of the final nails in its casket.

How a simple security bug became a university campus ‘master key’

When Erik Johnson couldn’t get his university’s mobile student ID app to reliably work, he sought to find a workaround.

The app is fairly important, since it allows him and every other student at his university to pay for meals, get into events and even unlock doors to dorm rooms, labs and other facilities across campus. The app is called GET Mobile, and it’s developed by CBORD, a technology company that brings access control and payment systems to hospitals and universities. But Johnson — and the many who left the app one-star reviews in frustration — said the app was slow and would take too long to load. There had to be a better way.

And so by analyzing the app’s network data at the moment he unlocked his dorm room door, Johnson found a way to replicate the network request and unlock the door using a one-tap Shortcut button on his iPhone. For it to work, the Shortcut first has to send his precise location along with the door unlock request, or the door won’t open. Johnson said students have to be physically in proximity to unlock doors using the app, a security measure aimed at preventing accidental door openings across campus.

It worked, but why stop there? If he could unlock a door without needing the app, what other tasks could he replicate?

Johnson didn’t have to look far for help. CBORD publishes a list of commands available through its API, which can be controlled using a student’s credentials, like his. (An API allows two things to talk to each other over the internet, in this case a mobile app and a university’s servers storing students’ data.)

But he soon found a problem: The API was not checking if a student’s credentials were valid. That meant Johnson, or anyone else on the internet, could communicate with the API and take over another student’s account without having to know their password. Johnson said the API only checked the student’s unique ID, but warned that these are sometimes the same as a university-issued student username or student ID number, which some schools publicly list on their online student directories, and as such cannot be considered a secret.
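The bug class Johnson describes, an endpoint keyed on an identifier that is effectively public, with no check that the caller’s session owns it, can be sketched in a few lines. All names and data below are invented for illustration; this is not CBORD’s actual code.

```python
# Hypothetical sketch: the handler trusts the supplied student ID and
# never verifies that the caller's session belongs to that student.
ACCOUNTS = {"jdoe1": {"meal_balance": 42.50}, "asmith2": {"meal_balance": 7.25}}
SESSIONS = {"token-abc": "jdoe1"}  # session token -> authenticated student

def get_account_vulnerable(student_id, session_token=None):
    # BUG: the student ID alone is enough, and IDs are often public
    # directory data, so anyone on the internet can read any account.
    return ACCOUNTS.get(student_id)

def get_account_fixed(student_id, session_token):
    # Fix: the session must exist and must map to the requested student.
    if SESSIONS.get(session_token) != student_id:
        raise PermissionError("session not authorized for this account")
    return ACCOUNTS[student_id]

print(get_account_vulnerable("asmith2"))  # another student's data, no password needed
```

The fix is a single ownership check, which is exactly why auditors treat “identifier known to the public used as the only credential” as a distinct, common vulnerability class (broken object-level authorization).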

Johnson described the password bug as a “master key” to his university — at least to the doors that are controlled by CBORD. As for needing to be in close proximity to a door to unlock it, Johnson said the bug allowed him to trick the API into thinking he was physically present — simply by sending back the approximate coordinates of the lock itself.
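The proximity bypass works because the server only ever sees coordinates the client chooses to report. A minimal sketch, assuming a server-side radius check like the one Johnson describes (the coordinates and radius are invented):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two lat/lon points.
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

LOCK_LAT, LOCK_LON = 42.4440, -76.5019  # hypothetical door location

def may_unlock(reported_lat, reported_lon, radius_m=25):
    # The check establishes proximity of the REPORT, not of the person:
    # the server has no independent way to verify where the client is.
    return haversine_m(reported_lat, reported_lon, LOCK_LAT, LOCK_LON) <= radius_m

# A remote attacker simply echoes the lock's own coordinates:
print(may_unlock(LOCK_LAT, LOCK_LON))  # True, from anywhere on the internet
```

Client-reported location can deter accidents, as the app intends, but it is not an authentication factor, which is why the spoof reduces to copying one pair of numbers.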


Since the bug was found in the API, Johnson said it could affect other universities, though he didn’t check, fearing that would exceed the bounds of his account access. Instead, he looked for a way to report the bug to CBORD but couldn’t find a dedicated security contact on its website. When he called the phone support line to disclose the vulnerability, a support representative said the company didn’t have a security contact and told him to report the bug through his school.

Assuming that the bug could be easily exploited, if not already, Johnson asked TechCrunch to share details of the vulnerability with CBORD.

The vulnerability was resolved a short time after we contacted the company on February 12. In an email, CBORD chief information officer Josh Elder confirmed the vulnerability is now fixed and session keys were invalidated, effectively closing off any remaining unauthenticated access to the API. Elder said CBORD’s customers were notified, but he declined to share the correspondence with TechCrunch. One security executive, whose organization is also a CBORD customer, told TechCrunch that they had not received any notice from CBORD about the vulnerability. It’s unclear if CBORD ever plans to notify users and account holders — including students like Johnson.

Elder did not dispute Johnson’s findings, but declined to comment further when asked if the company stores logs or has the ability to detect malicious exploitation of its API. TechCrunch did not hear back after we requested to speak with a company spokesperson to answer our further questions.

It’s not the first time that CBORD had to fix a vulnerability that could have remotely unlocked doors. Wired reported in 2009 that it was possible to intercept a door unlock command and guess the next sequence number, defeating the need for an ID card.

The IRS won’t make you verify your identity with facial recognition after all

The IRS announced plans Monday to back away from a third-party facial recognition system that collects biometric data from U.S. taxpayers who want to log in to the agency’s online portal.

The IRS says it will abandon the technology, built by a contractor called ID.me, in the coming weeks. The agency says it will instead swap in an “additional authentication process” that doesn’t collect facial images or video. The two-year contract was worth $86 million.

“The IRS takes taxpayer privacy and security seriously, and we understand the concerns that have been raised,” IRS Commissioner Chuck Rettig said. “Everyone should feel comfortable with how their personal information is secured, and we are quickly pursuing short-term options that do not involve facial recognition.”

The update to the U.S. tax collection agency’s online verification system, set for a full roll-out over the summer, was roundly criticized for collecting sensitive biometric data on Americans.

Many tax filers already encountered the ID.me system live on IRS.gov, where they were required to submit facial videos to create an online login. If that system failed, tax filers were put into lengthy queues to have their identities manually verified in video calls with a third-party company.

In a letter to Rettig, Reps. Ted Lieu (D-CA), Anna Eshoo (D-CA), Pramila Jayapal (D-WA) and Yvette Clarke (D-NY) raised concerns that allowing a private company to collect face data from millions of Americans posed a cybersecurity risk. The lawmakers also pointed to the body of research demonstrating that facial recognition systems are often built with inherent racial bias that makes the technology far less accurate for non-white faces.

“To be clear, Americans will not have the option of providing their biometric data to a private contractor as an alternative way to access the IRS website,” the lawmakers wrote.

In choosing to roll out the facial recognition technology, the IRS ran afoul of privacy hawks but also the federal government’s own General Services Administration, which has publicly committed to not implement facial recognition tech unless such a system undergoes “rigorous review” to evaluate if it will cause unforeseen harm. The GSA’s existing identity verification methods eschew the need for biometric data, relying instead on scans of government records and credit reports.

iProov snaps up $70M for its facial verification technology, already in use by Homeland Security, the NHS and others

Biometrics, and specifically facial recognition, have seen a surge of usage in the last several years, first as a tool to help organizations verify identities digitally against rising waves of fraud and cybercrime; and second as a way to help enable that process even further in our socially distanced, pandemic-punctuated times. Today, a startup called iProov, which provides face authentication and verification technology to a number of governments and other big organizations — attracting some controversy in the process — is announcing $70 million in funding to keep up its growth momentum.

The funding is coming from a single investor, Sumeru Equity Partners out of the Bay Area, which originally started life as a part of Silver Lake before spinning off as an independent operation in 2014. Valuation is not being disclosed, nor is the total raised by the company to date.

London-based iProov has seen a lot of business traction so far in its home market of the U.K., and it now plans to use the capital specifically to continue building out its presence in the U.S. and other international markets where it has already started to get a foothold. iProov works at the large enterprise level, and its customer base currently includes the U.S. Department of Homeland Security, the U.K.’s Home Office and National Health Service (NHS), the Australian Taxation Office, GovTech Singapore and banks Rabobank and ING. iProov said 2021 was a bumper year for the company: it tripled its revenues over the year before (although it’s not disclosing how much that works out to in actual terms).

As a measure of how much, and how, iProov is getting used, consider the NHS app, which uses iProov to power the facial verification required to register, and then lets you check and show your vaccination status, book doctor appointments, re-order prescriptions, view your medical records, get advice and more. The NHS says usage of the app ballooned to 16 million users as of September 2021, from just 4 million in May 2021 (it’s now January, so there are likely more).

To be clear, this isn’t facial recognition, which founder and CEO Andrew Bud describes as a mere “commodity” these days. Instead, the technology, sold by iProov as Genuine Presence Assurance and Liveness Assurance, lets an organization capture an image of an individual, verify that it’s real against another piece of ID rather than a deepfake or other counterfeit image, and proceed with whatever transaction is underway, all by way of cloud-based, remote mobile technology.

At its peak last year, iProov was typically getting pinged for more than 1 million facial verifications per day.

But that growth has not come without scrutiny and other controversial attention.

Critics have slammed iProov and the U.K. government for a lack of transparency over how user data is handled in the process of capturing and authenticating images for biometric verification, particularly given that iProov is a private company working for a public organization. Relatedly, ethical questions have been raised about links between some of the startup’s earliest backers and the Tory Party (currently in power in the U.K.).

And as of this week (timed to coincide with the funding news?) iProov has also been the subject of a patent lawsuit from a U.S. rival called FaceTec, which claims that iProov has copied parts of its technology and is demanding an injunction (something that could be tricky as iProov increases its focus on the U.S.).

Meanwhile, iProov has also been involved in early work to see how, and if, its facial authentication technology might be applied in other use cases, such as these trials to speed up Covid vaccination certification, another potential avenue for scrutiny.

In an interview, Bud was quick to counter the controversial currents that have swirled around his company and the technology that it’s built.

On the issue of privacy and security, Bud is a longtime veteran of the telecoms and mobile worlds, initially as an engineer and then an executive, who said that his interest in biometrics was sparked after being burned at his previous company, mBlox, where malicious hackers exploited the company’s SMS infrastructure and stole millions of dollars from customers.

The experience made him realize that security needed to be strong on the provider’s end but also easy for consumers to engage with. “It needed to be ultra-inclusive and simple,” Bud said. “How can we ensure something like that would never happen again? I had to solve that problem.” That, he said, was what spurred him to start looking at biometrics, which he believes is the best answer to that question. And from that he built his next company, which became iProov.

“These are fair questions,” he said in response to me raising the issue of privacy and data protection at iProov and its work with public and private institutions. “Privacy is extremely important to iProov and our systems are built to protect users.” Everything is compliant with GDPR or other government-mandated data protection rules, related to data and how it may or may not be used, he added, and the methods that iProov uses to process user data are built to keep customers and their identities safe from being compromised. He also confirmed that none of the data that passes through its system is used for commercial purposes. iProov runs a policy of not knowing the identities or other personal information related to any photos, but it does store imagery, specifically to help track and block malicious actors and to track anomalies.

On the subject of the patent infringement lawsuit from FaceTec, Bud dismissed it as “completely unfounded,” with a spokesperson sending me a more complete statement after my interview (as well as asking that we keep this part out of the story altogether…):

“All of our products have been developed in-house and are covered by granted patents. Accusations that we have used [FaceTec] technology in our products are completely unfounded, and iProov will take all appropriate actions to defend itself and its customers.”

And as for future applications, although the UK government hasn’t yet shown a willingness to mandate so-called “Covid passports” widely — where people have to provide quick verification of their vaccination status to gain entry to events, public venues, workplaces and more — the basics of that technology are already there and being used by a number of other customers, Bud said. These include a recent launch from Eurostar (which runs the train under the English Channel between London and cities on the European continent) that lets passengers authenticate their various credentials at home, reducing dwell time at check-in, where they can then walk through simply by showing their faces to a screen.

Facial-related biometrics, Bud said, are likely to remain the mainstay of what iProov and others will develop for these and similar use cases, although the company offers a palm-based identification method, too. Primarily, however, iProov and others will have to follow the lead of the organizations they work with: their tech will only be as useful as whatever biometric information the original organization collects. (And these days, government-issued photo IDs remain the main source of that data.)

As more processes move to digital and cloud-based platforms, finding ever more watertight methods of verifying users' identities, while evading the increasingly sophisticated approaches of fraudsters and malicious hackers, will continue to be a huge priority. Investors seem willing to bet on iProov being one of the strong players in keeping those services working as they should.

“We see iProov as becoming the industry standard to establish the genuine presence of anything (a person, a document etc),” said Kyle Ryland, a managing partner at Sumeru, in a statement to TechCrunch. “We hope that iProov will be used not only to accelerate digital onboarding and verification for both online and physical experiences, but also to replace the use of insecure passwords for frictionless authentication and much more. We have a platform that is constantly learning and allows us to remain at the forefront of emerging technologies and new security threats.”

Stytch, an API-first passwordless startup, raises $90M Series B at $1B valuation

Stytch, an API-first passwordless authentication startup, has secured $90 million in Series B funding, pushing the company over the $1 billion valuation line.

The investment, led by Coatue Management LLC with participation from existing investors Benchmark Capital, Thrive Capital and Index Ventures, comes just four months after Stytch raised a $30 million Series A at a valuation of around $200 million. Since then, the startup has seen an almost 1,000% increase in developers using its passwordless authentication platform, rising from 350 developers in July to about 4,000 in November.

This, the company’s CEO tells us, is because of its API-first approach. “If you think about other passwordless startups, they’re very widget focused,” Reed McGinley-Stempel, CEO of Stytch and a former Plaid employee, tells TechCrunch. “We’ve had enough experience with non-API-first products that we knew there were a lot of limitations to what you can do.”

“For example, one of the common use cases we’ve seen that we would never have anticipated is checkout flows – adding the ability to create a new account at checkout with an SMS password or email verification, rather than guest checkout,” said McGinley-Stempel.

The company has also launched a number of new products since its Series A raise in July, including support for Sign In with Apple, Google and Microsoft, embeddable magic links, and one-time passcodes by email. This week it’s also adding support for WebAuthn, allowing Stytch to support hardware-based authentication keys like Yubico, as well as biometric-based Face ID and fingerprint logins.

Stytch said it plans to further expand its platform over the coming months, too. As part of its Series B raise, it acquired Cotter, a no-code passwordless authentication platform backed by Y Combinator that allows users to add one-tap login to websites and apps. Stytch said the acquisition will make it easier for developers to adopt passwordless technologies.

The company will also use the funding to expand its 30-person team and to build out its infrastructure.

Stytch isn’t the only startup that’s on a mission to kill off the password. In June, Transmit Security raised $543 million, in what was believed to be the largest Series A investment in cybersecurity history, for its natively passwordless identity and risk management solution. The following month Magic, a San Francisco-based startup that builds “plug and play” passwordless authentication technology, announced it had raised $27 million in Series A funding.

Yubico’s new hardware key features a fingerprint reader for passwordless logins

Yubico, the maker of hardware security keys, has unveiled the YubiKey Bio, its first key to support biometric authentication for passwordless logins and two-factor authentication (2FA).

The launch of the YubiKey Bio comes as tech giants transition away from traditional password-based logins, a prime target for cyberattacks. Microsoft recently rolled out a passwordless sign-in option to all consumer accounts, and Google this year announced plans to eventually eliminate the password.

The YubiKey Bio — first teased almost two years ago at Microsoft Ignite in November 2019 — jumps on the passwordless bandwagon by embedding a fingerprint reader in the key itself. Yubico describes this as the “next logical step” for the company, particularly as most modern laptops still don’t feature built-in biometric security.

Once a fingerprint is enrolled on the device, the data is stored in a secure element, while the biometric subsystem runs independently of the key’s core security functionality. All communication between the secure element and the rest of the key is encrypted to help thwart replay attacks.
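The replay protection mentioned above rests on a standard challenge-response pattern: the server issues a fresh random challenge, the key authenticates it, and any repeated challenge is rejected even if the accompanying response is valid. The sketch below is a deliberate simplification and not Yubico's implementation; real FIDO2 keys use public-key signatures rather than a shared secret, and `SECRET`, `key_sign` and `server_verify` are hypothetical names chosen for illustration.

```python
import hashlib
import hmac
import os

# Hypothetical shared secret standing in for credentials held in the
# key's secure element (real FIDO2 uses asymmetric key pairs instead).
SECRET = os.urandom(32)

def key_sign(challenge: bytes) -> bytes:
    """The 'hardware key' authenticates the server's fresh challenge."""
    return hmac.new(SECRET, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, seen: set) -> bool:
    """Reject any previously seen challenge, then check the MAC in constant time."""
    if challenge in seen:
        return False  # replayed exchange: refuse even a valid response
    seen.add(challenge)
    return hmac.compare_digest(response, key_sign(challenge))

seen: set = set()
challenge = os.urandom(16)
response = key_sign(challenge)
print(server_verify(challenge, response, seen))  # True: fresh challenge
print(server_verify(challenge, response, seen))  # False: replay detected
```

Because each challenge is random and single-use, an attacker who records one login exchange gains nothing by resending it, which is the property the encrypted secure-element channel is designed to preserve.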

“With the launch of the YubiKey Bio Series, we are proud to raise the standard for biometric security keys, enabling simple and strong passwordless authentication for our enterprise customers and everyday YubiKey users,” said Stina Ehrensvärd, CEO and co-founder of Yubico.

YubiKey Bio (Image: Yubico/supplied)

The company also notes that while no security system is foolproof, a hacker with a fake fingerprint would still need to get their hands on the user’s physical device to exploit it, which dramatically reduces the threat to the average YubiKey customer. Yubico will also let users fall back to a PIN to log in if biometric authentication fails due to variables in skin texture, such as moisture or temperature.

Like Yubico’s other hardware security keys, such as the YubiKey 5C NFC, the YubiKey Bio is a plug-and-play device; it doesn’t require any batteries, drivers, or associated software, and it works across macOS, Windows and Linux machines. YubiKeys also support open security and authentication standards — the FIDO U2F, FIDO2, and WebAuthn protocols — so they work out of the box on Macs, iPhones, Windows PCs, and Android devices, as well as with apps and services that support FIDO protocols.

The YubiKey Bio is out now in both USB-A ($80) and USB-C ($85) versions.