A security bug in Google’s Android app put users’ data at risk

Until recently, Google’s namesake Android app, which has more than five billion installs to date, had a vulnerability that could have allowed an attacker to quietly steal personal data from a victim’s device.

Sergey Toshin, founder of mobile app security startup Oversecured, said in a blog post that the vulnerability has to do with how the Google app relies on code that is not bundled with the app itself. Many Android apps, including the Google app, reduce their download size and the storage space needed to run by relying on code libraries that are already installed on Android phones.

But the flaw in the Google app’s code meant it could be tricked into pulling a code library from a malicious app on the same device instead of the legitimate library, allowing the malicious app to inherit the Google app’s permissions and gain near-complete access to a user’s data. That includes access to a user’s Google accounts, search history, email, text messages, contacts and call history, as well as the ability to trigger the microphone and camera and access the user’s location.

The malicious app would have to be launched once for the attack to work, Toshin said, but the attack itself happens without the victim’s knowledge or consent. Deleting the malicious app would not remove the malicious components from the Google app, he said.
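
The class of bug described here is an app resolving a shared code library by package name alone and loading whatever code the matching package supplies. The Kotlin sketch below shows one general mitigation for that pattern: pin and verify the library provider’s signing certificate before borrowing its ClassLoader. It is a minimal illustration, not Google’s actual fix; the package name and fingerprint constants are placeholders.

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import java.security.MessageDigest

// Hypothetical provider package and pinned signing-cert fingerprint; both are placeholders.
private const val LIBRARY_PKG = "com.example.shared.library"
private const val EXPECTED_CERT_SHA256 = "replace-with-the-provider-cert-sha256-fingerprint"

// Returns a ClassLoader for the provider package only if its signing certificate
// matches the pinned fingerprint; returns null otherwise.
fun loadTrustedLibraryClassLoader(context: Context): ClassLoader? {
    return try {
        val info = context.packageManager.getPackageInfo(
            LIBRARY_PKG, PackageManager.GET_SIGNING_CERTIFICATES
        )
        val signers = info.signingInfo?.apkContentsSigners ?: return null

        // Compare the SHA-256 fingerprint of each signing certificate to the pin.
        val trusted = signers.any { signer ->
            MessageDigest.getInstance("SHA-256")
                .digest(signer.toByteArray())
                .joinToString("") { "%02x".format(it) } == EXPECTED_CERT_SHA256
        }
        if (!trusted) return null

        // Only after verification: load code from the provider package. Resolving
        // the library by package name alone is the kind of mistake that lets a
        // malicious app claiming that name supply the code instead.
        context.createPackageContext(
            LIBRARY_PKG,
            Context.CONTEXT_INCLUDE_CODE or Context.CONTEXT_IGNORE_SECURITY
        ).classLoader
    } catch (e: PackageManager.NameNotFoundException) {
        null // provider not installed
    }
}
```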

A Google spokesperson told TechCrunch that the company fixed the vulnerability last month and that it had no evidence the flaw had been exploited by attackers. Android’s built-in malware scanner, Google Play Protect, is meant to stop malicious apps from installing. But no security feature is perfect, and malicious apps have slipped through its net before.

Toshin said the Google app vulnerability is similar to another bug discovered by the startup in TikTok earlier this year, which if exploited could have allowed an attacker to steal a TikTok user’s session tokens to take control of their account.

Oversecured has found similar vulnerabilities in other apps, including Android’s Google Play app and, more recently, apps pre-installed on Samsung phones.


Ukrainian police arrest multiple Clop ransomware gang suspects

Multiple suspects believed to be linked to the Clop ransomware gang have been detained in Ukraine after a joint operation from law enforcement agencies in Ukraine, South Korea, and the United States.

The Cyber Police Department of the National Police of Ukraine confirmed that six arrests were made after searches at 21 residences in the capital Kyiv and nearby regions. While it’s unclear whether the defendants are affiliates or core developers of the ransomware operation, they are accused of running a “double extortion” scheme, in which victims who refuse to pay the ransom are threatened with the leak of data stolen from their networks prior to their files being encrypted.

“It was established that six defendants carried out attacks of malicious software such as ‘ransomware’ on the servers of American and [South] Korean companies,” alleged Ukraine’s national police force in a statement.

The police also seized equipment from the alleged Clop ransomware gang, which is said to be behind total financial damages of about $500 million. This includes computer equipment, several cars — including a Tesla and a Mercedes — and 5 million Ukrainian hryvnia (around $185,000) in cash. The authorities also claim to have successfully shut down the server infrastructure used by the gang members to launch previous attacks.

“Together, law enforcement has managed to shut down the infrastructure from which the virus spreads and block channels for legalizing criminally acquired cryptocurrencies,” the statement added.

These attacks first began in February 2019, when the group attacked four Korean companies and encrypted 810 internal servers and personal computers. Since then, Clop — often styled as “Cl0p” — has been linked to a number of high-profile ransomware attacks. These include the breach of U.S. pharmaceutical giant ExecuPharm in April 2020 and the attack on South Korean e-commerce giant E-Land in November that forced the retailer to close almost half of its stores.

Clop is also linked to the ransomware attack and data breach at Accellion, which saw hackers exploit flaws in the IT provider’s File Transfer Appliance (FTA) software to steal data from dozens of its customers. Victims of this breach include Singaporean telecom Singtel, law firm Jones Day, grocery store chain Kroger, and cybersecurity firm Qualys.

At the time of writing, the dark web portal that Clop uses to share stolen data is still up and running, although it hasn’t been updated for several weeks. Law enforcement typically replaces a seized website with its own logo in the event of a successful takedown, which suggests that members of the gang could still be active.

“The Cl0p operation has been used to disrupt and extort organizations globally in a variety of sectors including telecommunications, pharmaceuticals, oil and gas, aerospace, and technology,” said John Hultquist, vice president of analysis at Mandiant’s threat intelligence unit. “The actor FIN11 has been strongly associated with this operation, which has included both ransomware and extortion, but it is unclear if the arrests included FIN11 actors or others who may also be associated with the operation.”

Hultquist said the efforts of the Ukrainian police “are a reminder that the country is a strong partner for the U.S. in the fight against cybercrime and authorities there are making the effort to deny criminals a safe harbor.”

The alleged perpetrators face up to eight years in prison on charges of unauthorized interference in the work of computers, automated systems, computer networks, or telecommunications networks and laundering property obtained by criminal means.

News of the arrests comes as international law enforcement turns up the heat on ransomware gangs. Last week, the U.S. Department of Justice announced that it had seized most of the ransom paid to members of DarkSide by Colonial Pipeline.

Your boss might tell you the office is more secure, but it isn’t

For the past 18 months, employees have enjoyed increased flexibility, and ultimately a better work-life balance, as a result of the mass shift to remote working necessitated by the pandemic. Most don’t want this arrangement, which did away with long commutes and superfluous meetings, to end: Buffer’s 2021 State of Remote Work report shows over 97% of employees would like to continue working remotely at least some of the time.

Companies, including some of the biggest names in tech, appear to have a different outlook and are beginning to demand that staff start to return to the workplace.

While most of the reasoning around this shift back to the office centers on the need for collaboration and socialization, another reason your employer might give is that the office is more secure. After all, we’ve seen an unprecedented rise in cybersecurity threats during the pandemic, from phishing attacks using Covid as bait to ransomware attacks that have crippled entire organizations.

Tessian research shared with TechCrunch shows that while none of the attacks have been linked to staff working remotely, 56% of IT leaders believe their employees have picked up bad cybersecurity behaviors since working from home. Similarly, 70% of IT leaders believe staff will be more likely to follow company security policies around data protection and data privacy while working in the office.

“Despite the fact that this was an emerging issue prior to the pandemic I do believe many organizations will use security as an excuse to get people back into the office, and in doing so actually ignore the cyber risks they are already exposed to,” Matthew Gribben, a cybersecurity expert and former GCHQ consultant, told TechCrunch.

“As we’ve just seen with the Colonial Pipeline attack, all it takes is one user account without MFA enabled to bring down your business, regardless of where the user is sat.”

Will Emmerson, CIO at Claromentis, has already witnessed some companies using cybersecurity as a ploy to accelerate the shift to in-person working. “Some organizations are already using cybersecurity as an excuse to get team members to get back into the office,” he says. “Often it’s large firms with legacy infrastructure that relies on a secure perimeter and that haven’t adopted a cloud-first approach.”

“All it takes is one user account without MFA enabled to bring down your business, regardless of where the user is sat.”
Matthew Gribben, former GCHQ consultant

Bigger companies can try to argue for a return to the traditional 9-to-5, but we’ve already seen plenty of smaller startups embrace remote working as a permanent arrangement. Rather, it will be larger, more risk-averse companies, Craig Hattersley, CTO of cybersecurity startup SOC.OS, a BAE Systems spin-off, tells TechCrunch, that “begrudgingly let their staff work at home throughout the pandemic, so will seize any opportunity to reverse their new policies.”

“Although I agree that some companies will use the increase of cybersecurity threats to demand their employees go back to the office, I think the size and type of organization will determine their approach,” he says. “A lack of direct visibility of individuals by senior management could lead to a fear that staff are not fully managed.”

While some organizations will use cybersecurity as an excuse to get employees back into the workplace, many believe the traditional office is no longer the most secure option. After all, not only have businesses overhauled cybersecurity measures to cater to dispersed workforces over the past year, but we’ve already seen hackers start to refocus their attention on those returning to the post-COVID office.

“There is no guarantee that where a person is physically located will change the trajectory of increasingly complex cybersecurity attacks, or that employees will show a reduction in mistakes because they are sitting within the walls of an office building,” says Dr. Margaret Cunningham, principal research scientist at Forcepoint.

Some businesses will attempt to get all staff back into the workplace, but this is simply no longer viable: as a result of 18 months of home working, many employees have moved farther away from their offices, while others, having found themselves more productive and less distracted, will push back against commuting five days a week. In fact, a recent study shows that almost 40% of U.S. workers would consider quitting if their bosses made them return to the office full time.

That means most employers will have to, whether they like it or not, embrace a hybrid approach going forward, whereby employees work from the office three days a week and spend two days at home, or vice versa.

This, in itself, makes the cybersecurity argument far less viable. Sam Curry, chief security officer at Cybereason, tells TechCrunch: “The new hybrid phase getting underway is unlike the other risks companies encountered.

“We went from working in the office to working from home and now it will be work-from-anywhere. Assume that all networks are compromised and take a least-trust perspective, constantly reducing inherent trust and incrementally improving. To paraphrase Voltaire, perfection is the enemy of good.”

Elisity raises $26M Series A to scale its AI cybersecurity platform

Elisity, a self-styled innovator that provides behavior-based enterprise cybersecurity, has raised $26 million in Series A funding.

The funding round was co-led by Two Bear Capital and AllegisCyber Capital, the latter of which has invested in a number of cybersecurity startups including Panaseer, with previous seed investor Atlantic Bridge also participating.

Elisity, which is led by industry veterans from Cisco, Qualys, and Viptela, says the funding will help it meet growing enterprise demand for its cloud-delivered Cognitive Trust platform, which it claims is the only platform intelligent enough to understand how assets and people connect beyond corporate perimeters.

The platform looks to help organizations transition from legacy access approaches to zero trust, a security model based on maintaining strict access controls and not trusting anyone — even employees — by default, across their entire digital footprint. This enables organizations to adopt a ‘work-from-anywhere’ model, according to the company, which notes that most companies today continue to rely on security and policies based on physical location or low-level networking constructs, such as VLAN, IP and MAC addresses, and VPNs.

Cognitive Trust, the company claims, can analyze the unique identity and context of people, apps and devices, including Internet of Things (IoT) and operational technology (OT) assets, wherever they’re working. Using its AI-driven behavioral intelligence, the company says, the platform can also continuously assess risk and instantly optimize access, connectivity and protection policies.

“CISOs are facing ever increasing attack surfaces caused by the shift to remote work, reliance on cloud-based services (and often multi-cloud), and the convergence of IT/OT networks,” said Mike Goguen, founder and managing partner at Two Bear Capital. “Elisity addresses all of these problems by not only enacting a zero trust model, but by doing so at the edge and within the behavioral context of each interaction. We are excited to partner with the CEO, James Winebrenner, and his team as they expand the reach of their revolutionary approach to enterprise security.”

Founded in 2018, Elisity — whose competitors include the likes of Vectra AI and Lastline — closed a $7.5 million seed round in August that same year, led by Atlantic Bridge. With its seed round, Elisity began scaling its engineering, sales and marketing teams to ramp up ahead of the platform’s launch.

Now it’s looking to scale in order to meet growing enterprise demand, which comes as many organizations move to a hybrid working model and seek the tools to help them secure distributed workforces. 

“When the security perimeter is no longer the network, we see an incredible opportunity to evolve the way enterprises connect and protect their people and their assets, moving away from strict network constructs to identity and context as the basis for secure access,” said Winebrenner. 

“With Elisity, customers can dispense with the complexity, cost and protracted timeline enterprises usually encounter. We can onboard a new customer in as little as 45 minutes, rather than months or years, moving them to an identity-based access policy, and expanding to their cloud and on-prem[ise] footprints over time without having to rip and replace existing identity providers and network infrastructure investments. We do this without making tradeoffs between productivity for employees and the network security posture.”

Elisity, which is based in California, currently employs around 30 staff. However, it currently has no women in its leadership team, nor on its board of directors. 

Supreme Court revives LinkedIn case to protect user data from web scrapers

The Supreme Court has given LinkedIn another chance to stop a rival company from scraping personal information from users’ public profiles, a practice LinkedIn says should be illegal but one that could have broad ramifications for internet researchers and archivists.

The Microsoft-owned social network argued that the mass scraping of its users’ profiles was in violation of the Computer Fraud and Abuse Act, or CFAA, which prohibits accessing a computer without authorization.

LinkedIn lost its case against Hiq Labs in 2019 after the U.S. Ninth Circuit Court of Appeals ruled that the CFAA does not prohibit a company from scraping data that is publicly accessible on the internet.

Hiq Labs, which uses public data to analyze employee attrition, argued at the time that a ruling in LinkedIn’s favor “could profoundly impact open access to the Internet, a result that Congress could not have intended when it enacted the CFAA over three decades ago.” (Hiq Labs has also been sued by Facebook, which claims it scraped public data not only from Facebook and Instagram but also from Amazon, Twitter, and YouTube.)

The Supreme Court said it would not take on the case, but instead ordered the appeals court to hear the case again in light of its recent ruling, which found that a person cannot violate the CFAA if they improperly access data on a computer they have permission to use.

The CFAA was once dubbed the “worst law” in the technology law books by critics who have long argued that its outdated and vague language failed to keep up with the pace of the modern internet.

Journalists and archivists have long scraped public data as a way to save and archive copies of old or defunct websites before they shut down. But other cases of web scraping have sparked anger and concerns over privacy and civil liberties. In 2019, a security researcher scraped millions of Venmo transactions, which the company does not make private by default. Clearview AI, a controversial facial recognition startup, claimed it scraped over 3 billion profile photos from social networks without their permission.
 

Fraud protection startup nSure AI raises $6.8M in seed funding

Fraud protection startup nSure AI has raised $6.8 million in seed funding, led by DisruptiveAI, Phoenix Insurance, AXA-backed venture builder Kamet, Moneta Seeds and private investors.

The round will help the company bolster the predictive AI and machine learning algorithms that power nSure AI’s “first of its kind” fraud protection platform. Prior to this round, the company received $550,000 in pre-seed funding from Kamet in March 2019.

The Tel Aviv-headquartered startup, which currently has 16 employees, provides fraud detection for high-risk digital goods, such as electronic gift cards, airline tickets, software, and games. While most fraud detection tools analyze each online transaction in an attempt to decide which purchases to approve or decline, nSure AI’s risk engine leverages deep learning techniques to accurately identify fraudulent transactions.

nSure AI, which is backed by insurance company AXA, said it has an average 98% approval rate for purchases, compared to an industry average of 80%, allowing retailers to recapture nearly $100 billion a year in revenue lost by declining legitimate customers. The company is so confident in its technology that it will accept liability for any fraudulent transaction allowed by the platform.

nSure AI’s founders Alex Zeltcer and Ziv Isaiah started the company after experiencing the unique challenges faced by retailers of digital assets. In the first week of running their own online gift card business, they found that 40% of sales were fraudulent, resulting in chargebacks. The founders began to develop their own platform for supporting the sale of high-risk digital goods after no other fraud detection service met their needs.

Alex Zeltcer, co-founder and chief executive, said the investment “enables us to register thousands of new merchants, who can feel confident selling higher-risk digital goods, without accepting fraud as a part of business.”

nSure AI, which currently monitors and manages millions of transactions every month, has approved close to $1 billion in volume since going live in 2019.

Google will let enterprises store their Google Workspace encryption keys

As ubiquitous as Google Docs has become in the last year alone, a major criticism often overlooked by the countless workplaces that use it is that it isn’t end-to-end encrypted, allowing Google — or any requesting government agency — access to a company’s files. But Google is finally addressing that key complaint with a round of updates that will let customers shield their data by storing their own encryption keys.

Google Workspace, the company’s enterprise offering that includes Google Docs, Slides and Sheets, is adding client-side encryption so that a company’s data will be indecipherable to Google.

Companies using Google Workspace can store their encryption keys with one of four partners for now: Flowcrypt, Futurex, Thales, or Virtru, which are compatible with Google’s specifications. The move is largely aimed at regulated industries — like finance, healthcare, and defense — where intellectual property and sensitive data are subject to intense privacy and compliance rules.


The real magic lands later in the year when Google will publish details of an API that will let enterprise customers build their own in-house key service, allowing workplaces to retain direct control of their encryption keys. That means if the government wants a company’s data, it has to knock on the company’s front door rather than sneak around the back by serving the key holder with a legal demand.

Google has published technical details of how the client-side encryption feature works, which will roll out as a beta in the coming weeks.
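
Conceptually, client-side encryption means a document is sealed on the user’s machine with a key the customer controls, so the cloud provider only ever stores ciphertext. The Kotlin sketch below is a minimal illustration of that idea using AES-GCM; it is not Google’s Workspace protocol or API, and real deployments add key wrapping, access controls and calls to the external key service.

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

// A key the customer controls; in practice this would come from the key service,
// never from the cloud provider.
fun generateCustomerKey(): SecretKey =
    KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

// Encrypt locally before upload; the provider only ever sees (iv, ciphertext).
fun encryptForUpload(plaintext: ByteArray, key: SecretKey): Pair<ByteArray, ByteArray> {
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) } // 96-bit GCM nonce
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    return iv to cipher.doFinal(plaintext)
}

// Decrypt locally after download, again using the customer-held key.
fun decryptAfterDownload(iv: ByteArray, ciphertext: ByteArray, key: SecretKey): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    return cipher.doFinal(ciphertext)
}
```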

Tech companies giving their corporate customers control of their own encryption keys has been a growing trend in recent years. Slack and cloud vendor Egnyte already allow their enterprise users to store their own encryption keys, effectively cutting themselves out of the surveillance loop. But Google has dragged its feet on encryption for so long that startups are working to build alternatives that bake in encryption from the ground up.

Google said it’s also pushing out new trust rules for how files are shared in Google Drive to give administrators more granularity on how different levels of sensitive files can be shared, and new data classification labels to mark documents with a level of sensitivity such as “secret” or “internal”.

The company said it’s improving its malware protection efforts by now blocking phishing and malware shared from within organizations. The aim is to help cut down on employees mistakenly sharing malicious documents.

7 new security features Apple quietly announced at WWDC

Apple went big on privacy during its Worldwide Developer Conference (WWDC) keynote this week, showcasing features from on-device Siri audio processing to a new privacy dashboard for iOS that makes it easier than ever to see which apps are collecting your data and when.

While the company was typically vocal about security during the Memoji-filled, two-hour-long(!) keynote, it also quietly introduced several new security and privacy-focused features during its WWDC developer sessions. We’ve rounded up some of the most interesting — and important.

Passwordless login with iCloud Keychain

Apple is the latest tech company taking steps to ditch the password. During its “Move beyond passwords” developer session, it previewed Passkeys in iCloud Keychain, a method of passwordless authentication powered by WebAuthn and secured with Face ID or Touch ID.

The feature, which will ultimately be available in both iOS 15 and macOS Monterey, means you no longer have to set a password when creating an account on a website or app. Instead, you’ll simply pick a username, and then use Face ID or Touch ID to confirm it’s you. The passkey is stored in your keychain and synced across your Apple devices using iCloud — so you don’t have to remember it, nor do you have to carry around a hardware authenticator key.

“Because it’s just a single tap to sign in, it’s simultaneously easier, faster and more secure than almost all common forms of authentication today,” said Garrett Davidson, an Apple authentication experience engineer. 

While it’s unlikely to be available on your iPhone or Mac any time soon — Apple says the feature is still in its ‘early stages’ and it’s currently disabled by default — the move is another sign of the growing momentum behind eliminating passwords, which are prone to being forgotten, reused across multiple services and, ultimately, phished. Microsoft previously announced plans to make Windows 10 password-free, and Google recently confirmed that it’s working towards “creating a future where one day you won’t need a password at all”.
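
At its core, a passkey is a public-key credential: the device keeps a private key (gated by Face ID or Touch ID and synced via iCloud Keychain), the website stores only the public key, and signing in means signing a server-issued challenge. The Kotlin sketch below illustrates that challenge-signature flow in the abstract; it is not Apple’s or WebAuthn’s actual API, and the challenge string is a placeholder.

```kotlin
import java.security.KeyPairGenerator
import java.security.Signature
import java.security.spec.ECGenParameterSpec

fun main() {
    // Registration: the device generates a P-256 key pair; the server stores only the public key.
    val keyPair = KeyPairGenerator.getInstance("EC")
        .apply { initialize(ECGenParameterSpec("secp256r1")) }
        .generateKeyPair()

    // Sign-in: the server sends a random challenge and the device signs it
    // (in real passkeys, unlocking the private key is what Face ID / Touch ID gates).
    val challenge = "server-issued-random-challenge".toByteArray() // placeholder
    val signature = Signature.getInstance("SHA256withECDSA").run {
        initSign(keyPair.private)
        update(challenge)
        sign()
    }

    // The server verifies the signature against the stored public key. No shared
    // secret (password) ever leaves the device or sits in the site's database.
    val verified = Signature.getInstance("SHA256withECDSA").run {
        initVerify(keyPair.public)
        update(challenge)
        verify(signature)
    }
    println("challenge signature verified: $verified")
}
```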

Microphone indicator in macOS

macOS has a new indicator to tell you when the microphone is on. (Image: Apple)

Since the introduction of iOS 14, iPhone users have been able to keep an eye on which apps are accessing their microphone via a green or orange dot in the status bar. Now it’s coming to the desktop too.

In macOS Monterey, users will be able to see which apps are accessing their Mac’s microphone in Control Center, MacRumors reports, which will complement the existing hardware-based green light that appears next to a Mac’s webcam when the camera is in use.

Secure paste

iOS 15, which will include a bunch of privacy-bolstering tools from Mail Privacy Protection to App Privacy Reports, is also getting a feature called Secure Paste that will help to shield your clipboard data from other apps.

This feature will enable users to paste content from one app to another without the second app being able to access the information on the clipboard until they paste it. This is a significant improvement over iOS 14, which would notify users when an app read data from the clipboard but did nothing to prevent it from happening.

“With secure paste, developers can let users paste from a different app without having access to what was copied until the user takes action to paste it into their app,” Apple explains. “When developers use secure paste, users will be able to paste without being alerted via the [clipboard] transparency notification, helping give them peace of mind.”

While this feature sounds somewhat insignificant, it’s being introduced following a major privacy issue that came to light last year. In March 2020, security researchers revealed that dozens of popular iOS apps — including TikTok — were “snooping” on users’ clipboard without their consent, potentially accessing highly sensitive data.

Advanced Fraud Protection for Apple Card

Payments fraud is more prevalent than ever as a result of the pandemic, and Apple is looking to do something about it. As first reported by 9to5Mac, the company has previewed Advanced Fraud Protection, a feature that will let Apple Card users generate new card numbers in the Wallet app.

While details remain thin — the feature isn’t live in the first iOS 15 developer beta — Apple’s explanation suggests that Advanced Fraud Protection will make it possible to generate new security codes — the three-digit number you enter at checkout — when making online purchases.

“With Advanced Fraud Protection, Apple Card users can have a security code that changes regularly to make online Card Number transactions even more secure,” the brief explainer reads. We’ve asked Apple for some more information. 

‘Unlock with Apple Watch’ for Siri requests

As a result of the widespread mask-wearing necessitated by the pandemic, Apple introduced an ‘Unlock with Apple Watch’ feature in iOS 14.5 that let users unlock their iPhone and authenticate Apple Pay payments using an Apple Watch instead of Face ID.

The scope of this feature is expanding with iOS 15, as the company has confirmed that users will soon be able to use this alternative authentication method for Siri requests, such as adjusting phone settings or reading messages. Currently, users have to enter a PIN, password or use Face ID to do so.

“Use the secure connection to your Apple Watch for Siri requests or to unlock your iPhone when an obstruction, like a mask, prevents Face ID from recognizing your face,” Apple explains. “Your watch must be passcode protected, unlocked, and on your wrist close by.”

Standalone security patches

To ensure iPhone users who don’t want to upgrade to iOS 15 straight away are up to date with security updates, Apple is going to start decoupling patches from feature updates. When iOS 15 lands later this year, users will be given the option to update to the latest version of iOS or to stick with iOS 14 and simply install the latest security fixes. 

“iOS now offers a choice between two software update versions in the Settings app,” Apple explains (via MacRumors). “You can update to the latest version of iOS 15 as soon as it’s released for the latest features and most complete set of security updates. Or continue on ‌iOS 14‌ and still get important security updates until you’re ready to upgrade to the next major version.”

This feature sees Apple following in the footsteps of Google, which has long rolled out monthly security patches to Android users.

‘Erase all contents and settings’ for Mac

Wiping a Mac has been a laborious task, requiring you to erase your device completely and then reinstall macOS. Thankfully, that’s going to change. Apple is bringing the “erase all contents and settings” option that’s been on iPhones and iPads for years to macOS Monterey.

The option will let you factory reset your MacBook with just a click. “System Preferences now offers an option to erase all user data and user-installed apps from the system, while maintaining the operating system currently installed,” Apple says. “Because storage is always encrypted on Mac systems with Apple Silicon or the T2 chip, the system is instantly and securely ‘erased’ by destroying the encryption keys.”
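
The “instantly and securely ‘erased’” wording refers to crypto-erase: when everything on disk is already encrypted, destroying the key renders the ciphertext unreadable without rewriting a single block. The Kotlin sketch below is a toy illustration of that principle, not Apple’s implementation; the “disk” is just a byte array and the keys live only in memory.

```kotlin
import java.security.SecureRandom
import javax.crypto.AEADBadTagException
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.GCMParameterSpec

fun main() {
    // Everything written to "disk" is ciphertext under the volume key.
    val volumeKey = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val enc = Cipher.getInstance("AES/GCM/NoPadding")
    enc.init(Cipher.ENCRYPT_MODE, volumeKey, GCMParameterSpec(128, iv))
    val disk = enc.doFinal("user files and settings".toByteArray())

    // Factory reset: throw away the old key and mint a fresh one. The old bytes
    // are still "on disk", but without the key decryption fails, so the data is
    // effectively gone without overwriting anything.
    val freshKey = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
    val dec = Cipher.getInstance("AES/GCM/NoPadding")
    dec.init(Cipher.DECRYPT_MODE, freshKey, GCMParameterSpec(128, iv))
    try {
        dec.doFinal(disk)
    } catch (e: AEADBadTagException) {
        println("old data unrecoverable after key destruction")
    }
}
```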

Volkswagen says a vendor’s security lapse exposed 3.3 million drivers’ details

Volkswagen says more than 3.3 million customers had their information exposed after one of its vendors left a cache of customer data unsecured on the internet.

The car maker said in a letter that the vendor, used by Volkswagen, its subsidiary Audi, and authorized dealers in the U.S. and Canada, left customer data spanning 2014 to 2019 unprotected over an almost two-year window between August 2019 and May 2021.

The data, which Volkswagen said was gathered for sales and marketing, contained personal information about customers and prospective buyers, including their name, postal and email addresses, and phone number.

But more than 90,000 customers across the U.S. and Canada also had more sensitive data exposed, including information relating to loan eligibility. The letter said most of the sensitive data was driver’s license numbers, but that a “small” number of records also included customers’ dates of birth and Social Security numbers.

Volkswagen did not name the vendor, and a company spokesperson did not immediately comment.

It’s the latest security incident involving driver’s license numbers in recent months. Insurance giants Metromile and Geico admitted earlier this year that their quote forms had been abused by scammers trying to obtain driver’s license numbers. Several other car insurance companies have also reported similar incidents involving the theft of driver’s license numbers. Geico said it was likely an effort by scammers to file and cash fraudulent unemployment benefit claims in other people’s names.

Volkswagen’s letter, however, did not say if the company had evidence that the data exposed by the vendor was misused.
 

Security flaws found in Samsung’s stock mobile apps

A mobile security startup has found seven security flaws in Samsung’s pre-installed mobile apps, which it says if abused could have allowed attackers broad access to a victim’s personal data.

Oversecured said the vulnerabilities were found in several apps and components bundled with Samsung phones and tablets. Oversecured founder Sergey Toshin told TechCrunch that the vulnerabilities were verified on a Samsung Galaxy S10+ but that all Samsung devices could be potentially affected because the baked-in apps are responsible for system functionality.

Toshin said the vulnerabilities could have allowed a malicious app on the same device to steal a victim’s photos, videos, contacts, call records and messages, and change settings “without any user consent or notice” by hijacking the permissions from Samsung’s stock apps.

One of the flaws could have allowed the theft of data by exploiting a vulnerability in Samsung’s Secure Folder app, which has a “large set” of rights across the device. In a proof-of-concept, Toshin showed the bug could be used to steal contacts data. Another bug in Samsung’s Knox security software could have been abused to install other malicious apps, while a bug in Samsung Dex could have been used to scrape data from user notifications from apps, email inboxes and messages.

Oversecured published technical details of the vulnerabilities in a blog post, and said it reported the bugs to Samsung, which fixed the flaws.

Samsung confirmed the flaws affected “selected” Galaxy devices but would not provide a list of specific devices. “There have been no known reported issues globally and users should be assured that their sensitive information was not at risk,” the company said, though it provided no evidence for that claim. “We addressed the potential vulnerability by developing and issuing security patches via software update in April and May, 2021 as soon as we identified this issue.”

The startup, which launched earlier this year after self-funding $1 million in bug bounty payouts, uses automation to search for vulnerabilities in Android code. Toshin has found similar security flaws in TikTok and Android’s Google Play app.